arXiv: 2305.05623
Title: Analysis and numerical simulation of a generalized compressible Cahn-Hilliard-Navier-Stokes model with friction effects
Abstract: We propose a new generalized compressible diphasic Navier-Stokes Cahn-Hilliard model that we name G-NSCH. This new G-NSCH model takes into account important properties of diphasic compressible fluids such as possible non-matching densities and contrast in mechanical properties (viscosity, friction) between the two phases of the fluid. The model also comprises a term to account for possible exchange of mass between the two phases. Our G-NSCH system is derived rigorously and satisfies basic mechanics of fluids and thermodynamics of particles. Under some simplifying assumptions, we prove the existence of global weak solutions. We also propose a structure-preserving numerical scheme based on the scalar auxiliary variable method to simulate our system and present some numerical simulations validating the properties of the numerical scheme and illustrating the solutions of the G-NSCH model.
Authors: Charles Elbar, Alexandre Poulain
Published: 2023-05-06T07:57:01Z
Link: http://arxiv.org/abs/2305.05623v2
Analysis and numerical simulation of a generalized compressible Cahn-Hilliard-Navier-Stokes model with friction effects ###### Abstract Motivated by the mathematical modeling of tumor invasion in healthy tissues, we propose a generalized compressible diphasic Navier-Stokes Cahn-Hilliard model that we name G-NSCH. We assume that the two phases of the fluid represent two different populations of cells: cancer cells and healthy tissue. We include in our model possible friction and proliferation effects. The model aims to be as general as possible to study the possible mechanical effects playing a role in invasive growth of a tumor. In the present work, we focus on the analysis and numerical simulation of the G-NSCH model. Our G-NSCH system is derived rigorously and satisfies basic mechanics of fluids and thermodynamics of particles. Under simplifying assumptions, we prove the existence of global weak solutions. We also propose a structure preserving numerical scheme based on the scalar auxiliary variable method to simulate our system and present some numerical simulations validating the properties of the numerical scheme and illustrating the solutions of the G-NSCH model. 2010 _Mathematics Subject Classification._ 35B40; 35B45; 35G20 ; 35Q35; 35Q92; 65M08 _Keywords and phrases._ Cahn-Hilliard equation; Navier-Stokes equation; Asymptotic analysis; Mathematical modeling; Numerical simulations; Scalar Auxiliary Variable method. 
## 1 Introduction

We derive, analyze and simulate numerically the generalized compressible Navier-Stokes-Cahn-Hilliard variant (_G-NSCH_ in short) \[\frac{\partial\rho}{\partial t}+\operatorname{div}\left(\rho \mathbf{v}\right)=0, \tag{1.1}\] \[\frac{\partial(\rho c)}{\partial t}+\operatorname{div}\left(\rho c \mathbf{v}\right)=\operatorname{div}\left(b(c)\nabla\mu\right)+F_{c},\] (1.2) \[\rho\mu=-\gamma\Delta c+\rho\frac{\partial\psi_{0}}{\partial c},\] (1.3) \[\frac{\partial(\rho\mathbf{v})}{\partial t}+\operatorname{div} \left(\rho\mathbf{v}\otimes\mathbf{v}\right)= -\left[\nabla p+\gamma\operatorname{div}\left(\nabla c\otimes\nabla c- \frac{1}{2}|\nabla c|^{2}\mathbf{1}\right)\right]+\operatorname{div}\left( \nu(c)\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{T}\right)\right)\] \[-\frac{2}{3}\nabla\left(\nu(c)\left(\operatorname{div}\left( \mathbf{v}\right)\right)\right)-\kappa(\rho,c)\mathbf{v}, \tag{1.4}\] stated in \((0,T)\times\Omega\), where \(T>0\) is a finite time and \(\Omega\subset\mathbb{R}^{d}\) (\(d=1,2,3\)) is an open bounded domain with a smooth boundary \(\partial\Omega\). Interested in the modeling of invasive growth of tumors in healthy tissues, we motivate the different terms of the model with this biological application in mind. System (1.1)-(1.4) models the motion of a diphasic fluid composed of two immiscible components, _i.e._ the cells of the two different types, in a porous matrix, and comprises viscosity effects, surface tension, and friction on the rigid fibers constituting the medium. In System (1.1)-(1.4), \(\rho\) is the total density of the mixture (_i.e._ the sum of the two partial densities), \(c\) is the relative mass fraction of one component (_e.g._ the cancer cells), \(\mathbf{v}\) is the mass-averaged total velocity, \(\mu\) is the chemical potential, and \(p\) is the pressure. 
The coefficient \(\gamma\) is related to the surface tension and is equal to the square of the width of the diffuse interface existing between the two populations. The friction coefficient \(\kappa(\cdot)\) is a monotone increasing function of the density and takes into account the possible difference of friction strength between the two populations. We use this friction term to model possible adhesive effects on the extracellular matrix (_ECM_ in short). The coefficient \(\nu(\cdot)\) represents the viscosity of the mixture, and again possible differences of viscosities could be considered for the two populations. The function \(\psi_{0}\) represents the separation of the two components of the mixture and phenomenologically models the behavior of cells (_i.e._ cells tend to form aggregates of the same cell type). The function \(F_{c}(\cdot)\) accounts for the possible proliferation and death of cells. The non-negative function \(b(\cdot)\) models the mobility of cells and is assumed to be doubly degenerate to, again, correspond to the behavior of cells. This latter assumption models the probability for a cell of either population to find an available neighboring spot to which it can move. More details about the general assumptions and precise forms of the different functions will be given in the next sections. The motivation of our model stems from the modeling of tumor progression and invasion in healthy tissues. Indeed, our model can be viewed as a representation of a proliferating population of cells (_i.e._ the tumor cells) in a domain filled with a non-proliferating population (_i.e._ the healthy cells). However, we emphasize that this article concerns the analysis and numerical simulation of the general G-NSCH model (1.1)-(1.4). The latter comprises effects that are negligible in biological situations, _e.g._ inertia effects. 
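To fix ideas, the qualitative requirements above can be written out as simple functions. The sketch below uses the degenerate mobility \(b(c)=c(1-c)^{\alpha}\) and the polynomial double-well that appear later in the paper, while the linear interpolations for \(\nu\) and \(\kappa\) and all coefficient values are purely illustrative assumptions of ours, not choices made in the paper.

```python
# Illustrative constitutive choices. Only b(c) = c(1-c)^alpha and the
# polynomial double-well appear later in the text; the viscosity/friction
# interpolations and all coefficients here are hypothetical.

def mobility(c, alpha=2):
    """Doubly degenerate mobility b(c) = c(1-c)^alpha: vanishes in the pure phases."""
    return c * (1.0 - c) ** alpha

def viscosity(c, nu1=1.0, nu2=2.0):
    """Smooth non-negative viscosity interpolating between the two phases."""
    return nu1 * (1.0 - c) + nu2 * c

def friction(rho, c, k1=0.5, k2=1.5):
    """Friction kappa(rho, c): monotone increasing in the density rho."""
    return (k1 * (1.0 - c) + k2 * c) * rho

def double_well(c):
    """Polynomial double-well Q(c) = c^2 (1-c)^2 / 4, minima at c = 0 and c = 1."""
    return 0.25 * c**2 * (1.0 - c) ** 2
```

The double degeneracy of `mobility` encodes the vanishing probability of finding a free neighboring spot in a pure phase, and `friction` grows with \(\rho\), as required for the adhesion effects described above.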
Since the general model is of interest to material sciences, physics and fluid mechanics, we propose here an analysis of the model and a structure-preserving numerical scheme for the G-NSCH model, while keeping in mind our initial application, _i.e._ invasive tumor growth modeling. We also emphasize that the G-NSCH is the basis of a reduced model that takes into account only the biologically relevant physical effects that play a role in invasive tumor growth. Therefore, this work has to be seen as the first part of a two-part study. The second part will concern numerical simulations and sensitivity analysis of the reduced model, presented here in Appendix B as _Problem 2_, and will rely heavily on the present work. **Literature review.** The motion of a binary mixture of two immiscible and compressible fluids can be described by the Navier-Stokes equation coupled to the Cahn-Hilliard model. The well-known incompressible variant of the compressible NSCH model has been denominated model H (see _e.g._ [32, 34]). Model H has been proposed to represent viscous fluid flow in an incompressible binary mixture under phase separation. This model assumes matching densities, _i.e._ \(\rho_{1}=\rho_{2}\), and, hence, constant total density \(\rho\). To consider non-matching densities, Lowengrub and Truskinovsky [50] proposed the compressible Navier-Stokes Cahn-Hilliard model (_NSCH model_ in short). Expanding the divergence term in the mass balance equation, the authors found a relation denoting the quasi-compressible nature of the fluid. Concomitantly, Anderson, McFadden, and Wheeler [10] proposed a similar system, and we use this latter in the present work. We also remark that a very recent work [59] proposed a unified framework for the incompressible NSCH system and showed that the different NSCH models found in the literature only differ from their general model by specific constitutive hypotheses. 
Under some simplifying assumptions compared to the system proposed in [50], but being closer to the system in [10], the analysis of the compressible NSCH model with no-flux boundary conditions has been carried out by Abels and Feireisl [4]. Their analysis requires simplifying the model proposed in [50] to avoid zones with zero density, which would make the analysis considerably more difficult since the control provided by certain estimates would be lost. In another article, for the same system, Abels proved the existence of strong solutions for short times [2]. Considering the same assumptions and dynamic boundary conditions, Cherfils _et al._ [20] proved the well-posedness of the compressible NSCH model with these special boundary conditions. The latter allow one to model the interaction of the fluid components and the walls of the domain. Results on the analysis of the incompressible variant of the NSCH model, _i.e._ the model H, are numerous, and we mention only a few of them here since a complete review would be beyond the scope of the present article. With a non-degenerate mobility coefficient (\(b(c)\) in our notation) and a physically relevant choice of potential, the well-posedness and regularity analysis of model H has been performed by Abels [1] using tools both from the analysis of the Navier-Stokes model and the Cahn-Hilliard model. It is worth mentioning that the non-degeneracy of the mobility coefficient leads to non-physical effects, _i.e._ Ostwald ripening effects (see [5]). For this reason, Abels, Depner and Garcke studied model H with a degenerate mobility [3]. Their analysis relies on a regularization of the mobility and of the singular potential into, respectively, a non-degenerate mobility and a non-singular potential. Then, suitable _a-priori_ estimates uniform in the regularization parameter allow one to pass to the limit in the regularization and show the existence of weak solutions to the degenerate model H. 
We now focus on the Cahn-Hilliard equation alone and its use for the modeling of tumors. The Cahn-Hilliard equation has been initially used to represent phase separation in binary mixtures and has been applied to the spinodal decomposition of binary alloys under a sudden cooling [16, 17]. The model represents the two phases of the fluid as continua separated by a diffuse interface. This equation has been used later in many different applications, and we do not intend to give an overview of all of these here. However, we refer the reader interested in the topic to the presentation of the Cahn-Hilliard equation and its applications in the review book [51]. We are interested here in the application of the Cahn-Hilliard framework to tumor modeling (see _e.g._ [48, 49]). Later, different variants of the Cahn-Hilliard model appeared: _e.g._, without giving a complete overview again, its coupling to Darcy's law [27], Brinkman's law [21], chemotaxis [56]. Recently, a new variant has been used to better represent the growth and organization of tumors. The main change is to consider a single-well logarithmic degenerate potential instead of a double-well potential [7, 18, 55]. This type of potential has been proposed in [9] to represent the action of the cells depending only on their own local density, _i.e._ attraction at low cell density and repulsion at large cell density, representing the tendency of cells to avoid overcrowding. The numerical simulation of Model H for binary fluids with non-matching densities has been the subject of numerous works (see _e.g._ [35] and references therein). However, in part due to its complexity, the numerical simulation of the compressible NSCH system has been less explored. A \(C^{0}\) finite element numerical scheme for a variant of the quasi-compressible NSCH model of [50] has been proposed in [29]. 
Around the same time, Giesselmann and Pryer [8, 28] designed a discontinuous Galerkin finite element scheme to simulate the quasi-incompressible NSCH system which preserves the total mass and the energy dissipation. A numerical method has also been proposed in [33] in the case of constant mobility \(b(c)\) and smooth polynomial potential \(\psi(c)\). Furthermore, the system simulated in [33] is a simplification of the compressible NSCH system since the pressure does not appear in the definition of the chemical potential \(\mu\) in their system. The previous works we presented for the simulation of the compressible or quasi-compressible NSCH systems deal with constant mobility combined with a smooth polynomial potential. We aim to simulate the compressible NSCH model with choices of mobility and potential relevant for biology (but also relevant for material sciences and fluid mechanics), _i.e._ degenerate mobility combined with a logarithmic potential. We now review briefly some relevant discretization methods for the Cahn-Hilliard equation alone with degenerate mobility and singular potentials. Considering a degenerate mobility and a double-well logarithmic potential, we mention the work of Barrett, Blowey and Garcke [11]. In this article, the authors proposed a finite element scheme with a variational inequality to preserve the bounds of the solution. Based on these ideas, Agosti _et al._ [7] proposed a similar finite element scheme for the case of a single-well logarithmic potential. The difficulty in this latter case lies in the fact that the degeneracy and the singularity sets do not coincide; considering an order parameter that must remain within the bounds \([0,1)\), negative solutions can appear if a standard discretization method is used. The method proposed in [7] solves this issue but does not preserve the exact mass. 
In a more recent work, Agosti [6] proposed a discontinuous Galerkin finite element scheme that preserves the bounds \([0,1)\) and preserves the exact mass. However, the main drawback of the previously mentioned methods is that they are computationally expensive: they solve a strongly coupled nonlinear system and use iterative algorithms. Since the Cahn-Hilliard equation is a gradient flow (see _e.g._ [44]), a structure-preserving linear scheme can be constructed using the Scalar Auxiliary Variable (_SAV_ in short) method [57]. This has been successfully used in [37]. In this latter work, the scheme is structure-preserving thanks to the use of a scalar variable that represents the discrete energy, and an additional equation is solved to ensure dissipation at the discrete level. The bounds of the order parameter are ensured using a transformation that maps \(\mathbb{R}\) to the physically relevant interval (\((0,1)\) in the case of a double-well potential). To the best of our knowledge, the SAV method has not been applied to the compressible NSCH system. **Objectives of our work.** The first objective of our work is to study the well-posedness of the G-NSCH model under some simplifying assumptions (_i.e._ smooth potential and positive mobility). The second objective is the design of an efficient and structure-preserving numerical scheme for the G-NSCH model with singular double-well potential and degenerate mobility. The third focus of the present work concerns the rigorous derivation of the G-NSCH model, which is presented in the Appendix. **Outline of the paper.** Section 2 presents the notation, functional spaces and assumptions we use in our work, for the analytical part but also for the numerical part. Section 3 deals with the proof of the existence of weak solutions for the G-NSCH system (1.1)-(1.4) under simplifying assumptions. A structure-preserving numerical scheme based on the SAV method is then proposed in Section 4, and some numerical results are presented in Section 5. 
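To illustrate the SAV mechanism discussed above on the smallest possible example, here is a sketch for the scalar gradient flow \(u'=-F'(u)\) with \(F(u)=\frac{1}{4}(u^{2}-1)^{2}\), where the auxiliary variable is \(r=\sqrt{F(u)+C_{0}}\). This toy scheme is our own illustration of the general idea, not the scheme of Section 4.

```python
import numpy as np

def sav_gradient_flow(u0, dt=0.1, steps=200, C0=1.0):
    """First-order SAV scheme for the toy gradient flow u' = -F'(u),
    F(u) = (u^2 - 1)^2 / 4, with auxiliary variable r = sqrt(F(u) + C0).
    Each step is linear in (u_new, r_new) and dissipates r^2 exactly."""
    F = lambda u: 0.25 * (u * u - 1.0) ** 2
    dF = lambda u: u**3 - u
    u = u0
    r = np.sqrt(F(u0) + C0)
    modified_energy = []
    for _ in range(steps):
        g = dF(u) / np.sqrt(F(u) + C0)
        # solve the linear 2x2 system: r_new = r + g*(u_new - u)/2 and
        # u_new = u - dt * r_new * g  =>  r_new = r / (1 + dt*g^2/2)
        r = r / (1.0 + 0.5 * dt * g * g)
        u = u - dt * r * g
        modified_energy.append(r * r)  # non-increasing by construction
    return u, modified_energy

u_end, E = sav_gradient_flow(0.4)  # flows toward the well at u = 1
```

The key point, as in the compressible setting targeted here, is that the update is linear in the unknowns while the modified energy \(r^{2}\) decreases unconditionally.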
Our model's equations come from a thermodynamically consistent derivation of the compressible Navier-Stokes-Cahn-Hilliard model including friction effects and source terms. The derivation is described in Appendix A. From a general model, we propose in Appendix B two reductions: the G-NSCH model studied and simulated in the present work, and one biologically relevant reduction that will be the focus of a forthcoming work.

## 2 General assumptions, notations and functional setting

The equations are set in a domain \(\Omega_{T}=\Omega\times(0,T)\) with \(\Omega\) an open and bounded subset of \(\mathbb{R}^{3}\). We assume that the boundary \(\partial\Omega\) is sufficiently smooth. We indicate the usual Lebesgue and Sobolev spaces by, respectively, \(L^{p}(\Omega)\) and \(W^{m,p}(\Omega)\), with \(H^{m}(\Omega):=W^{m,2}(\Omega)\), where \(1\leq p\leq+\infty\) and \(m\in\mathbb{N}\). For \(q\in[1,+\infty]\), we indicate the Bochner spaces by \(L^{q}(0,T;X)\) (where \(X\) is a Banach space). Finally, \(C\) denotes a generic constant that appears in inequalities and whose value can change from one line to another. This constant can depend on various parameters unless specified otherwise.

### Assumptions on functionals

We divide the assumptions on the different terms appearing in system (1.1)-(1.4) into two parts: analytical and numerical assumptions. Indeed, we are not able to prove the existence of weak solutions in the general setting used in the numerical simulations. For instance, the case of the usual logarithmic double-well potential in the Cahn-Hilliard equation is not treated but can be implemented in our numerical scheme. However, we can analyze our system with a polynomial approximation of the double well. We also consider non-degenerate mobilities to obtain estimates on the chemical potential \(\mu\) directly. 
The case of degenerate mobility (see for instance [23]) seems out of reach, as we no longer have the classical "entropy" estimates of the Cahn-Hilliard equation that provide bounds on second-order derivatives of the mass fraction \(c\). **Framework for numerical simulations.** We assume that the viscosity \(\nu(c)\) and permeability \(\kappa(\rho,c)\) coefficients are smooth non-negative functions of the mass fraction \(c\). The mobility is a non-negative function of the order parameter (mass fraction) \(c\). Hence, we assume that \[b\in C^{1}([0,1];\mathbb{R}^{+}),\ \ \ \text{and}\ \ \ b(c)\geq 0\ \ \ \text{for}\ \ \ 0\leq c\leq 1. \tag{2.1}\] In agreement with the literature (see _e.g._ [20]), the homogeneous free energy \(\psi_{0}(\rho,c)\) is assumed to be of the form \[\psi_{0}(\rho,c)=\psi_{e}(\rho)+\psi_{\text{mix}}(\rho,c), \tag{2.2}\] with \(\psi_{\text{mix}}(\rho,c)=H(c)\log\rho+Q(c)\), where \(Q(c)\) is a double-well (or single-well) potential. Then, using the constitutive relation for the pressure, we have \[p(\rho,c)=\rho^{2}\frac{\partial\psi_{0}}{\partial\rho}=p_{e}(\rho)+\rho H(c), \tag{2.3}\] where \(p_{e}=\rho^{2}\psi_{e}^{\prime}(\rho)\) is assumed to satisfy \[p_{1}\rho^{a-1}-p_{2}\leq p_{e}^{\prime}(\rho)\leq p_{3}(1+\rho^{a-1}),\ \ \ \text{for}\ \ \ a>3/2,\ \ \ p_{1},p_{2},p_{3}>0. \tag{2.4}\] We assume that the source term \(F_{c}\) (which can depend on the mass fraction and the density) is bounded, \[|F_{c}(\rho,c)|+\left|\frac{F_{c}(\rho,c)}{\rho}\right|\ \leq C,\,\forall(\rho,c)\in \mathbb{R}^{2}. \tag{2.5}\] **Remark 2.1** (Double-well logarithmic potential).: In the present work, we aim to use a double-well logarithmic potential in the definition of the mixing potential. A relevant example of potential is \[\psi_{\text{mix}}=\frac{1}{2}\left(\alpha_{1}(1-c)\log(\rho(1-c))+\alpha_{2}c \log(\rho c)\right)-\frac{\theta}{2}(c-\frac{1}{2})^{2}+k. 
\tag{2.6}\] This potential gives \[H(c)=\alpha_{1}(1-c)+\alpha_{2}c,\ \ \ Q(c)=\frac{1}{2}\left(\alpha_{1}(1-c) \log(1-c)+\alpha_{2}c\log(c)\right)-\frac{\theta}{2}(c-\frac{1}{2})^{2}+k,\] where \(\theta>1\) and \(k\) is an arbitrary constant. **Additional assumptions for the existence of weak solutions.** Concerning the existence of weak solutions, we need to strengthen our assumptions. The viscosity coefficient \(\nu(c)\) is assumed to be bounded from below by a positive constant, and the friction coefficient \(\kappa(\rho,c)\) is assumed to be nonnegative. Moreover, both \(\nu(c)\) and \(\kappa(\rho,c)\) are bounded in \(L^{2}(0,T;L^{2}(\Omega))\) whenever \(c\) is bounded in \(L^{\infty}(0,T;H^{1}(\Omega))\) and \(\rho\) is smooth (for instance \(C(0,T;C^{2}(\overline{\Omega}))\)). We consider a pressure-law exponent \(a>2\). In the numerical simulations, we take degenerate mobilities of the form \(b(c)=c(1-c)^{\alpha}\). However, in the analysis, we consider a non-degenerate mobility obtained by truncating the previous one. For instance, using a small parameter \(0<\varepsilon_{b}\ll 1\), we approximate the mobility \(b(\cdot)\) by \[b_{\varepsilon_{b}}(c)=\begin{cases}b(1-\varepsilon_{b}),\quad\text{if }c\geq 1- \varepsilon_{b},\\ b(\varepsilon_{b}),\quad\text{if }c\leq\varepsilon_{b},\\ b(c),\quad\text{otherwise},\end{cases}\] and consider the case of a fixed \(\varepsilon_{b}\). We obtain that \[b\in C^{1}(\mathbb{R};\mathbb{R}^{+}),\quad\text{and}\quad b(c)\geq C>0\quad \forall c\in\mathbb{R}. \tag{2.7}\] Concerning the functionals appearing in the definition of the free energy \(\psi_{0}\), we assume that \(H\) and \(H^{\prime}\) are bounded and that \(Q\) is a polynomial approximation of the double-well potential. 
More precisely, we take \[\begin{split}& H_{1}\leq H^{\prime}(c),\,H(c)\leq H_{2},\quad c \in\mathbb{R},\quad H_{1},H_{2}>0,\\ & Q(c)=\frac{1}{4}c^{2}(1-c)^{2}.\end{split} \tag{2.8}\] The case of the double-well logarithmic potential has not been tackled yet, even though it is the main motivation for the decomposition of \(\psi_{\text{mix}}\) as in the works [4] and [20]. Also, to make the computations simpler, we assume that

* \(a>6\), where \(a\) is the pressure exponent,
* \(\psi_{e}(\rho)=\frac{\rho^{a-1}}{a-1}\) and therefore \(p_{e}(\rho)=\rho^{a}\).

These two assumptions are not necessary but simplify the analysis. We refer for instance to [4, 25] for the more general setting. For instance, the condition \(a>6\) avoids introducing another parameter in the approximating scheme, which would make the article even longer. Note that the assumptions on \(\psi_{0}\) imply in particular the following lemma, which is essential to obtain estimates on the energy dissipation: **Lemma 2.2**.: _There exists a constant \(C\) such that_ \[\left|\rho\frac{\partial\psi_{0}}{\partial c}\right|\leq C\rho\psi_{0}+C.\] Its proof uses the assumption on \(H\) and the fact that, for \(c\) large, \(Q^{\prime}(c)\approx c^{3}\leq c^{4}+1\approx Q(c)+1\).

## 3 Existence of weak solutions

We now turn to the proof of the existence of weak solutions for the G-NSCH model (1.1)-(1.4), subject to the boundary conditions \[\mathbf{v}=\frac{\partial c}{\partial\mathbf{n}}=b(c)\frac{\partial\mu}{ \partial\mathbf{n}}=0\quad\text{on }\partial\Omega, \tag{3.1}\] and initial conditions \[\rho(0,x)=\rho_{0}\geq 0\in L^{a}(\Omega),\quad c(0,x)=c_{0}\in H^{1}(\Omega) \quad\rho_{0}\mathbf{v}(0,x)=\mathbf{m}_{0},\,\text{with }\frac{|\mathbf{m}_{0}|^{2}}{\rho_{0}}\in L^{1}(\Omega). \tag{3.2}\] Also, we suppose \(\rho_{0}\not\equiv 0\). The proof of the result is quite long and technical. Therefore, when necessary and for the sake of clarity, we omit some proofs and give instead appropriate references. 
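The mobility truncation \(b_{\varepsilon_{b}}\) from Section 2 is simple to realize numerically. The numpy sketch below (with illustrative values of \(\varepsilon_{b}\) and \(\alpha\)) exhibits the uniform lower bound claimed in (2.7).

```python
import numpy as np

def b(c, alpha=2):
    """Degenerate mobility b(c) = c(1-c)^alpha, the form used in the numerics."""
    return c * (1.0 - c) ** alpha

def b_trunc(c, eps=1e-2, alpha=2):
    """Truncated mobility b_{eps_b}: freezes b at b(eps) and b(1-eps) outside
    [eps, 1-eps], hence bounded below by a positive constant on all of R."""
    return b(np.clip(c, eps, 1.0 - eps), alpha)

c = np.linspace(-0.5, 1.5, 201)       # includes non-physical values of c
vals = b_trunc(c)
lower = min(b(1e-2), b(1.0 - 1e-2))   # the constant C > 0 of (2.7)
```

Since \(b(c)=c(1-c)^{2}\) increases on \([0,1/3]\) and decreases on \([1/3,1]\), its minimum over \([\varepsilon_{b},1-\varepsilon_{b}]\) is attained at an endpoint, which is exactly the constant `lower`.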
**Outline of the analysis.** For readability, we present here the plan we use for the analysis of the G-NSCH model. We first start with the analysis of a "truncated" version of the G-NSCH model, in the sense that the double-well potential is truncated for large values of \(c\) with a parameter \(\varepsilon_{Q}\). Then, for this fixed truncation, we prove the existence of weak solutions using the ideas of [4, 20, 25, 42]. Finally, we pass to the limit \(\varepsilon_{Q}\to 0\). Namely, recalling that \(Q(c)=\frac{1}{4}c^{2}(1-c)^{2}\), we first consider \(Q_{\varepsilon_{Q}}(c)\), a smooth truncated approximation of \(Q\), which satisfies \[|Q_{\varepsilon_{Q}}|,|Q^{\prime}_{\varepsilon_{Q}}|,|Q^{\prime\prime}_{ \varepsilon_{Q}}|\leq C\left(\frac{1}{\varepsilon_{Q}}\right). \tag{3.3}\] In the first subsections, we work with the regularized problem and drop the \(\varepsilon_{Q}\) notation. We will use the \(\varepsilon_{Q}\) notation when we pass to the limit, and for the moment we benefit from the properties of the regularization.

### Energy estimates

The G-NSCH system comes with an energy structure which is useful to obtain first a priori estimates. **Proposition 3.1**.: _Smooth solutions of the system (1.1)-(1.4) satisfy the following energy relation_ \[\frac{d}{dt}E+D=\int_{\Omega}\mu F_{c}\,\mathrm{d}x, \tag{3.4}\] _where \(E\) is the energy, and \(D\) is the dissipation, defined as_ \[E =\int_{\Omega}\rho\frac{|\mathbf{v}|^{2}}{2}+\rho\psi_{0}+\frac{ \gamma}{2}|\nabla c|^{2}\,\mathrm{d}\mathbf{x}, \tag{3.5}\] \[D =\int_{\Omega}\frac{\nu(c)}{2}\left|\nabla\mathbf{v}+\nabla \mathbf{v}^{T}-\frac{2}{3}\mathrm{div}(\mathbf{v})\mathbb{I}\right|^{2}+b(c)| \nabla\mu|^{2}+\kappa(\rho,c)|\mathbf{v}|^{2}\,\mathrm{d}\mathbf{x}. \tag{3.6}\] _This yields a priori estimates on the solution, i.e. 
there exists a positive constant \(C\) such that_ \[E(t)+\int_{0}^{t}D(s)\,\mathrm{d}s\leq C+CE(0).\] Note that the energy is bounded from below since \(\rho\log\rho\,H(c)\) is bounded from below with (2.8). Also, the purpose of the assumption that \(\nu(c)\) and \(b(c)\) are bounded from below by a positive constant becomes clear: it is crucial to obtain estimates on the \(H^{1}(\Omega)\) norms of \(\mu\) and \(\mathbf{v}\). Proof.: We recall the formula \[\nabla c\Delta c=\mathrm{div}(\nabla c\otimes\nabla c)-\frac{1}{2}\nabla| \nabla c|^{2}. \tag{3.7}\] We denote by \(\mathbb{T}\) the tensor \(\nu(c)(\nabla\mathbf{v}+\nabla\mathbf{v}^{T}-\frac{2}{3}\mathrm{div}(\mathbf{ v})\mathbb{I})\). Then we multiply Equation (1.1) by \(\frac{|\mathbf{v}|^{2}}{2}\) and sum it with the scalar product of Equation (1.4) with \(\mathbf{v}\). We obtain \[\frac{\partial}{\partial t}\left(\rho\frac{|\mathbf{v}|^{2}}{2} \right)+\mathrm{div}\left(\frac{1}{2}\rho|\mathbf{v}|^{2}\mathbf{v}+p(\rho,c) \mathbf{v}-\mathbb{T}\cdot\mathbf{v}\right)+\mathbb{T}:\nabla\mathbf{v}+\kappa (\rho,c)|\mathbf{v}|^{2}=p(\rho,c)\mathrm{div}(\mathbf{v})\\ +\gamma\mathrm{div}(\frac{1}{2}|\nabla c|^{2}\mathbb{I}-(\nabla c \otimes\nabla c))\cdot\mathbf{v},\] which is equivalent to \[\frac{\partial}{\partial t}\left(\rho\frac{|\mathbf{v}|^{2}}{2}\right)+ \mathrm{div}\left(\frac{1}{2}\rho|\mathbf{v}|^{2}\mathbf{v}+p(\rho,c)\mathbf{ v}-\mathbb{T}\cdot\mathbf{v}\right)+\mathbb{T}:\nabla\mathbf{v}+\kappa(\rho,c) |\mathbf{v}|^{2}=p(\rho,c)\mathrm{div}\mathbf{v}-\gamma\Delta c\nabla c\cdot \mathbf{v}. 
\tag{3.8}\] Then, we multiply Equation (1.2) by \(\mu\) and, using also (1.1), obtain \[\rho\mu(\partial_{t}c+\mathbf{v}\cdot\nabla c)=\operatorname{div}(b(c)\nabla\mu) \mu+\mu F_{c}.\] Then, using (1.3), we obtain \[\rho\frac{\partial\psi_{0}}{\partial c}(\partial_{t}c+\mathbf{v}\cdot\nabla c)= \operatorname{div}(b(c)\nabla\mu)\mu+\gamma\Delta c(\partial_{t}c+\mathbf{v} \cdot\nabla c)+\mu F_{c}.\] The previous equation can be rewritten using the chain rule as \[\partial_{t}(\rho\psi_{0})+\operatorname{div}(\rho\psi_{0}\mathbf{ v}) -\psi_{0}(\partial_{t}\rho+\operatorname{div}(\rho\mathbf{v}))-\rho\frac{ \partial\psi_{0}}{\partial\rho}(\partial_{t}\rho+\mathbf{v}\cdot\nabla\rho)\] \[=\operatorname{div}(b(c)\nabla\mu)\mu+\gamma\Delta c(\partial_{t }c+\mathbf{v}\cdot\nabla c)+\mu F_{c}.\] We have \(\rho\frac{\partial\psi_{0}}{\partial\rho}(\partial_{t}\rho+\mathbf{v}\cdot \nabla\rho)=\rho\frac{\partial\psi_{0}}{\partial\rho}(-\rho\operatorname{div }(\mathbf{v}))=-p\operatorname{div}(\mathbf{v})\) (see Equation (2.3) for the definition of the pressure). Moreover, we know that \(\Delta c\partial_{t}c=\operatorname{div}(\partial_{t}c\nabla c)-\partial_{t} \left(\frac{|\nabla c|^{2}}{2}\right)\) and, hence, \[\partial_{t}(\rho\psi_{0})+\operatorname{div}(\rho\psi_{0} \mathbf{v})+p\operatorname{div}(\mathbf{v})=\operatorname{div}(b(c)\nabla\mu) \mu+\gamma\left[\operatorname{div}(\partial_{t}c\nabla c)-\partial_{t}\left( \frac{|\nabla c|^{2}}{2}\right)+\Delta c\mathbf{v}\cdot\nabla c\right]\\ +\mu F_{c}. 
\tag{3.9}\] Summing (3.8) and (3.9), we obtain \[\frac{\partial}{\partial t}\left(\rho\frac{|\mathbf{v}|^{2}}{2}+ \rho\psi_{0}+\frac{\gamma}{2}|\nabla c|^{2}\right)+\operatorname{div}\left( \rho\psi_{0}\mathbf{v}+\frac{1}{2}\rho|\mathbf{v}|^{2}\mathbf{v}+p(\rho,c) \mathbf{v}-\mathbb{T}\cdot\mathbf{v}-\gamma\partial_{t}c\nabla c\right)- \operatorname{div}(b(c)\nabla\mu)\mu\\ +\mathbb{T}:\nabla\mathbf{v}+\kappa(\rho,c)|\mathbf{v}|^{2}=\mu F _{c}.\] Now we use the fact that \[\mathbb{T}:\nabla\mathbf{v}=\frac{\nu(c)}{2}\left|\nabla\mathbf{v}+\nabla \mathbf{v}^{T}-\frac{2}{3}\operatorname{div}(\mathbf{v})\mathbb{I}\right|^{2}. \tag{3.10}\] Integrating in space and using the boundary conditions (3.1) ends the proof of the first part of the proposition. To prove the second part, we integrate the equation in time and control the right-hand side. Indeed, due to the assumption on the source term (2.5), we have \[\left|\int_{0}^{t}\int_{\Omega}\mu F_{c}\,\mathrm{d}x\,\mathrm{d}t\right|\leq C \int_{0}^{t}\int_{\Omega}|\mu|\,\mathrm{d}x\,\mathrm{d}t.\] We want to use Lemma 3.6 to control the \(L^{1}\) norm of \(\mu\). Integrating the equation on \(\rho\), we obtain \(\int_{\Omega}\rho\,\mathrm{d}x=\int_{\Omega}\rho_{0}\,\mathrm{d}x>M_{0}\) and thus satisfy the first assumption of the lemma. 
For the second, we notice that we can consider a variant of this lemma such that, instead of asking \(\rho\) to be in \(L^{1+\varepsilon}\), we have the inequality \[\left\|\mathbf{u}-\frac{1}{|\Omega|}\int_{\Omega}\rho\mathbf{u}\right\|_{L^{2}} \leq C\|\nabla\mathbf{u}\|_{L^{2}}+\|\rho\|_{L^{1+\varepsilon}}.\] Using Young's inequality and the fact that the energy term \(\rho\psi_{0}\) contains a term of the form \(\rho^{a}\), we obtain for \(\widetilde{C}\) small enough \[\int_{0}^{t}\int_{\Omega}|\mu|\,\mathrm{d}x\leq C+\widetilde{C}\int_{0}^{t} \int_{\Omega}|\mu|^{2}\,\mathrm{d}x\leq C+CE(t)+\frac{\inf_{c}b(c)}{2}\int_{ \Omega}|\nabla\mu|^{2}\,\mathrm{d}x+C\left|\int_{\Omega}\rho\mu\,\mathrm{d}x \right|.\] Since the energy dissipation controls the third term on the right-hand side, it remains to control the last one. We recall that \(\rho\mu=\rho\frac{\partial\psi_{0}}{\partial c}-\gamma\Delta c\). Using the Neumann boundary conditions on \(c\), it remains to control \(\left|\int_{\Omega}\rho\frac{\partial\psi_{0}}{\partial c}\right|\). Using Lemma 2.2, we obtain \[\left|\int_{\Omega}\rho\frac{\partial\psi_{0}}{\partial c}\,\mathrm{d}x\right| \leq C+CE(t).\] We conclude using Gronwall's lemma.

### Existence of weak solutions for fixed \(\varepsilon_{Q}\)

The weak solutions of system (1.1)-(1.4) are defined as follows. **Definition 3.2**.: We say that \((\rho,\mathbf{v},c,\mu)\) is a weak solution of system (1.1)-(1.4) provided: * \(\rho\geq 0\) and we have the regularity \[\rho\in L^{\infty}(0,T;L^{a}(\Omega)),\] \[\mathbf{v}\in L^{2}(0,T;H^{1}_{0}(\Omega;\mathbb{R}^{3})),\quad\sqrt{ \rho}\mathbf{v}\in L^{\infty}(0,T;L^{2}(\Omega;\mathbb{R}^{3})),\quad\mathbb{T }:\nabla\mathbf{v}\in L^{1}(0,T;L^{1}(\Omega)),\] \[c\in L^{\infty}(0,T;H^{1}(\Omega)),\] \[\mu\in L^{2}(0,T;H^{1}(\Omega)).\] * Equations (1.1)-(1.4) are satisfied in the distributional sense. * The initial conditions (3.2) are satisfied a.e. in \(\Omega\). * The boundary conditions (3.1) are satisfied. 
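Before turning to the approximating scheme, note that the energy (3.5), which drives all the estimates above, is straightforward to evaluate on discrete 1D fields. The sketch below uses the trapezoid rule, \(\psi_{e}(\rho)=\rho^{a-1}/(a-1)\) with \(a=7\) as assumed in Section 2, and a hypothetical bounded \(H\) compatible with (2.8).

```python
import numpy as np

def energy_1d(rho, v, c, x, gamma=1e-2, a=7.0):
    """Trapezoid-rule evaluation of the energy (3.5) on a 1D grid x:
    E = int rho |v|^2/2 + rho psi_0 + (gamma/2)|c_x|^2 dx,
    with psi_0 = rho^(a-1)/(a-1) + H(c) log(rho) + Q(c)."""
    H = lambda c: 1.0 + 0.5 * c                   # hypothetical: bounded, H' > 0
    Q = lambda c: 0.25 * c**2 * (1.0 - c) ** 2    # polynomial double-well (2.8)
    psi0 = rho ** (a - 1.0) / (a - 1.0) + H(c) * np.log(rho) + Q(c)
    cx = np.gradient(c, x)
    f = 0.5 * rho * v**2 + rho * psi0 + 0.5 * gamma * cx**2
    return np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(x))

x = np.linspace(0.0, 1.0, 101)
E0 = energy_1d(np.ones_like(x), np.zeros_like(x), 0.5 + 0.1 * np.sin(2 * np.pi * x), x)
```

Such a discrete evaluation is what the structure-preserving scheme of Section 4 must keep non-increasing (up to the source term) at the discrete level.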
In order to prove the existence of weak solutions, we use an approximating scheme with a small parameter \(\varepsilon>0\), borrowing the idea from [25, 43]. More precisely, let \(X_{n}=\mathrm{span}\{\eta_{i}\}_{i=1,\dots,n}\) be the space spanned by the first \(n\) vectors of a basis of \(H^{1}_{0}(\Omega;\mathbb{R}^{3})\) such that \(X_{n}\subset C^{2}(\overline{\Omega};\mathbb{R}^{3})\). We consider the following problem for \((\rho,\mathbf{v}_{n},c)\) with \(\mathbf{v}_{n}\in X_{n}\) (with coordinates depending on time): \[\partial_{t}\rho+\mathrm{div}(\rho\mathbf{v}_{n})=\varepsilon\Delta\rho, \tag{3.11}\] and, for every \(\eta\in X_{n}\), \[\int_{\Omega}\rho\mathbf{v}_{n}(t)\cdot\eta\,\mathrm{d}x-\int_{ \Omega}\mathbf{m}_{0}\cdot\eta\,\mathrm{d}x-\int_{0}^{t}\int_{\Omega}\rho \mathbf{v}_{n}\otimes\mathbf{v}_{n}:\nabla\eta\,\mathrm{d}x\,\mathrm{d}s-\int _{0}^{t}\int_{\Omega}p(\rho,c)\mathrm{div}(\eta)\,\mathrm{d}x\,\mathrm{d}s\\ +\varepsilon\int_{0}^{t}\int_{\Omega}(\nabla\mathbf{v}_{n}\nabla \rho)\cdot\eta\,\mathrm{d}x\,\mathrm{d}s+\int_{0}^{t}\int_{\Omega}\mathbb{T}: \nabla\eta\,\mathrm{d}x\,\mathrm{d}s+\gamma\int_{0}^{t}\int_{\Omega}(\frac{1} {2}|\nabla c|^{2}\mathbb{I}-(\nabla c\otimes\nabla c)):\nabla\eta\,\mathrm{d}x \,\mathrm{d}s\\ +\int_{\Omega}\int_{0}^{t}\kappa(\rho,c)\mathbf{v}_{n}\cdot\eta \,\mathrm{d}x\,\mathrm{d}s=0. \tag{3.12}\] The equation on the mass fraction reads \[\partial_{t}c+\mathbf{v}_{n}\cdot\nabla c=\frac{1}{\rho}\mathrm{div}(b(c) \nabla\mu)+\frac{F_{c}}{\rho},\quad\mu=\frac{\partial\psi_{0}}{\partial c}- \gamma\frac{\Delta c}{\rho}. \tag{3.13}\] We consider Neumann boundary conditions \[\nabla\rho\cdot\mathbf{n}=b(c)\nabla\mu\cdot\mathbf{n}=\nabla c\cdot\mathbf{n }=0\quad\text{on }\partial\Omega, \tag{3.14}\] and the Dirichlet boundary condition for \(\mathbf{v}_{n}\) is included in the definition of \(X_{n}\). 
Finally, we consider the initial conditions \[\rho(0,\cdot)=\rho_{0,\varepsilon}>0,\quad c(0,\cdot)=c_{0,\varepsilon},\quad\rho\mathbf{v}_{n}(0,\cdot)=\mathbf{m}_{0}, \tag{3.15}\] where \(\rho_{0,\varepsilon}\), \(c_{0,\varepsilon}\) satisfy the Neumann boundary conditions and are smooth approximations of \(\rho_{0},c_{0}\) (when \(\varepsilon\to 0\)).

We now comment on the scheme used above and detail the strategy of the proof. We add the artificial diffusion in (3.11) with the parameter \(\varepsilon>0\). Here, \(\mathbf{v}_{n}\) is fixed and we can conclude the existence of classical solutions to (3.11), which are positive since the initial condition is positive (using the maximum principle). Using this positivity, we conclude the existence of a strong solution to Equation (3.13), which is in fact a fourth-order parabolic equation. Having obtained \(c\), we focus on Equation (3.12) and we prove existence for a small time with Schauder's fixed point theorem. Note the presence of the additional term \(\varepsilon\int(\nabla\mathbf{v}_{n}\nabla\rho)\cdot\eta\), which is useful to cancel energy terms introduced by \(\varepsilon\Delta\rho\) in (3.11). Having obtained existence on a short time interval, we compute the energy of the system and obtain global existence. Then, we pass to the limit \(n\to\infty\). It remains to send \(\varepsilon\) and \(\varepsilon_{Q}\) to \(0\) and obtain solutions of system (1.1)-(1.4). We first turn our attention to Equation (3.11). From [25], we obtain the following proposition and lemma.

**Proposition 3.3**.: _Let \(\Omega\subset\mathbb{R}^{3}\) be a bounded domain of class \(C^{2+\beta}\) for some \(\beta>0\). For a fixed \(\mathbf{v}_{n}\in X_{n}\), there exists a unique solution to Equation (3.11) with Neumann boundary conditions (3.14) and initial condition (3.15).
Furthermore, the mapping \(\mathbf{v}_{n}\mapsto\rho[\mathbf{v}_{n}]\), that assigns to any \(\mathbf{v}_{n}\in X_{n}\) the unique solution of (3.11), takes bounded sets in the space \(C(0,T;C_{0}^{2}(\overline{\Omega},\mathbb{R}^{d}))\) into bounded sets in the space_ \[V:=\{\partial_{t}\rho\in C(0,T;C^{\beta}(\overline{\Omega})),\,\rho\in C(0,T; C^{2+\beta}(\overline{\Omega}))\}.\] **Lemma 3.4**.: _The solutions of (3.11) satisfy_ \[(\inf_{x\in\Omega}\rho(0,x))\exp\left(-\int_{0}^{t}\|\mathrm{div} \,\mathbf{v}_{n}(s)\|_{L^{\infty}(\Omega)}\,\mathrm{d}s\right) \leq\rho(t,x)\] \[\leq(\sup_{x\in\Omega}\rho(0,x))\exp\left(\int_{0}^{t}\|\mathrm{ div}\,\mathbf{v}_{n}(s)\|_{L^{\infty}(\Omega)}\,\mathrm{d}s\right),\] _for all \(t\in[0,T]\) and \(x\in\Omega\)._ Using the latter lemma, if the velocity field is in \(W^{1,\infty}\), the density is bounded from below by a positive constant (provided the initial condition is positive). We now focus on Equation (3.13). **Proposition 3.5**.: _Let \(\rho\) be given such that \(\rho\in C(0,T;C^{2}(\overline{\Omega}))\) and \(\rho\geq\underline{\rho}>0\). Then Equation (3.13) with Neumann boundary conditions (3.14) admits a strong solution. Moreover, the mapping \(\mathbf{v}_{n}\mapsto c[\mathbf{v}_{n}]\) takes bounded sets in the space \(C(0,T;C_{0}^{2}(\overline{\Omega},\mathbb{R}^{3}))\) into bounded sets in the space_ \[W:=\{c\in L^{\infty}(0,T;H^{1}(\Omega))\cap L^{2}(0,T;H^{3}(\Omega))\}. \tag{3.16}\] The existence of a strong solution is based on the remark that the highest order term of this equation is \(-\gamma\frac{b(c)}{\rho}\Delta^{2}c\). Using \(b(c),\rho\geq C>0\) we obtain a fourth-order parabolic equation with smooth coefficients and with zero Neumann boundary conditions. Therefore, we can admit the existence of a strong solution and we focus on the estimates (3.16). 
In the proof, we need the following two lemmas.

**Lemma 3.6** (Lemma 3.2 in [25]).: _Let \(\Omega\subset\mathbb{R}^{3}\) be a bounded Lipschitz domain and let \(M_{0}>0\), \(K>0\). Assume that \(\rho\) is a nonnegative function such that_ \[0<M_{0}\leq\int_{\Omega}\rho\,\mathrm{d}x,\quad\int_{\Omega}\rho^{a}\,\mathrm{d}x\leq K,\quad\text{ with }a>1.\] _Then, there exists a positive constant \(C=C(M_{0},K,a)\) such that the inequality_ \[\left\|\mathbf{u}-\frac{1}{|\Omega|}\int_{\Omega}\rho\mathbf{u}\right\|_{L^{2}(\Omega;\mathbb{R}^{3})}\leq C\|\nabla\mathbf{u}\|_{L^{2}(\Omega;\mathbb{R}^{3\times 3})},\] _holds for any \(\mathbf{u}\in W^{1,2}(\Omega;\mathbb{R}^{3})\)._

**Lemma 3.7** (Theorem 10.17 in [26]).: _Let \(\Omega\subset\mathbb{R}^{3}\) be a bounded Lipschitz domain, and let \(1<p<+\infty\), \(M_{0}>0\), \(K>0\), \(a>1\). Then there exists a positive constant \(C=C(p,M_{0},K,a)\) such that the inequality_ \[\|\mathbf{u}\|_{W^{1,p}(\Omega;\mathbb{R}^{3})}\leq C\left(\|\nabla\mathbf{u}+\nabla^{T}\mathbf{u}-\frac{2}{3}\mathrm{div}\mathbf{u}\,\mathbb{I}\|_{L^{p}(\Omega;\mathbb{R}^{3\times 3})}+\int_{\Omega}\rho|\mathbf{u}|\,\mathrm{d}x\right),\] _holds for any \(\mathbf{u}\in W^{1,p}(\Omega;\mathbb{R}^{3})\) and any non-negative function \(\rho\) such that_ \[0<M_{0}\leq\int_{\Omega}\rho\,\mathrm{d}x,\quad\int_{\Omega}\rho^{a}\,\mathrm{d}x\leq K.\]

Proof of Proposition 3.5.: We admit the existence of solutions and focus on a priori estimates. We multiply Equation (3.13) by \(-\Delta c\).
Using the boundary conditions and integrating in space yields \[\partial_{t}\int_{\Omega}\frac{|\nabla c|^{2}}{2}\,\mathrm{d}x+\gamma\int_{\Omega}b(c)\left|\nabla\left(\frac{\Delta c}{\rho}\right)\right|^{2}\mathrm{d}x\\ =\int_{\Omega}\frac{1}{2}\mathrm{div}(\mathbf{v}_{n})|\nabla c|^{2}-\nabla\mathbf{v}_{n}:\nabla c\otimes\nabla c\,\mathrm{d}x+\int_{\Omega}b(c)\nabla\left(\frac{\partial\psi_{0}}{\partial c}\right)\cdot\nabla\left(\frac{\Delta c}{\rho}\right)\mathrm{d}x-\int_{\Omega}\frac{F_{c}}{\rho}\Delta c\,\mathrm{d}x.\] Here, we have also used the formula (3.7). We use the \(L^{\infty}\) bounds on \(\mathbf{v}_{n}\), \(\mathrm{div}(\mathbf{v}_{n})\), \(b(c)\), \(\rho\), the fact that \(\frac{F_{c}}{\rho}\) is also bounded in \(L^{\infty}\), and the properties (3.3) of \(\partial_{c}\psi_{0}\), and obtain \[\partial_{t}\int_{\Omega}\frac{|\nabla c|^{2}}{2}\,\mathrm{d}x+\gamma\int_{\Omega}b(c)\left|\nabla\left(\frac{\Delta c}{\rho}\right)\right|^{2}\mathrm{d}x\leq C\int_{\Omega}|\nabla c|^{2}\,\mathrm{d}x+C\int_{\Omega}\left|\nabla\frac{\Delta c}{\rho}\right|\mathrm{d}x+C\int_{\Omega}|\Delta c|\,\mathrm{d}x.\] We want to control the last term on the right-hand side. We use Lemma 3.6 with \(\mathbf{u}=\frac{\Delta c}{\rho}(1,0,0)^{T}\) and obtain, together with Neumann boundary conditions on \(c\), \[\left\|\frac{\Delta c}{\rho}\right\|_{L^{2}(\Omega)}\leq C\left\|\nabla\left(\frac{\Delta c}{\rho}\right)\right\|_{L^{2}(\Omega;\mathbb{R}^{3})}. \tag{3.17}\] Then, writing \(\Delta c=\rho\frac{\Delta c}{\rho}\) and using the \(L^{\infty}\) bound on \(\rho\), \[\int_{\Omega}|\Delta c|\,\mathrm{d}x\leq C\left\|\nabla\left(\frac{\Delta c}{\rho}\right)\right\|_{L^{2}(\Omega;\mathbb{R}^{3})}.\] Finally, using Young's inequality and Gronwall's lemma, we obtain \[\sup_{t\in(0,T)}\int_{\Omega}|\nabla c|^{2}\,\mathrm{d}x+\gamma\int_{0}^{T}\int_{\Omega}b(c)\left|\nabla\left(\frac{\Delta c}{\rho}\right)\right|^{2}\mathrm{d}x\,\mathrm{d}t\leq C.
\tag{3.18}\] With Lemma 3.6 (and integrating the equation on \(\rho c\) using also the boundary conditions) we obtain the bound \[c\in L^{\infty}(0,T;H^{1}(\Omega))\cap L^{2}(0,T;H^{3}(\Omega)). \tag{3.19}\] Having defined \(\rho\) and \(c\), we now solve Equation (3.12) with a fixed point argument. We define the operator \[\mathcal{M}[\rho]:X_{n}\to X_{n}^{*},\quad\langle\mathcal{M}[\rho]\mathbf{v}, \mathbf{w}\rangle:=\int_{\Omega}\rho\mathbf{v}\cdot\mathbf{w}dx,\quad\mathbf{ v},\mathbf{w}\in X_{n}.\] This operator ([25]) \(\mathcal{M}[\rho]\) is invertible, and \[\|\mathcal{M}^{-1}[\rho]\|_{\mathcal{L}(X^{*}_{n};X_{n})}\leq\frac{1}{\inf_{ \Omega}\rho},\qquad\|\mathcal{M}^{-1}[\rho_{1}]-\mathcal{M}^{-1}[\rho_{2}]\|_{ \mathcal{L}(X^{*}_{n};X_{n})}\leq C(n,\underline{\rho})\|\rho_{1}-\rho_{2}\|_ {L^{1}(\Omega)}, \tag{3.20}\] for any \(\rho_{1},\rho_{2}\geq\underline{\rho}\). Finally, Equation (3.12) can be reformulated as \[\mathbf{v}_{n}(t)=\mathcal{M}^{-1}[\rho(t)]\left(\mathbf{m}^{*}_{0}+\int_{0}^{ t}\mathcal{N}[\mathbf{v}_{n}(s),\rho(s),c(s)]\,\mathrm{d}s\right), \tag{3.21}\] with \[\langle\mathbf{m}^{*}_{0},\eta\rangle=\int_{\Omega}\mathbf{m}_{0}\cdot\eta\, \mathrm{d}x,\] and \[\langle\mathcal{N}[\mathbf{v}_{n},\rho,c],\eta\rangle=\int_{ \Omega}\left(\rho\mathbf{v}_{n}\otimes\mathbf{v}_{n}-\mathbb{T}-\frac{\gamma }{2}|\nabla c|^{2}\mathbb{I}+\gamma\nabla c\otimes\nabla c\right) :\nabla\eta+p(\rho,c)\mathrm{div}(\eta)\] \[-(\varepsilon\nabla\mathbf{v}_{n}\nabla\rho+\kappa(\rho,c) \mathbf{v}_{n})\cdot\eta\,\mathrm{d}x.\] To prove that Equation (3.21) has a solution, we apply Schauder's fixed-point theorem in a short time interval \([0,T(n)]\). Then, we need uniform estimates to iterate the procedure. **Lemma 3.8** (Schauder Fixed Point Theorem).: _Let \(X\) be a Hausdorff topological vector space and \(S\) be a closed, bounded, convex, and non-empty subset of \(X\). 
Then, any compact operator \(A:S\to S\) has at least one fixed point._

With the notation of Lemma 3.8, we call \(A\) the operator from Equation (3.21) and \(S=B(\mathbf{u}_{0,n})\) the unit ball with center \(\mathbf{u}_{0,n}\) in \(C([0,T];X_{n})\), where \(\mathbf{u}_{0,n}\) is defined by \[\int_{\Omega}\rho_{0}\mathbf{u}_{0,n}\cdot\eta\,\mathrm{d}x=\int_{\Omega}\mathbf{m}_{0}\cdot\eta\,\mathrm{d}x,\quad\forall\eta\in X_{n}.\] More precisely, we consider \[A: S\to C([0,T];X_{n}),\] \[\mathbf{u}\mapsto\mathcal{M}^{-1}[\rho(t)]\left(\mathbf{m}^{*}_{0}+\int_{0}^{t}\mathcal{N}[\mathbf{u}(s),\rho(s),c(s)]\,\mathrm{d}s\right).\]

**Lemma 3.9**.: _There exists a time \(T=T(n)\) small enough such that the operator \(A\) maps \(S\) into itself. Moreover, the mapping is continuous._

Proof.: By definition of \(A\) and \(\mathbf{m}^{*}_{0}\), we need to prove that \(\|\mathcal{M}^{-1}[\rho(t)]\int_{0}^{t}\mathcal{N}(s)\,\mathrm{d}s\|_{C(0,T;X_{n})}\leq 1\). With properties (3.20), it is sufficient to prove that there exists a final time \(T\) small enough such that \[\left\|\int_{0}^{t}\mathcal{N}(s)\,\mathrm{d}s\right\|_{C(0,T;X^{*}_{n})}\leq\inf_{\Omega_{T}}\rho.\] Note that the infimum of \(\rho\) needs to be taken over the set \(\Omega_{T}=(0,T)\times\Omega\) as \(\rho\) depends on time. But, since we only consider small times, using Lemma 3.4 we see that this infimum is bounded from below. More precisely, for every \(T_{0}\), there exists \(C(T_{0})>0\) such that for every \(T\leq T_{0}\), \(\inf_{\Omega_{T}}\rho\geq C(T_{0})\).
We recall that \(X_{n}\subset C^{2}(\overline{\Omega};\mathbb{R}^{3})\) is finite-dimensional and we estimate \[\int_{0}^{t}\int_{\Omega}(\rho\mathbf{u}\otimes\mathbf{u}-\mathbb{T}-\frac{\gamma}{2}|\nabla c|^{2}\mathbb{I}+\gamma\nabla c\otimes\nabla c):\nabla\eta+p(\rho,c)\mathrm{div}(\eta)-(\varepsilon\nabla\mathbf{u}\nabla\rho+\kappa(\rho,c)\mathbf{u})\cdot\eta\,\mathrm{d}x\,\mathrm{d}s\] \[\leq\sqrt{T}(\|\eta\|_{X_{n}}+\|\nabla\eta\|_{X_{n}})(\|\rho\|_{L^{\infty}}\|\mathbf{u}\|_{L^{\infty}}^{2}+C\|\nu(c)\|_{L^{2}}\|\nabla\mathbf{u}\|_{L^{\infty}}+C\|\nabla c\|_{L^{4}}^{2}+\|\rho\|_{L^{\infty}}^{2}+\|\rho\|_{L^{\infty}}\|H(c)\|_{L^{\infty}}\] \[+\varepsilon\|\mathbf{u}\|_{X_{n}}\|\nabla\rho\|_{L^{\infty}}+\|\mathbf{u}\|_{L^{\infty}}\|\kappa(\rho,c)\|_{L^{2}}).\] Using the assumptions of Subsection 2.1 and Propositions 3.3-3.5, we prove that all the quantities on the right-hand side are bounded, except \(\|\nabla c\|_{L^{4}}\), which requires an additional argument. Note that from (3.16), we deduce that \(\nabla c\) is bounded in \(L^{2}(0,T;H^{2}(\Omega))\cap L^{\infty}(0,T;L^{2}(\Omega))\) (by a constant which depends on \(\rho\), and also on \(\|\mathbf{u}\|_{L^{\infty}},\|\nabla\mathbf{u}\|_{L^{\infty}}\)). By Sobolev embedding with \(d=3\), and interpolation, we obtain an \(L^{4}(0,T;L^{4}(\Omega))\) bound. With the previous estimates, and for \(T\) small enough, we obtain the result.

**Lemma 3.10**.: _The image of \(S\) under \(A\) is in fact a compact subset of \(S\). Therefore, \(A\) admits a fixed point._

Proof.: We want to apply the Arzelà-Ascoli theorem to deduce the relative compactness of \(A(S)\). From the previous computation, and using the fact that \(X_{n}\) is finite-dimensional, we can prove that \(A(S)\) is pointwise relatively compact. It remains to prove its equicontinuity.
We want to estimate for \(t^{\prime}\leq t\) the \(X_{n}\) norm of \(\mathcal{M}^{-1}[\rho(t)]\left(\mathbf{m}_{0}^{*}+\int_{0}^{t}\mathcal{N}[\mathbf{u}(s),\rho(s),c(s)]\,\mathrm{d}s\right)-\mathcal{M}^{-1}[\rho(t^{\prime})]\left(\mathbf{m}_{0}^{*}+\int_{0}^{t^{\prime}}\mathcal{N}[\mathbf{u}(s),\rho(s),c(s)]\,\mathrm{d}s\right)\). For simplicity, we write \(\mathcal{N}(s):=\mathcal{N}[\mathbf{u}(s),\rho(s),c(s)]\), and rewrite the previous difference as \[\left(\mathcal{M}^{-1}[\rho(t)]-\mathcal{M}^{-1}[\rho(t^{\prime})]\right)\left(\mathbf{m}_{0}^{*}+\int_{0}^{t}\mathcal{N}(s)\,\mathrm{d}s\right)+\mathcal{M}^{-1}[\rho(t^{\prime})]\left(\int_{t^{\prime}}^{t}\mathcal{N}(s)\,\mathrm{d}s\right).\] For the first term, we use (3.20) and the Hölder continuity of \(\rho\) given by Proposition 3.3. For the second term, we repeat the computations in the proof of Lemma 3.9. This proves the claim.

We have thus obtained existence on a small interval \([0,T(n)]\). To iterate the procedure in order to prove that \(T(n)=T\), it remains to find a bound on \(\mathbf{v}_{n}\) independent of \(T(n)\).

**Lemma 3.11**.: \(\mathbf{v}_{n}\) _is bounded in \(X_{n}\) independently of \(T(n)\)._

Proof.: Note that we do not ask for a bound independent of \(n\) but only of \(T(n)\) since we use in the proof the fact that \(X_{n}\) is finite-dimensional. The proof uses the energy structure of the equation. We differentiate Equation (3.12) in time and take \(\eta=\mathbf{v}_{n}\) as a test function.
This yields \[\frac{d}{dt}\int_{\Omega}\rho\frac{|\mathbf{v}_{n}|^{2}}{2}\, \mathrm{d}x+\frac{1}{2}\int_{\Omega}\left(\partial_{t}\rho+\mathrm{div}(\rho \mathbf{v}_{n})\right)|\mathbf{v}_{n}|^{2}\,\mathrm{d}x-\int_{\Omega}p(\rho, c)\mathrm{div}(\mathbf{v}_{n})\,\mathrm{d}x-\frac{\varepsilon}{2}\int_{ \Omega}\Delta\rho|\mathbf{v}_{n}|^{2}\,\mathrm{d}x\\ +\int_{\Omega}\mathbb{T}:\nabla\mathbf{v}_{n}\,\mathrm{d}x+ \gamma\int_{\Omega}(\frac{1}{2}|\nabla c|^{2}\mathbb{I}-(\nabla c\otimes\nabla c )):\nabla\mathbf{v}_{n}\,\mathrm{d}x+\int_{\Omega}\kappa(\rho,c)|\mathbf{v}_ {n}|^{2}\,\mathrm{d}x=0. \tag{3.22}\] Here we used \[\int_{\Omega}\partial_{t}(\rho\mathbf{v}_{n})\cdot\mathbf{v}_{n }=\frac{1}{2}\frac{d}{dt}\int_{\Omega}\rho|\mathbf{v}_{n}|^{2}\,\mathrm{d}x+ \frac{1}{2}\int_{\Omega}\partial_{t}\rho|\mathbf{v}_{n}|^{2}\,\mathrm{d}x,\] \[\int_{\Omega}\mathrm{div}(\rho\mathbf{v}_{n}\otimes\mathbf{v}_{n })\cdot\mathbf{v}_{n}\,\mathrm{d}x=\frac{1}{2}\int_{\Omega}\mathrm{div}(\rho \mathbf{v}_{n})|\mathbf{v}_{n}|^{2}\,\mathrm{d}x,\] \[\varepsilon\int_{\Omega}(\nabla\mathbf{v}_{n}\nabla\rho)\cdot \mathbf{v}_{n}\,\mathrm{d}x=-\frac{\varepsilon}{2}\int_{\Omega}\Delta\rho| \mathbf{v}_{n}|^{2}\,\mathrm{d}x.\] With (3.11), we see that (3.22) reads \[\frac{d}{dt}\int_{\Omega}\rho\frac{|\mathbf{v}_{n}|^{2}}{2}\, \mathrm{d}x-\int_{\Omega}p(\rho,c)\mathrm{div}(\mathbf{v}_{n})\,\mathrm{d}x+ \int_{\Omega}\mathbb{T}:\nabla\mathbf{v}_{n}\,\mathrm{d}x\\ +\gamma\int_{\Omega}(\frac{1}{2}|\nabla c|^{2}\mathbb{I}-(\nabla c \otimes\nabla c)):\nabla\mathbf{v}_{n}\,\mathrm{d}x+\int_{\Omega}\kappa(\rho, c)|\mathbf{v}_{n}|^{2}\,\mathrm{d}x=0. 
\tag{3.23}\] Now as in (3.9), we obtain with the artificial viscosity \[\partial_{t}(\rho\psi_{0})+\operatorname{div}(\rho\psi_{0}\mathbf{v }_{n})+p\mathrm{div}(\mathbf{v}_{n})-\psi_{0}\varepsilon\Delta\rho-\varepsilon \rho\frac{\partial\psi_{0}}{\partial\rho}\Delta\rho=\operatorname{div}(b(c) \nabla\mu)\mu\\ +\operatorname{div}(\partial_{t}c\nabla c)-\partial_{t}\left(\frac {|\nabla c|^{2}}{2}\right)+\gamma\Delta c\mathbf{v}_{n}\cdot\nabla c+\mu F_{c}.\] Integrating this equation in space, and summing with (3.23), we obtain \[\frac{d}{dt}\int_{\Omega}\rho\left(\frac{|\mathbf{v}_{n}|^{2}}{2 }+\psi_{0}\right)+\gamma\frac{|\nabla c|^{2}}{2}\,\mathrm{d}x+\varepsilon\int_ {\Omega}\nabla\left(\psi_{0}+\rho\frac{\partial\psi_{0}}{\partial\rho}\right) \cdot\nabla\rho\,\mathrm{d}x\\ +\int_{\Omega}\mathbb{T}:\nabla\mathbf{v}_{n}\,\mathrm{d}x+\int_ {\Omega}b(c)|\nabla\mu|^{2}\,\mathrm{d}x+\int_{\Omega}\kappa(\rho,c)|\mathbf{ v}_{n}|^{2}\,\mathrm{d}x=\int_{\Omega}\mu F_{c}\,\mathrm{d}x. \tag{3.24}\] By definition of \(\psi_{0}\), we obtain \[\varepsilon\int_{\Omega}\nabla\left(\psi_{0}+\rho\frac{\partial \psi_{0}}{\partial\rho}\right)\cdot\nabla\rho\,\mathrm{d}x=\varepsilon\int_{ \Omega}\left(((a-1)+(a-1)^{2})\rho^{a-2}+\frac{H(c)}{\rho}\right)|\nabla\rho| ^{2}\,\mathrm{d}x\\ +\varepsilon\int_{\Omega}\left(H^{\prime}(c)(\log(\rho)+1)+Q^{ \prime}(c)\right)\nabla c\cdot\nabla\rho\,\mathrm{d}x.\] Therefore, the energy reads \[\frac{d}{dt}\int_{\Omega}\rho\left(\frac{|\mathbf{v}_{n}|^{2}}{2 }+\psi_{0}\right)+\gamma\frac{|\nabla c|^{2}}{2}\,\mathrm{d}x+\varepsilon\int_ {\Omega}\left(((a-1)+(a-1)^{2})\rho^{a-2}+\frac{H(c)}{\rho}\right)|\nabla\rho |^{2}\,\mathrm{d}x\\ +\int_{\Omega}\mathbb{T}:\nabla\mathbf{v}_{n}\,\mathrm{d}x+\int_ {\Omega}b(c)|\nabla\mu|^{2}\,\mathrm{d}x+\int_{\Omega}\kappa(\rho,c)|\mathbf{ v}_{n}|^{2}\,\mathrm{d}x=\int_{\Omega}\mu F_{c}\,\mathrm{d}x\\ -\varepsilon\int_{\Omega}\left(H^{\prime}(c)(\log(\rho)+1)+Q^{ \prime}(c)\right)\nabla 
c\cdot\nabla\rho\,\mathrm{d}x. \tag{3.25}\] We need to prove that the right-hand side can be controlled in terms of the left-hand side to obtain estimates. For the first term on the right-hand side, we treat it as in the proof of Proposition 3.1. For the second term, we know, by assumption on \(H\) and \(Q\), and the fact that \((\log(\rho)+1)^{2}\) is bounded by a constant times \(\frac{1}{\rho}+(a-1)\rho^{a-2}\), that it can be bounded in terms of the left-hand side. Note that we used the hypothesis \(|Q^{\prime}(c)|\leq C.\) This is based on the fact that \(Q\) is in fact \(Q_{\varepsilon_{Q}}\) so that we have \(|Q^{\prime}(c)|\leq C(\frac{1}{\varepsilon_{Q}})\) with a constant that blows up when \(\varepsilon_{Q}\) is sent to \(0\). As we intend to send \(\varepsilon_{Q}\to 0\) in the next step, it is important to notice that we can still obtain this energy inequality since in fact the term \(\varepsilon\int_{\Omega}Q^{\prime}(c)\nabla c\cdot\nabla\rho\,\mathrm{d}x\) can be estimated by \(\frac{\varepsilon}{4}\int_{\Omega}\left(((a-1)+(a-1)^{2})\rho^{a-2}+\frac{H(c)}{\rho}\right)|\nabla\rho|^{2}\,\mathrm{d}x\) and \(\int_{\Omega}\varepsilon C(\frac{1}{\varepsilon_{Q}})|\nabla c|^{2}\,\mathrm{d}x\). Since \(\varepsilon\) will be sent to \(0\) before \(\varepsilon_{Q}\), the energy inequality will still hold independently of \(\varepsilon_{Q}\) in the limit \(\varepsilon\to 0\). With Gronwall's lemma, and properties of the tensor \(\mathbb{T}\), we deduce that \(\mathbf{v}_{n}\) is bounded in \(L^{2}(0,T(n);H^{1}(\Omega;\mathbb{R}^{3}))\) independently of \(T(n)\). Since all norms are equivalent on the finite-dimensional space \(X_{n}\), it is also bounded in \(L^{1}(0,T(n);W^{1,\infty}(\Omega;\mathbb{R}^{3}))\). Therefore, we can apply the maximum principle stated in Lemma 3.4, and obtain that the density \(\rho\) is bounded from below by a constant independent of \(T(n)\).
Then, using once again the energy inequality, we obtain that \(\mathbf{v}_{n}\) is bounded uniformly in time in \(L^{2}(\Omega;\mathbb{R}^{3})\). This procedure can be repeated for every final time \(T\). Finally, we are left with the following proposition.

**Proposition 3.12**.: _For any fixed \(n\) and \(T\), there exists a solution (\(\rho,c,\mathbf{v}_{n}\)) defined on \((0,T)\) (with appropriate regularity) to (3.11)-(3.13)-(3.12) subject to boundary conditions (3.14) and initial conditions (3.15). Moreover, this solution satisfies the energy dissipation inequality_ \[E(t)+\varepsilon\int_{\Omega_{t}}\left(((a-1)+(a-1)^{2})\rho^{a-2}+\frac{H(c)}{\rho}\right)|\nabla\rho|^{2}\,\mathrm{d}x\,\mathrm{d}t\\ +\int_{\Omega_{t}}\mathbb{T}:\nabla\mathbf{v}_{n}\,\mathrm{d}x\,\mathrm{d}t+\int_{\Omega_{t}}b(c)|\nabla\mu|^{2}\,\mathrm{d}x\,\mathrm{d}t+\int_{\Omega_{t}}\kappa(\rho,c)|\mathbf{v}_{n}|^{2}\,\mathrm{d}x\,\mathrm{d}t\leq C+CE(0), \tag{3.26}\] _where_ \[E(t)=\int_{\Omega}\rho\left(\frac{|\mathbf{v}_{n}|^{2}}{2}+\psi_{0}\right)+\gamma\frac{|\nabla c|^{2}}{2}\,\mathrm{d}x,\] _and with a constant \(C=C\left(1,\frac{\varepsilon}{\varepsilon_{Q}}\right)\)._ Now, we need to find estimates, independent of \(n\), to pass to the limit \(n\to\infty\). Since \(\rho\) and \(c\) depend on \(n\), we write \(\rho_{n}\) and \(c_{n}\) from now on.

**Proposition 3.13**.: _We have the following estimates uniformly in \(n\) and \(\varepsilon\):_ 1. \(\{\rho_{n}\psi_{0}\}\) _in_ \(L^{\infty}(0,T;L^{1}(\Omega))\)_,_ 2. \(\{\rho_{n}\}\) _in_ \(L^{\infty}(0,T;L^{a}(\Omega))\)_,_ 3. \(\{\mathbb{T}:\nabla\mathbf{v}_{n}\}\) _in_ \(L^{1}(0,T;L^{1}(\Omega))\)_,_ 4. \(\{\sqrt{\rho_{n}}\mathbf{v}_{n}\}\) _in_ \(L^{\infty}(0,T;L^{2}(\Omega;\mathbb{R}^{3}))\)_,_ 5. \(\{\sqrt{b(c_{n})}\nabla\mu_{n}\}\) _in_ \(L^{2}(0,T;L^{2}(\Omega;\mathbb{R}^{3}))\)_,_ 6. \(\{\mathbf{v}_{n}\}\) _in_ \(L^{2}(0,T;H^{1}_{0}(\Omega;\mathbb{R}^{3}))\)_,_ 7.
\(\{\sqrt{\varepsilon}\nabla\rho_{n}\}\) _in_ \(L^{2}(0,T;L^{2}(\Omega))\)_,_ 8. \(\{c_{n}\}\) _in_ \(L^{\infty}(0,T;H^{1}(\Omega))\)_,_ 9. \(\{\rho_{n}\partial_{c}\psi_{0}\}\) _in_ \(L^{\infty}(0,T;L^{r}(\Omega))\) _for_ \(r<\frac{6a}{6+a}\)_,_ 10. \(\{\mu_{n}\}\) _in_ \(L^{2}(0,T;H^{1}(\Omega))\)_,_ 11. \(\{\rho_{n}\mu_{n}\}\) _in_ \(L^{2}(0,T;L^{6a/(6+a)}(\Omega))\)_,_ 12. \(\{c_{n}\}\) _in_ \(L^{2}(0,T;W^{2,r}(\Omega))\cap L^{2+\nu}(0,T;W^{1,2+\nu})\) _for some_ \(\nu>0\)_,_ 13. \(\{\rho_{n}c_{n}\}\) _in_ \(L^{\infty}(0,T;L^{\frac{6a}{6+a}}(\Omega))\)_,_ 14. \(\{\rho_{n}c_{n}\mathbf{v}_{n}\}\) _in_ \(L^{2}(0,T;L^{\frac{6a}{3+4a}}(\Omega))\)_,_ 15. \(\{p(\rho_{n},c_{n})\}\) _in_ \(L^{1+\tilde{\nu}}((0,T)\times\Omega)\) _for some_ \(\tilde{\nu}>0\)_._

Proof.: Estimates (A1)-(A2)-(A3)-(A4)-(A5) follow immediately from the energy inequality (3.26). Estimate (A6) is the result of Lemma 3.7 and estimates (A2)-(A3)-(A4). To obtain estimate (A7), we multiply Equation (3.11) by \(\rho_{n}\), and using integration by parts, we obtain \[2\varepsilon\int_{0}^{T}\int_{\Omega}|\nabla\rho_{n}|^{2}\,\mathrm{d}x\,\mathrm{d}t\leq\|\rho_{0}\|_{L^{2}(\Omega)}^{2}+\|\rho_{n}\|_{L^{\infty}(0,T;L^{2}(\Omega))}^{2}+\|\rho_{n}\|_{L^{2}(0,T;L^{4}(\Omega))}^{2}\|\nabla\mathbf{v}_{n}\|_{L^{2}(0,T;L^{2}(\Omega;\mathbb{R}^{3\times 3}))}.\] Using (A2) and (A6), we deduce (A7). To prove Estimate (A8), we first notice that inequality (3.26) provides the uniform bound on \(\{\nabla c_{n}\}\) in \(L^{2}(0,T;L^{2}(\Omega))\). To conclude with Lemma 3.6, we need to bound \(\int_{\Omega}\rho_{n}c_{n}\). Combining Equations (3.11)-(3.13), we obtain \[\partial_{t}(\rho_{n}c_{n})+\mathrm{div}(\rho_{n}c_{n}\mathbf{v}_{n})=-\varepsilon c_{n}\Delta\rho_{n}+\mathrm{div}(b(c_{n})\nabla\mu_{n})+F_{c}.\] Integrating in space, and using the boundary conditions, Estimate (A7), the \(L^{2}\) bound on \(\{\nabla c_{n}\}\), and Assumption 2.5, we obtain that \(\{\int_{\Omega}\rho_{n}c_{n}\}\) is bounded in \(L^{\infty}(0,T)\). We deduce Estimate (A8).
Estimate (A9) follows from the definition of \(\psi_{0}\) and Estimate (A1). Estimate (A10) follows from Estimates (A5)-(A9) and Lemma 3.6. Estimate (A11) follows from Estimates (A2)-(A10). Estimate (A12) is a consequence of Equation (1.3), the previous estimates and interpolation. The next two estimates are a consequence of the other estimates and Sobolev embeddings. Finally, the last estimate on the pressure can be adapted from [20, Subsection 2.5]. This estimate is useful once we obtain the a.e. convergence of \(\rho_{n}\) and \(c_{n}\): it then yields strong convergence of \(p(\rho_{n},c_{n})\) in \(L^{1}\) by Vitali's convergence theorem.

From [25], we also obtain the following proposition.

**Proposition 3.14**.: _There exists \(r>1\) and \(p>2\) such that_ \[\partial_{t}\rho_{n},\Delta\rho_{n}\quad\text{are bounded in }L^{r}((0,T)\times\Omega),\] \[\nabla\rho_{n}\quad\text{is bounded in }L^{p}(0,T;L^{2}(\Omega,\mathbb{R}^{3})),\] _independently of \(n\) (but not independently of \(\varepsilon\))._

With all the previous bounds, we can pass to the limit as \(n\to\infty\) and obtain the equations and energy estimates in a weak formulation. Since the passage to the limit \(n\to\infty\) is simpler than the next passage \(\varepsilon\to 0\), we only detail the latter. Indeed, as \(n\to\infty\), we easily obtain strong convergence of \(\rho\), which considerably simplifies the passage to the limit in the nonlinear terms. So we assume that we can pass to the limit and that the bounds obtained in Proposition 3.13 still hold independently of \(\varepsilon\). It remains now to send \(\varepsilon\) to \(0\).
We recall the equations that we want to pass to the limit into: \[\partial_{t}\rho_{\varepsilon}+\operatorname{div}(\rho_{\varepsilon}\mathbf{v}_{\varepsilon})=\varepsilon\Delta\rho_{\varepsilon}, \tag{3.27}\] \[\partial_{t}(\rho_{\varepsilon}c_{\varepsilon})+\operatorname{div}(\rho_{\varepsilon}c_{\varepsilon}\mathbf{v}_{\varepsilon})=-\varepsilon c_{\varepsilon}\Delta\rho_{\varepsilon}+\operatorname{div}(b(c_{\varepsilon})\nabla\mu_{\varepsilon})+F_{c_{\varepsilon}}, \tag{3.28}\] and for every \(\eta\) (sufficiently regular) \[\int_{\Omega}\rho_{\varepsilon}\mathbf{v}_{\varepsilon}(t)\cdot\eta\,\mathrm{d}x-\int_{\Omega}\mathbf{m}_{0}\cdot\eta\,\mathrm{d}x-\int_{0}^{t}\int_{\Omega}\rho_{\varepsilon}\mathbf{v}_{\varepsilon}\otimes\mathbf{v}_{\varepsilon}:\nabla\eta\,\mathrm{d}x\,\mathrm{d}s-\int_{0}^{t}\int_{\Omega}p(\rho_{\varepsilon},c_{\varepsilon})\operatorname{div}(\eta)\,\mathrm{d}x\,\mathrm{d}s\\ +\varepsilon\int_{0}^{t}\int_{\Omega}(\nabla\mathbf{v}_{\varepsilon}\nabla\rho_{\varepsilon})\cdot\eta\,\mathrm{d}x\,\mathrm{d}s+\int_{0}^{t}\int_{\Omega}\mathbb{T}_{\varepsilon}:\nabla\eta\,\mathrm{d}x\,\mathrm{d}s+\gamma\int_{0}^{t}\int_{\Omega}(\frac{1}{2}|\nabla c_{\varepsilon}|^{2}\mathbb{I}-(\nabla c_{\varepsilon}\otimes\nabla c_{\varepsilon})):\nabla\eta\,\mathrm{d}x\,\mathrm{d}s\\ +\int_{0}^{t}\int_{\Omega}\kappa(\rho_{\varepsilon},c_{\varepsilon})\mathbf{v}_{\varepsilon}\cdot\eta\,\mathrm{d}x\,\mathrm{d}s=0. \tag{3.29}\] Using Proposition 3.13, which yields uniform estimates in \(\varepsilon\), we pass to the limit in the previous equations. The difficult terms are the ones involving nonlinear combinations. Indeed, it is not clear that we can obtain strong convergence of \(\rho_{\varepsilon}\) as we have no estimates on higher-order derivatives. We use the following lemma, see [43].
**Lemma 3.15**.: _Let \(g_{n}\), \(h_{n}\) converge weakly to \(g\), \(h\) respectively in \(L^{p_{1}}(0,T;L^{p_{2}}(\Omega))\), \(L^{q_{1}}(0,T;L^{q_{2}}(\Omega))\) where \(1\leq p_{1},p_{2}\leq+\infty\) and_ \[\frac{1}{p_{1}}+\frac{1}{q_{1}}=\frac{1}{p_{2}}+\frac{1}{q_{2}}=1.\] _We assume in addition that_ \[\frac{\partial g_{n}}{\partial t}\quad\text{is bounded in }L^{1}(0,T;W^{-m,1}(\Omega))\text{ for some }m\geq 0\text{ independent of }n, \tag{3.30}\] _and_ \[\|h_{n}-h_{n}(t,\cdot+\xi)\|_{L^{q_{1}}(0,T;L^{q_{2}}(\Omega))}\to 0\quad\text{as }|\xi|\to 0\text{, uniformly in }n. \tag{3.31}\] _Then, \(g_{n}h_{n}\) converges to \(gh\) in the sense of distributions._

**Remark 3.16**.: This lemma admits many variants, and it is possible to identify the weak limit of the products with lower regularity; we refer for instance to [52].

We want to apply the previous lemma to the terms \(\rho_{\varepsilon}\mathbf{v}_{\varepsilon}\), \(\rho_{\varepsilon}c_{\varepsilon}\), \(\rho_{\varepsilon}\mu_{\varepsilon}\), \(\rho_{\varepsilon}c_{\varepsilon}^{2}\), \(\rho_{\varepsilon}\mathbf{v}_{\varepsilon}\otimes\mathbf{v}_{\varepsilon}\), \(\rho_{\varepsilon}\mathbf{v}_{\varepsilon}c_{\varepsilon}\). We admit that \(\frac{\partial\rho_{\varepsilon}}{\partial t}\), \(\frac{\partial\rho_{\varepsilon}\mathbf{v}_{\varepsilon}}{\partial t}\) and \(\frac{\partial\rho_{\varepsilon}c_{\varepsilon}}{\partial t}\) satisfy (3.30) by using Proposition 3.13 and Equations (3.27)-(3.28)-(3.29). The compactness in space required in (3.31) also uses Proposition 3.13. We refer also to [20, Subsection 3.1] for similar results. The terms \(\varepsilon c_{\varepsilon}\Delta\rho_{\varepsilon}\) and \(\varepsilon\int_{0}^{t}\int_{\Omega}(\nabla\mathbf{v}_{\varepsilon}\nabla\rho_{\varepsilon})\cdot\eta\,\mathrm{d}x\,\mathrm{d}s\) converge to \(0\) (the first one in the distributional sense) thanks to estimates (A7)-(A8).
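The compactness hypothesis (3.31) in Lemma 3.15 is not a technicality: without it, the product of two weakly converging sequences need not converge to the product of the weak limits. A classical toy example, checked numerically below (the test function and the discretization are our own arbitrary choices), is \(g_{n}=h_{n}=\sin(n\pi x)\) on \((0,1)\): both factors converge weakly to \(0\), while the product \(g_{n}h_{n}=\sin^{2}(n\pi x)\) converges weakly to \(1/2\).

```python
import numpy as np

# Toy illustration (not part of the proof) of why a compactness hypothesis
# such as (3.31) is needed in Lemma 3.15: with g_n = h_n = sin(n*pi*x) on
# (0,1), both factors converge weakly to 0 in L^2, but the product
# sin^2(n*pi*x) converges weakly to 1/2.  The test function phi is an
# arbitrary choice for the experiment.
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
phi = np.exp(-x)                           # smooth test function

def pair(f):
    """Trapezoidal approximation of int_0^1 f(x) phi(x) dx."""
    y = f * phi
    return 0.5 * np.sum(y[1:] + y[:-1]) * dx

n = 400
g = np.sin(n * np.pi * x)
weak_g = pair(g)                           # pairing of g_n with phi
weak_gg = pair(g * g)                      # pairing of g_n*h_n with phi
half = 0.5 * pair(np.ones_like(x))         # pairing of the constant 1/2

assert abs(weak_g) < 1e-3                  # g_n converges weakly to 0
assert abs(weak_gg - half) < 1e-3          # g_n h_n converges weakly to 1/2
```

In the proof above, the time-derivative bound (3.30) on one factor and the space translation estimate (3.31) on the other are precisely what rules out this oscillatory scenario.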
It remains to pass to the limit in (i.e., to identify the weak limits of) \[p(\rho_{\varepsilon},c_{\varepsilon}),\quad\frac{1}{2}|\nabla c_{\varepsilon}|^{2},\quad\nabla c_{\varepsilon}\otimes\nabla c_{\varepsilon}, \tag{3.32}\] \[b(c_{\varepsilon})\nabla\mu_{\varepsilon},\quad F_{c_{\varepsilon}}(\rho_{\varepsilon},c_{\varepsilon}),\quad\rho_{\varepsilon}\partial_{c}\psi_{0}. \tag{3.33}\] The convergence of the last term is used to identify \(\rho\mu\). To prove the previous convergences, we need strong compactness in \(L^{2}\) of \(c_{\varepsilon},\nabla c_{\varepsilon}\) and a.e. convergence of \(\rho_{\varepsilon}\) in order to use Vitali's convergence theorem. These follow from the arguments in [4] and [20, Sections 3.3 and 3.4]. Altogether, we can pass to the limit in every term of the equations. This concludes the argument.

### Sending \(\varepsilon_{Q}\to 0\)

The last step in our proof is to let \(\varepsilon_{Q}\) vanish and to recover the existence of weak solutions for the double-well potential \(Q(c)=\frac{1}{4}c^{2}(1-c)^{2}.\) Since we have the energy estimates from before, the work is essentially the same, but we have to be careful about two points. The first one is to indeed have an energy estimate independent of \(\varepsilon_{Q}\). We discussed this point after Equation (3.25) and, hence, we do not repeat it here. The second point concerns the estimates obtained in Proposition 3.13. However, the estimates are essentially the same, except for estimate (A9) which becomes \[\{\rho\partial_{c}\psi_{0}\}\text{ in }L^{\infty}(0,T;L^{\frac{2a}{a+2}}(\Omega)). \tag{3.34}\] This can be proved by noting that, when \(\varepsilon_{Q}\approx 0\), we have \(\rho Q^{\prime}_{\varepsilon_{Q}}(c)\approx\rho c^{3}\) for large \(c\), and by using estimates (A2)-(A8). Altogether, the reasoning to pass to the limit is the same and we conclude.
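The cubic growth of \(Q^{\prime}\) invoked above can be checked on the unregularized double-well potential itself; the following side computation (not part of the proof, constants and tolerances arbitrary) verifies the closed form of \(Q^{\prime}\) and its behaviour for large \(c\).

```python
import numpy as np

# Side computation (not part of the proof): for the double-well potential
# Q(c) = (1/4) c^2 (1-c)^2 one has Q'(c) = (1/2) c (1-c)(1-2c), which
# grows like c^3 for large c -- the growth used to obtain estimate (3.34).
def Q(c):
    return 0.25 * c**2 * (1.0 - c)**2

def Qprime(c):
    return 0.5 * c * (1.0 - c) * (1.0 - 2.0 * c)

# check the closed form of Q' against a centered finite difference
c = np.linspace(-2.0, 3.0, 1001)
h = 1e-6
fd = (Q(c + h) - Q(c - h)) / (2.0 * h)
assert np.max(np.abs(fd - Qprime(c))) < 1e-6

# check the cubic growth Q'(c) ~ c^3 at large c
big = 1.0e3
assert abs(Qprime(big) / big**3 - 1.0) < 1e-2
```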
## 4 Numerical scheme for G-NSCH model

We recall that, in the numerical part, we use the assumptions on the functionals provided in Section 2. These particular choices lead to stability issues and to degeneracy of the mobility and of the friction in certain regions. Indeed, our model comprises a Navier-Stokes part that needs to be stabilized to be simulated, and a degenerate Cahn-Hilliard system that is well known to be difficult to simulate numerically (see our introductory section). We here propose a numerical scheme that combines recent advances in numerical analysis. We use the numerical scheme for the simplified variant of the compressible NSCH system in [33] and the fast structure-preserving schemes for degenerate parabolic equations [36, 37]. Namely, we adapt the relaxation [38] of the Navier-Stokes part as used in [33]. The Cahn-Hilliard part is stabilized using the SAV method [58]; more precisely, we use a variant designed for degenerate parabolic models that preserves the physical bounds of the solution [36, 37]. Indeed, we expect that the volume fraction \(c\) remains within the physically (or biologically) relevant bounds \(c\in(0,1)\) since we are here using a double-well logarithmic potential. Thus, following [36, 37], we construct the invertible mapping \(T:\mathbb{R}\rightarrow(0,1)\), with \(c=T(v)\), transforming Equations (1.2)-(1.3) into \[\rho\left(\partial_{t}v+(\mathbf{v}\cdot\nabla)v\right) =\frac{1}{T^{\prime}(v)}\left(\operatorname{div}(b(c)\nabla\mu)+F_{c}\right), \tag{4.1}\] \[\rho\mu =-\gamma T^{\prime}(v)\Delta v-\gamma T^{\prime\prime}(v)|\nabla v|^{2}+\rho\frac{\partial\psi_{0}}{\partial c}.\] Following [36] and [37], we can choose \[T(v)=\frac{1}{2}\tanh(v)+\frac{1}{2},\text{ or }T(v)=\frac{1}{1+\exp(-v)},\] thus preserving the bounds \(c\in(0,1)\). The SAV method allows us to solve efficiently (and also linearly) the nonlinear Cahn-Hilliard part while preserving the dissipation of a modified energy.
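As a quick sanity check on this change of variables (not part of the scheme itself), one can verify numerically that the logistic choice \(T(v)=1/(1+\exp(-v))\) maps \(\mathbb{R}\) into \((0,1)\) and satisfies the identities \(T^{\prime}(v)=c(1-c)\) and \(T^{\prime\prime}(v)=T^{\prime}(v)(1-2c)\) with \(c=T(v)\), which are recalled in Remark 4.1; the grid and tolerances below are arbitrary choices.

```python
import numpy as np

# Sanity check (not part of the scheme) of the bound-preserving change of
# variables c = T(v) for the logistic choice T(v) = 1/(1+exp(-v)):
# T maps R into (0,1), T'(v) = c(1-c) and T''(v) = T'(v)(1-2c).
v = np.linspace(-30.0, 30.0, 4001)
h = v[1] - v[0]
c = 1.0 / (1.0 + np.exp(-v))            # c = T(v)

T1 = c * (1.0 - c)                      # claimed expression for T'(v)
T2 = T1 * (1.0 - 2.0 * c)               # claimed expression for T''(v)

assert np.all((c > 0.0) & (c < 1.0))    # the bounds c in (0,1) hold
assert np.max(np.abs(np.gradient(c, h) - T1)) < 1e-3    # T'  = c(1-c)
assert np.max(np.abs(np.gradient(T1, h) - T2)) < 1e-3   # T'' = T'(1-2c)

# the two choices of T in the text coincide up to the rescaling v -> 2v
assert np.allclose(0.5 * np.tanh(v) + 0.5, 1.0 / (1.0 + np.exp(-2.0 * v)))
```

The last assertion shows that the tanh and logistic choices differ only by a rescaling of \(v\), so either may be used interchangeably in the scheme below.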
In the following, we assume that there exists a positive constant \(\underline{C}\) such that the energy associated with the Cahn-Hilliard part, _i.e._ \[E[t](\rho,c)=\int_{\Omega}\frac{\gamma}{2}|\nabla c|^{2}+\rho\psi_{0}(\rho,c)=E _{0}[t]+E_{1}[t],\] with \(E_{1}\) the nonlinear part of the energy and \(E_{0}\) the linear part, is bounded from below, _i.e._ \(E_{1}+\underline{C}\geq 1\). We define \[r(t)=E(t)+C_{0},\quad\text{with}\quad C_{0}=2\underline{C}+\|E(\rho^{0},c^{0}) \|_{L^{\infty}(\Omega)},\] and apply the SAV method. System (4.1) becomes \[\rho\left(\partial_{t}v+(\mathbf{v}\cdot\nabla)v\right) =\frac{1}{T^{\prime}(v)}\left(\operatorname{div}(b(c)\nabla\mu)+ F_{c}\right), \tag{4.2}\] \[\rho\mu =-\gamma T^{\prime}(v)\Delta v-\gamma T^{\prime\prime}(v)|\nabla v |^{2}+\rho\frac{\partial\psi_{0}}{\partial c},\] \[\frac{\mathrm{d}r}{\mathrm{d}t} =-\frac{r(t)}{E[t]+C_{0}}\int_{\Omega}b(c)|\nabla\mu|^{2}-\mu F_{ c}\ \mathrm{d}x.\] One can easily see that the previous modifications do not change our system at the continuous level.

**Remark 4.1**.: As stated in [37], the transformation \(T(v)=\frac{1}{1+\exp(-v)}\) allows us to write \[\log(c)-\log(1-c) =v,\] \[T^{\prime}(v) =(1-c)c,\] \[\frac{T^{\prime\prime}(v)}{T^{\prime}(v)} =1-2c.\]

### One-dimensional scheme

We consider our problem in a one-dimensional domain \(\Omega=(0,L)\). Even though \(\mathbf{v}\) is now a scalar, we still denote it in bold font. As mentioned previously, we relax the Navier-Stokes part of our system. Namely, we introduce a relaxation parameter \(\eta\geq 0\) and set \(U=(\rho,\rho\mathbf{v})\).
We rewrite Equation (1.4) as (up to rescaling \(\nu(c)\) as \(\frac{3}{4}\nu(c)\)) \[\begin{cases}\partial_{t}U+\partial_{x}V=G(U),\\ \partial_{t}V+A\partial_{x}U=-\frac{1}{\eta}(V-F(U)),\end{cases} \tag{4.3}\] in which \(G(U)=(0,-\kappa\mathbf{v})\), \(F(U)=(\rho\mathbf{v},\rho\mathbf{v}^{2}+p-\nu(c)\partial_{x}\mathbf{v}+\frac{ \gamma}{2}|\partial_{x}c|^{2})\) and \(A=\operatorname{diag}(a_{1},a_{2})\) satisfying Liu's subcharacteristic condition \[A\geq F^{\prime}(U),\quad\forall U.\] In what follows, and following [33], we use \[a_{1}=a_{2}=\max\left\{\sup\left(\mathbf{v}+\sqrt{\partial_{\rho}p}\right)^{2},\sup\left(\mathbf{v}-\sqrt{\partial_{\rho}p}\right)^{2}\right\}.\] We discretize the domain using a set of \(N_{x}\) nodes located at the center of control volumes of size \(\Delta x\) such that \(\Omega=\bigcup_{j=0,\ldots,N_{x}}[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}]\). The time interval \([0,T]\) is also discretized using a uniform time step \(\Delta t\). Our scheme follows the discrete set of equations \[U_{j}^{*} =U_{j}^{n}, \tag{4.4}\] \[V_{j}^{*} =V_{j}^{n}-\frac{\Delta t}{\eta}\left(V_{j}^{*}-F(U_{j}^{*}) \right),\] (4.5) \[U_{j}^{n+1} =U_{j}^{*}-\frac{\Delta t}{\Delta x}\left(V_{j+\frac{1}{2}}^{*}-V _{j-\frac{1}{2}}^{*}\right)+\Delta tG(U_{j}^{n+1}),\] (4.6) \[V_{j}^{n+1} =V_{j}^{*}-\frac{\Delta t}{\Delta x}A\left(U_{j+\frac{1}{2}}^{*} -U_{j-\frac{1}{2}}^{*}\right),\] (4.7) \[\frac{\overline{v}_{j}^{n+1}-v_{j}^{n}}{\Delta t} +\mathbf{v}_{j}^{n+1}\cdot(\nabla\overline{v}^{n+1})_{j}=g(c^{n}, \mu^{n+1},\rho^{n+1})_{j},\] (4.8) \[g(c^{n},\mu^{n+1},\rho^{n+1})_{j} =\frac{1}{T^{\prime}(v_{j}^{n})\rho_{j}^{n+1}}\left(\frac{1}{ \Delta x}\left((b(c^{n})\nabla\mu^{n+1})_{j+\frac{1}{2}}-(b(c^{n})\nabla\mu^{ n+1})_{j-\frac{1}{2}}\right)+F_{c}(\rho_{j}^{n},c_{j}^{n})\right),\] (4.9) \[\mu_{j}^{n+1} =\frac{1}{\rho_{j}^{n+1}}\left(-\gamma T^{\prime}(v_{j}^{n})( \Delta\overline{v}^{n+1})_{j}-\gamma T^{\prime\prime}(v_{j}^{n})|(\nabla v^{n} 
)_{j}|^{2}\right)+\left(\frac{\partial\psi_{0}}{\partial c}\right)_{j}^{n},\] (4.10) \[\int_{\Omega}\lambda T(\overline{v}^{n+1})\,\mathrm{d}x =\int_{\Omega}c^{n}+\Delta tF_{c}\,\mathrm{d}x,\] (4.11) \[\overline{c}_{j}^{n+1} =\lambda_{j}T(\overline{v}_{j}^{n+1}),\] (4.12) \[\frac{1}{\Delta t}\left(r^{n+1}-r^{n}\right) =-\frac{r^{n+1}}{E(\overline{c}^{n+1})+C_{0}}\Delta x\sum_{j}b( \overline{c}_{j}^{n+1})|(\nabla\mu^{n+1})_{j}|^{2}+\] \[+\frac{r^{n}}{E(\overline{c}^{n+1})+C_{0}}\Delta x\sum_{j}\mu_{j} ^{n+1}F_{c}(\rho_{j}^{n+1},\overline{c}_{j}^{n+1}),\] (4.13) \[\xi^{n+1} =\frac{r^{n+1}}{E(\overline{c}^{n+1})+C_{0}},\] (4.14) \[c_{j}^{n+1} =\nu^{n+1}\overline{c}_{j}^{n+1},\quad\text{with}\quad\nu^{n+1}= 1-(1-\xi^{n+1})^{2},\] (4.15) \[v_{j}^{n+1} =\nu^{n+1}\overline{v}_{j}^{n+1}. \tag{4.16}\]

**Remark 4.2**.: We emphasize that the scheme (4.4)-(4.16) is linear. Indeed, Equations (4.4) to (4.7) are obviously linear. Then, the coupling between Equation (4.8) and Equation (4.10) is also linear (nonlinear terms are taken at the previous time step to linearize the equations). The solution \(\overline{v}^{n+1},\mu^{n+1}\) of these equations, together with the array \(\lambda\), is used in Equation (4.13) to find \(r^{n+1}\) and, then, in Equation (4.14), \(\xi^{n+1}\). At this point, we solve Equations (4.15) and (4.16) from the solution of the previous steps.

**Remark 4.3**.: To obtain the interface values \(U_{j+\frac{1}{2}}^{*},U_{j-\frac{1}{2}}^{*}\) and \(V_{j+\frac{1}{2}}^{*},V_{j-\frac{1}{2}}^{*}\), we use an upwind method. We also mention that, similarly to [33], one can implement a MUSCL scheme (see _e.g._[41]) to obtain a higher-order reconstruction.
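The upwind reconstruction of the interface values can be written in flux form; the sketch below (our own Python illustration, assuming periodic boundary conditions) upwinds along the characteristic variables \(V\pm\sqrt{a}\,U\) and checks that the resulting update coincides with the central-difference-plus-numerical-diffusion form derived just below:

```python
import numpy as np

def relaxation_fluxes(U, V, a):
    """Upwind interface fluxes at j+1/2 for the relaxed system
    U_t + V_x = 0, V_t + a U_x = 0, whose characteristic speeds are
    +-sqrt(a).  Periodic boundary conditions are handled via np.roll."""
    sqa = np.sqrt(a)
    Up, Vp = np.roll(U, -1), np.roll(V, -1)        # values at j+1
    F = 0.5 * (V + Vp) - 0.5 * sqa * (Up - U)      # flux for the U-equation
    G = 0.5 * a * (U + Up) - 0.5 * sqa * (Vp - V)  # flux for the V-equation
    return F, G

# The flux-form update equals the central-difference form with the
# sqrt(a) numerical-diffusion term (no source term included here).
rng = np.random.default_rng(0)
U, V = rng.random(32), rng.random(32)
a, dt, dx = 2.0, 0.01, 0.1
F, G = relaxation_fluxes(U, V, a)
U_new = U - dt / dx * (F - np.roll(F, 1))
V_new = V - dt / dx * (G - np.roll(G, 1))
```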
The upwind method permits us to rewrite Equations (4.6)-(4.7) as \[U_{j}^{n+1} =U_{j}^{*}-\frac{\Delta t}{2\Delta x}(V_{j+1}^{*}-V_{j-1}^{*})+ \frac{\Delta t}{2\Delta x}\sqrt{a}(\delta_{x}^{2}U_{j}^{*})+\Delta tG(U_{j}^{ n+1}), \tag{4.17}\] \[V_{j}^{n+1} =V_{j}^{*}-\frac{a\Delta t}{2\Delta x}(U_{j+1}^{*}-U_{j-1}^{*})+ \frac{\Delta t}{2\Delta x}\sqrt{a}(\delta_{x}^{2}V_{j}^{*}), \tag{4.18}\] where we used the notation \(\delta_{x}^{2}u_{j}=u_{j+1}-2u_{j}+u_{j-1}\). In Equations (4.17)-(4.18), we emphasize that \(U^{*}=U^{n}\) and \(V^{*}=V^{n}-\frac{\Delta t}{\eta}\left(V^{*}-F(U^{n})\right)\). In the following, we use the notations \[\langle U,V\rangle=\Delta x\sum_{j}U_{j}V_{j},\quad\text{and}\quad\|U\|^{2}= \langle U,U\rangle.\] We also use \((\Delta_{0,x}U)_{j}:=\frac{1}{2}(U_{j+1}-U_{j-1})\). Our numerical scheme possesses the following important properties:

**Proposition 4.4** (Energy stability, bounds and mass preservation).: _Assuming the CFL-like condition \(\frac{\Delta t}{\Delta x}\sqrt{a_{1}}\leq 1\) and the condition_ \[\Delta t<\frac{E(\overline{c}^{n+1})+C_{0}}{C\int_{\Omega}\lvert\mu^{n+1} \rvert\,\mathrm{d}x}, \tag{4.19}\] _our numerical scheme satisfies the energy dissipation-like inequality_ \[\|\sqrt{a}U^{n+1}\|+\|V^{n+1}\|+r^{n+1}\leq\|\sqrt{a}U^{n}\|+\|V^{\star}\|+C^{n +1}r^{n}, \tag{4.20}\] _where_ \[C^{n+1}=\frac{1+\frac{\Delta t}{E(\overline{c}^{n+1})+C_{0}}\int_{\Omega}\mu ^{n+1}F_{c}(\rho^{n+1},\overline{c}^{n+1})\ \mathrm{d}x}{1+\frac{\Delta t}{E( \overline{c}^{n+1})+C_{0}}\int_{\Omega}b(\overline{c}^{n+1})\lvert\nabla\mu^{ n+1}\rvert^{2}\ \mathrm{d}x}.\] _Furthermore, the numerical scheme preserves the physically relevant bounds of the mass fraction, i.e._ \[0<c^{n+1}<1.\] _Finally, the scheme is mass preserving, i.e._ \[\sum_{x_{j}}\rho_{j}^{n+1}=\sum_{x_{j}}\rho_{j}^{n}.\]

**Remark 4.5**.: _Note that the constant \(C^{n+1}\) is smaller than \(1\) whenever the nonnegative part of the dissipation of the energy is greater than the increase of
energy induced by the source term \(F_{c}\). This is of course satisfied when \(F_{c}=0\), for instance. When the source term is dominant, we only know that \(C^{n+1}\) behaves like \(1+C\Delta t\)._

Proof.: We start with Equation (4.17). Using the definition of the function \(G(U_{j}^{n+1})\) as well as the assumption \(\kappa(c)\geq 0\) (for \(c\in\mathbb{R}\)), after taking the square on both sides, multiplying by \(\Delta x\) and summing over the nodes \(j=0,...,N_{x}\), we have \[\|U^{n+1}\|^{2}\leq\|U^{n}\|^{2}+\left(\frac{\Delta t}{2\Delta x }\right)^{2}\|\Delta_{0,x}V^{\star}\|^{2} +\left(\frac{\Delta t\sqrt{a}}{2\Delta x}\right)^{2}\|\delta_{ x}^{2}U^{n}\|^{2}-\frac{\Delta t}{\Delta x}\langle\Delta_{0,x}V^{\star},U^{n}\rangle\] \[+\frac{\Delta t\sqrt{a}}{\Delta x}\langle U^{n},\delta_{x}^{2}U^ {n}\rangle-\frac{\sqrt{a}\Delta t^{2}}{2\Delta x^{2}}\langle\Delta_{0,x}V^{ \star},\delta_{x}^{2}U^{n}\rangle.\] Repeating the same computations for Equation (4.18), we have \[\|V^{n+1}\|^{2}\leq\|V^{\star}\|^{2}+\left(\frac{a\Delta t}{2\Delta x }\right)^{2}\|\Delta_{0,x}U^{n}\|^{2} +\left(\frac{\Delta t\sqrt{a}}{2\Delta x}\right)^{2}\|\delta_{ x}^{2}V^{\star}\|^{2}-\frac{a\Delta t}{\Delta x}\langle\Delta_{0,x}U^{n},V^{ \star}\rangle\] \[+\frac{\Delta t\sqrt{a}}{\Delta x}\langle V^{\star},\delta_{x}^ {2}V^{\star}\rangle-\frac{a^{\frac{3}{2}}\Delta t^{2}}{2\Delta x^{2}}\langle \Delta_{0,x}U^{n},\delta_{x}^{2}V^{\star}\rangle.\] At this point, the proof is similar to the proof of [33, Theorem 4.1] (these steps use the periodic boundary conditions and the summation by parts formula to cancel some terms when summing both of the previous equations together), and we obtain, for a constant \(C>0\), \[\|\sqrt{a}U^{n+1}\|^{2}+\|V^{n+1}\|^{2}\leq C\left(\|\sqrt{a}U^{n}\|^{2}+\|V^{ n}\|^{2}\right).\] Then, for the Cahn-Hilliard part, we easily obtain from Equation (4.13) \[r^{n+1}=\frac{r^{n}\left(1+\frac{\Delta t}{E(\overline{c}^{n+1})+C_{0}}\int_{
\Omega}\mu^{n+1}F_{c}(\rho^{n+1},\overline{c}^{n+1})\ \mathrm{d}x\right)}{1+\frac{\Delta t}{E( \overline{c}^{n+1})+C_{0}}\int_{\Omega}b(\overline{c}^{n+1})|\nabla\mu^{n+1}|^{2}\ \mathrm{d}x}.\] So, as long as \[1+\frac{\Delta t}{E(\overline{c}^{n+1})+C_{0}}\int_{\Omega}\mu^{n+1}F_{c}(\rho ^{n+1},\overline{c}^{n+1})\ \mathrm{d}x\geq 0,\] the nonnegativity of \(r^{n}\) transfers to \(r^{n+1}\). Assuming \(\|F_{c}\|_{L^{\infty}}<C\), this condition is implied by \[\Delta t<\frac{E(\overline{c}^{n+1})+C_{0}}{C\int_{\Omega}|\mu^{n+1}|\, \mathrm{d}x}.\] Under the previous condition, if \(r^{n}\geq 0\) then so is \(r^{n+1}\), and (4.20) follows. Then, from the definitions of \(\xi^{n+1}\) and of the constant \(C_{0}\), we have \[0<\xi^{n+1}<\frac{r^{0}}{E(\overline{c}^{n+1})+C_{0}}\leq 2.\]

**Remark 4.6**.: We emphasize that, analytically, we cannot verify the condition (4.19), as the solution can tend to \(0\) or \(1\) and consequently the integral of \(|\mu|\) can blow up. However, we observe during numerical simulations that the condition (4.19) is satisfied for reasonably small \(\Delta t\). We also note that if we do not consider any source term, _i.e._\(F_{c}=0\), the scheme satisfies the dissipation relation \[\|\sqrt{a}U^{n+1}\|+\|V^{n+1}\|+r^{n+1}\leq\|\sqrt{a}U^{n}\|+\|V^{\star}\|+r^{ n},\] with the stability condition \[\frac{\Delta t}{\Delta x}\sqrt{a}\leq 1.\]

### Two-dimensional scheme

We describe the two-dimensional scheme. It possesses the same properties as the one-dimensional scheme; the proof follows from a simple adaptation of the proof of Proposition 4.4. We write the velocity field \(\mathbf{v}=(u_{x},u_{y})\).
System (1.1)-(1.4), with the transformation proposed at the beginning of this section, reads \[\partial_{t}\rho+\partial_{x}(\rho u_{x})+\partial_{y}(\rho u_{y})=0, \tag{4.21}\] \[\partial_{t}\left(\rho\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}\right) +\begin{bmatrix}\partial_{x}(\rho u_{x}^{2}+p)\\ \partial_{y}(\rho u_{y}^{2}+p)\end{bmatrix}+\begin{bmatrix}\partial_{y}(\rho u _{x}u_{y})\\ \partial_{x}(\rho u_{x}u_{y})\end{bmatrix}=2\begin{bmatrix}\partial_{x}\left( \nu(c)\partial_{x}u_{x}\right)\\ \partial_{y}\left(\nu(c)\partial_{y}u_{y}\right)\end{bmatrix}+\begin{bmatrix} \partial_{y}\left(\nu(c)(\partial_{y}u_{x}+\partial_{x}u_{y})\right)\\ \partial_{x}\left(\nu(c)(\partial_{y}u_{x}+\partial_{x}u_{y})\right)\end{bmatrix}\] \[-\frac{2}{3}\begin{bmatrix}\partial_{x}\left(\nu(c)(\partial_{x} u_{x}+\partial_{y}u_{y})\right)\\ \partial_{y}\left(\nu(c)(\partial_{x}u_{x}+\partial_{y}u_{y})\right)\end{bmatrix}- \frac{\gamma}{2}\begin{bmatrix}\partial_{x}((\partial_{x}c)^{2}-(\partial_{y} c)^{2})\\ \partial_{y}((\partial_{y}c)^{2}-(\partial_{x}c)^{2})\end{bmatrix}-\gamma \begin{bmatrix}\partial_{y}(\partial_{x}c\partial_{y}c)\\ \partial_{x}(\partial_{x}c\partial_{y}c)\end{bmatrix}\] \[-\kappa(\rho,c)\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}, \tag{4.22}\] \[\rho\left(\partial_{t}v+u_{x}\partial_{x}v+u_{y}\partial_{y}v\right) =\frac{1}{T^{\prime}(v)}\left(\partial_{x}(b(c)\partial_{x}\mu)+ \partial_{y}(b(c)\partial_{y}\mu)\right)+\frac{1}{T^{\prime}(v)}F_{c}, \tag{4.23}\] \[\rho\mu =-\gamma T^{\prime}(v)(\partial_{xx}v+\partial_{yy}v)-\gamma T^{ \prime\prime}(v)\left((\partial_{x}v)^{2}+(\partial_{y}v)^{2}\right)+\rho \frac{\partial\psi_{0}}{\partial c},\] (4.24) \[\frac{\mathrm{d}r}{\mathrm{d}t} =-\frac{r(t)}{E[t]+C_{0}}\int_{\Omega}b(c)|\nabla\mu|^{2}-\mu F_{ c}\ \mathrm{d}x.
\tag{4.25}\] We introduce the notations \(U=(\rho,\rho u_{x},\rho u_{y})\), \(G(U)=(0,-\kappa u_{x},-\kappa u_{y})\) and \[F(U)=(\rho u_{x},\rho u_{x}^{2}+p-2\nu(c)\partial_{x}u_{x}+\frac{2 }{3}\nu(c)(\partial_{x}u_{x}+\partial_{y}u_{y})+\frac{1}{2}\gamma\left(( \partial_{x}c)^{2}-(\partial_{y}c)^{2}\right),\] \[\rho u_{x}u_{y}-\nu(c)\left(\partial_{y}u_{x}+\partial_{x}u_{y} \right)+\gamma\partial_{x}c\partial_{y}c),\] \[K(U)=(\rho u_{y},\rho u_{x}u_{y}-\nu(c)\left(\partial_{y}u_{x}+ \partial_{x}u_{y}\right)+\gamma\partial_{x}c\partial_{y}c,\] \[\rho u_{y}^{2}+p-2\nu(c)\partial_{y}u_{y}+\frac{2}{3}\nu(c)( \partial_{x}u_{x}+\partial_{y}u_{y})+\frac{1}{2}\gamma\left((\partial_{x}c)^{ 2}-(\partial_{y}c)^{2}\right)).\] The stabilization (see [33, 39]) of the Navier-Stokes part of our system reads \[\begin{cases}\partial_{t}U+\partial_{x}V+\partial_{y}W=G(U),\\ \partial_{t}V+A\partial_{x}U=-\frac{1}{\eta}(V-F(U)),\\ \partial_{t}W+B\partial_{y}U=-\frac{1}{\eta}(W-K(U)),\end{cases} \tag{4.26}\] in which \(A=\operatorname{diag}(a_{1},a_{2},a_{3})\) and \(B=\operatorname{diag}(b_{1},b_{2},b_{3})\). In the following, we choose \[a_{1}=a_{2}=a_{3}=\max\{\sup\left(u_{x}+\sqrt{\partial_{\rho}p} \right)^{2},\sup u_{x}^{2},\sup\left(u_{x}-\sqrt{\partial_{\rho}p}\right)^{2 }\},\] \[b_{1}=b_{2}=b_{3}=\max\{\sup\left(u_{y}+\sqrt{\partial_{\rho}p} \right)^{2},\sup u_{y}^{2},\sup\left(u_{y}-\sqrt{\partial_{\rho}p}\right)^{2 }\}.\] We assume that our two-dimensional domain is a square \([0,L]\times[0,L]\). We discretize the domain using square control volumes of size \(\Delta x\times\Delta y\). 
The cell centers are located at positions \((x_{j},y_{j})\), and we approximate the value of a variable at the cell center by its mean, _e.g._ \[\rho_{j,i}=\frac{1}{\Delta x\Delta y}\int_{x_{j-\frac{1}{2}}}^{x_{j+\frac{1}{ 2}}}\int_{y_{j-\frac{1}{2}}}^{y_{j+\frac{1}{2}}}\rho(\mathbf{x},t)\ \mathrm{d}\mathbf{x}.\] Simply employing a first-order time discretization, the numerical scheme becomes \[U^{*}_{j,i} =U^{n}_{j,i}, \tag{4.27}\] \[V^{*}_{j,i} =V^{n}_{j,i}-\frac{\Delta t}{\eta}\left(V^{*}_{j,i}-F(U^{*}_{j,i}) \right),\] (4.28) \[W^{*}_{j,i} =W^{n}_{j,i}-\frac{\Delta t}{\eta}\left(W^{*}_{j,i}-K(U^{*}_{j,i}) \right),\] (4.29) \[U^{n+1}_{j,i} =U^{*}_{j,i}-\frac{\Delta t}{\Delta x}\left(V^{*}_{j+\frac{1}{2}, i}-V^{*}_{j-\frac{1}{2},i}\right)-\frac{\Delta t}{\Delta y}\left(W^{*}_{j,i+ \frac{1}{2}}-W^{*}_{j,i-\frac{1}{2}}\right)+\Delta tG(U^{n+1}_{i,j}),\] (4.30) \[V^{n+1}_{j,i} =V^{*}_{j,i}-\frac{\Delta t}{\Delta x}A\left(U^{*}_{j+\frac{1}{2},i}-U^{*}_{j-\frac{1}{2},i}\right),\] (4.31) \[W^{n+1}_{j,i} =W^{*}_{j,i}-\frac{\Delta t}{\Delta y}B\left(U^{*}_{j,i+\frac{1}{ 2}}-U^{*}_{j,i-\frac{1}{2}}\right),\] (4.32) \[\frac{\overline{v}^{n+1}_{j,i}-v^{n}_{j,i}}{\Delta t} +\mathbf{v}^{n+1}_{j,i}\cdot(\nabla\overline{v}^{n+1})_{j,i}=g(c^{ n},\mu^{n+1},\rho^{n+1})_{j,i},\] (4.33) \[g(c^{n},\mu^{n+1},\rho^{n+1})_{j,i} =\frac{1}{T^{\prime}(v^{n}_{j,i})\rho^{n+1}_{j,i}\Delta x}\left( \left(b(c^{n})\nabla\mu^{n+1}\right)_{j+\frac{1}{2},i}-\left(b(c^{n})\nabla \mu^{n+1}\right)_{j-\frac{1}{2},i}\right)\] (4.34) \[\quad+\frac{1}{T^{\prime}(v^{n}_{j,i})\rho^{n+1}_{j,i}\Delta y} \left(\left(b(c^{n})\nabla\mu^{n+1}\right)_{j,i+\frac{1}{2}}-\left(b(c^{n}) \nabla\mu^{n+1}\right)_{j,i-\frac{1}{2}}\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad 
\qquad+\frac{F_{c }(\rho^{n}_{j,i},c^{n}_{j,i})}{T^{\prime}(v^{n}_{j,i})\rho^{n+1}_{j,i}},\] \[\mu^{n+1}_{j,i} =\frac{1}{\rho^{n+1}_{j,i}}\left(-\gamma T^{\prime}(v^{n}_{j,i})( \Delta\bar{v}^{n+1})_{j,i}-\gamma T^{\prime\prime}(v^{n}_{j,i})|(\nabla v^{n}) _{j,i}|^{2}\right)+\left(\frac{\partial\psi_{0}}{\partial c}\right)^{n}_{j,i},\] (4.35) \[\int_{\Omega}\lambda T(\overline{v}^{n+1})\,\mathrm{d}x =\int_{\Omega}c^{n}+\Delta tF_{c}\,\mathrm{d}x,\] (4.36) \[\overline{c}^{n+1}_{j,i} =\lambda T(\overline{v}^{n+1}_{j,i}),\] (4.37) \[\frac{1}{\Delta t}\left(r^{n+1}-r^{n}\right) =-\frac{r^{n+1}}{E(\overline{c}^{n+1})+C_{0}}\int_{\Omega}b( \overline{c}^{n+1})|\nabla\mu^{n+1}|^{2}\,\,\mathrm{d}\mathbf{x}+\] \[+\frac{r^{n}}{E(\overline{c}^{n+1})+C_{0}}\int_{\Omega}\mu^{n+1}F_ {c}(\rho^{n+1},\overline{c}^{n+1})\,\,\mathrm{d}\mathbf{x},\] (4.38) \[\xi^{n+1} =\frac{r^{n+1}}{E(\overline{c}^{n+1})+C_{0}},\] (4.39) \[c^{n+1}_{j,i} =\nu^{n+1}\overline{c}^{n+1}_{j,i},\quad\text{with}\quad\nu^{n+1} =1-(1-\xi^{n+1})^{2},\] (4.40) \[v^{n+1}_{j,i} =\nu^{n+1}\overline{v}^{n+1}_{j,i}. \tag{4.41}\]

## 5 Numerical experiments

In this section, we begin by using the one-dimensional scheme (4.4)-(4.16) with no source term and no friction, _i.e._\(\kappa(\rho,c)=0\) and \(F_{c}=0\), and we verify that the scheme preserves all the properties stated in Proposition 4.4. Furthermore, we verify the spatial and temporal convergence orders of the scheme.

**Remark 5.1** (Implementation details).: All numerical schemes are implemented using Python 3 and the Numpy and Scipy modules. The linear systems present in the schemes are solved using the Generalized Minimal RESidual iteration (GMRES) indirect solver (function available in the **scipy.sparse.linalg** module).
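As an illustration of the linear-solve step mentioned in Remark 5.1, the following self-contained sketch assembles a simple periodic diffusion-type operator (a stand-in for the scheme's actual matrix, which we do not reproduce here) and solves it with SciPy's GMRES:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

Nx, lam = 64, 0.5                          # grid size and diffusion weight (illustrative)
main = (1.0 + 2.0 * lam) * np.ones(Nx)
off = -lam * np.ones(Nx - 1)
A = diags([off, main, off], offsets=[-1, 0, 1], format="lil")
A[0, -1] = A[-1, 0] = -lam                 # periodic boundary coupling
A = A.tocsr()
rhs = np.sin(2.0 * np.pi * np.arange(Nx) / Nx)
sol, info = gmres(A, rhs)                  # info == 0 signals convergence
```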
### One-dimensional numerical test case 1: non-matching densities

We start with a one-dimensional test case to show the spatiotemporal evolution of the density, mass fraction, and velocity. We also verify numerically the properties stated in Proposition 4.4. In this subsection, we use the double-well logarithmic potential \[\psi_{\rm mix}=\frac{1}{2}\left(\alpha_{1}(1-c)\log(\rho(1-c))+\alpha_{2}c\log (\rho c)\right)-\frac{\theta}{2}(c-\frac{1}{2})^{2}+k,\] with \(k=100\), \(\alpha_{1}=1.2\), \(\alpha_{2}=0.5\), and \(\theta=4\). This allows us to model a fluid for which the phase denoted by the index \(1\) is denser than the phase denoted by the index \(2\). Indeed, this can be seen from the effect of the values \(\alpha_{1}\) and \(\alpha_{2}\) on the potential. Taking \(\alpha_{2}<\alpha_{1}\) shifts the well corresponding to phase \(1\) very close to \(1\) compared to the other phase. This models the fact that fluid \(1\) is in fact more compressible, and thus aggregates of pure phase \(1\) appear denser. We use the computational domain \(\Omega=(0,1)\) discretized into \(N_{x}=128\) cells. We take \(T=0.3\) (this value has been chosen because the system reaches a meta-stable state by that time) and use the initial time step \(\Delta t=1\times 10^{-6}\) (this time step size is adapted from the CFL-like condition stated in Proposition 4.4; however, the time step will never be larger than \(\Delta t_{\rm max}=1\times 10^{-5}\)). We choose the width of the diffuse interface to be \(\gamma=1/500\), the viscosity to be constant, \(\nu(c)=1\times 10^{-2}\), the relaxation parameter to be \(\eta=1\times 10^{-3}\), and the exponent of the barotropic pressure to be \(a=3\).
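The asymmetry of the wells described above can be checked numerically; the sketch below (our own, with \(\rho\) frozen to \(1\) purely to visualise the wells) samples \(\psi_{\rm mix}\) with the parameters of this test case and locates its two local minima, one close to \(0\) and one shifted very near \(1\):

```python
import numpy as np

alpha1, alpha2, theta, k = 1.2, 0.5, 4.0, 100.0  # values of test case 1

def psi_mix(rho, c):
    """Double-well logarithmic potential of test case 1."""
    return (0.5 * (alpha1 * (1.0 - c) * np.log(rho * (1.0 - c))
                   + alpha2 * c * np.log(rho * c))
            - 0.5 * theta * (c - 0.5) ** 2 + k)

c = np.linspace(1e-3, 1.0 - 1e-3, 2001)
psi = psi_mix(1.0, c)
# A local minimum is an interior sample strictly below both neighbours.
is_min = (psi[1:-1] < psi[:-2]) & (psi[1:-1] < psi[2:])
minima = c[1:-1][is_min]
```

With these values the sampled potential exhibits exactly two wells, located asymmetrically in \((0,1)\), which is the mechanism making pure-phase-\(1\) aggregates denser.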
We choose constant initial conditions for the density and the velocity, _i.e._ \[\rho^{0}=0.9,\quad\mathbf{v}^{0}=1.0.\] The initial mass fraction is assumed to be a constant with a small random noise, _i.e._ \[c^{0}=\underline{c}-0.05r,\] with \(\underline{c}=0.5\) and \(r\) a vector of random values between \(0\) and \(1\) drawn from the uniform distribution. Figure 1 shows snapshots of \((\rho,c,\mathbf{v})\) at different times (starting from the initial condition (_i.e._ Figure 1a) to the stable state depicted in Figure 1d). We observe that the numerical scheme captures well the spinodal decomposition of the binary fluid while it is transported to the right. Indeed, after an initial regularization of the initial condition, the separation of the two phases of the fluid occurs and small aggregates appear (Fig. 1b). Then, the coarsening of the small aggregates into larger ones occurs. At the end of the simulation, we arrive at the solution depicted in Fig. 1d. During the last time steps, the evolution of the mass fraction was very slow. This slow evolution is due to the degeneracy of the potential. The solution in the long time is expected to depict evenly spaced aggregates of equal size. We emphasize that even at the end of the simulation, the fluid continues to move to the right since the velocity \(\mathbf{v}\) is positive. Due to the fact that \(\alpha_{1}>\alpha_{2}\), fluid \(1\) is denser. This is observed in Fig. 1d as the total fluid density \(\rho\) and the pressure \(p\) are larger in the zones \(c\approx 1\) (corresponding to the pure phase of fluid \(1\)). We also observe that the pressure in pure aggregates of fluid \(1\) is larger compared to the other zones. Figure 2 shows that our numerical scheme preserves the properties presented in Proposition 4.4. Indeed, denoting \[\frac{\mathrm{d}E}{\mathrm{d}t}=\|\sqrt{a}U^{n+1}\|+\|V^{n+1}\|+r^{n+1}-\left[ \|\sqrt{a}U^{n}\|+\|V^{n}\|+C^{n+1}r^{n}\right],\] we observe using Fig.
2a that Inequality (4.20) is satisfied by our numerical solution, _i.e._ the derivative of the discrete energy remains negative, indicating a monotonic decay of the energy. The total mass \(\int_{\Omega}\rho\ \mathrm{d}x\) is conserved throughout the simulation, as observed in Fig. 2b. The scalar variable \(\xi\) remains very close to \(1\), as depicted in Fig. 2c. Finally, the bounds of the mass fraction are ensured, as seen in Fig. 2d.

Figure 1: Spatio-temporal evolution of density \(\rho\), mass fraction \(c\), velocity \(\mathbf{v}\) and pressure \(p\) for test case 1.

### Convergence tests

We study the numerical convergence of our one-dimensional scheme (4.4)-(4.16) with \(\kappa(\rho,c)=0\) and \(F_{c}=0\). The computational domain is \(\Omega=(0,1)\). The final time is \(T=0.1\), and we choose \(\alpha_{1}=\alpha_{2}=1\). The other parameters \(\gamma\), \(\beta\), \(\eta\), \(\nu\) are chosen as for test case 1 (see Subsection 5.1). The initial condition for the mass fraction is given by \[c^{0}=0.5+0.01\cos(6\pi x).\] The initial conditions for the velocity and the total density are chosen as in test case 1.

#### 5.2.1 Convergence in space

We start by fixing the time step \(\Delta t=1\times 10^{-5}\) and varying the grid size, with the number of points \(N_{x}=\{64,128,256,512,1024,2048\}\). For each \(\Delta x\), we compute the error at time
\(t=T=0.1\) (we chose this value since we observed that the solution reached a stable state at that time) following \[\text{error}=\|\rho_{\Delta x}-\rho_{\Delta x/2}\|+\|c_{\Delta x}-c_{\Delta x/2} \|+\|\mathbf{v}_{\Delta x}-\mathbf{v}_{\Delta x/2}\|, \tag{5.1}\] in which each of the norms is computed following \[\|\rho_{\Delta x}-\rho_{\Delta x/2}\|=\left(\Delta x\sum_{j=1}^{N_{x}}\left( \rho_{\Delta x}(x_{2(j-1)+1})-\rho_{\Delta x/2}(x_{j})\right)^{2}+\left(\rho_{\Delta x}(x_{2(j-1)+2})-\rho_{\Delta x/2}(x_{j})\right)^{2}\right)^{1/2},\] with \(N_{x}\) the number of points on the \(\Delta x/2\) grid; hence, the solution on the coarse grid (_i.e._\(\Delta x/2\)) is extended on the fine grid \(\Delta x\).

Figure 2: Temporal evolution of the dissipation of the energy \(\frac{\mathrm{d}E}{\mathrm{d}t}\), of the total mass \(\int_{\Omega}\rho\ \mathrm{d}x\), of the scalar variable \(\xi\), and of the minimum and maximum values of the mass fraction \(c\) for test case 1.

We arrive at the results given in Figure 3. As expected for the upwind scheme, the spatial order of convergence is a little less than \(1\) for the total density \(\rho\) (see Figure 3a) and the velocity \(\mathbf{v}\) (see Figure 3c).

#### 5.2.2 Convergence in time

We here fix the grid size and select \(N_{x}=128\) points. We choose \(\Delta t=1\times 10^{-4}\), and vary the time steps according to \(\Delta t_{\text{array}}=\{\Delta t,\frac{\Delta t}{2},\frac{\Delta t}{4},\frac {\Delta t}{8},\frac{\Delta t}{16},\frac{\Delta t}{32}\}\). The other parameters and initial conditions are chosen as in the spatial convergence test (see Subsection 5.2.1).
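The observed order can be extracted from errors of the form (5.1) computed on successively halved steps via a log-ratio; a small helper illustrates this (our own sketch, using synthetic first-order-like errors rather than the paper's measured values):

```python
import numpy as np

def observed_orders(errors):
    """Observed convergence orders log2(e_k / e_{k+1}) for errors obtained
    on successively halved steps (grid size or time step)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Synthetic errors behaving like C * dt: halving dt gives orders close to 1.
errors = [8.0e-2, 4.1e-2, 2.05e-2, 1.02e-2]
orders = observed_orders(errors)
```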
The error between the reference solution and a computed solution at time \(t=T=0.2\) is given by \[\|\rho_{\Delta t}-\rho_{\Delta t/2}\|=\left(\Delta x\sum_{j=1}^{N_{x}}(\rho_{ \Delta t}(x_{j})-\rho_{\Delta t/2}(x_{j}))^{2}\right)^{1/2}.\] We obtain the results depicted in Figure 4. We observe that the order of convergence in time for our scheme is exactly \(1\), as expected.

Figure 4: Convergence in time for the total density \(\rho\), the mass fraction \(c\) and the velocity \(\mathbf{v}\). The orange dashed line represents the slope \(1\).

Figure 3: Convergence in space for the total density \(\rho\), the mass fraction \(c\) and the velocity \(\mathbf{v}\). The orange dashed line represents the slope \(1\).

### Conclusion and simulation of a growing tumor in a healthy tissue

To conclude this article, we would like to go back to our initial interest, _i.e._ the modeling of tumor growth. We now present a simple numerical simulation for this application and introduce a forthcoming work focusing only on the modeling of tumors. Here, we assume that \(F_{c}\) acts as a transfer of mass from the healthy tissue to the cancerous population (this can be viewed as tumor cells using material from the environment, such as nutrients, to divide and grow). Furthermore, we assume that the two cell populations adhere with different strengths to the extracellular matrix (ECM). Altogether, we choose \[F_{c} =20\rho c(1-c/c^{*}),\] \[c^{*} =0.9,\] \[\kappa(\rho,c) =\left(\kappa_{1}\rho c+\kappa_{2}\rho(1-c)\right),\quad\gamma=\frac{1 }{1000},\] with \(c^{*}\) representing the maximum saturation (set to \(c^{*}=0.9\) here), and \(\kappa_{1}=0\) and \(\kappa_{2}=20\). For this test case, we choose the initial conditions as follows: \[c_{0} =0.008+0.6\exp\left(-100(x-0.5)^{2}-100(y-0.5)^{2}\right),\] \[\rho_{0} =\,c_{0}+0.5\,(1-c_{0}),\] \[\mathbf{v}_{0} =0.\] We chose a square domain \(\Omega\) of length \(1\) and \(64\times 64\) cells for the spatial discretization.
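The data of this test case can be set up in a few lines (a sketch under our own naming; the values are those given above, on the \(64\times 64\) grid of cell centers):

```python
import numpy as np

c_star, kappa1, kappa2 = 0.9, 0.0, 20.0

def F_c(rho, c):
    """Logistic transfer of mass from healthy tissue to tumor cells."""
    return 20.0 * rho * c * (1.0 - c / c_star)

def kappa(rho, c):
    """Phase-dependent friction with the rigid fibers of the medium."""
    return kappa1 * rho * c + kappa2 * rho * (1.0 - c)

N = 64
x = (np.arange(N) + 0.5) / N                   # cell centers of [0, 1]
X, Y = np.meshgrid(x, x, indexing="ij")
c0 = 0.008 + 0.6 * np.exp(-100.0 * (X - 0.5) ** 2 - 100.0 * (Y - 0.5) ** 2)
rho0 = c0 + 0.5 * (1.0 - c0)                   # denser inside the tumor
v0 = np.zeros((2, N, N))                       # fluid initially at rest
```

Note that the growth term vanishes at the saturation \(c=c^{*}\) and that, with \(\kappa_{1}=0\), the friction acts only on the healthy-tissue phase.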
The other parameters that we did not specify are chosen as in Subsection 5.1. Figure 5 shows the spatio-temporal evolution of the tumor with \(\alpha_{1}=\alpha_{2}=1\). We observe that, as the tumor grows, the shape remains radially symmetrical. We think that, due to friction effects, instabilities may appear. The reason we are not able to see them in this case is twofold. Firstly, the scheme is less than first order in space, and small structures may not be captured. Secondly, the regularization given by the double Laplacian from the Cahn-Hilliard equation may be too strong, and the finger-like instabilities are not seen. To verify the latter assertion, we propose another test case with the parameters \[\alpha_{1}=1.2,\quad\alpha_{2}=0.8,\] \[\kappa_{2}=100,\quad\gamma=\frac{1}{1500},\] and \[c_{0}=0.01+0.9\exp\left(-100(x-0.5)^{2}-100(y-0.5)^{2}\right).\] The other parameters and the initial conditions are chosen as in the previous test case. Figure 6 shows our numerical results for this "tumor growth" test case. We observe that, as the tumor grows, the shape does not remain perfectly symmetrical. This result emphasizes that the behavior of the solution depends strongly on the regularization provided by the bi-Laplacian term and on the strength of the friction on the rigid fibers of the medium. Furthermore, another possible explanation of the instabilities could be the difference of densities between the two fluids. This is controlled by taking \(\alpha_{1}\neq\alpha_{2}\) (see Fig. 1 for one-dimensional numerical results highlighting the effect of non-matching densities). Indeed, from Fig. 1 we know that taking \(\alpha_{1}>\alpha_{2}\) leads to a heterogeneous pressure field, which leads to pressure gradients. Hence, since the initial velocity field is zero in our case, the direction of the velocity is given by \(-\nabla p\). Consequently, \(\mathbf{v}\) tends to transport the cell densities to regions of lower pressure, _i.e._ away from the tumor cell clusters.
This movement, in addition to heterogeneous friction effects, is known to produce complex patterns (see _e.g._[47]). We argue that, from these numerical simulations, our model seems capable of qualitatively representing patterns of invasive growth of tumors and could unravel the possible mechanical effects playing a role in the emergence of the heterogeneous structures observed in tumor invasion of healthy tissue. However, we emphasize that, to achieve these latter goals, we have to be able to capture accurately the possible fine structures emerging during the numerical simulations. Thus, our numerical scheme will be improved to increase the spatial and temporal orders of accuracy by taking advantage of the flexibility of the SAV method. In a forthcoming work, we will develop a high-order finite element scheme for the compressible SAV-NSCH system we proposed in the present work. This numerical scheme will allow efficient simulations of compressible diphasic fluids and will be used to simulate relevant test cases with applications in fluid mechanics, such as rising bubbles and Rayleigh-Taylor instabilities. In the previously mentioned future work, we will study the possible mechanical effects at play during invasive tumor growth. Our strategy will be relatively simple: as presented in Appendix A, we will reduce the G-NSCH system to better represent tumor growth while removing unnecessary effects such as inertia. Our goal will be to present numerical simulations capturing Saffman-Taylor-like instabilities, depicted by the protrusions of the tumor in the healthy tissue and commonly observed in the context of, _e.g._, skin cancer [18]. On the basis of the present work, numerous other work directions can be envisioned. Indeed, the analytical aspects of this work can be improved.
This direction is challenging because, as pointed out in the present work, the assumptions on the potential and the mobility term necessary to carry out the existence proof do not allow physically or biologically relevant forms. In fact, singular potentials, degenerate mobilities and degenerate viscosity functions are not allowed. Working in this direction would require us to use recent results concerning the Cahn-Hilliard equation as well as the compressible Navier-Stokes model and to adapt them to the context of the NSCH system. One possible solution that we will investigate is to derive a Bresch-Desjardins entropy estimate [12, 13] for the compressible NSCH, as has been done recently by Vasseur and Yu [60], and Bresch, Vasseur and Yu [14], for the compressible Navier-Stokes model with degenerate viscosities. We would like to conclude this research article by presenting a possible research perspective the authors will investigate. We believe that the coupling proposed by the G-NSCH model between fluid movement and tumor growth could have an original application to the mathematical representation of tumor-on-chips [46]. Indeed, these microfluidic devices are of growing interest in oncology since they can replicate in a very accurate manner the micro-environment of the tumor. Therefore, they present many advantages for experiments, and the G-NSCH model could be used to replicate such experiments in an _in-silico_ manner. We believe this could be useful to optimize the devices and also help to answer questions in oncology.

## Acknowledgements

The authors would like to thank Tommaso Lorenzi for his comments concerning the derivation of our G-NSCH model and for the very interesting discussions we had about the modelling of tumor growth. The authors would also like to thank Alain Miranville for fruitful discussions about the compressible Navier-Stokes-Cahn-Hilliard model and diphasic fluid dynamics.
## Appendix A Derivation of the model

In this Appendix, we present the rigorous derivation of our G-NSCH model.

### Notation and definitions

We formulate our problem in Eulerian coordinates and in a smooth bounded domain \(\Omega\subset\mathbb{R}^{d}\) (where \(d=\{1,2,3\}\) is the dimension). The balance laws derived in the following sections are in local form. We have two cell populations in the model, where \(\rho_{1},\rho_{2}\) are the relative densities of populations \(1\) and \(2\), respectively. Thus, \(\rho_{i}\) represents the mass of the population \(M_{i}\) per volume occupied by the \(i\)-th phase \(V_{i}\), _i.e._ \[\rho_{i}=\frac{M_{i}}{V_{i}}.\] Then, we define the volume fractions \(\varphi_{1},\varphi_{2}\) as the volume occupied by the \(i\)-th phase over the total volume of the mixture \[\varphi_{i}=\frac{V_{i}}{V}.\] Therefore, the mass density of population \(i\), which is the mass of population \(i\) in volume \(V\), is given by \[\phi_{i}=\rho_{i}\varphi_{i}.\] We further assume that the fluid is saturated, _i.e._ \[\varphi_{1}+\varphi_{2}=1.\] The total density of the mixture is then given by \[\rho=\phi_{1}+\phi_{2}.\]

Figure 5: Temporal evolution of the tumor density in the case \(\alpha_{1}=\alpha_{2}\).

We also introduce the mass fractions \(c_{i}=M_{i}/M\) and we have the relations \[\rho c_{i}=\phi_{i},\quad\text{and}\quad c_{1}=(1-c_{2}).\] (A.1) We denote by \(p\) the pressure inside the mixture and by \(\mathbf{v}_{1},\mathbf{v}_{2}\) the velocities of the different phases.
We use a mass-averaged mixture velocity \[\mathbf{v}=\frac{1}{\rho}\left(\phi_{1}\mathbf{v}_{1}+\phi_{2}\mathbf{v}_{2}\right).\] (A.2) We define the material derivative for a generic function \(g\) (scalar or vector-valued) by \[\frac{\mathrm{D}g}{\mathrm{D}t}=\frac{\partial g}{\partial t}+\mathbf{v}\cdot\nabla g,\] (A.3) and indicate the definition of the differential operator \[\mathbf{v}\cdot\nabla g=\sum_{j=1}^{d}\mathbf{v}_{j}\frac{\partial g}{\partial x_{j}}.\] In the following, we denote vectors with bold roman letters and we use bold Greek letters to denote second-order tensors.

Figure 6: Temporal evolution of the tumor density in the case \(\alpha_{1}=0.8\) and \(\alpha_{2}=1.2\).

### Mass balance equations

We assume that each component has its own velocity and that component 1 is proliferating. Since the fluid is saturated, _i.e._ \(c_{1}+c_{2}=1\), we have the mass balance equations for each component \[\begin{cases}\frac{\partial\phi_{1}}{\partial t}+\operatorname{div}\left(\phi_{1}\mathbf{v}_{1}\right)=F_{1}(\rho,c_{1},c_{2}),\\ \frac{\partial\phi_{2}}{\partial t}+\operatorname{div}\left(\phi_{2}\mathbf{v}_{2}\right)=F_{2}(\rho,c_{1},c_{2}).\end{cases}\] (A.4) In the previous system, the functions \(F_{i}(\rho,c_{1},c_{2})\) (\(i=1,2\)) act as source terms of mass.
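As a quick numerical illustration (not part of the original derivation; the sampled densities and velocities are arbitrary), the definitions (A.1)–(A.2) imply that the mass fractions sum to one and that the mass-averaged velocity is the mass-fraction-weighted average of the phase velocities:

```python
import math
import random

random.seed(0)
rho1, rho2 = 1.1, 0.7              # phase densities (arbitrary sample values)
varphi1 = random.uniform(0.01, 0.99)
varphi2 = 1.0 - varphi1            # saturation of the volume fractions
phi1, phi2 = rho1 * varphi1, rho2 * varphi2   # mass densities phi_i = rho_i varphi_i
rho = phi1 + phi2                  # total density of the mixture
c1, c2 = phi1 / rho, phi2 / rho    # mass fractions, relations (A.1)
v1, v2 = 0.3, -0.5                 # phase velocities (arbitrary sample values)
v = (phi1 * v1 + phi2 * v2) / rho  # mass-averaged velocity (A.2)

assert math.isclose(c1 + c2, 1.0)
assert math.isclose(v, c1 * v1 + c2 * v2)
```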
Summing the two equations, we obtain the continuity equation for the total density of the mixture, and using the mass fractions (denoting \(c_{1}=c\)) and the relations (A.1), we obtain the balance equation for the density of the mixture \[\frac{\partial\rho}{\partial t}+\operatorname{div}\left(\rho\mathbf{v}\right) =F_{1}+F_{2}=:F_{\rho}.\] (A.5) To obtain a system analogous to (A.4), we rewrite the first equation of (A.4) using the definition of the mass fraction (A.1) to obtain \[\frac{\partial\rho c}{\partial t}+\operatorname{div}\left(\rho c\mathbf{v}_{ 1}\right)=F_{1}(\rho,c,1-c)=:F_{c}.\] (A.6) The mass of the component 1 is transported by the average velocity \(\mathbf{v}\) and the remaining diffusive flux \(\mathbf{J}_{1}=\rho c\left(\mathbf{v}-\mathbf{v}_{1}\right)\). Therefore, we can replace the previous equation by \[\frac{\partial\rho c}{\partial t}+\operatorname{div}\left(\rho c\mathbf{v} \right)=\operatorname{div}\left(\mathbf{J}_{1}\right)+F_{c}.\] Then, using the definition of the material derivative (A.3) and the mass balance equation for the total mixture (A.5), the left-hand side of the previous equation reads \[\frac{\partial\rho c}{\partial t}+\operatorname{div}\left(\rho c\mathbf{v} \right)=\rho\frac{\mathrm{D}c}{\mathrm{D}t}+c\left[\frac{\partial\rho}{ \partial t}+\operatorname{div}\left(\rho\mathbf{v}\right)\right]=\rho\frac{ \mathrm{D}c}{\mathrm{D}t}+cF_{\rho}.\] Altogether, we obtain the balance equation for the mass fraction of the component 1 \[\rho\frac{\mathrm{D}c}{\mathrm{D}t}=\operatorname{div}\left(\mathbf{J}_{1} \right)+F_{c}-cF_{\rho}.\] (A.7) Since \(c_{2}=1-c\), solving the equations (A.5) and (A.7) is equivalent to solving the system (A.4). In the following, we refer to \(c\) as the order parameter (terminology often used in the framework of the Cahn-Hilliard model [16, 17]). 
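The algebra leading to (A.7) can be checked pointwise in one space dimension, treating the values of the fields and of their partial derivatives at a fixed point as independent numbers (a sanity check, not part of the original derivation):

```python
import math
import random

random.seed(0)

def check_identity():
    # sample pointwise values of the fields and their partial derivatives
    r, c, v = [random.uniform(0.5, 2.0) for _ in range(3)]
    rx, cx, vx, ct, F = [random.uniform(-1.0, 1.0) for _ in range(5)]
    # continuity (A.5) in 1D fixes rho_t:  rho_t = F_rho - (rho v)_x
    rt = F - (rx * v + r * vx)
    # left-hand side: (rho c)_t + div(rho c v)
    lhs = (rt * c + r * ct) + (rx * c * v + r * cx * v + r * c * vx)
    # right-hand side: rho Dc/Dt + c F_rho
    rhs = r * (ct + v * cx) + c * F
    return math.isclose(lhs, rhs, rel_tol=1e-12, abs_tol=1e-12)

assert all(check_identity() for _ in range(100))
```

The same pointwise cancellation underlies the analogous rearrangement of the momentum equation in the next subsection.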
### Balance of linear momentum We write the balance of linear momentum [22], which describes the evolution of the velocity \(\mathbf{v}\) due to internal stresses. Indeed, we neglect the effect of any external forces, including gravity. Following continuum mechanics, the Cauchy stress tensor gives the stresses acting inside the mixture due to viscous and non-viscous effects. An additional stress must be taken into account to represent the effect of concentration gradients [24]. Altogether, we assume that the stress tensor is a function of the total density \(\rho\), the order parameter \(c\) (i.e. the mass fraction of population 1), its gradient \(\nabla c\), and the total velocity of the mixture \(\mathbf{v}\) i.e. \[\boldsymbol{\sigma}=\boldsymbol{\sigma}(\rho,c,\nabla c,\mathbf{v}).\] The friction around the pores of the medium is modeled by a drag term in the balance equation [54] with a permeability coefficient \(\kappa(\rho,c)=\kappa_{1}(\rho,c)+\kappa_{2}(\rho,c)\) (the sum of the two friction coefficients for each component of the mixture). The permeability coefficient relates the properties of the fluid and the porous medium. For each dimension (for example if \(d=3\), then \(j=\{x,y,z\}\)), the balance of linear momentum reads [22] \[\frac{\partial\rho\mathbf{v}_{j}}{\partial t}+\operatorname{div}\left(\rho \mathbf{v}_{j}\mathbf{v}\right)=\operatorname{div}\left(\boldsymbol{\sigma} \right)_{j}-\kappa(\rho,c)\mathbf{v}_{j}+F_{\mathbf{v}_{j}},\] where \(F_{\mathbf{v}_{j}}(\mathbf{v}_{j},\rho)\) represents the gain or loss of velocity in the \(j\)-th direction from different effects such as external forces. 
Then, using the continuity equation (A.5), we can rearrange the left-hand side to obtain \[\frac{\partial\rho\mathbf{v}_{j}}{\partial t}+\operatorname{div}\left(\rho\mathbf{v}_{j}\mathbf{v}\right)=\rho\frac{\operatorname{D}\mathbf{v}_{j}}{\operatorname{D}t}+\mathbf{v}_{j}\left[\frac{\partial\rho}{\partial t}+\operatorname{div}\left(\rho\mathbf{v}\right)\right]=\rho\frac{\operatorname{D}\mathbf{v}_{j}}{\operatorname{D}t}+\mathbf{v}_{j}F_{\rho}.\] Therefore, we have \[\rho\frac{\operatorname{D}\mathbf{v}_{j}}{\operatorname{D}t}=\operatorname{div}(\boldsymbol{\sigma})_{j}-\left(\kappa(\rho,c)+F_{\rho}\right)\mathbf{v}_{j}+F_{\mathbf{v}_{j}}.\] Then, we can rewrite the balance of linear momentum in the more compact form \[\rho\frac{\operatorname{D}\mathbf{v}}{\operatorname{D}t}=\operatorname{div}(\boldsymbol{\sigma})-\left(\kappa(\rho,c)+F_{\rho}\right)\mathbf{v}+F_{\mathbf{v}},\] (A.8) where \(F_{\mathbf{v}}(\mathbf{v},\rho)\) is the vector of coordinates \(F_{\mathbf{v}_{j}}\).

### Energy balance

The total energy of the mixture is the sum of the kinetic energy \(\rho\frac{1}{2}|\mathbf{v}|^{2}\) and of the internal energy \(\rho u\), where \(u=u(\rho,c,\nabla c)\) is a specific internal energy. Compared to the classical conservation law for the total energy, we have an additional energy flux \(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\). Indeed, due to the interface region, surface effects must be taken into account. Following this direction, Gurtin [30] proposed to include in the second law of thermodynamics the effect of an additional force called the _microscopic stress_, which is related to forces acting at the microscopic scale. We denote this supplementary stress by \(\boldsymbol{\tau}\).
Since we assume that the system is maintained in an isothermal state, the balance equation for the energy is given by [22] \[\begin{split}\frac{\partial}{\partial t}\left(\rho\frac{1}{2}| \mathbf{v}|^{2}+\rho u\right)&+\operatorname{div}\left(\rho \left(\frac{1}{2}|\mathbf{v}|^{2}+u\right)\mathbf{v}\right)\\ &=\operatorname{div}\left(\boldsymbol{\sigma}^{T}\mathbf{v} \right)+\operatorname{div}\left(\boldsymbol{\tau}\frac{\operatorname{D}c}{ \operatorname{D}t}\right)-\operatorname{div}\left(\mathbf{q}\right)+\rho g+c_ {\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}},\end{split}\] (A.9) where \(\mathbf{q}\) is the heat flux and \(\rho g\) is the density of heat sources to maintain the temperature constant. The last three terms in Equation (A.9) account for the energy supply coming from the mass and velocity sources (see _e.g._[31, 40]). The prefactors \(c_{\rho},c_{c},c_{\mathbf{v}}\) will be determined later to satisfy the free energy imbalance. Then, repeating the same calculations on the left-hand side to use the balance of mass (A.5), we get \[\frac{\partial}{\partial t}\left(\rho\frac{1}{2}|\mathbf{v}|^{2}+\rho u\right) +\operatorname{div}\left(\rho\left(\frac{1}{2}|\mathbf{v}|^{2}+u\right) \mathbf{v}\right)=\rho\left[\frac{\operatorname{D}}{\operatorname{D}t}\left( \frac{1}{2}|\mathbf{v}|^{2}+u\right)\right]+\left(\frac{1}{2}|\mathbf{v}|^{2} +u\right)F_{\rho}.\] Applying the chain rule to the kinetic part, we get \[\rho\frac{\operatorname{D}}{\operatorname{D}t}\left(\frac{1}{2}|\mathbf{v}|^{2 }\right)=\rho\mathbf{v}\cdot\frac{\operatorname{D}\mathbf{v}}{\operatorname {D}t},\] and using the balance of linear momentum (A.8), we obtain \[\rho\mathbf{v}\cdot\frac{\operatorname{D}\mathbf{v}}{\operatorname{D}t}= \mathbf{v}\cdot\operatorname{div}(\boldsymbol{\sigma})-\left(\kappa(\rho,c)+F_ {\rho}\right)|\mathbf{v}|^{2}+F_{\mathbf{v}}\cdot\mathbf{v}.\] Using these previous equations inside (A.9), we obtain the balance equation for the internal energy 
\[\rho\frac{\mathrm{D}u}{\mathrm{D}t}=\mathrm{div}\left(\boldsymbol{\sigma}^{T}\mathbf{v}\right)-\mathbf{v}\cdot\mathrm{div}\left(\boldsymbol{\sigma}\right)+\mathrm{div}\left(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\right)+\left(\kappa(\rho,c)+F_{\rho}\right)|\mathbf{v}|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}\] \[-\mathrm{div}\left(\mathbf{q}\right)+\rho g-\left(\frac{1}{2}|\mathbf{v}|^{2}+u\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}.\] However, since \[\mathbf{v}\cdot\left(\mathrm{div}\left(\boldsymbol{\sigma}\right)\right)-\mathrm{div}\left(\boldsymbol{\sigma}^{T}\mathbf{v}\right)=-\boldsymbol{\sigma}:\nabla\mathbf{v},\] where \(\nabla\mathbf{v}=\left(\partial_{x_{j}}\mathbf{v}_{i}\right)_{i,j=1,\ldots,d}\) is the Jacobian matrix and \(A\!:\!B=\!\sum_{i,j}A_{ij}B_{ij}\) for two matrices \(A,B\). Altogether, we have the balance equation for the internal energy \[\rho\frac{\mathrm{D}u}{\mathrm{D}t}=\boldsymbol{\sigma}:\nabla\mathbf{v}+\mathrm{div}\left(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\right)+\left(\kappa(\rho,c)+F_{\rho}\right)|\mathbf{v}|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}\] (A.10) \[-\mathrm{div}\left(\mathbf{q}\right)+\rho g-\left(\frac{1}{2}|\mathbf{v}|^{2}+u\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}.\]

### Entropy balance and Clausius-Duhem inequality

We aim to apply the second law of thermodynamics. To do so, we define the entropy \(s=s(\rho,c,\nabla c)\) and the Helmholtz free energy \(\mathcal{F}=\mathcal{F}(\rho,c,\nabla c)\), both related through the equation \[\mathcal{F}=u-Ts,\] (A.11) where \(T\) denotes the temperature.
From the mass balance equation (A.5), we have the entropy balance equation \[\frac{\partial\rho s}{\partial t}+\mathrm{div}(s\rho\mathbf{v})=\rho\frac{\mathrm{D}s}{\mathrm{D}t}+s\left[\frac{\partial\rho}{\partial t}+\mathrm{div}\left(\rho\mathbf{v}\right)\right]=\rho\frac{\mathrm{D}s}{\mathrm{D}t}+sF_{\rho}.\] (A.12) Then, using the definition of the Helmholtz free energy (A.11) and the balance of energy (A.10), we obtain \[\begin{split}\rho\frac{\mathrm{D}s}{\mathrm{D}t}&=-\frac{\rho}{T}\frac{\mathrm{D}\mathcal{F}}{\mathrm{D}t}+\frac{\rho}{T}\frac{\mathrm{D}u}{\mathrm{D}t}\\&=-\frac{\rho}{T}\frac{\mathrm{D}\mathcal{F}}{\mathrm{D}t}+\frac{1}{T}\big{[}\boldsymbol{\sigma}:\nabla\mathbf{v}+\mathrm{div}\left(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\right)+\left(\kappa(\rho,c)+F_{\rho}\right)|\mathbf{v}|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}\\&\qquad\qquad-\mathrm{div}\left(\mathbf{q}\right)+\rho g-\left(\frac{1}{2}|\mathbf{v}|^{2}+u\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}\big{]},\end{split}\] (A.13) where we have replaced the material derivative of the internal energy using its balance equation (A.10). The constitutive relations for the functions constituting the Navier-Stokes-Cahn-Hilliard model are often derived to satisfy the Clausius-Duhem inequality (Coleman-Noll procedure) [22]. Indeed, this inequality provides a set of restrictions for the dissipative mechanisms occurring in the system. However, in our case, due to the presence of source terms, we cannot ensure that this inequality holds without some assumptions on the proliferation and friction of the fluid around the pores. Therefore, we use here a different method: the Lagrange multipliers method. Indeed, the method of Liu [45] and Müller [53] is based on using Lagrange multipliers to derive a set of restrictions on the constitutive relations that can be applied even in the presence of source terms. Following classical thermodynamics [53], we state the second law as an entropy inequality, i.e., the Clausius-Duhem inequality in the local form [22] \[\rho\frac{\mathrm{D}s}{\mathrm{D}t}\geq-\mathrm{div}\left(\frac{\mathbf{q}}{T}\right)+\frac{\rho g}{T}+\mathrm{div}\left(\mathcal{J}\right),\] (A.14) where \(\mathcal{J}\) is the entropy flux.
The inequality (A.14) results from the fact that the entropy of the mixture can only increase. Using the equation (A.13), we obtain \[\begin{split}\frac{\rho}{T}\frac{\mathrm{D}\mathcal{F}}{\mathrm{D}t}-\frac{1}{T}&\big{[}\boldsymbol{\sigma}:\nabla\mathbf{v}+\mathrm{div}\left(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\right)+\left(\kappa(\rho,c)+F_{\rho}\right)\left|\mathbf{v}\right|^{2}\\&-F_{\mathbf{v}}\cdot\mathbf{v}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}\big{]}+\mathrm{div}\left(\mathcal{J}\right)\leq 0.\end{split}\] (A.15) Then, using the chain rule \[\frac{\mathrm{D}\mathcal{F}}{\mathrm{D}t}=\frac{\mathrm{D}\rho}{\mathrm{D}t}\frac{\partial\mathcal{F}}{\partial\rho}+\frac{\mathrm{D}c}{\mathrm{D}t}\frac{\partial\mathcal{F}}{\partial c}+\frac{\mathrm{D}\nabla c}{\mathrm{D}t}\cdot\frac{\partial\mathcal{F}}{\partial\nabla c},\] and \[\frac{\mathrm{D}\nabla c}{\mathrm{D}t}=\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]-\left(\nabla\mathbf{v}\right)^{T}\nabla c,\quad\frac{\mathrm{D}\rho}{\mathrm{D}t}=-\rho\mathrm{div}(\mathbf{v})+F_{\rho},\] in the entropy inequality (A.15), we obtain \[\begin{split}\rho&\left[\left(-\rho\mathrm{div}(\mathbf{v})+F_{\rho}\right)\frac{\partial\mathcal{F}}{\partial\rho}+\frac{\mathrm{D}c}{\mathrm{D}t}\frac{\partial\mathcal{F}}{\partial c}+\left(\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]-\left(\nabla\mathbf{v}\right)^{T}\nabla c\right)\cdot\frac{\partial\mathcal{F}}{\partial\nabla c}\right]-\mathrm{div}\left(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\right)-\boldsymbol{\sigma}:\nabla\mathbf{v}\\&-\left[\left(\kappa(\rho,c)+F_{\rho}\right)\left|\mathbf{v}\right|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}\right]+T\mathrm{div}\left(\mathcal{J}\right)\leq 0.\end{split}\] (A.16) By the chain rule, we have \[\mathrm{div}\left(\boldsymbol{\tau}\frac{\mathrm{D}c}{\mathrm{D}t}\right)=\boldsymbol{\tau}\cdot\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]+\frac{\mathrm{D}c}{\mathrm{D}t}\mathrm{div}\left(\boldsymbol{\tau}\right).\] Furthermore, we know that \[-\rho^{2}\mathrm{div}\left(\mathbf{v}\right)\frac{\partial\mathcal{F}}{\partial\rho}=-\rho^{2}\frac{\partial\mathcal{F}}{\partial\rho}\mathbf{1}:\nabla\mathbf{v},\] and \[-\rho\left(\left(\nabla\mathbf{v}\right)^{T}\nabla c\right)\cdot\frac{\partial\mathcal{F}}{\partial\nabla c}=-\rho\left(\nabla c\otimes\frac{\partial\mathcal{F}}{\partial\nabla c}\right):\nabla\mathbf{v}.\] Gathering the previous three relations and reorganizing the terms of (A.16), we obtain \[\begin{split}&\left(-\rho^{2}\frac{\partial\mathcal{F}}{\partial\rho}\mathbf{1}-\rho\nabla c\otimes\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\sigma}\right):\nabla\mathbf{v}+\left(\rho\frac{\partial\mathcal{F}}{\partial c}-\mathrm{div}(\boldsymbol{\tau})\right)\frac{\mathrm{D}c}{\mathrm{D}t}\\&+\left(\rho\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\tau}\right)\cdot\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]+T\mathrm{div}\left(\mathcal{J}\right)\\&-\left[\left(\kappa(\rho,c)+F_{\rho}\right)\left|\mathbf{v}\right|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u-\rho\frac{\partial\mathcal{F}}{\partial\rho}\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}\right]\leq 0.\end{split}\] (A.17) Then, we use Liu's Lagrange multipliers method [45]. We denote by \(L_{c}\) the Lagrange multiplier associated with the mass fraction equation (A.7).
The method of Lagrange multipliers consists in setting the following local dissipation inequality, which has to hold for arbitrary values of \((\rho,c,\nabla\rho,\nabla c,\mathbf{v},p)\), \[\begin{split}-D_{\mathrm{iss}}:=&\left(-\rho^{2}\frac{\partial\mathcal{F}}{\partial\rho}\mathbf{1}-\rho\nabla c\otimes\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\sigma}\right):\nabla\mathbf{v}+\left(\rho\frac{\partial\mathcal{F}}{\partial c}-\mathrm{div}(\boldsymbol{\tau})-\rho L_{c}\right)\frac{\mathrm{D}c}{\mathrm{D}t}\\&+\left(\rho\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\tau}\right)\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]+T\mathrm{div}\left(\mathcal{J}\right)+L_{c}\mathrm{div}\left(\mathbf{J}_{1}\right)\\&-\left[\left(\kappa(\rho,c)+F_{\rho}\right)|\mathbf{v}|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u-\rho\frac{\partial\mathcal{F}}{\partial\rho}\right)F_{\rho}\right.\\&\left.\qquad+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}+L_{c}(F_{c}+cF_{\rho})\right]\leq 0.\end{split}\] (A.18) Since \[\mathrm{div}\left(L_{c}\mathbf{J}_{1}\right)=L_{c}\mathrm{div}\left(\mathbf{J}_{1}\right)+\nabla L_{c}\cdot\mathbf{J}_{1},\] we reorganize the terms of (A.18) to obtain \[\begin{split}-D_{\mathrm{iss}}:=&\left(-\rho^{2}\frac{\partial\mathcal{F}}{\partial\rho}\mathbf{1}-\rho\nabla c\otimes\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\sigma}\right):\nabla\mathbf{v}\\&+\left(\rho\frac{\partial\mathcal{F}}{\partial c}-\mathrm{div}(\boldsymbol{\tau})-\rho L_{c}\right)\frac{\mathrm{D}c}{\mathrm{D}t}+\left(\rho\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\tau}\right)\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]+\mathrm{div}\left(T\mathcal{J}+L_{c}\mathbf{J}_{1}\right)\\&-\nabla L_{c}\cdot\mathbf{J}_{1}\\&-\left[\left(\kappa(\rho,c)+F_{\rho}\right)|\mathbf{v}|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u-\rho\frac{\partial\mathcal{F}}{\partial\rho}\right)F_{\rho}\right.\\&\left.\qquad\qquad\qquad+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}+L_{c}(F_{c}+cF_{\rho})\right]\leq 0.\end{split}\] (A.19)

### Constitutive assumptions and model equations

First of all, we assume that the free energy density \(\mathcal{F}\) is of Ginzburg-Landau type and has the following form [16, 17] \[\mathcal{F}(\rho,c,\nabla c)\coloneqq\psi_{0}(\rho,c)+\frac{\gamma}{2}|\nabla c|^{2},\] (A.20) where \(\psi_{0}\) is the homogeneous free energy accounting for the processes of phase separation and the gradient term \(\frac{\gamma}{2}|\nabla c|^{2}\) represents the surface tension between the two phases. This free energy is the basis of the Cahn-Hilliard model which describes the phase separation occurring in binary mixtures.
Furthermore, as obtained in Wise _et al._ [61], the adhesion energy between different cell species is indeed well represented by such a choice of the free energy functional. To satisfy the inequality (A.19), we first choose \[\boldsymbol{\tau}\coloneqq\rho\frac{\partial\mathcal{F}}{\partial\nabla c}=\gamma\rho\nabla c.\] Then, we define the chemical potential \(\mu(\rho,c,\nabla c)\) by \[\mu\coloneqq\frac{\partial\mathcal{F}}{\partial c}-\frac{1}{\rho}\mathrm{div}(\boldsymbol{\tau})=\frac{\partial\mathcal{F}}{\partial c}-\frac{1}{\rho}\mathrm{div}\left(\rho\frac{\partial\mathcal{F}}{\partial\nabla c}\right)=\frac{\partial\psi_{0}}{\partial c}-\frac{\gamma}{\rho}\mathrm{div}\left(\rho\nabla c\right),\] which in turn gives a condition for the Lagrange multiplier \[L_{c}=\mu.\] (A.21) Using these constitutive relations, we have already canceled some terms in the entropy inequality \[\left(\rho\frac{\partial\mathcal{F}}{\partial c}-\mathrm{div}(\boldsymbol{\tau})-\rho L_{c}\right)\frac{\mathrm{D}c}{\mathrm{D}t}+\left(\rho\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\tau}\right)\nabla\left[\frac{\mathrm{D}c}{\mathrm{D}t}\right]=0.\] Then, using classical results on isothermal diffusion [22, 50], we have \[\mathcal{J}\coloneqq-\frac{\mu\mathbf{J}_{1}}{T},\] (A.22) and, using a generalized Fick's law, \[\mathbf{J}_{1}\coloneqq b(c)\nabla\mu,\] (A.23) where \(b(c)\) is a nonnegative mobility function that we will specify in the following. Combining the two constitutive relations for the diffusive fluxes (A.22) and (A.23) with (A.21), we obtain \[\mathrm{div}\left(T\mathcal{J}+L_{c}\mathbf{J}_{1}\right)-\nabla L_{c}\cdot\mathbf{J}_{1}=-b(c)|\nabla\mu|^{2}\leq 0.\] Following [4, 50], we define the pressure inside the mixture by \[p\coloneqq\rho^{2}\frac{\partial\psi_{0}}{\partial\rho}.\] (A.24) From standard rheology, we assume that the fluid obeys Newton's rheological laws.
The stress tensor is composed of two parts, the viscous \(\tilde{\mathbf{P}}\) and non-viscous \(\mathbf{P}\) contributions to the stress \[\boldsymbol{\sigma}\coloneqq\mathbf{P}+\tilde{\mathbf{P}},\] (A.25) and we have by standard continuum mechanics (see _e.g._ [4, 10, 22]) \[\begin{cases}\mathbf{P}=-\left(p-\frac{\gamma}{2}|\nabla c|^{2}\right)\mathbf{1}-\gamma\rho\nabla c\otimes\nabla c,\\ \tilde{\mathbf{P}}=\nu(c)\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{T}\right)+\lambda(c)\left(\operatorname{div}\left(\mathbf{v}\right)\right)\mathbf{1}.\end{cases}\] (A.26) The second term in the non-viscous part of the stress (namely \(-\gamma\left(\rho\nabla c\otimes\nabla c\right)\)) represents capillary stresses that act at the interface of the two populations. Furthermore, we assume that the bulk viscosity is zero and, consequently, we set \(\lambda(c)=-\frac{2}{3}\nu(c)\). This form of the stress tensor is also the one used for Navier-Stokes fluids [24]. Using (A.26), we can cancel a new term in (A.19) \[\left(-\rho^{2}\frac{\partial\mathcal{F}}{\partial\rho}\mathbf{1}-\rho\nabla c\otimes\frac{\partial\mathcal{F}}{\partial\nabla c}-\boldsymbol{\sigma}\right):\nabla\mathbf{v}=0.\] Therefore, the remaining terms of the entropy inequality are the ones associated with proliferation and friction.
The last step to satisfy the entropy inequality is to choose the remaining Lagrange multipliers \(c_{\rho},c_{c},c_{\mathbf{v}}\) such that \[-\big{[}\left(\kappa(\rho,c)+F_{\rho}\right)|\mathbf{v}|^{2}-F_{\mathbf{v}}\cdot\mathbf{v}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u-\rho\frac{\partial\mathcal{F}}{\partial\rho}\right)F_{\rho}+c_{\rho}F_{\rho}+c_{c}F_{c}+c_{\mathbf{v}}F_{\mathbf{v}}+L_{c}(F_{c}+cF_{\rho})\big{]}\leq 0.\] Reorganizing the terms, we have \[-\kappa(\rho,c)|\mathbf{v}|^{2}-F_{\rho}\left[c_{\rho}+|\mathbf{v}|^{2}-\left(\frac{1}{2}|\mathbf{v}|^{2}+u-\rho\frac{\partial\mathcal{F}}{\partial\rho}\right)+\mu c\right]-F_{\mathbf{v}}\left[c_{\mathbf{v}}-\mathbf{v}\right]-F_{c}\left[c_{c}+\mu\right]\leq 0.\] The obvious choices are of course \[\begin{cases}c_{\rho}=-|\mathbf{v}|^{2}+\left(\frac{1}{2}|\mathbf{v}|^{2}+u-\rho\frac{\partial\mathcal{F}}{\partial\rho}\right)-\mu c,\\ c_{\mathbf{v}}=\mathbf{v},\\ c_{c}=-\mu.\end{cases}\] From the previous constitutive relations and choices of Lagrange multipliers, we have that the dissipation inequality (A.19) is satisfied.

### Summary of the model's equations

Using the previous constitutive relations, our general model is the following compressible Navier-Stokes-Cahn-Hilliard system \[\begin{split}\frac{\partial\rho}{\partial t}&=-\operatorname{div}\left(\rho\mathbf{v}\right)+F_{\rho},\\ \rho\frac{\mathrm{D}c}{\mathrm{D}t}&=\operatorname{div}\left(b(c)\nabla\mu\right)+F_{c}-cF_{\rho},\\ \rho\mu&=-\gamma\mathrm{div}\left(\rho\nabla c\right)+\rho\frac{\partial\psi_{0}}{\partial c},\\ \rho\frac{\mathrm{D}\mathbf{v}}{\mathrm{D}t}&=-\left[\nabla p+\gamma\mathrm{div}\left(\rho\nabla c\otimes\nabla c\right)\right]+\operatorname{div}\left(\nu(c)\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{T}\right)\right)\\&\qquad\qquad\qquad-\frac{2}{3}\nabla\left(\nu(c)\left(\operatorname{div}\left(\mathbf{v}\right)\right)\right)-\left(\kappa(\rho,c)+F_{\rho}\right)\mathbf{v}+F_{\mathbf{v}},\end{split}\] (A.27) with \(p\) defined in (A.24).
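The claim that the multiplier choices above, together with \(L_{c}=\mu\), reduce the source-term bracket of (A.19) to \(-\kappa(\rho,c)|\mathbf{v}|^{2}\leq 0\) can be verified pointwise (a sanity check, not part of the original text; \(\mathbf{v}\) and \(F_{\mathbf{v}}\) are taken as scalars for simplicity, and all samples are arbitrary):

```python
import math
import random

random.seed(2)
for _ in range(100):
    v, u, dFdrho, mu, c = [random.uniform(-2.0, 2.0) for _ in range(5)]
    rho = random.uniform(0.1, 2.0)
    kappa = random.uniform(0.0, 2.0)             # friction coefficient >= 0
    F_rho, F_c, F_v = [random.uniform(-1.0, 1.0) for _ in range(3)]
    A = 0.5 * v**2 + u - rho * dFdrho
    # Lagrange-multiplier choices from the text
    c_rho = -v**2 + A - mu * c
    c_v = v
    c_c = -mu
    # source-term bracket of (A.19), with L_c = mu
    bracket = ((kappa + F_rho) * v**2 - F_v * v - A * F_rho
               + c_rho * F_rho + c_c * F_c + c_v * F_v
               + mu * (F_c + c * F_rho))
    assert math.isclose(-bracket, -kappa * v**2, rel_tol=1e-9, abs_tol=1e-9)
```

All proliferation and external-source contributions cancel exactly, leaving only the nonpositive friction dissipation.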
## Appendix B Model reduction, general assumptions and biologically relevant choices of the model's functions

### Specific choices of functionals and model reductions

**Problem 1: General compressible NSCH with friction term and mass transfer.** Assuming no creation of mass nor transfer of mass from the exterior of the system, we have \[F_{c}=-F_{1-c},\] (B.1) leading to mass conservation \[F_{\rho}=0.\] (B.2) Furthermore, we assume no external source of velocity and energy, leading to \[F_{\mathbf{v}}=0,\text{ and }F_{u}=0.\] (B.3) Furthermore, using the same simplifying assumption as in Abels and Feireisl [4] to avoid vacuum zones, our final reduced system of equations is \[\frac{\partial\rho}{\partial t}+\text{div}\left(\rho\mathbf{v}\right)=0,\] (B.4) \[\frac{\partial\rho c}{\partial t}+\text{div}\left(\rho c\mathbf{v}\right)=\text{div}\left(b(c)\nabla\mu\right)+F_{c},\] (B.5) \[\rho\mu=-\gamma\Delta c+\rho\frac{\partial\psi_{0}}{\partial c},\] (B.6) \[\frac{\partial\rho\mathbf{v}}{\partial t}+\text{div}\left(\rho\mathbf{v}\otimes\mathbf{v}\right)=-\left[\nabla p+\gamma\text{div}\left(\nabla c\otimes\nabla c-\frac{1}{2}|\nabla c|^{2}\mathbf{1}\right)\right]+\text{div}\left(\nu(c)\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{T}\right)\right)-\frac{2}{3}\nabla\left(\nu(c)\left(\text{div}\left(\mathbf{v}\right)\right)\right)-\kappa(\rho,c)\mathbf{v}.\] (B.7)

**Remark B.1**.: In this article, we prove the existence of global weak solutions for Problem 1 and propose an efficient structure- and bounds-preserving scheme.

**Problem 2: Biologically relevant variant of the system.** For this variant of the system, we assume the production of mass and neglect certain effects. Namely, we neglect inertia effects and the viscosity of the fluid, and assume no external source of velocity.
This leads to the momentum equation \[\nabla p+\kappa(\rho,c)\mathbf{v}=-\gamma\text{div}\left(\nabla c\otimes\nabla c-\frac{1}{2}|\nabla c|^{2}\mathbf{1}\right)-F_{\rho}\mathbf{v}.\] Then, we assume that one cell population proliferates while the other does not, leading to \[F_{c}=F_{\rho}=\rho cP_{c}(p),\quad\text{and}\quad F_{1-c}=0,\] with a pressure-dependent proliferation rate \(P_{c}(p)\geq 0\). The growth function \(P_{c}(p)\) represents the capacity of cells to divide according to the pressure exerted on them. It is well known that cells are able to divide as long as the pressure is not too large. Once a certain pressure \(p_{\text{max}}\) is reached, cells enter a quiescent state. Therefore, we assume that \[P_{c}^{\prime}(p)\leq 0,\quad\text{and}\quad P_{c}(p)=0\quad\text{for}\quad p>p_{\text{max}}.\] (B.8) Combining these changes, the model becomes \[\begin{cases}\frac{\partial\rho}{\partial t}+\operatorname{div}\left(\rho{\bf v}\right)=\rho cP_{c}(p),\\ \frac{\partial\rho c}{\partial t}+\operatorname{div}\left(\rho c{\bf v}\right)=\operatorname{div}\left(b(c)\nabla\mu\right)+\rho cP_{c}(p),\\ \rho\mu=-\gamma\Delta c+\rho\frac{\partial\psi_{0}}{\partial c},\\ \nabla p+\kappa(\rho,c){\bf v}=-\gamma\operatorname{div}\left(\nabla c\otimes\nabla c-\tfrac{1}{2}|\nabla c|^{2}{\bf 1}\right)-\rho cP_{c}(p){\bf v}.\end{cases}\] (B.9)

### Biologically consistent choices of functions

As explained in the derivation of the model, the free energy density \(\mathcal{F}\) is the sum of two terms: \(\frac{\gamma}{2}|\nabla c|^{2}\), taking into account the surface tension effects existing between the phases of the mixture, and the potential \(\psi_{0}(\rho,c)\), representing the cell-cell interactions and pressure. Thus, we choose \[\psi_{0}(\rho,c)=\psi_{e}(\rho)+\psi_{\text{mix}}(\rho,c),\] (B.10) with \(\psi_{\text{mix}}(\rho,c)=H(c)\log\rho+Q(c)\).
Then, using the constitutive relation for the pressure, we have \[p(\rho,c)=\rho^{2}\frac{\partial\psi_{0}}{\partial\rho}=p_{e}(\rho)+\rho H(c).\] (B.11) The function \(b(c)\) is the active mobility of the cells of the growing population. Let us explain how the choices of functions for the free energy density and mobility are motivated by biological observations. To satisfy the conditions (2.7), we propose to choose \[b(c)=C_{b}c(1-c)^{\alpha},\quad\alpha\geq 1,\] (B.12) where \(C_{b}\) is a positive constant.

**Remark B.2**.: Note that we cannot prove the existence of solutions for this mobility because it is degenerate. Instead, in the analysis part of this paper, we use an approximate mobility bounded away from zero. The difficulty with the degenerate mobility arises when one tries to identify the chemical potential \(\mu\) when the parameters in the approximating schemes are removed. Indeed, the estimate of the dissipation of the energy \(\int_{\Omega}b(c)|\nabla\mu|^{2}\,\mathrm{d}x\) no longer provides estimates on \(\nabla\mu\).

For the pressure, we use a power law such that \[p_{e}(\rho)=\frac{1}{a-1}\rho^{a-1}.\] (B.13) For \(H(c)\) and \(Q(c)\), two choices can be considered depending on the behavior of the cells we want to represent. If the two cell populations exert attractive forces when they recognize cells of the same type and repulsion with the other type, the potential has to take the form of a double well for which the two stable phases are located at the bottom of the two wells (see _e.g._ Figure 7a). This is a situation close to the phase separation in binary fluids. Thermodynamically consistent potentials are of Ginzburg-Landau type with the presence of logarithmic terms. Even though the double-well form of the potential was originally used for applications dealing with materials, it can also be motivated for biological purposes.
Indeed, considering an application where the mixture is saturated with two cell types, a double-well potential is biologically relevant and reflects correctly the expected behavior of cells: cells of the same type attract each other at low densities, and beyond a certain density they start to repel each other to avoid the creation of overcrowded zones. A typical example of biologically relevant double-well potential is given by \[\psi_{\text{mix}}=\frac{1}{2}\left(\alpha_{1}(1-c)\log(\rho(1-c))+\alpha_{2}c\log(\rho c)\right)-\frac{\theta}{2}(c-\frac{1}{2})^{2}+k,\] (B.14) thus giving \[H(c)=\frac{1}{2}\left(\alpha_{1}(1-c)+\alpha_{2}c\right),\quad Q(c)=\frac{1}{2}\left(\alpha_{1}(1-c)\log(1-c)+\alpha_{2}c\log(c)\right)-\frac{\theta}{2}(c-\frac{1}{2})^{2}+k,\] where \(\theta>1\) and \(k\) is an arbitrary constant. To meet the phenomenological observations of the interaction between cells when the mixture is composed of only one cell population (and the other component of the mixture is supposed to be much more compressible), a single-well potential seems more appropriate [19, 15]. Indeed, when the distance between cells falls below a certain value (_i.e._ if the cell density is large enough), cells are attracted to each other. Then, there exists a threshold value, called the mechanical equilibrium, for which \(\rho H(c_{e})+Q(c_{e})=0\), _i.e._ there is an equilibrium between attractive and repulsive forces. For larger cell densities, cells are packed too close to each other and thus experience a repulsive force. When cells are so packed that they fill the whole control volume, the repulsive force becomes infinite due to the pressure. Such a functional is depicted in Figure 7b.
A typical example of single-well potential which has been used for the modeling of living tissue and cancer [7, 19] is \[\psi_{\text{mix}}(\rho,c)=-(1-c_{e})\log(\rho(1-c))-\frac{c^{3}}{3}-(1-c_{e}) \frac{c^{2}}{2}-(1-c_{e})c+k,\] (B.15) thus giving \[H(c)=-(1-c_{e}),\quad Q(c)=-(1-c_{e})\log(1-c)-\frac{c^{3}}{3}-(1-c_{e})\frac{ c^{2}}{2}-(1-c_{e})c+k,\] (B.16) where \(k\) is an arbitrary constant.
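As a final sanity check (not part of the original text; the value of \(c_{e}\) is an arbitrary sample), one can verify numerically that the single-well potential (B.15) indeed decomposes as \(\psi_{\text{mix}}(\rho,c)=H(c)\log\rho+Q(c)\) with \(H\) and \(Q\) as in (B.16), since \(\log(\rho(1-c))=\log\rho+\log(1-c)\):

```python
import math
import random

random.seed(4)
c_e, k = 0.6, 0.0  # illustrative values; c_e is the mechanical-equilibrium fraction

def psi_mix(rho, c):
    # single-well potential (B.15)
    return (-(1 - c_e) * math.log(rho * (1 - c)) - c**3 / 3
            - (1 - c_e) * c**2 / 2 - (1 - c_e) * c + k)

def H(c):
    return -(1 - c_e)

def Q(c):
    # (B.16)
    return (-(1 - c_e) * math.log(1 - c) - c**3 / 3
            - (1 - c_e) * c**2 / 2 - (1 - c_e) * c + k)

for _ in range(100):
    rho = random.uniform(0.1, 5.0)
    c = random.uniform(0.0, 0.99)
    assert math.isclose(psi_mix(rho, c), H(c) * math.log(rho) + Q(c),
                        rel_tol=1e-12, abs_tol=1e-12)
```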
2307.13638
The Giant Radio Array for Neutrino Detection
Ultra-high-energy cosmic neutrinos (UHE), with energies above 100 PeV, are unparalleled probes of the most energetic astrophysical sources and weak interactions at energies beyond the reach of accelerators. GRAND is an envisioned observatory of UHE particles - neutrinos, cosmic rays, and gamma rays - consisting of 200,000 radio antennas deployed in sub-arrays at different locations worldwide. GRAND aims to detect the radio emission from air showers induced by UHE particle interactions in the atmosphere and underground. For neutrinos, it aims to reach a flux sensitivity of $\sim 10^{-10}$ GeV cm$^{-2}$ s$^{-1}$ sr$^{-1}$, with a sub-degree angular resolution, which would allow it to test the smallest predicted diffuse fluxes of UHE neutrinos and to discover point sources. The GRAND Collaboration operates three prototype detector arrays simultaneously: GRAND@Nan\c{c}ay in France, GRANDProto300 in China, and GRAND@Auger in Argentina. The primary purpose of GRAND@Nan\c cay is to serve as a testbench for hardware and triggering systems. On the other hand, GRANDProto300 and GRAND@Auger are exploratory projects that pave the way for future stages of GRAND. GRANDProto300 is being built to demonstrate autonomous radio-detection of inclined air showers and study cosmic rays near the proposed transition between galactic and extragalactic sources. All three arrays are in the commissioning stages. It is expected that by 2028, the detector units of the final design could be produced and deployed, marking the establishment of two GRAND10k arrays in the Northern and Southern hemispheres. We will survey preliminary designs, simulation results, construction plans, and the extensive research program made possible by GRAND.
JoΓ£o R. T. de Mello Neto
2023-07-25T16:41:33Z
http://arxiv.org/abs/2307.13638v1
# The Giant Radio Array for Neutrino Detection

###### Abstract:

Ultra-high-energy (UHE) cosmic neutrinos, with energies above 100 PeV, are unparalleled probes of the most energetic astrophysical sources and of weak interactions at energies beyond the reach of accelerators. GRAND is an envisioned observatory of UHE particles - neutrinos, cosmic rays, and gamma rays - consisting of 200,000 radio antennas deployed in sub-arrays at different locations worldwide. GRAND aims to detect the radio emission from air showers induced by UHE particle interactions in the atmosphere and underground. For neutrinos, it aims to reach a flux sensitivity of \(\sim 10^{-10}\) GeV cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\), with a sub-degree angular resolution, which would allow it to test the smallest predicted diffuse fluxes of UHE neutrinos and to discover point sources. The GRAND Collaboration operates three prototype detector arrays simultaneously: GRAND@Nançay in France, GRANDProto300 in China, and GRAND@Auger in Argentina. The primary purpose of GRAND@Nançay is to serve as a testbench for hardware and triggering systems. On the other hand, GRANDProto300 and GRAND@Auger are exploratory projects that pave the way for future stages of GRAND. GRANDProto300 is being built to demonstrate autonomous radio-detection of inclined air showers and to study cosmic rays near the proposed transition between galactic and extragalactic sources. All three arrays are in the commissioning stages. It is expected that by 2028, the detector units of the final design could be produced and deployed, marking the establishment of two GRAND10k arrays in the Northern and Southern hemispheres. We will survey preliminary designs, simulation results, construction plans, and the extensive research program made possible by GRAND.

## 1 Introduction

Ultra-high-energy cosmic rays (UHECRs) are atomic nuclei with energies exceeding approximately \(10^{18}\) electron volts (eV), and their origins remain a puzzle [1].
The mechanisms by which they attain such extreme energies are still poorly understood. Throughout their journey from the point of acceleration to their arrival at Earth, cosmic rays interact with matter and radiation fields along their trajectory, producing numerous secondary particles, including neutrinos and photons. This production establishes a significant multi-messenger connection. UHECRs are deflected from their original paths by intervening magnetic fields and are attenuated by cosmic photon backgrounds during their propagation. UHE photons are similarly attenuated. In contrast, neutrinos travel to Earth largely unaffected by intervening obstacles, making them valuable messengers for exploring the vast reaches of the high-energy, large-redshift non-thermal Universe.

The Giant Radio Array for Neutrino Detection (GRAND) [2] is a proposed large-scale observatory specifically designed to unravel and investigate the sources of UHECRs. One of its primary objectives is to discover and study UHE neutrinos. GRAND will detect the radio signals emitted when UHE cosmic rays, gamma rays, and neutrinos produce extensive air showers (EAS) in the Earth's atmosphere. Its configuration also enables comprehensive studies of fundamental particle physics, cosmology, and radioastronomy. GRAND will also play a significant role in detecting neutrino emissions from transient astrophysical sources [3]. Below, we delve into the concept of GRAND, its physics topics, its simulated performance, the ongoing development of prototype arrays, and the proposed phased implementation.

## 2 The GRAND project

### Radio-detection of UHE neutrinos

When a cosmic particle interacts with the Earth's atmosphere, it initiates an EAS. This cascade of particles, in turn, produces electromagnetic radiation, primarily due to the deflection of charged particles within the shower by the Earth's magnetic field.
This phenomenon, known as geomagnetic emission, exhibits coherence in the tens of MHz frequency range. Consequently, it generates short-duration (\(\sim 100\) ns), transient electromagnetic pulses with amplitudes significant enough to enable the detection of the EAS, given that the energy of the shower exceeds approximately \(10^{16.5}\) eV [4, 5]. Radio-detection of EAS is a mature technique that benefits from the valuable experience gained through numerous previous experiments, such as AERA, LOFAR, CODALEMA, Tunka-Rex, and TREND, that presented the proof of principle that an antenna array can detect EAS in stand-alone mode [6]. Cosmic neutrinos are less likely to be detected through interactions with the atmosphere due to their extremely small interaction cross-section with matter. Nevertheless, \(\nu_{\tau}\) neutrinos can produce \(\tau\) leptons beneath the Earth's surface via charged-current interactions with rock. Thanks to their considerable range in rock (50 m per PeV of energy before decaying) and short lifetime (0.29 ps), \(\tau\) leptons can emerge into the atmosphere and decay, initiating a detectable EAS [7]. Only Earth-skimming trajectories allow for such a scenario, since the Earth acts as a barrier to neutrinos with energies surpassing \(10^{17}\) eV. This characteristic proves to be advantageous for radio-detection purposes. Due to relativistic effects, the radio emission becomes highly focused in a forward-directed cone, with its opening defined by the Cherenkov angle \(\theta_{\rm C}\leq 1^{\circ}\). For vertically incoming showers, this results in a radio footprint on the ground that spans only a few hundred meters in diameter. Consequently, a dense array of antennas is required to sample the signal in this scenario adequately. 
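The quoted range of \(\tau\) leptons in rock, about 50 m per PeV, follows from the relativistic decay length \(L=\gamma c\tau\) with \(\gamma=E/m_{\tau}c^{2}\). A minimal sketch, using standard particle-physics constants (this is an illustrative check, not code from the GRAND pipeline):

```python
# Relativistic decay length L = gamma * c * tau for a tau lepton.
# Illustrative only; constants are standard values, not from the GRAND software.
C_M_PER_S = 2.998e8          # speed of light [m/s]
TAU_LIFETIME_S = 2.9e-13     # tau mean lifetime, ~0.29 ps as quoted in the text
M_TAU_GEV = 1.777            # tau mass [GeV/c^2]

def tau_decay_length_m(energy_gev: float) -> float:
    """Mean decay length in metres for a tau of the given energy."""
    gamma = energy_gev / M_TAU_GEV   # Lorentz boost factor
    return gamma * C_M_PER_S * TAU_LIFETIME_S

# For E = 1 PeV = 1e6 GeV this gives roughly 49 m, consistent with
# the "50 m per PeV" figure in the text.
```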
However, for an air shower with a highly inclined trajectory, the increased distance between the antennas and the emission zone, coupled with the projection effect of the signal on the ground, leads to a footprint a few kilometers long [4, 5]. By targeting air showers with such inclined trajectories, it becomes feasible to detect them using a sparser and larger array, typically employing one antenna per square kilometer. This capability is a crucial feature of the GRAND detector. GRAND also incorporates the strategy of selecting mountainous regions with advantageous topographies as deployment sites. An optimal topography involves two parallel mountain ranges spaced several tens of kilometers apart. One range serves as the target for neutrino interactions, while the other functions as a screen onto which the subsequent radio signal is projected. Simulations reveal that such configurations lead to an enhanced detection efficiency, approximately four times greater than that of a flat site [2].

### Simulated performance

The end-to-end simulation pipeline used to determine the sensitivity of GRAND incorporates the intricate topography of the deployment site and the extensive instrumented area. Given the complexity of the task at hand, we ensure the inclusion of all relevant physics while striving to optimize computational performance. We individually validate each simulation stage by comparing it with existing codes. The complete simulation chain is described in detail in [2]. The approach described in [8] has been applied in a preliminary analysis to reconstruct the depth of maximum development of cosmic-ray-induced showers (\(X_{\rm max}\)) using a GRAND-like array. This method achieves an \(X_{\rm max}\) resolution smaller than 40 g cm\({}^{-2}\), assuming knowledge of the shower energy and core position [9].
Another study, based on a spherical fit of the wavefront, even though it does not measure \(X_{\rm max}\) directly, uses a figure of merit to estimate that the resolution will be slightly worse than 17 g cm\({}^{-2}\) [10]. See figure 1, upper right. Innovative reconstruction techniques performing fits to the strength of the radio signal as a function of the angle from the shower axis (angular distribution function, ADF) have showcased the potential to achieve angular resolutions of approximately 0.1\({}^{\circ}\) in determining the arrival direction of particles, as shown in figure 1, lower right [11]. Even though this result was developed and tested using simulated data only, this level of precision opens up the possibility of conducting neutrino and gamma-ray astronomy with the GRAND observatory. For neutrino searches, the shower energy of the EAS only provides a lower bound on the initial neutrino energy. The initial findings regarding energy resolution are promising. Using a preliminary reconstruction based on deep-learning methods, an energy resolution of 15% for incoming protons and iron nuclei was achieved without implementing an antenna response and in an idealized scenario for radio-detection [12]. Another preliminary global reconstruction method that uses the angular distribution function also yields a 20% energy resolution [11]. Other machine-learning and analytical methods are under development within the Collaboration. Consequently, a final energy resolution of 10% can likely be attained.

## 3 GRAND science case

### Ultra-high-energy messengers

The interaction between UHECRs and the cosmic microwave background (CMB) and extragalactic background light (EBL) generates _cosmogenic_ UHE neutrinos (and photons), with energies in excess of \(10^{17}\) eV. Despite our limited understanding of the sources of UHECRs, their existence is assured. Even with pessimistic assumptions, GRAND has the potential to discover UHE neutrinos - even if their flux is low.
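The angular distance \(\Psi\) between a true and a reconstructed arrival direction, the quantity behind the \(\sim 0.1^{\circ}\) resolution quoted above, can be computed from the two unit vectors. A hypothetical helper (the function name and the zenith/azimuth convention are our assumptions, not GRAND's code):

```python
import math

def angular_distance_deg(zen1, az1, zen2, az2):
    """Angle [deg] between two directions given as (zenith, azimuth) in degrees."""
    z1, a1, z2, a2 = (math.radians(x) for x in (zen1, az1, zen2, az2))
    # Dot product of the two unit vectors (sin z cos a, sin z sin a, cos z).
    cos_psi = (math.sin(z1) * math.sin(z2) * math.cos(a1 - a2)
               + math.cos(z1) * math.cos(z2))
    # Clamp to guard against floating-point round-off outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_psi))))
```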
The sensitivity of GRAND to UHE neutrinos reaches \(4\times 10^{-10}\) GeV cm\({}^{-2}\) s\({}^{-1}\) sr\({}^{-1}\), as shown in figure 1, left. Cosmogenic-neutrino studies indicate that the outcomes of measurements conducted by GRAND will have significant implications for constraining the sources of UHECRs [13, 14]. Additionally, they will provide constraints on the proton fraction at ultra-high energies [15]. The remarkable sensitivity of GRAND, coupled with its sub-degree angular resolution, will unlock the potential for conducting UHE neutrino astronomy, enabling the identification of point sources. Figure 2, left, shows the sensitivity limit of GRAND for point sources.

Figure 1: _Left_: Predicted cosmogenic neutrino flux, compared to experimental upper limits and sensitivities. Gray-shaded regions are generated by fitting UHECR simulations to Auger spectral and mass-composition data [2]. _Upper right_: Mean value of the radio grammage \(X_{\rm e}\) distribution per energy slice for proton and iron. \(X_{\rm e}\) can be considered in this model as the (static) point-like source of the radio emission, and it is a free parameter of the wavefront model. The error bars show the statistical fluctuations, taken equal to \(\sigma_{X_{\rm e}}/\sqrt{N}\), where \(\sigma_{X_{\rm e}}\) is the standard deviation of the \(X_{\rm e}\) values for each simulation and \(N\) is the number of simulated showers per energy slice. The shaded areas correspond to the additional uncertainties associated with the error on the direction of origin of the showers for \(1\sigma\) values of \(0.1^{\circ}\) and \(0.5^{\circ}\), where \(\sigma\) is the standard deviation of the Gaussian distribution around the true direction used to account for reconstruction errors [10]. _Lower right_: Distributions of the angular distances \(\Psi\) for the GRANDProto300-like layout (see section 4) [11].

The sources of UHECRs and UHE neutrinos could be distinct. Therefore, even if a heavy composition is observed
in UHECRs, it does not necessarily imply a suppression in the flux of neutrinos at EeV energies [16]. Similarly to UHE neutrinos, the cosmogenic flux of UHE photons is also guaranteed. They may be emitted by astrophysical sources, depending on their opacity. However, distant objects cannot be directly observed, as their emission is attenuated by the CMB and EBL and reprocessed to lower energies. The most stringent upper limits on UHE photons can be improved by two orders of magnitude after three years of data-taking by GRAND. See figure 2, right [2].

### Multimessenger astronomy

With its excellent angular resolution and extensive sky coverage, GRAND has the potential to detect UHE neutrinos associated with transient events in conjunction with electromagnetic emissions. Using a single array site containing 10,000 antennas (GRAND10k), the instantaneous field of view is about 5% of the sky; the daily field of view, however, reaches about 80%. Using multiple such sites deployed at different locations, as intended for the final configuration of GRAND, the field of view is even larger. GRAND will be a crucial triggering partner for multimessenger observations, enabling the precise reconstruction of the arrival direction of neutrino-induced air showers near the horizon with sub-degree accuracy and minimal latency. This capability allows GRAND to issue alerts to other experiments or coordinated systems promptly. Additionally, as a follow-up partner, GRAND can swiftly validate alerts generated by other experiments as well as gravitational-wave detectors. If the target directions fall within the instantaneous field of view of GRAND, it becomes possible to establish constraints on UHE neutrino emissions originating from a transient.

Figure 2: _Left_: The sensitivity limits of GRAND for point sources [2]. It is important to note that these GRAND limits assume the deployment of 200k antennas at a single location.
_Right_: The projected upper limits of GRAND on the sensitivity to UHE photons after three years of operation are presented. For comparison, we also include the current upper limits from Auger and TA and the projected capabilities of Auger by 2025. Additionally, we overlay the predicted cosmogenic UHE photon flux resulting from pure-proton and pure-iron UHECRs, as estimated in [17].

## 4 Experimental setups for prototyping

A dedicated design was formulated for the antenna used in the GRAND project, known as the HorizonAntenna [2]. Its design is optimized to select very inclined EAS. This antenna features three perpendicular arms, enabling comprehensive polarization measurements of the signal. Positioned at a height of 3.5 m above the ground and optimized for the frequency range of 50-200 MHz, it exhibits optimal sensitivity to near-horizontal signals. Currently, the GRAND Collaboration has three ongoing prototype arrays taking data. Thirteen detection units (GRANDProto13), consisting of antennas and associated electronics, were deployed in the Gobi desert, Gansu Province, China, in February 2023. Data is being collected and analyzed from this initial setup, which serves as the foundation for the GRANDProto300 array [18]. One transient event is shown in figure 3, right. We are validating the detector-unit design with the units already deployed, and once this is done, we will deploy the remaining 70 units already built. This array should already be enough to detect cosmic rays. In the near future, we will build and deploy the remaining 200 units to form GRANDProto300. The addition of particle detectors to the prototype array is still under consideration. By utilizing the GRANDProto300 array, we will have the capability to investigate cosmic rays within the energy range of \(10^{16.5}\) to \(10^{18}\) eV, which encompasses the transitional region between Galactic and extragalactic UHECRs. The array will also allow for the detection of radio transients.
Moreover, if particle detectors are used, they may help to address the discrepancies observed between simulations and measurements of muons [19]. Four detection units were deployed at the Nançay Radio Observatory in France in the autumn of 2022 [20]. The primary objective of the GRAND@Nançay test array is to conduct hardware and trigger testing. A preliminary spectrum obtained on-site is shown in figure 3, upper left. Additionally, ten detection units are being deployed at the Pierre Auger Observatory site from March to August 2023. The main goal is to perform cross-calibration and validation of the reconstruction using coincident events with Auger. By taking an average spectrum, we could verify the presence of FM radio and TV stations in Malargüe, as shown in the preliminary spectrum of figure 3, lower left. The GRAND Collaboration will complete the deployment and continue the operation of the prototype arrays: GRAND@Nançay, GRANDProto300, and GRAND@Auger. Meanwhile, it will also focus on characterizing the radio background and exploring the features of autonomous detection of inclined extensive air showers. Furthermore, the GRAND Collaboration is committed to minimizing its carbon footprint throughout its operations. By implementing sustainable practices and using energy-efficient technologies, we aim to reduce our environmental impact. Efforts are made to optimize the use of resources, reduce waste generation, and promote recycling and reuse. The Collaboration strives to minimize travel-related emissions by employing remote collaboration tools and favoring virtual meetings whenever possible [21].

## 5 Future GRAND timeline

We expect that, by 2028, the design of the detector units will be finalized, leading to the construction of two GRAND10k arrays (with 10,000 units each). Candidates for the bases of GRAND-North and GRAND-South are China and Argentina, respectively, which will assure full sky coverage.
Subsequently, in the 2030s, the replication of GRAND10k is expected to commence, resulting in twenty subarrays comprising the entire GRAND project. By scaling up production to an industrial level, the front-end electronics can be transitioned to a fully integrated ASIC design, leading to cost reduction, improved reliability, and greater reproducibility of individual units. Furthermore, the design of each subarray can be customized based on location, topography, or specific scientific objectives.
2301.08828
AI enabled RPM for Mental Health Facility
Mental healthcare is one of the prominent parts of the healthcare industry, with alarming concerns related to patients' depression and stress leading to self-harm and threats to fellow patients and medical staff. To provide a therapeutic environment for both patients and staff, aggressive or agitated patients need to be monitored remotely, with their vital signs and physical activities tracked continuously. Remote patient monitoring (RPM) using non-invasive technology could enable contactless monitoring of acutely ill patients in a mental health facility. Enabling the RPM system with AI unlocks a predictive environment in which future vital signs of the patients can be forecasted. This paper discusses an AI-enabled RPM system framework based on the non-invasive digital technology RFID, using its in-built NCS mechanism to retrieve the vital signs and physical actions of patients. Based on the retrieved time-series data, future vital signs of patients for the upcoming 3 hours are forecasted, and their physical actions are classified into 10 labelled physical activities. This framework helps avoid unforeseen clinical disasters and take precautionary measures, with medical intervention at the right time. A case study of a middle-aged PTSD patient treated with the AI-enabled RPM system is demonstrated in this study.
Thanveer Shaik, Xiaohui Tao, Niall Higgins, Haoran Xie, Raj Gururajan, Xujuan Zhou
2023-01-20T23:47:16Z
http://arxiv.org/abs/2301.08828v1
# AI enabled RPM for Mental Health Facility

###### Abstract.

Mental healthcare is one of the prominent parts of the healthcare industry, with alarming concerns related to patients' depression and stress leading to self-harm and threats to fellow patients and medical staff. To provide a therapeutic environment for both patients and staff, aggressive or agitated patients need to be monitored remotely, with their vital signs and physical activities tracked continuously. Remote patient monitoring (RPM) using non-invasive technology could enable contactless monitoring of acutely ill patients in a mental health facility. Enabling the RPM system with AI unlocks a predictive environment in which future vital signs of the patients can be forecasted. This paper discusses an AI-enabled RPM system framework based on the non-invasive digital technology RFID, using its in-built NCS mechanism to retrieve the vital signs and physical actions of patients. Based on the retrieved time-series data, future vital signs of patients for the upcoming 3 hours are forecasted, and their physical actions are classified into 10 labelled physical activities. This framework helps avoid unforeseen clinical disasters and take precautionary measures, with medical intervention at the right time. A case study of a middle-aged PTSD patient treated with the AI-enabled RPM system is demonstrated in this study.

## 1. Introduction

Conventional monitoring typically relies on dedicated sensors on patients' bodies, which might cause inconvenience to acutely ill patients. Recent innovations like near-field coherent sensing (NCS) have demonstrated the ability to track human vital signs using Radio Frequency Identification (RFID) (Beng et al., 2017) tags and reader-antennas (Kumar et al., 2018). This would also assist in elderly health monitoring, tracking physical actions to avoid falls (Kumar et al., 2019).
The AI-enabled RPM system is being developed to revolutionise patient care by tracking the vital signs and detecting the physical activities of acutely ill patients in psychiatric care (Kumar et al., 2019). The primary aim of the study is to observe patients' vital signs and physical actions using non-invasive technology, predict future vital signs, and retrieve them on a handheld tablet of the medical staff. This paper introduces an end-to-end patient monitoring system configured with AI to collect the vital signs and physical actions of acutely ill patients admitted to a contained mental health facility. The proposed framework could assist medical staff in tracking their patients' health status without even touching their bodies, by placing passive RFID tags on clothes in different areas of the body. Using AI models, the staff can retrieve their patients' current health status and forecasted vital signs onto their handheld tablets and intervene at the right time to avoid any unanticipated issues related to self-harm or clinical deterioration events, as shown in Fig. 1. The contributions of this study are:

* To provide contact-less health monitoring of aggressive or agitated patients in a mental health facility.
* A multi-type data processing and information fusion approach for comprehensive monitoring of a patient's health.
* To forecast the patients' vital signs for the upcoming 3 hours using prediction modelling.
* To monitor the current physical status of patients by classifying their actions.

The rest of the paper is organised as follows: Section 2 presents the related work on early detection of clinical deterioration and existing RPM systems using RFID technology in combination with AI for prediction and classification. Section 3 discusses the research problem addressed in this study.
Section 4 discusses in detail the proposed research framework for an AI-enabled RPM system, along with the prediction and classification models used in this study. A case study adopting the proposed framework is presented in Section 5. Finally, Section 6 concludes the paper and suggests future work to further enhance the RPM system.

## 2. Related Works

Patient safety is one of the main concerns in global public health. Misidentifying patients in healthcare leads to medical errors in hospitals, posing a major risk to patient safety. To overcome this, advanced tracking technology like RFID can be used to build a wristband for patients with passive RFID tags. Patient details like name, age, blood type, allergies, treatments required, and insurance can be retrieved by scanning the passive tag with an RFID reader (Beng et al., 2017). Implementing smart identification would assist both patients and medical staff and enhance the safety measures in a hospital (Kumar et al., 2019). However, this smart application requires the wristband to be in contact with the patient's skin, and patients may resist wearing it in aggressive situations. Nevertheless, RFID technology can help identify patients. The near-field coherent sensing (NCS) mechanism in RFID tags was discovered by researchers at Cornell University. The mechanism is based on electromagnetic energy, in which mechanical motion on the surface of and inside a body is modulated onto multiplexed radio signals. This helps to monitor the mechanical motion of the internal organs, which is captured through the signals reflected by the RFID tags. Existing systems like the electrocardiogram (ECG) have limited sensing capabilities and sampling rates, and this may compromise the monitoring of heart rate, respiration, breath rate, and blood pressure (Kranzfelder et al., 2017).

Figure 1. Graphical Abstract
Furthermore, the ECG and acoustics methods might limit comfort, body motion, and wearing convenience, as they require direct contact with the patient's skin, which affects long-term wearing (Kranzfelder et al., 2017). Sharma et al. (Sharma et al., 2019) deployed an RFID passive tag in the chest area to track heart rate, breath rhythm, and body motion using the NCS mechanism. In this experiment, the quality of sleep was assessed based on heartbeat, respiration, and upper-body motion together. The authors conducted semi-supervised learning to classify the body motion using a support vector machine (SVM) and achieved an accuracy of 91.06 percent. The research community is working beyond the identification of vulnerable patients to ensure their care and safety. This includes tracking mentally depressed patients with suicidal tendencies. Even in psychiatric care, patients can be managed in a smarter way and provided medical treatment without any delay (Beng et al., 2017). To monitor patients, data related to a patient's health status is required, and this can be extracted by integrating RFID technology with the Internet of Things (Kranzfelder et al., 2017), machine learning, and Artificial Intelligence (AI). Among RFID applications in health care, tracking and monitoring of patients are the ones most explored by researchers and hospitals. Patients can thus be served safely, without medical or human errors (Kranzfelder et al., 2017). Kranzfelder et al. (Kranzfelder et al., 2017) proposed RFID technology for tracking and monitoring retained surgical sponges and surgeons in operating rooms. The authors used passive tags for stationary surgical sponges and active tags for the operating team. Health monitoring also includes detecting the bed and chair exits of hospitalised elderly people to prevent falls. Shinmoto et al.
(Shinmoto et al., 2019; Shinmoto et al., 2020; Sharma et al., 2019) proposed a battery-less and wireless wearable sensor system to prevent falls among elderly people in the hospital. The proposed system achieved a precision of 66.8%, a recall of 81.4%, and an F1-score of 72.4% for joint chair and bed exit recognition. Zhao et al. (Zhao et al., 2020) built an Efficient Motion Detection of Device-Free Objects (EMoD) system to detect and track device-free objects by deploying a few pairs of tags at critical places. The related works provide a comprehensive understanding of the AI strategies implemented in RPM systems. There is a need to adopt these strategies within RPM systems to monitor mental health patients without interfering with their daily activities. The proposed AI-enabled framework aims to set a benchmark for any subsequent study of enhanced RPM systems.

## 3. Research Problem

The research problem is to track patients' vital signs and monitor their physical actions without hindering their daily activities. Based on past and present time-series data of vital signs, future vital signs need to be predicted, and physical activities need to be classified. This study performs vital-sign observation of possibly over-sedated and potentially aggressive patients in a secure, contained mental health facility. Vital signs such as heart rate and respiration, as well as physical actions, need to be monitored using non-invasive technology. Further to this, the aim is to build a decision support system to help clinicians understand their patients' future vital signs and take appropriate actions against patients' self-harm or physical violence towards nursing staff or fellow patients.

## 4. Framework

In this section, the AI-enabled RPM framework proposed in this study is discussed. This study was conducted in a simulated ward for real-time data collection.
Each patient's vital signs, such as heart rate and respiration, and physical activities are extracted using passive RFID tags placed at different parts of the body, as shown in Fig. 2. To detect these passive RFID tags on the patient's body, two ultra-high-frequency (UHF) 870 readers with integrated antennas were installed in the simulated ward. Four passive RFID tags were placed on different areas of the body: the chest, abdomen, left arm, and right ankle. The tag in the chest area retrieves the mechanical motion of the heartbeat to estimate heart rate, and the tag at the abdomen extracts the respiration rate based on contraction and expansion while breathing. With regard to physical activities, the two passive RFID tags placed at the arm and ankle retrieve the limb movements of a patient. The RFID tags are detected by the reader-antennas based on a measurement called the received signal strength indicator (RSSI), which is the power of the signal returned from a tag to the reader-antennas. As part of this data collection, the two UHF 870 antennas were fixed to the side walls of the simulated laboratory, and the RSSI values retrieved from each passive tag were passed to the computer. Based on the RSSI values and frequency of the passive tag, the vital signs were processed, whereas physical activities were manually labelled using the phase orientation of the tags. Patient demographics such as height, weight, age, and gender were added to the extracted vital signs and labelled physical activities to enable personalised monitoring.

### Prediction Modelling

In prediction modelling, the study aims to provide early detection of vital-sign deterioration based on time-series prediction. The idea is to predict the future vital signs of patients for the upcoming three hours.
For this, the real-time streaming data extracted from the RFID tags is segmented into windows of 1-hour size, and a prediction can be provided every 15 minutes based on the previous 75 min (1-hour window size + 15 minutes), as shown in Fig. 3. In this prediction process, the two vital signs, heart rate and respiration, for the upcoming 3 hours can be estimated based on the 75 minutes of data samples. The prediction task is formulated as a regression problem based on the features extracted from the segmented 1-hour windows of the vital signs. A multilayer perceptron (MLP) (Shi et al., 2017) model, which is a class of feed-forward artificial neural network (ANN), is adopted for the regression modelling. The model consists of an input layer, three hidden layers, and an output layer with non-linear activation nodes. Each node in a layer connects to every node in the next layer with a certain weight \(w\). The rectified linear unit (ReLU) activation function is used in each layer of the MLP regressor model. The activation function removes negative values by setting them to zero, as shown in Equation 1. Considering that the predicted value is a continuous numerical value, the loss function of the model is set to the mean absolute error (MAE).

\[f(x)=\max(0,x) \tag{1}\]

The MLP regressor model prediction is mathematically presented in Equation 2, in which each input feature is multiplied by a weight \(w\) and added to a bias \(b\). The updated features are passed through the ReLU activation function. The Adam optimization algorithm maintains running averages of both the gradients and the second moments of the gradients. The Adam optimizer is used along with the MAE loss function in the model compilation.

\[f(y)=\sum_{i=1}^{n}\mathrm{ReLU}(b+w_{i}x_{i}) \tag{2}\]

The vital signs processed from the RFID passive-tag data and the patient demographics are used to train the MLP model, with an input node for each input variable in the input layer.
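The sliding-window scheme described above (a prediction every 15 minutes from the previous 75 minutes of data, targeting the next 3 hours) can be sketched as a supervised-learning transform. Assuming one sample every 15 minutes (a cadence we assume for illustration; the paper does not state it), the input is 5 samples and the 3-hour target is 12 samples:

```python
def make_supervised(series, n_in=5, n_out=12):
    """Slice a vital-sign time series into (input window, forecast target) pairs.

    n_in  = 5  samples -> 75 min of history at one sample per 15 min
    n_out = 12 samples -> 3 h forecast horizon at the same cadence
    (cadence and function name are illustrative assumptions)
    """
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])                  # past 75 minutes
        y.append(series[i + n_in:i + n_in + n_out])   # next 3 hours
    return X, y
```

Each `(X[i], y[i])` pair then becomes one training example for the MLP regressor.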
The data will be further processed in three hidden layers with weight, bias, and activation units. The output layer will provide prediction results comprising future vital signs. ### Classification Modelling In classification modelling, 10 different activities, namely standing still, climbing stairs, sitting and relaxing, lying down, walking, waist bends forward, running, frontal elevation of arms, knees bending, and jump front & back, are classified based on the phase and orientation of the RFID tags on a patient. A classification version of the MLP model, the MLP Classifier, is used for the classification of physical activities. The classification model was configured with an input layer, three hidden layers, and an output layer. The hyper-parameters of the classification model, the ReLU activation function and the Adam optimiser, are the same as in the prediction model, except for the loss function, which is changed to binary cross-entropy. The binary cross-entropy is calculated using the log loss function as shown in Equation 3. This loss function compares the predicted probabilities of the activity labels, which range from 0 to 1, to the actual activity labels, measuring how close the predicted probability of an activity is to the actual activity label. The classification problem in this study is multi-label, considering each physical activity as a label, with a value of 1 assigned to records of the corresponding activity. \[\mathit{Logloss}=-\frac{1}{N}\sum_{i=1}^{N}(y_{i}*log(p_{i})+(1-y_{i})*log(1-p_{i})) \tag{3}\] ### Evaluation In this study, two sets of evaluation metrics were used. The proposed framework can be evaluated with publicly available benchmark datasets such as MIMIC-III (Moh et al., 2017) and the MHEALTH (Mobile HEALTH) dataset (Mobile et al., 2018; Wang et al., 2018). The dataset can be processed for each subject for personalised monitoring.
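The log loss of Equation (3) can be sketched in plain Python as follows (our illustration; the labels and probabilities are made-up values):

```python
import math

# Sketch of Equation (3): binary cross-entropy (log loss) over N
# predicted activity probabilities against the actual 0/1 labels.

def log_loss(y_true, p_pred):
    n = len(y_true)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / n

loss = log_loss([1, 0, 1], [0.9, 0.2, 0.8])
# Confident, mostly correct predictions give a small loss.
```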
Individual subject data is split into 80% training data for model learning and 20% testing data for model evaluation. For the prediction model, the performance is evaluated using the metrics mean absolute error (MAE) and mean squared error (MSE). The predicted vital signs for input variables in the testing data need to be compared to the actual vital signs. The deviation of the predicted values from the actual values is measured in MAE and MSE. To evaluate the performance of the classification model, a traditional confusion-matrix evaluation was adopted to estimate the precision, recall, and F1 score of each physical activity classification. A balanced accuracy metric was also adopted to estimate the model performance on individual physical activity classification. The trained classification model predicts the probability of each physical activity based on the input variables in the testing data. A threshold value needs to be set for the probability values of each record to classify the physical activities. After the classification, the model performance can be evaluated using the confusion matrix and the balanced accuracy.

Figure 2. Proposed research framework

Figure 3. Prediction of Future Vital Signs

## 5. Case Study The proposed AI framework can be adapted to mental healthcare facility settings for monitoring aggressive patients without touching their bodies. This can be achieved by installing UHF 870 RFID reader-antennas in a hospital ward and arranging the passive RFID tags in a patient's hospital clothes within readable range of the antennas. The RFID signals from the tags will need to be retrieved to a computer in a medical staff room via the reader-antennas for remote monitoring of the patient. This technological setup would assist medical staff in a mental health facility in monitoring patients who are aggressive or agitated and might harm fellow patients or staff, or self-harm.
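The evaluation step described above can be sketched as follows (our illustration; the readings and probabilities are made-up values, and the 0.5 threshold is an assumption, not a value from the paper):

```python
# Sketch of the evaluation metrics: MAE and MSE for the regression model,
# and threshold-based labelling for the multi-label classifier.

def mae(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def classify(probabilities, threshold=0.5):
    # An activity label is assigned when its predicted
    # probability meets or exceeds the threshold.
    return [1 if p >= threshold else 0 for p in probabilities]

errors_mae = mae([72, 80, 76], [70, 83, 76])   # heart-rate example (bpm)
errors_mse = mse([72, 80, 76], [70, 83, 76])
labels = classify([0.9, 0.3, 0.7, 0.1])
```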
David was a 47-year-old mentally challenged patient in a mental health facility in Queensland. He had been suffering from post-traumatic stress disorder (PTSD) after sustaining a broken jaw, a head injury, and a compound fracture of the left leg in a road accident 3 years earlier. He reported that in the accident he had been driving a car and hit rail barriers. He had lost his closest friend on the spot and had been trapped against the steering wheel and dashboard in an unconscious state. The patient left his job and joined the mental health facility for psychiatric care. He was suffering from PTSD symptoms such as intrusive memories and behavioural changes including quick temper, stress, sleep disturbance, social phobia, extreme panic attacks, and anxiety (Han et al., 2017). Diagnosing and treating PTSD is not the same for any two patients, as each patient experiences different symptoms. David had enrolled for inpatient treatments, which include psychological interventions such as acceptance and commitment therapy and cognitive behavioural therapy. The patient's vital signs were fluctuating due to severe PTSD symptoms, and he showed impulsive aggression due to the failure of the prefrontal cortex to evaluate impulses and weigh their consequences. The medical staff had at times experienced aggressive and agitated behaviour in the general ward. To prevent any self-harm and provide a safe environment for fellow patients and staff, precautionary measures were taken by moving David to a special ward in which the proposed AI-enabled RPM system is installed. In this case, the patient needed to be under continuous monitoring, as there were severe symptoms of PTSD that might lead to fluctuations in vital signs such as respiration and heart rate, deteriorating the patient's health status. Furthermore, the patient's behaviour was negatively affected by anxiety and depression and could lead to self-harm. This demands the monitoring of the patient's physical activity.
The patient was moved to a special ward with the RPM system installed, and 4 passive RFID tags were set up in his clothes to track his vital signs and external body movements. The patient was allowed to do his daily activities without any restrictions. The physical body movements and the vital signs were extracted from the passive RFID tags arranged in his hospital clothes. The retrieved parameters were combined with his demographics such as age, height, weight, and body mass index (BMI). The data was then preprocessed to train the prediction and classification models to predict the patient's vital signs in the upcoming 3 hours, as well as to classify his activities into the 10 labelled activities. The patient's vital signs and physical actions were monitored every 15 minutes using the data retrieved from the RFID tags in the previous 75 minutes. The output plots shown in Fig. 4 present the current physical status of the patient and real-time heart rate and respiration readings. This is achieved based on real-time data collection from the tag data retrieved by the RFID reader-antennas in the special hospital ward. In addition to these real-time readings, the classification model accuracy supports and makes evident the physical status of the patient. The prediction model results are also presented, in which heart rate and respiration for the upcoming 3 hours are forecasted. The forecasted vital signs are presented in alignment with the patient's physical activities. The line charts in Fig. 4 become more interpretable in light of the patient's physical activities: the heart rate and respiration of the patient fluctuate according to the physical activities. ## 6. Conclusion and Future work Recent advancements in AI have contributed many new techniques and increased the efficiency of healthcare applications. RPM systems are in high demand, and they require efficient AI techniques to monitor patients in different health settings.
This study presents an AI-enabled RPM system framework with the aim of monitoring aggressive and agitated patients in a contained mental health facility. AI models were adopted to predict the vital signs of patients and classify their physical activities. The prediction model was able to predict future heart rate and respiration for the upcoming 3 hours based on the time-series data retrieved from passive RFID tags. The classification model can assist in determining the current physical status of a patient by labelling it as one of 10 physical activities. A case study of a PTSD patient adopting the framework is illustrated in this study. This framework would assist medical staff in monitoring their vulnerable patients with enhanced contact-less monitoring techniques, viewing forecasts of their health status, providing safety to patients and staff, and avoiding unanticipated events like suicides. The proposed framework can be further enhanced with an adaptive learning mechanism to learn behaviour patterns, and adapted to the general ward where multiple patients are being treated, while safeguarding their privacy using personalised monitoring techniques. Our future work will be focused on developing a deep reinforcement learning model [27], considering each patient as an individual learning agent in a hospital environment trying to achieve maximum rewards by following the designed policy to achieve the goal of staying clinically safe. Large medical corpora including the MIMIC [14] database will be adopted to include a large number of vital signs and train the reinforcement learning model.

Figure 4: Output Plots
2304.13368
Strichartz estimates for Maxwell equations on domains with perfectly conducting boundary conditions
We consider Maxwell equations on a smooth domain with perfectly conducting boundary conditions in isotropic media in two and three dimensions. In the charge-free case we recover Strichartz estimates due to Blair--Smith--Sogge for wave equations on domains up to endpoints. For the proof we suitably extend Maxwell equations over the boundary, which introduces coefficients on the full space with codimension-$1$ Lipschitz singularity. This system can be diagonalized to half-wave equations amenable to the results of Blair--Smith--Sogge. In two dimensions, we improve the local well-posedness of the Maxwell system with Kerr nonlinearity via Strichartz estimates.
Nicolas Burq, Robert Schippa
2023-04-26T08:25:30Z
http://arxiv.org/abs/2304.13368v1
# Strichartz estimates for Maxwell equations on domains with perfectly conducting boundary conditions ###### Abstract. We consider Maxwell equations on a smooth domain with perfectly conducting boundary conditions in isotropic media in two and three dimensions. In the charge-free case we recover Strichartz estimates due to Blair-Smith-Sogge for wave equations on domains up to endpoints. For the proof we suitably extend Maxwell equations over the boundary, which introduces coefficients on the full space with codimension-1 Lipschitz singularity. This system can be diagonalized to half-wave equations amenable to the results of Blair-Smith-Sogge. In two dimensions, we improve the local well-posedness of the Maxwell system with Kerr nonlinearity via Strichartz estimates. Key words and phrases: Maxwell equations, quasilinear wave equations, Strichartz estimates. 2020 Mathematics Subject Classification: 35B45, 35L03, 35Q61. *Corresponding author. Footnote 1: Certainly, the present arguments extend to \(\partial\Omega\in C^{N}\) for \(N\) large enough, corresponding to a generalization of the results due to Blair-Smith-Sogge [1] to the \(C^{N}\)-category. We are not attempting to minimize the required regularity. Footnote 2: This constant is the regularity required for the metric such that the results of Blair-Smith-Sogge hold true. It is conceivable that \(N=2\) suffices, but this is currently unclear. Maxwell equations in media describe the electromagnetism of matter and are of great physical importance. We refer to the physics' literature for a detailed explanation (cf. [6, 16]). We also refer occasionally to the lecture notes surveying basic results by Schnaubelt [21]. Let \(\nu\in C^{\infty}(\partial\Omega,\mathbb{R}^{3})\) denote the outer unit normal. Below \([\cdot]_{x^{\prime}\in\partial\Omega}\) denotes the trace of a function at the boundary.
Here we consider the _perfectly conducting boundary conditions_ \[[\mathcal{E}\times\nu]_{x^{\prime}\in\partial\Omega}=0,\qquad[\mathcal{B} \cdot\nu]_{x^{\prime}\in\partial\Omega}=0. \tag{1.5}\] The boundary conditions of the perfect electric conductor are among the physically most relevant ones (cf. [24, 21]). We define _surface charges_\(\rho_{\Sigma}\) and _surface currents_\(J_{\Sigma}\) by (cf. [21, Eq. (2.3)]): \[[\mathcal{D}\cdot\nu]_{x^{\prime}\in\partial\Omega}=\rho_{\Sigma},\qquad[ \mathcal{H}\times\nu]_{x^{\prime}\in\partial\Omega}=J_{\Sigma}. \tag{1.6}\] Furthermore, we require the normal component of \(\mathcal{J}_{e}\) to vanish at the boundary, which is physically sensible: \[[\mathcal{J}_{e}\cdot\nu]_{x^{\prime}\in\partial\Omega}=0. \tag{1.7}\] The Maxwell equations satisfy finite speed of propagation (see [24, Chapter 6]). Hence, in the interior of the domain we can use previously established results on the whole space for local-in-time results (see previous works by Dumas-Sueur [5] and the second author [19, 17]). Thus, it suffices to work close to the boundary, at which we resolve the Maxwell system in geodesic normal coordinates; see Section 2. At the boundary, we write the equation in geodesic normal coordinates to localize to the half-space \(\mathbb{R}^{3}_{>0}=\{x^{\prime}\in\mathbb{R}^{3}:x_{3}^{\prime}>0\}\). The cometric is given by \[g^{-1}=\begin{pmatrix}g^{11}&g^{12}&0\\ g^{21}&g^{22}&0\\ 0&0&1\end{pmatrix}.\] As short-hand notation, we write \(\sqrt{g}:=\sqrt{\det g}\). 
This effectively gives rise to anisotropic permittivity \(\sqrt{g}g^{-1}\varepsilon\) and permeability \(\sqrt{g}g^{-1}\mu\): \[\left\{\begin{array}{llll}\partial_{t}(\sqrt{g}g^{-1}\varepsilon\mathcal{E})&=\nabla\times\mathcal{H},&(\mathcal{E}\times e_{3})|_{x_{3}^{\prime}=0}&=&0,\quad(t,x^{\prime})\in\mathbb{R}\times\mathbb{R}^{3}_{>0},\\ \partial_{t}(\sqrt{g}g^{-1}\mu\mathcal{H})&=-\nabla\times\mathcal{E},&(\mathcal{H}\cdot e_{3})|_{x_{3}^{\prime}=0}&=&0\end{array}\right. \tag{1.8}\] with the divergence conditions now reading \[\nabla\cdot(\sqrt{g}g^{-1}\varepsilon\mathcal{E})=\sqrt{g}\rho_{e},\qquad\nabla\cdot(\sqrt{g}g^{-1}\mu\mathcal{H})=0.\] It is important to note that the boundary conditions (1.5) are respected by \(g^{-1}\). Note that taking time derivatives in (1.5) and plugging in (1.1) yields compatibility conditions. In order to maintain a less technical introduction, we postpone the discussion of compatibility conditions to Section 2 after we have localised Maxwell's equations to (1.8). The second compatibility condition simplifies under the assumption \[\partial\mu|_{x^{\prime}\in\partial\Omega}=0. \tag{1.9}\] Spitz [25, 26] showed existence and local well-posedness in \(H^{3}(\Omega)\) (also in the quasilinear case) provided that the compatibility conditions up to second order are satisfied. These are precisely the conditions which are meaningful in the sense of traces. First, we turn to homogeneous solutions with \(\mathcal{J}_{e}=0\).
Accordingly, we let \[\mathcal{H}^{3}(\Omega)=\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{3}(\Omega)^{2}\;:(\mathcal{E}_{0},\mathcal{H}_{0})\text{ satisfies homogeneous}\\ \text{compatibility conditions up to second order }\}.\] With the solutions existing, we can show Strichartz estimates for homogeneous solutions \[\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{T}L^{q}}\lesssim_{T}\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{\mathcal{H}^{\gamma+\delta}(\Omega)}+\|\rho_{e}(0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}(\Omega)} \tag{1.10}\] for certain \(2\leqslant p,q\leqslant\infty\), \(q<\infty\) with \(\gamma\) determined by scaling, and \(\delta>0\) such that \[\gamma=3\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p},\quad\delta<\frac{3}{q}. \tag{1.11}\] Footnote 3: Note that \(\delta\) is chosen small enough such that boundary conditions are not relevant for the Sobolev space \(H^{\gamma-1+\frac{1}{p}+\delta}\). All Strichartz estimates established in this paper are local in time. For \(0<T<\infty\), we write \(L^{p}_{T}L^{q}:=L^{p}_{T}L^{q}_{x^{\prime}}(\Omega):=L^{p}_{t}([0,T],L^{q}(\Omega))\) with \[\|A\|_{L^{p}_{T}L^{q}_{x^{\prime}}}:=\big{(}\int_{0}^{T}\big{(}\int_{\Omega}|A(t,x^{\prime})|^{q}dx^{\prime}\big{)}^{\frac{p}{q}}\big{)}^{\frac{1}{p}}\] for \(1\leqslant p,q<\infty\) with the usual modification for \(p=\infty\) or \(q=\infty\).
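As a concrete instance of the scaling relation (our worked example, not from the paper): the pair \((p,q)=(4,8)\) satisfies the admissibility condition (1.14), \(\frac{3}{p}+\frac{2}{q}=\frac{3}{4}+\frac{1}{4}=1\), and (1.11) then gives

```latex
% Worked example for (1.11) with (p,q) = (4,8):
\gamma = 3\Big(\frac{1}{2}-\frac{1}{8}\Big)-\frac{1}{4}
       = \frac{9}{8}-\frac{2}{8}
       = \frac{7}{8},
\qquad \delta < \frac{3}{q} = \frac{3}{8}.
```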
(1.10) is proved in two steps: First, we show \[\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{T}L^{q}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}\mathcal{H}^{\gamma+\delta}(\Omega)}+\|\rho_{e}(0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}(\Omega)}.\] Then it suffices to prove energy estimates for homogeneous solutions for \(0\leqslant s\leqslant 3\): \[\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}\mathcal{H}^{s}}\lesssim_{T}\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{\mathcal{H}^{s}}.\] Linearity and boundedness allow us to extend the linear solution mapping from the subspace \(\mathcal{H}^{3}(\Omega)\) of \(H^{\gamma}(\Omega)\) to its closure in the \(H^{\gamma}\)-norm: \[\mathcal{H}^{\gamma}(\Omega)=\overline{\mathcal{H}^{3}(\Omega)}^{\|\cdot\|_{H^{\gamma}(\Omega)}}. \tag{1.12}\] We denote Sobolev spaces (of real-valued functions) on \(\Omega\) with Dirichlet boundary conditions by \(H^{\gamma}_{D}(\Omega)\); the Sobolev spaces with Neumann boundary conditions are denoted by \(H^{\gamma}_{N}(\Omega)\). Since we shall estimate the regularity of \((\mathcal{E}_{0},\mathcal{H}_{0})\) only in \(H^{\gamma}\) for \(\gamma<\frac{3}{2}\), the compatibility conditions involving derivatives are not relevant. This means we actually only require the Dirichlet conditions for \(\mathcal{H}^{\gamma}\), \(\gamma<\frac{3}{2}\). We shall then recover inhomogeneous estimates by Duhamel's formula. Roughly speaking, \(\mathcal{H}^{\gamma}(\Omega)\) is the Sobolev space with relevant compatibility conditions; see Proposition 2.1. For \(\gamma<\frac{1}{2}\), this means there are no boundary conditions. For \(\frac{1}{2}<\gamma<\frac{3}{2}\), we only have Dirichlet conditions. For \(\frac{3}{2}<\gamma<\frac{5}{2}\), we have to take into account first order compatibility conditions, which are Neumann boundary conditions for \(\mathcal{H}\times\nu\).
On the full space, Maxwell equations with rough coefficients and also quasilinear Maxwell equations were considered in [19] (the two-dimensional case) and the partially anisotropic case in three dimensions was analyzed in [17]. In these works, it was pointed out how Maxwell equations (at least in the case of isotropic media) admit diagonalization to two degenerate half-wave equations and four non-degenerate half-wave equations. The contribution of the degenerate components, i.e., stationary solutions, is quantified by the charges. Here we extend Maxwell equations (1.8) over the boundary via suitable reflections to carry out the diagonalization afterwards. Since the coefficients of the cometric and the permittivity and permeability are extended evenly, the extension introduces a codimension-1 Lipschitz singularity. After paradifferential decomposition, we can still carry out the diagonalization to half-wave equations similar to the more regular case covered in [17] (see [19] for the previously established two-dimensional case). After diagonalization, we can apply the Strichartz estimates for wave equations with structured Lipschitz singularity due to Blair-Smith-Sogge [1]. We find local-in-time Strichartz estimates for inhomogeneous Maxwell equations by Duhamel's formula: \[\begin{split}\|(\mathcal{E},\mathcal{H})\|_{L^{p}([0,T],L^{q}( \Omega))}\lesssim_{T}&\|(\mathcal{E},\mathcal{H})(0)\|_{ \mathcal{H}^{\gamma+\delta}(\Omega)}+\|\mathcal{J}_{e}\|_{L^{1}(0,T;\mathcal{ H}^{\gamma+\delta}(\Omega))}\\ &+\|\rho_{e}(0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}(\Omega)}+\| \nabla\cdot\mathcal{J}_{e}\|_{L^{1}_{T}H^{\gamma-1+\frac{1}{p}+\delta}(\Omega )}.\end{split} \tag{1.13}\] The use of Duhamel's formula in \(\mathcal{H}^{\gamma+\delta}\) requires us to impose Dirichlet boundary conditions on \(\mathcal{J}_{e}\). Maxwell equations were previously diagonalized with pseudo-differential operators to prove Strichartz estimates in [19, 17]. 
However, in the corresponding diagonalizations, the coefficients were more regular. Presently, the coefficients are only Lipschitz regular after reflection, but still the diagonalization only loses \(\delta\)-regularity compared to the scalar case. We believe our arguments to be general enough to apply to other first-order systems as well. We stress that it is not enough to diagonalize the symbol, but one also has to carefully analyze the mapping properties of the conjugation matrices (see [20]). We digress for a moment to recall Strichartz estimates for the wave equation on domains: Strichartz estimates for wave equations on (general) manifolds with boundary for Dirichlet as well as Neumann boundary conditions were first investigated by the first author _et al._ [2, 3] and Blair-Smith-Sogge [1] based on the seminal contribution by Smith-Sogge [23] regarding spectral cluster estimates. Notably, there are more refined results and counterexamples on special domains due to Ivanovici _et al._ [12, 11, 13, 14]. For exterior convex domains, Smith-Sogge [22] recovered the Euclidean Strichartz estimates (local-in-time) much earlier by the Melrose-Taylor parametrix. For Maxwell equations with perfectly conducting boundary conditions, we prove the following theorem: **Theorem 1.1**.: _Let \(\Omega\subseteq\mathbb{R}^{3}\) be a smooth domain with compact boundary and \(\varepsilon\), \(\mu\in C^{\infty}(\mathbb{R}^{3};\mathbb{R}_{>0})\) satisfy (1.3). Let \(2\leqslant p,q<\infty\), and let \((\mathcal{E},\mathcal{H}):\mathbb{R}\times\Omega\to\mathbb{R}^{3}\times\mathbb{R}^{3}\) denote solutions to (1.1) with material laws (1.2), which satisfy the perfectly conducting boundary conditions (1.5). Then (1.13) holds with \(\gamma\) and \(\delta\) in (1.11) provided that_ \[\frac{3}{p}+\frac{2}{q}\leqslant 1. \tag{1.14}\] Recall that the boundary conditions are indistinguishable at low regularities.
We have \(H^{s}_{D}(\Omega)=H^{s}(\Omega)\) for \(s<1/2\) and \(H^{s}_{N}(\Omega)=H^{s}(\Omega)\) for \(s<\frac{3}{2}\). Since we estimate \(\mathcal{J}_{e}\) in Sobolev spaces with boundary conditions, we have to require \[[\mathcal{J}_{e}]_{x^{\prime}\in\partial\Omega}=0\] for \(\gamma\geq\frac{1}{2}\). Note that because \(\gamma-1+\frac{1}{p}+\delta<\frac{1}{2}\) the boundary condition of \(\rho_{e}\) is not relevant. We shall also discuss the two-dimensional case: \[\left\{\begin{array}{lll}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp} \mathcal{H}-\mathcal{J}_{e},&(t,x^{\prime})&\in&\mathbb{R}\times\Omega,\\ \partial_{t}(\mu\mathcal{H})&=-(\nabla\times\mathcal{E})_{3}=-(\partial_{1} \mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}),&\nabla\cdot(\varepsilon \mathcal{E})&=&\rho_{e}\end{array}\right. \tag{1.15}\] with \(\nabla_{\perp}=(\partial_{2},-\partial_{1})\). Here \(\Omega\subseteq\mathbb{R}^{2}\) denotes a smooth domain in \(\mathbb{R}^{2}\) with compact boundary, and \(\mathcal{E}:\mathbb{R}\times\Omega\to\mathbb{R}^{2}\), \(\mathcal{J}_{e}:\mathbb{R}\times\Omega\to\mathbb{R}^{2}\), \(\mathcal{H}:\mathbb{R}\times\Omega\to\mathbb{R}\). We let \(\varepsilon,\mu\in C^{\infty}(\Omega)\). We require \(\varepsilon:\Omega\to\mathbb{R}\) and \(\mu:\Omega\to\mathbb{R}\) to satisfy \[\exists\lambda,\Lambda>0:\forall x^{\prime}\in\Omega:\lambda\leq\varepsilon( x^{\prime}),\mu(x^{\prime})\leq\Lambda. \tag{1.16}\] Like above, we require uniform bounds for finitely many derivatives up to the boundary for large \(N\geq 2\): \[\varepsilon,\partial\varepsilon,\ldots,\partial^{N}\varepsilon\in C( \overline{\Omega})\cap L^{\infty}(\Omega),\quad\mu,\partial\mu,\ldots, \partial^{N}\mu\in C(\overline{\Omega})\cap L^{\infty}(\Omega). \tag{1.17}\] The perfectly conducting boundary condition for (1.15) is given by \[[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega}=0. 
\tag{1.18}\] In Appendix A we shall see how Spitz's local well-posedness in three dimensions descends to two dimensions. In the following \(\mathcal{H}^{3}(\Omega)\) denotes the Sobolev space \(H^{3}(\Omega)\) with boundary and compatibility conditions taken into account as in the three-dimensional case. We abuse notation and define \(\mathcal{H}^{\gamma}(\Omega)\) as closure of \(\mathcal{H}^{3}(\Omega)\) in the \(H^{\gamma}(\Omega)\)-topology like in (1.12). We prove the following: **Theorem 1.2**.: _Let \(\Omega\subseteq\mathbb{R}^{2}\) be a smooth domain with compact boundary, \(2\leq p,q\leq\infty\), and suppose that_ \[\frac{3}{p}+\frac{1}{q}\leq\frac{1}{2},\quad q<\infty,\quad\gamma=2\big{(} \frac{1}{2}-\frac{1}{p}\big{)}-\frac{1}{q},\quad 0<\delta<\frac{1}{2}. \tag{1.19}\] _Let \(\varepsilon\in C^{\infty}(\Omega;\mathbb{R})\), \(\mu\in C^{\infty}(\Omega;\mathbb{R})\) satisfy (1.16) and (1.17)._ _Then the following estimate holds for solutions to (1.15) with initial data \((\mathcal{E}_{0},\mathcal{H}_{0})\in\mathcal{H}^{\gamma}(\Omega)\) satisfying boundary conditions (1.18):_ \[\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}L^{q}(\Omega)}\lesssim _{T}\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{\mathcal{H}^{\gamma+ \delta}}+\|\mathcal{J}_{e}\|_{L^{1}_{T}\mathcal{H}^{\gamma+\delta}}\] \[+\|\rho_{e}(0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}(\Omega)}+\| \nabla\cdot\mathcal{J}_{e}\|_{L^{1}_{T}H^{\gamma-1+\frac{1}{p}+\delta}( \Omega)}.\] In the last section of the paper, we analyze Maxwell equations with Kerr nonlinearity \(\varepsilon(\mathcal{E})=1+|\mathcal{E}|^{2}\) in two dimensions. In the following let \(\Omega\subseteq\mathbb{R}^{2}\) be a smooth domain with compact boundary. 
We use Strichartz estimates to improve local well-posedness of the following system: \[\left\{\begin{array}{lll}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H},&[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega}&=&0,\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}\mathcal{H}&=-(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}),&[\rho_{e}]_{x^{\prime}\in\partial\Omega}&=&0\end{array}\right. \tag{1.20}\] with \(\nabla_{\perp}=(\partial_{2},-\partial_{1})\). It turns out that the diagonalization can still be applied if \[\|\partial_{x}\varepsilon\|_{L^{2}_{t}L^{\infty}_{x^{\prime}}}+\|\partial_{x}\mu\|_{L^{2}_{t}L^{\infty}_{x^{\prime}}}\lesssim 1.\] By \(\partial\) we denote space-time derivatives \(\partial=\partial_{x}=\partial_{t,x^{\prime}}\). In this case we cannot use the estimates from [1], but rely instead on Strichartz estimates due to Tataru for quasilinear wave equations (see [27] and [19] for an elaboration on the half-wave equation). **Theorem 1.3** (Low regularity well-posedness of the Kerr system).: _For \(s\in(\frac{11}{6},2]\), (1.20) is locally well-posed for small initial data \(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{\mathcal{H}_{0}^{s}}\leqslant\delta\ll 1\) and finite charges \(\|\rho_{e}(0)\|_{H^{\tilde{s}}}\leqslant D<\infty\) for some \(\tilde{s}>\frac{13}{12}\)._ By local well-posedness we mean local existence, uniqueness, and continuous dependence of solutions. By \(\mathcal{H}_{0}^{s}(\Omega)\) we denote the subspace of \(\mathcal{H}^{s}(\Omega)\) with \(\mathcal{E}_{0}\) satisfying zero boundary conditions. This ensures that the compatibility conditions hold and facilitates regularization. We refer to Theorem 7.1 for a more precise statement. Moreover, we note that in particular for vanishing charges we infer low regularity local well-posedness. We remark that energy arguments in two dimensions yield local well-posedness for \(s>2\).
Here we improve on this regularity making use of the dispersive properties. _Outline of the paper._ In Section 2 we write Maxwell equations in terms of differential forms to facilitate change of variables. We use this to formulate Maxwell equations on the half-space. We reduce Strichartz estimates to homogeneous estimates for the reflected solutions. A key step is the proof of energy estimates. However, these we prove on the level of the original equations posed on domains in Section 3. In Section 4 we collect facts on pseudo-differential operators. In Section 5 we diagonalize three-dimensional Maxwell equations after localization to the half-space, and in Section 6 we diagonalize two-dimensional Maxwell equations. In Section 7 we use Strichartz estimates to improve the local well-posedness of the Kerr system in two dimensions. In Appendix A, we relate the two-dimensional Maxwell system to the Maxwell system in three dimensions via cylindrical extension. This allows us to transfer the local well-posedness results at high regularity due to Spitz to two dimensions without revisiting the arguments in three dimensions. In Appendix B we revisit Strichartz estimates for Maxwell equations posed on the full space to formulate the estimates in a form, which is suitable for the purposes of the paper. In Appendix C we collect facts on Helmholtz decompositions, which are crucial for energy estimates proved in Section 3. ## 2. Maxwell equations on manifolds To investigate the behavior of Maxwell equations under coordinate transformations, we set up Maxwell equations on smooth Riemannian manifolds with boundary \((M,g)\). In this context, the fields are given at any time \(t\in\mathbb{R}\) as covectorfields \(X(t):M\to T^{*}M\), \(X\in\{\mathcal{E},\mathcal{D},\mathcal{H},\mathcal{B},\mathcal{J}_{e}\}\), i.e., sections of the cotangent bundle. 
Permittivity and permeability are given by \(\kappa(t):M\to\mathrm{Sym}(T^{*}M\to T^{*}M)\), \(\kappa\in\{\varepsilon,\mu\}\), and \(\rho_{e}(t):M\to\mathbb{R}\). Let \(*,d:\Lambda T^{*}M\to\Lambda T^{*}M\) denote the Hodge dual and exterior derivative. We localize Maxwell equations to the half-space via geodesic normal coordinates. This facilitates finding the compatibility conditions. These in turn allow us to find suitable extensions of the fields from the half-space to the full space. The extension respects the Sobolev regularity \(0\leqslant\gamma\leqslant 2\), which suffices for the presently considered Strichartz estimates, and the extended fields moreover satisfy Maxwell equations on the full space, albeit with coefficients with Lipschitz singularity. We first consider the more involved three-dimensional case and then shall be brief for the two-dimensional case. ### 3d manifolds With the aid of Hodge dual and the exterior derivative, we can write for the curl and divergence of vector fields \(F:\Omega\to\mathbb{R}^{3}\): \[\nabla\times F=*dF,\qquad\nabla\cdot F=*d*F.\] Consequently, the Maxwell system of equations reads \[\left\{\begin{array}{llll}\partial_{t}(\varepsilon{\mathcal{E}})&=*d{\mathcal{ H}}-{\mathcal{J}}_{e},&*d*(\varepsilon{\mathcal{E}})&=&\rho_{e},\qquad(t,x^{ \prime})\in\mathbb{R}\times M,\\ \partial_{t}(\mu{\mathcal{H}})&=-*d{\mathcal{E}},&*d*(\mu{\mathcal{H}})&=&0. \end{array}\right. \tag{2.1}\] Let \(\#:TM\to T^{*}M\) and \(\flat:T^{*}M\to TM\) denote the musical isomorphisms. 
The boundary conditions are given by \[[(\mathcal{E}^{\flat})_{||}]_{x^{\prime}\in\partial M}=0,\qquad[(\mathcal{B}^{\flat})_{\perp}]_{x^{\prime}\in\partial M}=0. \tag{2.2}\] We define surface current \(\mathcal{J}_{\Sigma}\) and surface charges \(\rho_{\Sigma}\) on the boundary by \[[(\mathcal{H}^{\flat})_{||}]_{x^{\prime}\in\partial M}=[\mathcal{J}_{\Sigma}]_{x^{\prime}\in\partial M}\text{ and }[(\mathcal{D}^{\flat})_{\perp}]_{x^{\prime}\in\partial M}=\rho_{\Sigma}. \tag{2.3}\] #### 2.1.1. Finite speed of propagation and Strichartz estimates in the interior In this section, we show how we can reduce the local-in-time analysis to charts. We recall the notion of finite speed of propagation. Let \((\mathcal{E},\mathcal{H})\) be homogeneous solutions to \[\left\{\begin{array}{llll}\partial_{t}(\varepsilon\mathcal{E})&=\nabla\times\mathcal{H},&[\mathcal{E}\times\nu]_{x^{\prime}\in\partial\Omega}&=&0,\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}(\mu\mathcal{H})&=-\nabla\times\mathcal{E},&[\mathcal{H}\cdot\nu]_{x^{\prime}\in\partial\Omega}&=&0.\end{array}\right.\] Note that we do not require divergence conditions here (cf. [24, Section 6]). For \(X\subseteq\Omega\) let \(\mathcal{N}_{r}(X)=\{x^{\prime}\in\Omega:\operatorname{dist}(x^{\prime},X)<r\}\). By Maxwell equations having finite speed of propagation, we mean that there is \(0<c<\infty\) such that for \(0<t<\infty\) it holds \[\operatorname{supp}_{x^{\prime}}((\mathcal{E},\mathcal{H})(t))\subseteq\mathcal{N}_{ct}(\operatorname{supp}_{x^{\prime}}(\mathcal{E}_{0},\mathcal{H}_{0})).\] We refer to [24, Theorem 6.1] for a more precise statement in terms of the backwards light cone.
Let \(d:\Omega\to\mathbb{R}_{>0}\), \(d(x^{\prime})=\operatorname{dist}(x^{\prime},\partial\Omega)\) denote the distance function away from the boundary, and \(H_{\tau}=d^{-1}(\tau)\) denote the corresponding level sets. By the implicit function theorem, \(H_{\tau}\) is a smooth hypersurface with metric \(g_{\tau}\) and we can write \[g=d\tau^{2}+g_{\tau}\text{ for }0\leq\tau\leq\tilde{\delta}.\] By compactness of \(\partial\Omega\), finitely many geodesic charts suffice to cover a set \(\{x\in\Omega:d(x)<\varepsilon\}\) close to the boundary. Shrinking the charts allows us to restrict to local-in-time solutions, which do not leave the geodesic chart. In Appendix B.1 we show Strichartz estimates in the interior based on Strichartz estimates in the full space. We find \(T\) small enough such that \(({\mathcal{E}},{\mathcal{H}})(t)\) within \(\Omega^{\text{int}}=\{x^{\prime}\in\Omega:d(x^{\prime})>\varepsilon/2\}\) only depends on \(({\mathcal{E}},{\mathcal{H}})(0)\) in \(\tilde{\Omega}^{\text{int}}=\{x^{\prime}\in\Omega:d(x^{\prime})>\varepsilon/4\}\), and the solution does not reach the boundary for times \(t\leq T\). We prove that \[\|({\mathcal{E}},{\mathcal{H}})\|_{L^{p}_{T}L^{q}(\Omega^{\text{int}})}\lesssim \|({\mathcal{E}}_{0},{\mathcal{H}}_{0})\|_{H^{s}(\Omega)}+\|\rho_{e}(0)\|_{H^{s-1+\frac{1}{p}}}. \tag{2.4}\] #### 2.1.2. Geodesic normal coordinates Let \(g=(g_{ij})\) denote the metric tensor and \(g^{-1}=(g^{ij})\) the cometric. In this work, we only consider isotropic \(\varepsilon\) and \(\mu\) on the original domain \((\Omega,\delta^{ij})\). We endow a chart in \((\Omega,\delta^{ij})\) with geodesic normal coordinates derived from the height function. Let \(x^{\prime}=(x^{*},x^{\prime}_{3})\), and \[g={dx^{\prime}_{3}}^{2}+r(x^{*},x^{\prime}_{3},(dx^{*})^{2}). 
\tag{2.5}\] The Hodge dual transforms by \[*({dx^{\prime}}^{i_{1}}\wedge\ldots\wedge{dx^{\prime}}^{i_{k}})=\frac{\sqrt{g}}{(n-k)!}g^{i_{1}j_{1}}\ldots g^{i_{k}j_{k}}\varepsilon_{j_{1}\ldots j_{n}} dx^{\prime j_{k+1}}\wedge\ldots\wedge dx^{\prime j_{n}}.\] Above \(\varepsilon_{j_{1}\ldots j_{n}}\) denotes the \(n\)-Levi-Civita tensor, i.e., \[\varepsilon_{j_{1}\ldots j_{n}}=\begin{cases}1,&\quad(j_{1}\ldots j_{n})\text{ is an even permutation},\\ -1,&\quad(j_{1}\ldots j_{n})\text{ is an odd permutation},\\ 0,&\quad(j_{1}\ldots j_{n})\text{ is not a permutation},\end{cases}\] and \((g^{ij})\) denotes the inverse metric. Recall that we let \(\sqrt{g}=\sqrt{\det g}\). Consequently, in geodesic normal coordinates, \[*_{g}dA=\sqrt{g}\operatorname{ad}(g^{-1})\nabla\times A,\qquad*_{g}d*_{g}A=\frac{1}{\sqrt{g}}\nabla\cdot(\sqrt{g}g^{-1}A).\] In the above display \(\operatorname{ad}(B)\) denotes the adjugate matrix, i.e., \[\operatorname{ad}(B)=((-1)^{i+j}B_{ji})_{i,j}\] with \(B_{ji}\) denoting the \((j,i)\)-minor of \(B\). By Cramer's rule, Maxwell equations become on the half-space \((t,x^{\prime})\in\mathbb{R}\times\mathbb{R}_{>0}^{3}\): \[\left\{\begin{array}{ll}\partial_{t}(\varepsilon(x^{\prime})\mathcal{E}^{\prime})&=(\sqrt{g})^{-1}g\nabla\times\mathcal{H}^{\prime}-\mathcal{J}^{\prime}_{e},&\nabla\cdot\left(\sqrt{g}g^{-1}\mu\mathcal{H}^{\prime}\right)&=0,\\ \partial_{t}(\mu(x^{\prime})\mathcal{H}^{\prime})&=-\big{(}\sqrt{g}\big{)}^{-1}g\nabla\times\mathcal{E}^{\prime},&\frac{1}{\sqrt{g}}\nabla\cdot\left(\sqrt{g}g^{-1}\varepsilon\mathcal{E}^{\prime}\right)&=\rho^{\prime}_{e}.\end{array}\right.\] In a sense, \(\sqrt{g}g^{-1}\varepsilon\) now plays the role of \(\varepsilon\) and \(\sqrt{g}g^{-1}\mu\) the role of \(\mu\). Also, we redefine \(\rho^{\prime}_{e}=\nabla\cdot(\sqrt{g}g^{-1}\varepsilon\mathcal{E}^{\prime})\), which does not affect regularity questions because \(\sqrt{g}\) is smooth. 
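The reduction via Cramer's rule uses that \(B\operatorname{ad}(B)=\det(B)\,\mathrm{Id}\), so that \(\operatorname{ad}(g^{-1})=\det(g^{-1})\,g\); for a cometric with the block structure of geodesic normal coordinates this can be confirmed symbolically (the symbols below are placeholders for the metric entries):

```python
import sympy as sp

g11, g12, g22 = sp.symbols('g11 g12 g22')
# cometric with the block structure of geodesic normal coordinates:
# the third direction is orthogonal to the level sets of the height function
B = sp.Matrix([[g11, g12, 0],
               [g12, g22, 0],
               [0,   0,   1]])
adjB = B.adjugate()                      # matrix of signed (j, i)-minors
# Cramer's rule: B * ad(B) = det(B) * Id, hence ad(B) = det(B) * B^{-1}
assert sp.simplify(B*adjB - B.det()*sp.eye(3)) == sp.zeros(3, 3)
assert sp.simplify(adjB - B.det()*B.inv()) == sp.zeros(3, 3)
```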
Moreover, we write \(\mathcal{J}^{\prime}_{e}:=\sqrt{g}g^{-1}\mathcal{J}_{e}\). Below we shall see that this is consistent with the compatibility conditions. We rearrange the equations to \[\left\{\begin{array}{ll}\partial_{t}(\sqrt{g}g^{-1}\varepsilon\mathcal{E}^{\prime})&=\nabla\times\mathcal{H}^{\prime}-\mathcal{J}^{\prime}_{e},&\nabla\cdot\left(\sqrt{g}g^{-1}\mu\mathcal{H}^{\prime}\right)=0,&(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}(\sqrt{g}g^{-1}\mu\mathcal{H}^{\prime})&=-\nabla\times\mathcal{E}^{\prime},&\nabla\cdot\left(\sqrt{g}g^{-1}\varepsilon\mathcal{E}^{\prime}\right)=\rho^{\prime}_{e}.\end{array}\right. \tag{2.6}\] #### 2.1.3. Compatibility conditions On the half-space \(x^{\prime}\in\mathbb{R}_{>0}^{3}\), the boundary conditions are given as follows: \[[\mathcal{E}_{1}]_{x^{\prime}_{3}=0}=[\mathcal{E}_{2}]_{x^{\prime}_{3}=0}=[\mathcal{H}_{3}]_{x^{\prime}_{3}=0}=0. \tag{2.7}\] We call a relation \[\operatorname{tr}(F(\partial^{\alpha}\mathcal{E},\partial^{\beta}\mathcal{H}))=0,\] which follows from (2.7) by taking \(k\) time derivatives, a compatibility condition of order \(k\). Hence, the conditions (2.7) are of order zero. For (1.8), the tangential derivatives are \(\partial_{t}\), \(\partial_{1}\), and \(\partial_{2}\), which allows us to express the compatibility conditions explicitly. 
It is important to observe from (2.5) that the (possibly non-diagonal) metric tensor only mixes the first and second component: \[g^{-1}=\begin{pmatrix}g^{11}&g^{12}&0\\ g^{21}&g^{22}&0\\ 0&0&1\end{pmatrix}.\] We give the first order compatibility conditions in the homogeneous case: Applying tangential derivatives \(\partial_{1}\), \(\partial_{2}\) to \(\mathcal{H}_{3}\) gives \[[\partial_{1}\mathcal{H}_{3}]_{x_{3}^{\prime}=0}=[\partial_{2}\mathcal{H}_{3}]_{x_{3}^{\prime}=0}=0.\] The first and second components of the equation \[\partial_{t}(\sqrt{g}g^{-1}\varepsilon\mathcal{E})=\nabla\times\mathcal{H}\] yield \[[\partial_{3}\mathcal{H}_{1}]_{x_{3}^{\prime}=0}=[\partial_{3}\mathcal{H}_{2}]_{x_{3}^{\prime}=0}=0.\] Moreover, tangential derivatives \(\partial_{1}\), \(\partial_{2}\) applied to \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) and the charge condition yield \[[\partial_{i}\mathcal{E}_{j}]_{x_{3}^{\prime}=0}=0\text{ for }i,j=1,2\text{ and }[\nabla\cdot(\varepsilon\sqrt{g}g^{-1}\mathcal{E})]_{x_{3}^{\prime}=0}=\text{tr}(\rho_{e}).\] Let \(\varepsilon^{\prime}=\sqrt{g}\varepsilon\). The above display becomes \[[\partial_{3}(\varepsilon^{\prime}\mathcal{E}_{3})]_{x_{3}^{\prime}=0}=\text{tr}(\rho_{e})\Leftrightarrow[(\partial_{3}\varepsilon^{\prime})\mathcal{E}_{3}]_{x_{3}^{\prime}=0}+[\varepsilon^{\prime}\partial_{3}\mathcal{E}_{3}]_{x_{3}^{\prime}=0}=\text{tr}(\rho_{e}).\] This yields a Robin boundary condition for \(\mathcal{E}_{3}\) in terms of \(\text{tr}(\rho_{e})\) and \(\rho_{\Sigma}\). If these are vanishing, we obtain Neumann boundary conditions for \(\mathcal{E}_{3}\). But note that this is not a compatibility condition. We extend the equations to the full space as follows: Reflect \(\varepsilon\), \(\mu\), and \(g^{ij}\) evenly. 
Let \[\tilde{\kappa}(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})=\begin{cases}\kappa(x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime}),&x_{3}^{\prime}\geq 0,\\ \kappa(x_{1}^{\prime},x_{2}^{\prime},-x_{3}^{\prime}),&x_{3}^{\prime}<0,\end{cases}\qquad\kappa\in\{\varepsilon,\mu,g^{ij}\}.\] On the other hand, \(\mathcal{E}_{1}\), \(\mathcal{E}_{2}\), and \(\mathcal{H}_{3}\) are reflected oddly, and \(\mathcal{H}_{1}\), \(\mathcal{H}_{2}\), and \(\mathcal{E}_{3}\) are reflected evenly. \(\mathcal{J}_{e1}\), \(\mathcal{J}_{e2}\) are reflected oddly and \(\mathcal{J}_{e3}\) evenly. \(\rho_{e}\) is reflected oddly. Denoting the reflected quantities with \(\tilde{X}\) and \(\sqrt{\tilde{g}}=\sqrt{\det\tilde{g}}\), the following system of equations holds on \(\mathbb{R}^{3}\): \[\left\{\begin{array}{ll}\partial_{t}(\sqrt{\tilde{g}}\tilde{g}^{-1}\tilde{\varepsilon}\tilde{\mathcal{E}})&=\nabla\times\tilde{\mathcal{H}}-\tilde{\mathcal{J}}_{e},&\nabla\cdot(\sqrt{\tilde{g}}\tilde{g}^{-1}\tilde{\mu}\tilde{\mathcal{H}})=0,\\ \partial_{t}(\sqrt{\tilde{g}}\tilde{g}^{-1}\tilde{\mu}\tilde{\mathcal{H}})&=-\nabla\times\tilde{\mathcal{E}},&\nabla\cdot(\sqrt{\tilde{g}}\tilde{g}^{-1}\tilde{\varepsilon}\tilde{\mathcal{E}})=\tilde{\rho}_{e}.\end{array}\right. \tag{2.8}\] We give the zeroth and first order compatibility conditions under assumptions (1.9): \[[\mathcal{E}\times\nu]_{x^{\prime}\in\partial\Omega}=0,\quad[\mathcal{H}\cdot\nu]_{x^{\prime}\in\partial\Omega}=0, \tag{2.9}\] \[[\partial_{\nu}\mathcal{H}_{\text{tang}}]_{x^{\prime}\in\partial\Omega}=0. 
\tag{2.10}\] We find the second order compatibility condition by taking two time derivatives in geodesic normal coordinates: \[(\partial_{t}^{2}(\sqrt{g}g^{-1}\varepsilon\mathcal{E}))_{1,2}=\] \[\begin{pmatrix}\partial_{2}(\mu^{-1}\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))-\partial_{3}(\mu^{-1}\sqrt{g}^{-1}(g_{21}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{22}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))\\ \partial_{3}(\mu^{-1}\sqrt{g}^{-1}(g_{11}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{12}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))-\partial_{1}(\mu^{-1}\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))\end{pmatrix}.\] Clearly, \[[\partial_{2}(\mu^{-1}\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))]_{x_{3}^{\prime}=0}=0\] because \(\partial_{1}\) and \(\partial_{2}\) are tangential derivatives and the zeroth order compatibility conditions hold. Similarly, \[[\partial_{1}(\mu^{-1}\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))]_{x_{3}^{\prime}=0}=0.\] Recall that we required in (1.9) \[\partial\mu|_{x^{\prime}\in\partial\Omega}=0\] to simplify the compatibility conditions. Thus, we obtain for the second order compatibility conditions by (1.9) \[\begin{split}[\partial_{3}(\sqrt{g}^{-1}(g_{21}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{22}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))]_{x_{3}^{\prime}=0}&=0,\\ [\partial_{3}(\sqrt{g}^{-1}(g_{11}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{12}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))]_{x_{3}^{\prime}=0}&=0.\end{split} \tag{2.11}\] **Proposition 2.1**.: _Let \(0\leq\gamma\leq 3\) and \(\mathcal{H}^{\gamma}(\Omega)\) be defined by (1.12) and suppose that (1.9) holds. 
Then, we have the following characterization:_ \[\begin{array}{llll}\bullet\ 0\leq\gamma<\frac{1}{2}:&\mathcal{H}^{\gamma}(\Omega)&=H^{\gamma}(\Omega)^{6},\\ \bullet\ \frac{1}{2}<\gamma<\frac{3}{2}:&\mathcal{H}^{\gamma}(\Omega)&\subseteq\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{\gamma}(\Omega)^{6}:(\ref{eq:2.9})\text{ holds.}\},\\ \bullet\ \frac{3}{2}<\gamma<\frac{5}{2}:&\mathcal{H}^{\gamma}(\Omega)&\subseteq\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{\gamma}(\Omega)^{6}:(\ref{eq:2.9})\text{ and }(\ref{eq:2.10})\text{ hold.}\},\\ \bullet\ \frac{5}{2}<\gamma\leq 3:&\mathcal{H}^{\gamma}(\Omega)&\subseteq\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{\gamma}(\Omega)^{6}:(\ref{eq:2.9})-(\ref{eq:2.11})\text{ hold.}\}.\end{array}\] For the proof we shall change to geodesic normal coordinates. In a chart endowed with geodesic coordinates, i.e., for Maxwell equations localized to the half-space, we have \[[\mathcal{E}_{1}]_{x_{3}^{\prime}=0}=[\mathcal{E}_{2}]_{x_{3}^{\prime}=0}=[\mathcal{H}_{3}]_{x_{3}^{\prime}=0}=0, \tag{2.12}\] \[[\partial_{3}\mathcal{H}_{1}]_{x_{3}^{\prime}=0}=[\partial_{3}\mathcal{H}_{2}]_{x_{3}^{\prime}=0}=0, \tag{2.13}\] \[\begin{split}[\partial_{3}(\sqrt{g}^{-1}(g_{21}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{22}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))]_{x_{3}^{\prime}=0}&=0,\\ [\partial_{3}(\sqrt{g}^{-1}(g_{11}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{12}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))]_{x_{3}^{\prime}=0}&=0.\end{split} \tag{2.14}\] Proof of Proposition 2.1.: Let \((\Omega_{i},\varphi_{i})_{i=1,\dots,n}\) denote a finite covering of a neighbourhood of the boundary with geodesic charts and \((\Omega_{0},\varphi_{0}=id)\) the trivial chart of the interior. 
We decompose \(u_{0}\in H^{\gamma}(\Omega)\) with a smooth partition of unity subordinate to \((\Omega_{j})_{j=0,\dots,n}\), \(1=\sum_{i=1}^{n}\psi_{i}+\psi_{0}\), and write \[u_{0}=\sum_{i=1}^{n}\psi_{i}u_{0}+\psi_{0}u_{0}.\] It suffices to show the claim for each piece \(\psi_{i}u_{0}\). Within \(\Omega_{i}\) we can endow \(\Omega\) with geodesic normal coordinates, and it is enough to prove the claim for the transformed fields by invariance of Sobolev spaces under changes of coordinates. For \(\Omega_{0}\) this is trivial because there is no boundary. Note that with \[\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})=\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{3}(\mathbb{R}^{3}_{>0}):(\ref{eq:2.12})-(\ref{eq:2.14})\text{ hold.}\},\] we now have to show that \(\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}=H^{s}(\mathbb{R}^{3}_{>0})^{6}\) for \(0\leqslant s<\frac{1}{2}\) and \[\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}\subseteq\begin{cases}\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\mathbb{R}^{3}_{>0})^{6}:(2.12)\text{ holds.}\},&\frac{1}{2}<s<\frac{3}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\mathbb{R}^{3}_{>0})^{6}:(2.12),(2.13)\text{ hold.}\},&\frac{3}{2}<s<\frac{5}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\mathbb{R}^{3}_{>0})^{6}:(2.12)-(2.14)\text{ hold.}\},&\frac{5}{2}<s\leqslant 3.\end{cases} \tag{2.15}\] The inclusion \(\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}\subseteq H^{s}(\mathbb{R}^{3}_{>0})^{6}\) for \(0\leqslant s<\frac{1}{2}\) is trivial, and (2.15) follows from the continuity of the trace. 
To show the reverse inclusion \(\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}\supseteq H^{s}(\mathbb{R}^{3}_{>0})^{6}\), we observe that \[\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}\supseteq\overline{C_{c}^{\infty}(\mathbb{R}^{3}_{>0})^{6}}^{\|\cdot\|_{H^{s}}}=H^{s}(\mathbb{R}^{3}_{>0})\] provided that \(0\leqslant s<\frac{1}{2}\). **Remark 2.2**.: We can be more precise about the conditions on \(\mathcal{H}_{0}\). Let \[\pi:\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}\to H^{s}(\mathbb{R}^{3}_{>0})^{3},\quad(\mathcal{E}_{0},\mathcal{H}_{0})\mapsto\mathcal{H}_{0}\] denote the projection to the \(\mathcal{H}\)-initial data. We have the following: \[\operatorname{im}(\pi)=\begin{cases}H^{s}(\mathbb{R}^{3}_{>0})^{3},&0\leqslant s<\frac{1}{2},\\ H^{s}(\mathbb{R}^{3}_{>0})^{2}\times H^{s}_{D}(\mathbb{R}^{3}_{>0}),&\frac{1}{2}<s<\frac{3}{2},\\ H^{s}_{N}(\mathbb{R}^{3}_{>0})^{2}\times H^{s}_{D}(\mathbb{R}^{3}_{>0}),&\frac{3}{2}<s\leqslant 3.\end{cases}\] We show that \(\operatorname{im}(\pi)\) is a superset of the above subsets of \(H^{s}(\mathbb{R}^{3}_{>0})^{3}\). For \(0\leqslant s<\frac{1}{2}\) this was already carried out above. For \(\frac{1}{2}<s\leqslant 3\), we extend \(\mathcal{H}_{0i}\) for \(i=1,2\) evenly to the full space and \(\mathcal{H}_{03}\) oddly. Then we regularize by convolution with a mollifier. The resulting functions are in \(H^{3}(\mathbb{R}^{3})\), satisfy the boundary conditions, and approximate the functions in \(H^{s}(\mathbb{R}^{3}_{>0})\). 
This is based on continuity of \[\operatorname{ext}_{D}:H^{s}_{0}(\mathbb{R}^{3}_{>0}) \to H^{s}(\mathbb{R}^{3}) \tag{2.16}\] \[f \mapsto\bar{f}_{o}\] for \(0\leqslant s\leqslant 2\) with \[\bar{f}_{o}(x^{\prime})=\begin{cases}f(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3}),&x^{\prime}_{3}>0,\\ -f(x^{\prime}_{1},x^{\prime}_{2},-x^{\prime}_{3}),&x^{\prime}_{3}<0.\end{cases}\] Furthermore, even reflection yields a continuous operator for Neumann functions for \(0\leqslant s\leqslant 2\): \[\operatorname{ext}_{N}:H^{s}_{N}(\mathbb{R}^{3}_{>0}) \to H^{s}(\mathbb{R}^{3}) \tag{2.17}\] \[f \mapsto\bar{f}_{e}\] with \[\bar{f}_{e}(x^{\prime})=\begin{cases}f(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3}),&x^{\prime}_{3}>0,\\ f(x^{\prime}_{1},x^{\prime}_{2},-x^{\prime}_{3}),&x^{\prime}_{3}<0.\end{cases}\] It seems likely that instead of (2.15) it holds \[\overline{\mathcal{H}^{3}(\mathbb{R}^{3}_{>0})}^{\|\cdot\|_{H^{s}}}=\begin{cases}\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\mathbb{R}^{3}_{>0})^{6}:(2.12)\text{ holds.}\},&\frac{1}{2}<s<\frac{3}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\mathbb{R}^{3}_{>0})^{6}:(2.12),(2.13)\text{ hold.}\},&\frac{3}{2}<s<\frac{5}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\mathbb{R}^{3}_{>0})^{6}:(2.12)-(2.14)\text{ hold.}\},&\frac{5}{2}<s\leqslant 3.\end{cases}\] To show this, we would have to work out a more sophisticated approximation for the components of \(\mathcal{E}_{0}\). We choose to omit the details because this is not the focus of the present work. However, we can readily consider a smaller space, for which we have the complete characterization: \[\mathcal{H}_{0}^{s}(\Omega)=\overline{\mathcal{H}^{3}(\Omega)\cap(C_{c}^{\infty}(\Omega)^{3}\times H^{3}(\Omega)^{3})}^{\|\cdot\|_{H^{s}}}.\] This subspace ensures zero-boundary conditions on \(\mathcal{E}\) and its derivatives (provided the trace makes sense). 
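That \(\operatorname{ext}_{D}\) and \(\operatorname{ext}_{N}\) preserve Sobolev regularity rests on the elementary observation that the derivative of the odd extension \(\bar{f}_{o}\) is the even extension of \(f^{\prime}\), with no singular contribution at \(x^{\prime}_{3}=0\) precisely because \(f\) vanishes there. A one-dimensional numerical sketch (the sample function below is a hypothetical choice for illustration):

```python
import numpy as np

def ext_D(f, x):           # odd reflection: Dirichlet extension
    return np.where(x >= 0, f(x), -f(-x))

def ext_N(f, x):           # even reflection: Neumann extension
    return np.where(x >= 0, f(x), f(-x))

x = np.linspace(-4, 4, 8001)
f = lambda t: t*np.exp(-t)              # sample datum with f(0) = 0
fp = lambda t: (1 - t)*np.exp(-t)       # its derivative
# derivative of the odd extension = even extension of the derivative
d_ext = np.gradient(ext_D(f, x), x)
assert np.max(np.abs(d_ext - ext_N(fp, x))) < 1e-2
```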
For \(\mathcal{H}_{0}\) we have the complete characterization by the above argument; for \(\mathcal{E}_{0}\) this follows from the well-known characterization of the Sobolev space with Dirichlet boundary conditions: \[H_{D}^{s}(\Omega)=\begin{cases}H^{s}(\Omega),\quad 0\leqslant s<\frac{1}{2},\\ \{f\in H^{s}(\Omega)\,:[f]_{x^{\prime}\in\partial\Omega}=0\},\quad\frac{1}{2}<s\leqslant 1.\end{cases}\] We shall see that, depending on the value of \(s\), we find the following boundary conditions for initial data in \(\mathcal{H}_{0}^{s}\): \[[\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=0,\;[\mathcal{H}_{0}\cdot\nu]_{x^{\prime}\in\partial\Omega}=0, \tag{2.18}\] \[[\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=[\partial\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=0,\quad[\mathcal{H}_{0}\cdot\nu]_{x^{\prime}\in\partial\Omega}=0,\;[\partial_{\nu}(\mathcal{H}_{0}\times\nu)]_{x^{\prime}\in\partial\Omega}=0, \tag{2.19}\] \[[\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=[\partial\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=[\partial^{2}\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=0,\;\;[\mathcal{H}_{0}\cdot\nu]_{x^{\prime}\in\partial\Omega}=0,\;[\partial_{\nu}(\mathcal{H}_{0}\times\nu)]_{x^{\prime}\in\partial\Omega}=0. \tag{2.20}\] **Proposition 2.3**.: _We have the following characterization:_ \[\mathcal{H}_{0}^{s}(\Omega)=\begin{cases}\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{6}\},\quad 0\leqslant s<\frac{1}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{6}:\,(\ref{eq:2.18})\text{ holds.}\,\},\quad\frac{1}{2}<s<\frac{3}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{6}:\,(\ref{eq:2.19})\text{ holds.}\,\},\quad\frac{3}{2}<s<\frac{5}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{6}:\,(\ref{eq:2.20})\text{ holds.}\,\},\quad\frac{5}{2}<s\leqslant 3.\end{cases}\] #### 2.1.4. 
Reductions for smooth time-independent coefficients As the main step in the proof of Theorem 1.1, we show the following: **Proposition 2.4**.: _Let \(\tilde{u}=(\tilde{\mathcal{E}},\tilde{\mathcal{H}})\), and \(\tilde{\varepsilon}\), \(\tilde{\mu}\), \(\tilde{g}\) like in (2.8). Then the following estimates hold:_ \[\|\tilde{u}\|_{L^{p}L^{q}}\lesssim\|\tilde{u}\|_{L^{\infty}_{T}H^{\gamma+\delta}}+\|\tilde{\mathcal{J}}_{e}\|_{L^{2}_{T}H^{\gamma+\delta}}+\|\tilde{\rho}_{e}\|_{L^{\infty}_{T}H^{\gamma-1+\frac{1}{p}+\delta}} \tag{2.21}\] _for \(p,q\geqslant 2\), \(q<\infty\), \(\delta>0\) satisfying the following_ \[\frac{3}{p}+\frac{2}{q}\leqslant 1,\qquad\gamma=3\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p},\qquad\delta<\frac{3}{q}.\] **Remark 2.5**.: Recall that \(\rho\) is reflected oddly. The Dirichlet condition is irrelevant for \(s<\frac{1}{2}\), which is ensured with the condition on \(\delta\). We conclude the section with the following: **Proposition 2.6**.: _Suppose that Proposition 2.4 holds true and the energy estimate_ \[\|u\|_{L^{\infty}_{T}\mathcal{H}^{\gamma}(\Omega)}\lesssim_{T}\|u(0)\|_{\mathcal{H}^{\gamma}(\Omega)} \tag{2.22}\] _is valid for homogeneous solutions \(u=(\mathcal{E},\mathcal{H})\) to (1.1). Then, Theorem 1.1 follows._ Proof.: First, we prove Theorem 1.1 for homogeneous solutions \(u=(\mathcal{E},\mathcal{H})\) with \(\mathcal{J}_{e}=0\). By virtue of the energy estimate (2.22), it suffices to show: \[\|u\|_{L^{p}([0,T],L^{q}(\Omega))}\lesssim\|u\|_{L^{\infty}_{T}\mathcal{H}^{\gamma}}+\|\rho_{e}(0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}(\Omega)}. 
\tag{2.23}\] But for homogeneous solutions \(u=(\mathcal{E},\mathcal{H})\) to (2.1), the transformed and extended solutions \(\tilde{u}=(\tilde{\mathcal{E}},\tilde{\mathcal{H}})\) are likewise homogeneous and satisfy the following estimates by hypothesis: \[\|\tilde{u}\|_{L^{p}L^{q}}\lesssim\|\tilde{u}\|_{L^{\infty}_{T}H^{\gamma}}+\| \tilde{\rho}_{e}(0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}}. \tag{2.24}\] But clearly, \(\|u\|_{L^{p}L^{q}}\lesssim\|\tilde{u}\|_{L^{p}L^{q}}\) and by continuity of \(\mathrm{ext}_{D}\) and \(\mathrm{ext}_{N}\) for \(0\leq s\leq 2\) (see (2.16), (2.17)), we have \[\|\tilde{u}(t)\|_{H^{\gamma}}+\|\tilde{\rho}_{e}(0)\|_{H^{\gamma-1+\frac{1}{p }+\delta}(\mathbb{R}^{3})}\lesssim\|u(t)\|_{\mathcal{H}^{\gamma}}+\|\rho_{e}( 0)\|_{H^{\gamma-1+\frac{1}{p}+\delta}(\Omega)}.\] This reduces Theorem 1.1 to Proposition 2.4 for homogeneous solutions. Inhomogeneous solutions are covered by the energy estimate (2.22) and superposition. Indeed, suppose that (2.24) holds true. Let \((U(t))_{t\in\mathbb{R}}\) be the \(C_{0}\)-group of the Maxwell evolution in \(L^{2}(\Omega)^{6}\) (cf. [9, Section 3.2]). Then, we can write the general solution by Duhamel's formula \[u(t)=U(t)u_{0}+\int_{0}^{t}U(t-s)(\tilde{\mathcal{P}}u)(s)ds.\] We denote \[\tilde{\mathcal{P}}=\begin{pmatrix}\partial_{t}&-\varepsilon^{-1}\nabla\times \\ \mu^{-1}\nabla\times&\partial_{t}\end{pmatrix}.\] Changing to \(\tilde{\mathcal{P}}\) is necessary as Duhamel's formula has to be applied in conservative form. By smoothness of the coefficients, this is admissible. The proof is complete. ### 2d manifolds It is also useful to treat the two-dimensional case geometrically. In this case we rewrite (1.15) as \[\left\{\begin{array}{ll}\partial_{t}(\varepsilon(x^{\prime})\mathcal{E})&= *d\mathcal{H}-\mathcal{J}_{e},&*d*(\varepsilon\mathcal{E})=\rho_{e},\\ \partial_{t}(\mu(x^{\prime})\mathcal{H})&=-*d\mathcal{E},&(t,x^{\prime})\in \mathbb{R}\times M\end{array}\right. 
\tag{2.25}\] with \(\mathcal{E},\mathcal{J}_{e}(t):M\to T^{*}M\) covector fields and \(\mathcal{H}(t):M\to\mathbb{R}\) a zero-form. In (1.15) we have like above \(M=(\Omega,\delta^{ij})\). The boundary condition is given by \[[(\mathcal{E}^{\flat})_{||}]_{x^{\prime}\in\partial M}=0.\] #### 2.2.1. Finite speed of propagation and Strichartz estimates in the interior The interior part \[\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{T}L^{q}_{x^{\prime}}(\Omega^{\mathrm{int}})}\lesssim\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}H^{\gamma}}+\|\rho_{e}\|_{L^{\infty}_{T}H^{\gamma-1+\frac{1}{p}}} \tag{2.26}\] for homogeneous solutions is handled like in Paragraph 2.1.1. The proof uses finite speed of propagation (see Appendix A) to localize the solution away from the boundary. Then we can use appropriate Strichartz estimates in the full space: Let \((s,p,q)\) be wave Strichartz admissible in two dimensions, i.e., \[2\leq p\leq\infty,\ 2\leq q<\infty,\quad\frac{2}{p}+\frac{1}{q}\leq\frac{1}{2},\quad s=2\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p}.\] Let \(u=(u_{1},u_{2},u_{3})=(u^{(1)},u^{(2)}):\mathbb{R}\times\mathbb{R}^{2}\to\mathbb{R}^{2}\times\mathbb{R}\), and \[\tilde{P}=\begin{pmatrix}\partial_{t}&0&-\partial_{2}(\mu_{1}\cdot)\\ 0&\partial_{t}&\partial_{1}(\mu_{1}\cdot)\\ \partial_{1}(\varepsilon_{21}\cdot)-\partial_{2}(\varepsilon_{11}\cdot)&\partial_{1}(\varepsilon_{22}\cdot)-\partial_{2}(\varepsilon_{12}\cdot)&\partial_{t}\end{pmatrix}.\] In Appendix B we show Strichartz estimates \[\||D^{\prime}|^{-s}u\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}})}\lesssim_{T,\varepsilon,\mu_{1}}\|u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\tilde{P}(x,D)u\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\\ +\left(\||D^{\prime}|^{-1+\frac{1}{p}}\nabla\cdot u^{(1)}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\||D^{\prime}|^{-1+\frac{1}{p}}\partial_{t}\nabla\cdot u^{(1)}\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\right)\] under regularity and ellipticity assumptions on \(\varepsilon_{ij}\) and \(\mu_{1}\). 
Above \(|D^{\prime}|^{\alpha}=(-\Delta)^{\alpha/2}\) denotes the fractional Laplacian defined as Fourier multiplier. Then, in a similar spirit to the arguments in Paragraph 2.1.1, we use commutator arguments to conclude (2.26). The details are omitted to avoid repetition. #### 2.2.2. Geodesic normal coordinates In the two-dimensional context, geodesic normal coordinates are given by \[g=g_{11}(x^{\prime}_{1},x^{\prime}_{2}){dx^{\prime}_{1}}^{2}+{dx^{\prime}_{2}}^{2}.\] Computing \(*d\) and \(*d*\) in these coordinates, we find \[\left\{\begin{array}{ccc}\partial_{t}(\varepsilon(x^{\prime})\mathcal{E}^{\prime})&=(\sqrt{g})^{-1}g\nabla_{\perp}\mathcal{H}^{\prime}-\mathcal{J}^{\prime}_{e},&\frac{1}{\sqrt{g}}\nabla\cdot(\sqrt{g}g^{-1}\varepsilon\mathcal{E}^{\prime})&=&\rho^{\prime}_{e},\\ \partial_{t}(\mu(x^{\prime})\mathcal{H}^{\prime})&=-(\sqrt{g})^{-1}(\partial_{1}\mathcal{E}^{\prime}_{2}-\partial_{2}\mathcal{E}^{\prime}_{1}),&(t,x^{\prime})&\in&\mathbb{R}\times\mathbb{R}^{2}_{>0}.\end{array}\right.\] Above \(\mathbb{R}^{2}_{>0}=\{(x^{\prime}_{1},x^{\prime}_{2})\in\mathbb{R}^{2}:x^{\prime}_{2}>0\}\) denotes the two-dimensional half-plane and \(\nabla_{\perp}=(\partial_{2},-\partial_{1})\). The boundary condition reads \[[\mathcal{E}_{1}]_{x^{\prime}_{2}=0}=0.\] We rewrite the system by redefining \(\mathcal{J}_{e}:=\sqrt{g}g^{-1}\mathcal{J}_{e}\), \(\rho_{e}:=\sqrt{g}\rho_{e}\) as \[\left\{\begin{array}{ccc}\partial_{t}(\sqrt{g}g^{-1}\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H}-\mathcal{J}_{e},&\nabla\cdot(\sqrt{g}g^{-1}\varepsilon\mathcal{E})&=&\rho_{e},\\ \partial_{t}(\sqrt{g}\mu\mathcal{H})&=-(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}),&(t,x^{\prime})&\in&\mathbb{R}\times\mathbb{R}^{2}_{>0}.\end{array}\right.\] #### 2.2.3. Compatibility conditions Note that the components of \(\mathcal{J}_{e}\) and \(\mathcal{E}\) are respected by \(g^{-1}\), which is diagonal. Let \(\varepsilon^{\prime}=\sqrt{g}g^{-1}\varepsilon\) for brevity. 
\(\mathcal{E}_{1}\) is endowed with Dirichlet boundary conditions, and we endow \(\mathcal{H}\) with Neumann boundary conditions, which is a first order compatibility condition: \[[\partial_{2}\mathcal{H}]_{x^{\prime}_{2}=0}=0.\] For \(\mathcal{E}\) we obtain from \([\partial_{1}\mathcal{E}_{1}]_{x^{\prime}_{2}=0}=0\) the following Robin boundary condition by considering the traces of the charges: \[\partial_{1}(\varepsilon^{\prime}_{11}\mathcal{E}_{1})+\partial_{2}(\varepsilon^{\prime}_{22}\mathcal{E}_{2})=\rho_{e}\Rightarrow[(\partial_{2}\varepsilon^{\prime}_{22})\mathcal{E}_{2}]_{x^{\prime}\in\partial\Omega}+[\varepsilon^{\prime}_{22}\partial_{2}\mathcal{E}_{2}]_{x^{\prime}\in\partial\Omega}=\text{tr}(\rho_{e}).\] With \(\gamma<\frac{3}{2}\) in the two-dimensional case, we choose even reflection for \(\mathcal{E}_{2}\) such that the Robin condition is not relevant. In coordinate-free notation, we find the following compatibility conditions in the two-dimensional case: \[[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega} =0, \tag{2.27}\] \[[\partial_{\nu}\mathcal{H}]_{x^{\prime}\in\partial\Omega} =0. \tag{2.28}\] In geodesic coordinates the second compatibility condition reads \[[\partial_{2}(\sqrt{g}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))]_{x^{\prime}_{2}=0}=0. \tag{2.29}\] We record the analog of Proposition 2.1: **Proposition 2.7**.: _Let \(0\leq\gamma\leq 3\) and \(\mathcal{H}^{\gamma}(\Omega)\) be defined by \(\mathcal{H}^{\gamma}(\Omega)=\overline{\mathcal{H}^{3}(\Omega)}^{\|\cdot\|_{H^{\gamma}}}\) and, if \(\gamma>\frac{5}{2}\), we suppose that (1.9) holds. 
Then, we have the following characterization:_ * \(0\leq\gamma<\frac{1}{2}\)_:_ \(\mathcal{H}^{\gamma}(\Omega)=H^{\gamma}(\Omega)^{3}\)_,_ * \(\frac{1}{2}<\gamma<\frac{3}{2}\)_:_ \(\mathcal{H}^{\gamma}(\Omega)\subseteq\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{\gamma}(\Omega)^{3}:\eqref{eq:2.27}\text{ holds.}\}\)_,_ * \(\frac{3}{2}<\gamma<\frac{5}{2}\)_:_ \(\mathcal{H}^{\gamma}(\Omega)\subseteq\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{\gamma}(\Omega)^{3}:\eqref{eq:2.27}\text{ and }\eqref{eq:2.28}\text{ hold.}\}\)_,_ * \(\frac{5}{2}<\gamma\leq 3\)_:_ \(\mathcal{H}^{\gamma}(\Omega)\subseteq\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{\gamma}(\Omega)^{3}:\eqref{eq:2.27}-\eqref{eq:2.29}\text{ hold.}\}\)_._ We define like in three dimensions the smaller space \[\mathcal{H}_{0}^{s}(\Omega)=\overline{\mathcal{H}^{3}(\Omega)\cap(C_{c}^{\infty}(\Omega)^{2}\times H^{3}(\Omega))}^{\|\cdot\|_{H^{s}}}.\] For \(\mathcal{H}_{0}^{s}(\Omega)\) we have the complete characterization given by the following boundary conditions depending on the size of \(s\): \[[\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=0, \tag{2.30}\] \[[\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=[\partial\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=0,\quad[\partial_{\nu}\mathcal{H}_{0}]_{x^{\prime}\in\partial\Omega}=0, \tag{2.31}\] \[[\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=[\partial\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=[\partial^{2}\mathcal{E}_{0}]_{x^{\prime}\in\partial\Omega}=0,\quad[\partial_{\nu}\mathcal{H}_{0}]_{x^{\prime}\in\partial\Omega}=0. \tag{2.32}\] The following is the analog of Proposition 2.3: **Proposition 2.8**.: _We have the following characterization:_ \[\mathcal{H}_{0}^{s}(\Omega)=\begin{cases}\{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{3}\},\quad 0\leq s<\frac{1}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{3}:\;\text{\eqref{eq:2.30} holds.}\},\quad\frac{1}{2}<s<\frac{3}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{3}:\;\text{\eqref{eq:2.31} holds.}\},\quad\frac{3}{2}<s<\frac{5}{2},\\ \{(\mathcal{E}_{0},\mathcal{H}_{0})\in H^{s}(\Omega)^{3}:\;\text{\eqref{eq:2.32} holds.}\},\quad\frac{5}{2}<s\leq 3.\end{cases}\] #### 2.2.4. 
Reductions for smooth time-independent coefficients We extend the equations to the plane similarly to the three-dimensional case: \(\varepsilon\), \(\mu\), and \(g^{ij}\) are reflected evenly; \(\mathcal{E}_{1}\) and \(\rho_{e}\) are reflected oddly corresponding to Dirichlet boundary conditions; \(\mathcal{E}_{2}\) and \(\mathcal{H}\) are reflected evenly. \(\mathcal{J}_{ei}\) is reflected like \(\mathcal{E}_{i}\). The extended functions are denoted with a \(\tilde{\ }\). We find the following equations on \(\mathbb{R}^{2}\): \[\left\{\begin{array}{rllll}\partial_{t}(\tilde{\varepsilon}\sqrt{\tilde{g}}\tilde{g}^{-1}\tilde{\mathcal{E}})&=\nabla_{\perp}\tilde{\mathcal{H}}-\tilde{\mathcal{J}}_{e},&\nabla\cdot(\sqrt{\tilde{g}}\tilde{g}^{-1}\tilde{\varepsilon}\tilde{\mathcal{E}})&=&\tilde{\rho}_{e},\\ \partial_{t}(\tilde{\mu}\sqrt{\tilde{g}}\tilde{\mathcal{H}})&=-(\partial_{1}\tilde{\mathcal{E}}_{2}-\partial_{2}\tilde{\mathcal{E}}_{1}),&(t,x^{\prime})&\in&\mathbb{R}\times\mathbb{R}^{2}.\end{array}\right. \tag{2.33}\] For the proof of Theorem 1.2 it suffices to prove the following: **Proposition 2.9**.: _Let \(\tilde{u}=(\tilde{\mathcal{E}},\tilde{\mathcal{H}})\), and \((\tilde{\varepsilon},\tilde{\mu},\tilde{g})\) like in (2.33). Then the following estimate holds:_ \[\|\tilde{u}\|_{L^{p}L^{q}}\lesssim\|\tilde{u}\|_{L^{\infty}_{T}H^{\gamma+\delta}}+\|\tilde{\mathcal{J}}_{e}\|_{L^{2}_{T}H^{\gamma+\delta}}+\|\tilde{\rho}_{e}\|_{L^{\infty}_{T}H^{\gamma-1+\frac{1}{p}+\delta}}\] _for \(p,q\geq 2\), \(q<\infty\), satisfying the following_ \[\frac{3}{p}+\frac{1}{q}\leq\frac{1}{2},\qquad\gamma=2\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p},\qquad 0<\delta<\frac{1}{2}.\] We omit the proof of the following, which is analogous to Proposition 2.6: **Proposition 2.10**.: _Suppose that Proposition 2.9 holds true and the energy estimate_ \[\|u\|_{L^{\infty}_{T}\mathcal{H}^{\gamma}(\Omega)}\lesssim_{T}\|u(0)\|_{\mathcal{H}^{\gamma}(\Omega)}. 
\tag{2.34}\]

_is valid for homogeneous solutions \(u=(\mathcal{E},\mathcal{H})\) to (1.15). Then, Theorem 1.2 follows._

## 3. Energy estimates

This section is devoted to the proof of energy estimates, i.e., a priori estimates for the Sobolev norm

\[\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}\mathcal{H}^{s}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{\mathcal{H}^{s}(\Omega)} \tag{3.1}\]

for homogeneous solutions to Maxwell equations on domains with perfectly conducting boundary conditions. For the existence of sufficiently smooth solutions, which make the integration by parts argument licit, we again refer to Spitz's previous works [25, 26, 24] relying on the energy method. We stress that the a priori estimates in low regularity do not depend on the norms of the solution in high regularity. Spitz also proved a priori estimates, which however do not appear to be suitable for the quasilinear case, as we require a more precise quantification. We take the opportunity to simplify Spitz's argument in the special cases presently considered. It turns out that the \(L^{2}\)-norm of the solutions is approximately conserved, and also in the quasilinear case we obtain a suitable quantification for a Gronwall argument. To estimate higher regularities, we differentiate the equation in time to see that the time-derivatives still satisfy a Maxwell-like equation. By comparing time and spatial derivatives via the equation, we see that the \(L^{2}\)-estimate for the time derivatives can be compared to a Sobolev regularity in space of the same order. Although the strategy is always the same, the arguments are slightly different in each instance, so we opt to give the proofs.
### The two-dimensional case

We begin with the two-dimensional case:

\[\left\{\begin{array}{llll}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H},&[\nu\wedge\mathcal{E}]_{x^{\prime}\in\partial\Omega}&=&0,\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}(\mu\mathcal{H})&=-(\nabla\times\mathcal{E})_{3},&\nabla\cdot(\varepsilon\mathcal{E})&=&\rho_{e}.\end{array}\right. \tag{3.2}\]

Here \(\varepsilon\), \(\mu\in C^{\infty}(\Omega;\mathbb{R}_{>0})\) satisfy the uniform ellipticity condition (1.3). We prove the following:

**Proposition 3.1**.: _Let \((\mathcal{E},\mathcal{H})\) be \(\mathcal{H}^{3}\)-solutions to (3.2). Then, for \(s\in[0,2]\), we find (3.1) to hold._

As a preliminary, we record the following Helmholtz decomposition on two-dimensional domains for vector fields with certain boundary conditions. In Appendix C it is explained in detail how this follows from results due to Dautray-Lions [4].

**Proposition 3.2**.: _Let \(s\in[0,1]\), and \(\mathcal{E}\in\mathcal{H}^{3}(\Omega;\mathbb{R}^{2})\), which satisfies the boundary condition:_

\[[\mathcal{E}_{||}]_{x^{\prime}\in\partial\Omega}=0.\]

_Then we have the equivalence of norms:_

\[\|\mathcal{E}\|_{H^{s+1}(\Omega)}\sim\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}. \tag{3.3}\]

Proof of Proposition 3.1.: Let

\[M(t)=\int_{\Omega}\mathcal{D}.\mathcal{E}+\mathcal{H}.\mathcal{B}\,dx^{\prime}\]

with \(\mathcal{D}=\varepsilon\mathcal{E}\) and \(\mathcal{B}=\mu\mathcal{H}\). We compute

\[\partial_{t}M(t)=2\int_{\Omega}\nabla_{\perp}\mathcal{H}.\mathcal{E}\,dx^{\prime}-2\int_{\Omega}\mathcal{H}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1})\,dx^{\prime}.\]

By form invariance as argued in Section 2, we can suppose that \(\Omega=\mathbb{R}^{2}_{>0}\), \(\nu=e_{2}\). An integration by parts, using the boundary condition for the normal derivative \(\partial_{2}\), gives \(\partial_{t}M(t)=0\).
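The cancellation behind this integration by parts can be verified pointwise: with the convention \(\nabla_{\perp}=(\partial_{2},-\partial_{1})^{t}\) (an assumption here, consistent with the identities used later in this subsection), the integrand of \(\partial_{t}M(t)\) is a pure divergence, so only boundary terms survive. A minimal symbolic check:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
H = sp.Function("H")(x1, x2)
E1 = sp.Function("E1")(x1, x2)
E2 = sp.Function("E2")(x1, x2)

# nabla_perp H = (d2 H, -d1 H); this sign convention is an assumption here
perp_H = (sp.diff(H, x2), -sp.diff(H, x1))

# integrand of dM/dt (up to the factor 2)
integrand = perp_H[0]*E1 + perp_H[1]*E2 - H*(sp.diff(E2, x1) - sp.diff(E1, x2))

# claimed divergence form, killed at the boundary by the boundary condition
divergence_form = sp.diff(H*E1, x2) - sp.diff(H*E2, x1)

assert sp.simplify(integrand - divergence_form) == 0
```

The same computation with the opposite sign convention for \(\nabla_{\perp}\) produces the divergence with reversed sign, so the conclusion \(\partial_{t}M(t)=0\) is unaffected.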
The immediate consequence is an \(L^{2}\)-a priori estimate:

\[\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}(\Omega)}. \tag{3.4}\]

For higher norms, we consider time derivatives of (3.2). We denote \(\partial_{t}A=\dot{A}\) and \(\partial_{t}^{2}A=\ddot{A}\) for \(A\in\{\mathcal{E},\mathcal{H}\}\). Taking one time derivative of (3.2) yields

\[\left\{\begin{array}{lcl}\partial_{t}(\varepsilon\dot{\mathcal{E}})&=&\nabla_{\perp}\dot{\mathcal{H}},&\nabla\cdot(\varepsilon\dot{\mathcal{E}})&=&0,\\ \partial_{t}(\mu\dot{\mathcal{H}})&=&-(\partial_{1}\dot{\mathcal{E}}_{2}-\partial_{2}\dot{\mathcal{E}}_{1}),&[\nu\wedge\dot{\mathcal{E}}]_{x^{\prime}\in\partial\Omega}&=&0.\end{array}\right.\]

Hence, \((\dot{\mathcal{E}},\dot{\mathcal{H}})\) solves (3.2), and we have the a priori estimates:

\[\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(t)\|_{L^{2}(\Omega)}\lesssim\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(0)\|_{L^{2}(\Omega)}.\]

Note that (again from (3.2) and ellipticity of \(\varepsilon\) and \(\mu\)), we have

\[\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(t)\|_{L^{2}(\Omega)}\sim\|\mathcal{H}(t)\|_{\dot{H}^{1}(\Omega)}+\|\mathcal{E}(t)\|_{H_{curl}(\Omega)}.\]

To estimate the full \(H^{1}\)-norm by the Helmholtz decomposition of Proposition 3.2, we observe that \(\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}}\) was estimated in the previous step, and for \(\|\mathcal{E}(t)\|_{H_{div}}\) we find from the condition on the charges

\[\varepsilon\nabla\cdot\mathcal{E}+(\nabla\varepsilon)\mathcal{E}=\rho_{e}.\]

The charges are conserved for homogeneous solutions and by (3.4) we find

\[\|\mathcal{E}(t)\|_{H_{div}(\Omega)}\lesssim\|\mathcal{E}(t)\|_{L^{2}(\Omega)}+\|\rho_{e}(t)\|_{L^{2}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}(\Omega)}+\|\rho_{e}(0)\|_{L^{2}(\Omega)}.\]

This yields

\[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{1}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{1}(\Omega)}.\]

Taking a second time derivative in (3.2), we
find

\[\left\{\begin{array}{lcl}\partial_{t}(\varepsilon\ddot{\mathcal{E}})&=&\nabla_{\perp}\ddot{\mathcal{H}},&\nabla\cdot(\varepsilon\ddot{\mathcal{E}})&=&0,\\ \partial_{t}(\mu\ddot{\mathcal{H}})&=&-(\partial_{1}\ddot{\mathcal{E}}_{2}-\partial_{2}\ddot{\mathcal{E}}_{1}),&[\nu\wedge\ddot{\mathcal{E}}]_{x^{\prime}\in\partial\Omega}&=&0.\end{array}\right.\]

We use \(L^{2}\)-conservation to find

\[\|(\ddot{\mathcal{E}},\ddot{\mathcal{H}})(t)\|_{L^{2}}\lesssim\|(\ddot{\mathcal{E}},\ddot{\mathcal{H}})(0)\|_{L^{2}}.\]

Clearly, from iterating (3.2), we have

\[\|(\ddot{\mathcal{E}},\ddot{\mathcal{H}})(0)\|_{L^{2}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}}.\]

Secondly, we find

\[\|\ddot{\mathcal{E}}(t)\|_{L^{2}(\Omega)}\sim\|\nabla_{\perp}\dot{\mathcal{H}}(t)\|_{L^{2}}\text{ with }\nabla_{\perp}\dot{\mathcal{H}}=O(\partial_{x^{\prime}}\mu^{-1})(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1})+\mu^{-1}(\Delta\mathcal{E}-\nabla(\nabla\cdot\mathcal{E})).\]

This gives by the conservation of \(\|(\ddot{\mathcal{E}},\ddot{\mathcal{H}})(t)\|_{L^{2}}\), the previous a priori estimate for the \(H^{1}\)-norm and conservation of charges:

\[\|\Delta\mathcal{E}(t)\|_{L^{2}}\lesssim\|\nabla_{\perp}\dot{\mathcal{H}}(t)\|_{L^{2}}+\|\rho_{e}(t)\|_{H^{1}}+\|(\mathcal{E},\mathcal{H})(t)\|_{H^{1}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}}. \tag{3.5}\]

For two time derivatives of \(\mathcal{H}\) we find

\[\mu\ddot{\mathcal{H}}=\varepsilon^{-1}\Delta\mathcal{H}+O(\partial_{x^{\prime}}\varepsilon\,\partial_{x^{\prime}}\mathcal{H}).\]

It follows from the conservation of \(\|(\ddot{\mathcal{E}},\ddot{\mathcal{H}})(t)\|_{L^{2}}\) and the previously established a priori estimate for the \(H^{1}\)-norm:

\[\|\Delta\mathcal{H}(t)\|_{L^{2}}\lesssim\|\ddot{\mathcal{H}}(t)\|_{L^{2}}+\|\mathcal{H}(t)\|_{H^{1}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}}.
\tag{3.6}\]

Taking (3.4), (3.5), and (3.6) together, we find

\[\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}}+\|\Delta\mathcal{E}(t)\|_{L^{2}}+\|\Delta\mathcal{H}(t)\|_{L^{2}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}}.\]

The proof is complete.

We turn to the quasilinear case of the Kerr nonlinearity in two dimensions:

\[\left\{\begin{array}{rcl}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H},&\left[\mathcal{E}\times\nu\right]_{x^{\prime}\in\partial\Omega}&=&0,\\ \partial_{t}\mathcal{H}&=-(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}),&\nabla\cdot(\varepsilon\mathcal{E})&=&\rho_{e}\end{array}\right. \tag{3.7}\]

with \(\varepsilon(\mathcal{E})=1+|\mathcal{E}|^{2}\) and \(\operatorname{tr}(\rho_{e})=0\). In the following we assume that

\[\sup_{t\in[0,T]}\|(\mathcal{E},\mathcal{H})(t)\|_{H^{s}(\Omega)}\leq\delta\ll 1 \tag{3.8}\]

for \((\mathcal{E},\mathcal{H}):[0,T]\times\Omega\to\mathbb{R}^{3}\) an \(\mathcal{H}^{3}\)-solution and \(\delta\) to be chosen later. We prove the following:

**Proposition 3.3**.: _Let \((\mathcal{E},\mathcal{H}):[0,T]\times\Omega\to\mathbb{R}^{2}\times\mathbb{R}\) be an \(\mathcal{H}^{3}\)-solution to (3.7), which satisfies (3.8). Then the following estimate holds for \(s\in[0,2)\):_

\[\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}H^{s}(\Omega)}\lesssim_{\delta,T}e^{C\int_{0}^{T}\|\partial_{x}\mathcal{E}(s)\|_{L^{\infty}(\Omega)}ds}\|(\mathcal{E},\mathcal{H})(0)\|_{H^{s}(\Omega)}. \tag{3.9}\]

We remark that no smallness is required to prove (3.9) for \(s\in[0,1]\), but we need smallness for \(s>1\). When we apply Proposition 3.3 in the proof of improved local well-posedness for the Kerr system, it turns out that it suffices to require smallness of the initial data by a continuity argument. As a prerequisite, we show a simple fractional Leibniz rule on the domain:

**Lemma 3.4**.: _Let \(\Omega\subseteq\mathbb{R}^{d}\) be a smooth domain with compact boundary, and \(s\in[0,1]\).
Let \(p,q_{1},q_{2},r_{1},r_{2}\in[1,\infty]\) with \(\frac{1}{p}=\frac{1}{q_{1}}+\frac{1}{q_{2}}=\frac{1}{r_{1}}+\frac{1}{r_{2}}\) and \(p,q_{1},r_{2}<\infty\). For \(f,g\in H^{s}(\Omega)\) we have_

\[\|fg\|_{W^{s,p}(\Omega)}\lesssim\|f\|_{W^{s,q_{1}}(\Omega)}\|g\|_{L^{q_{2}}(\Omega)}+\|f\|_{L^{r_{1}}(\Omega)}\|g\|_{W^{s,r_{2}}(\Omega)}.\]

Proof.: For the interior part this is immediate from the usual fractional Leibniz rule (cf. [7, 8]):

\[\|\langle\partial_{x}\rangle^{s}(fg)\|_{L^{p}(\mathbb{R}^{d})}\lesssim\|\langle\partial_{x}\rangle^{s}f\|_{L^{q_{1}}(\mathbb{R}^{d})}\|g\|_{L^{q_{2}}(\mathbb{R}^{d})}+\|f\|_{L^{r_{1}}(\mathbb{R}^{d})}\|\langle\partial_{x}\rangle^{s}g\|_{L^{r_{2}}(\mathbb{R}^{d})}.\]

Hence, it suffices to consider (finitely many) charts at the boundary. We change to geodesic coordinates. By invariance of Sobolev spaces under changes of coordinates, it suffices to estimate \(\|fg\|_{W^{s,p}(\mathbb{R}^{d}_{>0})}\). We extend \(f\) and \(g\) evenly and denote the extensions by \(\tilde{f}\) and \(\tilde{g}\). We have

\[\|fg\|_{W^{s,p}(\mathbb{R}^{d}_{>0})}\lesssim\|\tilde{f}\tilde{g}\|_{W^{s,p}(\mathbb{R}^{d})}\]

because \(\tilde{f}\tilde{g}\) is an even extension of \(fg\). The above display is clearly true for \(s\in\{0,1\}\) and follows for \(s\in(0,1)\) by interpolation.
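The elementary fact used in this step — the even extension of a product is the product of the even extensions, \(\widetilde{fg}=\tilde{f}\tilde{g}\) — can be illustrated numerically; the sample functions below are arbitrary stand-ins:

```python
import numpy as np

def even_extension(f):
    # reflect across x_d = 0: F(x) = f(|x|)
    return lambda x: f(np.abs(x))

f = lambda x: np.sin(x) * np.exp(-x)     # arbitrary sample data on x > 0
g = lambda x: 1.0 / (1.0 + x**2)

x = np.linspace(-3.0, 3.0, 601)
F, G = even_extension(f), even_extension(g)
FG = even_extension(lambda s: f(s) * g(s))

# product of the even extensions = even extension of the product
assert np.allclose(F(x) * G(x), FG(x))
```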
Now we are in a position to apply the usual fractional Leibniz rule on the whole space and find

\[\|\tilde{f}\tilde{g}\|_{W^{s,p}(\mathbb{R}^{d})}\lesssim\|\tilde{f}\|_{W^{s,q_{1}}(\mathbb{R}^{d})}\|\tilde{g}\|_{L^{q_{2}}(\mathbb{R}^{d})}+\|\tilde{f}\|_{L^{r_{1}}(\mathbb{R}^{d})}\|\tilde{g}\|_{W^{s,r_{2}}(\mathbb{R}^{d})}.\]

The proof is concluded by the continuity of the even extension for \(p\in[1,\infty]\) and \(s\in[0,1]\):

\[\operatorname{ext}_{N}:W^{s,p}(\mathbb{R}^{d}_{>0})\to W^{s,p}(\mathbb{R}^{d})\]
\[f\mapsto f_{e}(x)=\begin{cases}f(x),\quad x_{d}>0,\\ f(x_{1},\dots,x_{d-1},-x_{d}),\quad x_{d}<0.\end{cases}\]

Proof of Proposition 3.3.: We change to non-divergence form: Consider

\[\left\{\begin{array}{rclcl}\varepsilon_{1}\partial_{t}\mathcal{E}&=\nabla_{\perp}\mathcal{H},&&[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega}&=&0,&(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}\mathcal{H}&=-(\nabla\times\mathcal{E})_{3},&&\eta(x,D)\mathcal{E}&=&\rho_{e}\end{array}\right. \tag{3.10}\]

with \((\mathcal{E},\mathcal{H})(0)=(\mathcal{E}_{0},\mathcal{H}_{0})\), \(\varepsilon_{1}(t)=1+|\mathcal{E}|^{2}(t)+2\mathcal{E}\otimes\mathcal{E}(t)\), and

\[\eta(x,D)=((1+|\mathcal{E}|^{2}+2\mathcal{E}_{1}^{2})\partial_{1}+2\mathcal{E}_{1}\mathcal{E}_{2}\partial_{2}\quad 2\mathcal{E}_{1}\mathcal{E}_{2}\partial_{1}+(1+|\mathcal{E}|^{2}+2\mathcal{E}_{2}^{2})\partial_{2}).\]

We have by \(\mathcal{H}^{3}\)-well-posedness and Sobolev embedding

\[\varepsilon_{1}(t)\in W^{1,\infty}(\Omega),\quad\partial_{t}\varepsilon_{1}\in L^{1}_{T}L^{\infty}_{x^{\prime}}(\Omega).
\tag{3.11}\]

This allows us to define a time-dependent (but linear) evolution operator \(\mathbb{T}(t,s):L^{2}(\Omega)\to L^{2}(\Omega)\) via linearization with

\[\varepsilon_{1}(t)=1+|\mathcal{E}^{\prime}|^{2}+2\mathcal{E}^{\prime}\otimes\mathcal{E}^{\prime},\]
\[\eta(x,D)=((1+|\mathcal{E}^{\prime}|^{2}+2\mathcal{E}_{1}^{\prime\,2})\partial_{1}+2\mathcal{E}_{1}^{\prime}\mathcal{E}_{2}^{\prime}\partial_{2}\quad 2\mathcal{E}_{1}^{\prime}\mathcal{E}_{2}^{\prime}\partial_{1}+(1+|\mathcal{E}^{\prime}|^{2}+2\mathcal{E}_{2}^{\prime\,2})\partial_{2}),\]

where \(\mathcal{E}^{\prime}\) denotes the \(\mathcal{H}^{3}\)-solution to the quasilinear problem. This maps data \((\mathcal{E},\mathcal{H})(s)\) to values of the solution to (3.10) at time \(t\) given by \((\mathcal{E},\mathcal{H})(t)\). We shall see that for \(s\in\{0,1\}\)

\[\mathcal{T}:H^{s}(\Omega)\to L^{\infty}_{T}H^{s}(\Omega),\quad(\mathcal{E}_{0},\mathcal{H}_{0})\mapsto((\mathcal{E},\mathcal{H})(t))_{t\in[0,T]}\]

satisfies the estimate

\[\sup_{t\in[0,T]}\|(\mathcal{E},\mathcal{H})(t)\|_{H^{s}(\Omega)}\lesssim e^{C\int_{0}^{T}\|\partial_{x}\varepsilon_{1}(s)\|_{L^{\infty}_{x^{\prime}}(\Omega)}ds}\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{s}(\Omega)}. \tag{3.12}\]

By linearity and interpolation this implies (3.9) for \(s\in[0,1]\). We prove (3.12) for \(s=0\). Let \(\mathcal{D}(t)=\varepsilon_{1}(t)\mathcal{E}(t)\) and

\[M(t)=\int_{\Omega}\mathcal{D}(t).\mathcal{E}(t)\,dx^{\prime}+\int_{\Omega}\mathcal{H}(t).\mathcal{B}(t)\,dx^{\prime}.\]

We obtain

\[\partial_{t}M(t)=\int_{\Omega}\partial_{t}\varepsilon_{1}(t)\mathcal{E}(t).\mathcal{E}(t)\,dx^{\prime}+2\int_{\Omega}\nabla_{\perp}\mathcal{H}(t).
\mathcal{E}(t)dx^{\prime}-2\int_{\Omega}(\nabla\times\mathcal{E})_{3}(t) \mathcal{H}(t)\,dx^{\prime}\] \[\lesssim\|\partial_{t}\varepsilon_{1}(t)\|_{L^{\infty}(\Omega)} \|\mathcal{E}(t)\|_{L^{2}(\Omega)}^{2}\] \[\lesssim\|\partial_{t}\varepsilon_{1}(t)\|_{L^{\infty}(\Omega)}M(t).\] In the first estimate we use that the second and third term cancel each other. This follows from integration by parts using the boundary condition and resolving on the half-space. In the ultimate estimate we use that \(\varepsilon_{1}(t)\) has eigenvalues \(1+3|\mathcal{E}^{\prime}|^{2}\), \(1+|\mathcal{E}^{\prime}|^{2}\). We find \(O\in O(2)\) such that \[O^{t}\begin{pmatrix}1+|\mathcal{E}^{\prime}|^{2}+2\mathcal{E}_{1}^{\prime\,2}&2 \mathcal{E}_{1}^{\prime}\mathcal{E}_{2}^{\prime}\\ 2\mathcal{E}_{1}^{\prime}\mathcal{E}_{2}^{\prime}&1+|\mathcal{E}^{\prime}|^{2} +2\mathcal{E}_{2}^{\prime\,2}\end{pmatrix}O=\begin{pmatrix}1+3|\mathcal{E}^{ \prime}|^{2}&0\\ 0&1+|\mathcal{E}^{\prime}|^{2}\end{pmatrix}.\] This is achieved for \(\mathcal{E}^{\prime}\neq 0\) by requiring \[O\begin{pmatrix}\mathcal{E}_{1}^{\prime}\\ \mathcal{E}_{2}^{\prime}\end{pmatrix}=\begin{pmatrix}|\mathcal{E}^{\prime}|\\ 0\end{pmatrix}.\] For \(\mathcal{E}^{\prime}=0\), we can simply choose \(O=1_{2\times 2}\). We conclude the proof of (3.12) for \(s=0\) by \(M(t)\sim\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}}^{2}\) and Gronwall's inequality. We turn to the proof of (3.12) for \(s=1\). 
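Before doing so, we note that the diagonalization of \(\varepsilon_{1}\) used in the \(s=0\) step can be checked symbolically: \(\mathcal{E}^{\prime}\) is an eigenvector with eigenvalue \(1+3|\mathcal{E}^{\prime}|^{2}\), and the orthogonal direction \((\mathcal{E}^{\prime})^{\perp}\) one with eigenvalue \(1+|\mathcal{E}^{\prime}|^{2}\). A minimal sympy sketch:

```python
import sympy as sp

e1, e2 = sp.symbols("e1 e2", real=True)
E2sq = e1**2 + e2**2                      # |E'|^2

eps1 = sp.Matrix([[1 + E2sq + 2*e1**2, 2*e1*e2],
                  [2*e1*e2,            1 + E2sq + 2*e2**2]])

E = sp.Matrix([e1, e2])
Eperp = sp.Matrix([-e2, e1])

# eigenvalue 1 + 3|E'|^2 in direction E', eigenvalue 1 + |E'|^2 orthogonally
assert sp.simplify(eps1*E - (1 + 3*E2sq)*E) == sp.zeros(2, 1)
assert sp.simplify(eps1*Eperp - (1 + E2sq)*Eperp) == sp.zeros(2, 1)
```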
To this end, we consider the system for the first time-derivatives:

\[\left\{\begin{array}{rl}\varepsilon_{1}\partial_{t}\dot{\mathcal{E}}&=\nabla_{\perp}\dot{\mathcal{H}}-\dot{\varepsilon}_{1}\dot{\mathcal{E}},\\ \partial_{t}\dot{\mathcal{H}}&=-(\partial_{1}\dot{\mathcal{E}}_{2}-\partial_{2}\dot{\mathcal{E}}_{1}).\end{array}\right.\]

Let

\[\tilde{M}(t)=\int_{\Omega}\varepsilon_{1}(t)\dot{\mathcal{E}}(t).\dot{\mathcal{E}}(t)\,dx^{\prime}+\int_{\Omega}\dot{\mathcal{H}}(t).\dot{\mathcal{B}}(t)\,dx^{\prime}.\]

For the time-derivative of \(\tilde{M}(t)\) we find as above by integration by parts

\[\begin{split}\partial_{t}\tilde{M}(t)&=\int_{\Omega}(\partial_{t}\varepsilon_{1}(t))\dot{\mathcal{E}}(t).\dot{\mathcal{E}}(t)\,dx^{\prime}+2\int_{\Omega}\nabla_{\perp}\dot{\mathcal{H}}(t).\dot{\mathcal{E}}(t)\,dx^{\prime}-2\int_{\Omega}\dot{\varepsilon}_{1}(t)\dot{\mathcal{E}}(t).\dot{\mathcal{E}}(t)\,dx^{\prime}\\ &\quad-2\int_{\Omega}(\nabla\times\dot{\mathcal{E}})_{3}.\dot{\mathcal{H}}(t)dx^{\prime}\\ &\lesssim\|\partial_{t}\varepsilon_{1}(t)\|_{L^{\infty}_{x^{\prime}}(\Omega)}\tilde{M}(t).\end{split} \tag{3.13}\]

Gronwall's inequality yields

\[\tilde{M}(t)\lesssim e^{\int_{0}^{t}\|\partial_{t}\varepsilon_{1}(s)\|_{L^{\infty}_{x^{\prime}}(\Omega)}ds}\tilde{M}(0).\]

By invoking (3.10), we find

\[\tilde{M}(t)\sim\|\mathcal{H}(t)\|_{\dot{H}^{1}(\Omega)}^{2}+\|\mathcal{E}(t)\|_{H_{curl}}^{2}.\]

We already control \(\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}}\). For an estimate of \(\|\mathcal{E}\|_{H^{1}(\Omega)}\), by Proposition 3.2 we have to estimate \(\|\mathcal{E}\|_{H_{div}}\).
From the formula for \(\rho_{e}(t)\) we obtain

\[\rho_{e}(t)=\nabla\cdot\mathcal{E}(t)+O((\mathcal{E}^{\prime})^{2}\nabla\mathcal{E}).\]

This yields the estimate

\[\|\mathcal{E}(t)\|_{H_{div}}\lesssim_{\delta}\|\rho_{e}(t)\|_{L^{2}}+\|\mathcal{E}\|_{H_{curl}}.\]

Furthermore, we find

\[\dot{\rho}_{e}(t)=\partial_{t}\varepsilon_{1}\partial_{x^{\prime}}\mathcal{E}+\eta(x,D)\partial_{t}\mathcal{E}.\]

Applying the divergence to \(\varepsilon_{1}\partial_{t}\mathcal{E}=\nabla_{\perp}\mathcal{H}\), we find \(\eta(x,D)\partial_{t}\mathcal{E}=O(\partial\varepsilon_{1}\partial_{t}\mathcal{E})\) and therefore,

\[\partial_{t}\|\rho_{e}(t)\|_{L^{2}_{x^{\prime}}}^{2}\lesssim\|\partial_{t}\varepsilon_{1}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{x^{\prime}}\mathcal{E}\|_{L^{2}_{x^{\prime}}}\|\rho_{e}(t)\|_{L^{2}_{x^{\prime}}}+\|\partial_{x^{\prime}}\varepsilon_{1}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{t}\mathcal{E}\|_{L^{2}_{x^{\prime}}}\|\rho_{e}(t)\|_{L^{2}_{x^{\prime}}}.\]

We find

\[\partial_{t}\|\rho_{e}(t)\|_{L^{2}(\Omega)}^{2}\lesssim\|\partial_{x}\varepsilon_{1}\|_{L^{\infty}_{x^{\prime}}}(\tilde{M}(t)+\|\rho_{e}(t)\|_{L^{2}_{x^{\prime}}(\Omega)}^{2}). \tag{3.14}\]

Taking the estimates (3.13) and (3.14) together, we find by Gronwall's inequality

\[\tilde{M}(t)+\|\rho_{e}(t)\|_{L^{2}(\Omega)}^{2}\lesssim e^{C\int_{0}^{T}\|\partial_{x}\varepsilon_{1}(t)\|_{L^{\infty}_{x^{\prime}}(\Omega)}dt}(\tilde{M}(0)+\|\rho_{e}(0)\|_{L^{2}(\Omega)}^{2}).\]

This shows (3.12) for \(s=1\) and by interpolation we infer (3.12) for \(s\in[0,1]\). In the following let \((\mathcal{E},\mathcal{H})(t)=(\mathcal{E}^{\prime},\mathcal{H}^{\prime})(t)\) denote the \(\mathcal{H}^{3}\)-solution to the quasilinear Maxwell equation.
We observe that the time-derivatives satisfy the following equation:

\[\left\{\begin{array}{rl}\varepsilon_{1}\partial_{t}\dot{\mathcal{E}}&=\nabla_{\perp}\dot{\mathcal{H}}-\dot{\varepsilon}_{1}\dot{\mathcal{E}},\\ \partial_{t}\dot{\mathcal{H}}&=-(\nabla\times\dot{\mathcal{E}})_{3}.\end{array}\right. \tag{3.15}\]

Hence, we can write in \(H^{s}\), \(s\in[0,1]\):

\[(\dot{\mathcal{E}},\dot{\mathcal{H}})(t)=\mathbb{T}_{\varepsilon_{1}}(t,0)(\dot{\mathcal{E}},\dot{\mathcal{H}})(0)-\int_{0}^{t}\mathbb{T}_{\varepsilon_{1}}(t,s)\begin{pmatrix}\varepsilon_{1}^{-1}\dot{\varepsilon}_{1}\dot{\mathcal{E}}\\ 0\end{pmatrix}ds.\]

We obtain by the estimates for \(\mathbb{T}_{\varepsilon_{1}}(t,s)\):

\[\|(\dot{\mathcal{E}},\dot{\mathcal{H}})\|_{L^{\infty}_{T}H^{s}(\Omega)}\lesssim e^{\int_{0}^{t}\|\partial_{x}\varepsilon_{1}(s)\|_{L^{\infty}_{x^{\prime}}(\Omega)}ds}\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(0)\|_{H^{s}(\Omega)}+\int_{0}^{t}e^{\int_{s}^{t}\|\partial_{x}\varepsilon_{1}(\tau)\|_{L^{\infty}_{x^{\prime}}(\Omega)}d\tau}\|\varepsilon_{1}^{-1}\dot{\varepsilon}_{1}\dot{\mathcal{E}}(s)\|_{H^{s}(\Omega)}ds.\]

We evaluate \(\|\varepsilon_{1}^{-1}\dot{\varepsilon}_{1}\dot{\mathcal{E}}\|_{H^{s}(\Omega)}\) by the fractional Leibniz rule proved in Lemma 3.4. There are two cases: derivatives fall on \(\dot{\mathcal{E}}\), which is handled by the first term, or derivatives fall on \(\mathcal{E}\) or \(\varepsilon_{1}^{-1}\) (or on the metric tensor, which results in even lower order terms), which is handled with the second term:

\[\|\varepsilon_{1}^{-1}\dot{\varepsilon}_{1}\dot{\mathcal{E}}\|_{H^{s}(\Omega)}\lesssim_{\delta}\|\dot{\mathcal{E}}\|_{L^{\infty}(\Omega)}\|\dot{\mathcal{E}}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{W^{s,p}(\Omega)}\|\dot{\mathcal{E}}\|_{L^{\infty}(\Omega)}\|\dot{\mathcal{E}}\|_{L^{q}(\Omega)}.\]

Above we require \(\frac{1}{p}+\frac{1}{q}=\frac{1}{2}\) and \(s=2\big{(}\frac{1}{2}-\frac{1}{q}\big{)}<1\).
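The Duhamel-plus-Gronwall mechanism used in these estimates can be illustrated on a scalar caricature \(u^{\prime}=a(t)u+f(t)\), where \(a\) plays the role of the \(\|\partial_{x}\varepsilon_{1}\|_{L^{\infty}}\)-type coefficients and \(f\) the Duhamel source; the concrete \(a\), \(f\) below are arbitrary:

```python
import numpy as np

a = lambda t: np.sin(3.0 * t)       # arbitrary bounded coefficient
f = lambda t: 0.5 * np.cos(t)       # arbitrary forcing

T, n = 2.0, 20000
dt = T / n
u, t = 1.0, 0.0
for _ in range(n):                  # explicit Euler for u' = a u + f
    u += dt * (a(t) * u + f(t))
    t += dt

# Gronwall/Duhamel bound: |u(T)| <= exp(int_0^T |a|) * (|u(0)| + int_0^T |f|)
ts = np.linspace(0.0, T, n + 1)
bound = np.exp(np.sum(np.abs(a(ts[:-1]))) * dt) * (1.0 + np.sum(np.abs(f(ts[:-1]))) * dt)
assert abs(u) <= bound
```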
By smallness assumption (3.8) and Sobolev embedding, we obtain

\[\sup_{t\in[0,T]}\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(t)\|_{H^{s}(\Omega)}\lesssim_{\delta}e^{C\int_{0}^{T}\|\partial_{x}\varepsilon_{1}(s)\|_{L^{\infty}_{x^{\prime}}(\Omega)}ds}\|(\mathcal{E},\mathcal{H})(0)\|_{H^{s+1}(\Omega)}. \tag{3.16}\]

We have to control \(\|(\mathcal{E},\mathcal{H})\|_{H^{s+1}}\) in terms of \(\|(\dot{\mathcal{E}},\dot{\mathcal{H}})\|_{H^{s}(\Omega)}\). To this end, we use Proposition 3.2 to find

\[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{s+1}(\Omega)}\lesssim\|(\nabla\cdot\mathcal{E})(t)\|_{H^{s}(\Omega)}+\|(\nabla\times\mathcal{E})_{3}(t)\|_{H^{s}(\Omega)}+\|\nabla_{\perp}\mathcal{H}(t)\|_{H^{s}(\Omega)}+\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}(\Omega)}. \tag{3.17}\]

We have \(\|(\nabla\times\mathcal{E})_{3}(t)\|_{H^{s}(\Omega)}=\|\partial_{t}\mathcal{H}\|_{H^{s}(\Omega)}\) and already control

\[\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}(\Omega)}\lesssim e^{C\int_{0}^{t}\|\partial_{x}\varepsilon_{1}(s)\|_{L^{\infty}_{x^{\prime}}(\Omega)}ds}\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}(\Omega)}.
\tag{3.18}\]

By \(\rho_{e}(t)=\eta(x,D)\mathcal{E}=\nabla\cdot\mathcal{E}+O(\mathcal{E}^{2}\partial_{x^{\prime}}\mathcal{E})\), we obtain by the fractional Leibniz rule and Sobolev embedding

\[\|\nabla\cdot\mathcal{E}(t)\|_{H^{s}(\Omega)}\lesssim\|\rho_{e}(t)\|_{H^{s}(\Omega)}+\|\mathcal{E}(t)\|_{H^{s+1}}^{3}=\|\rho_{e}(0)\|_{H^{s}(\Omega)}+\|\mathcal{E}(t)\|_{H^{s+1}}^{3}\lesssim\|\nabla\cdot\mathcal{E}(0)\|_{H^{s}(\Omega)}+\|\mathcal{E}(0)\|_{H^{s+1}(\Omega)}^{3}+\|\mathcal{E}(t)\|_{H^{s+1}(\Omega)}^{3}. \tag{3.19}\]

For \(\|\nabla_{\perp}\mathcal{H}\|_{H^{s}}\) we estimate by the fractional Leibniz rule and Sobolev embedding

\[\|\varepsilon_{1}\varepsilon_{1}^{-1}\nabla_{\perp}\mathcal{H}\|_{H^{s}}\lesssim\|\varepsilon_{1}\partial_{t}\mathcal{E}\|_{H^{s}(\Omega)}\lesssim\|\mathcal{E}(t)\|_{H^{s}(\Omega)}\|\partial_{t}\mathcal{E}\|_{L^{2}(\Omega)}+\|\varepsilon_{1}\|_{L^{\infty}(\Omega)}\|\partial_{t}\mathcal{E}\|_{H^{s}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})(t)\|_{H^{s+1}(\Omega)}^{2}+\|\varepsilon_{1}\|_{L^{\infty}(\Omega)}\|\partial_{t}\mathcal{E}\|_{H^{s}(\Omega)}. \tag{3.20}\]

By smallness (3.8), we have \(\|\varepsilon_{1}\|_{L^{\infty}(\Omega)}\lesssim 1\). Plugging (3.18), (3.19), and (3.20) into (3.17) yields

\[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{s+1}(\Omega)}\lesssim\|\nabla\cdot\mathcal{E}(0)\|_{H^{s}(\Omega)}+\|(\mathcal{E},\mathcal{H})(0)\|_{H^{s+1}(\Omega)}^{3}+\|\partial_{t}(\mathcal{E},\mathcal{H})(t)\|_{H^{s}(\Omega)}+e^{\int_{0}^{t}\|\partial_{x}\varepsilon_{1}(s)\|_{L^{\infty}_{x^{\prime}}(\Omega)}ds}\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}(\Omega)}.\]

Again by (3.8) and (3.16) we finish the proof.

We have proved energy estimates for solutions to (3.7) at the regularities we shall cover by Strichartz estimates. However, the proof of local well-posedness is anchored at \(H^{3}\).
To show existence in \(H^{3}\) for the same times as the time of existence in \(H^{s}\), \(s\in(11/6,2)\), we prove the following energy estimates:

**Proposition 3.5** (Energy estimates at high regularity).: _Let \((\mathcal{E},\mathcal{H})\) be an \(\mathcal{H}^{3}\)-solution to (3.7) on \([0,T]\) and suppose that_

\[\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}H^{\frac{3}{2}}(\Omega)}\leq\delta\ll 1. \tag{3.21}\]

_Then the following estimate holds for \(s\in\{2,3\}\):_

\[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{s}(\Omega)}\lesssim e^{C\int_{0}^{t}\|\partial_{x}(\mathcal{E},\mathcal{H})(\tau)\|_{L^{\infty}(\Omega)}d\tau}\|(\mathcal{E},\mathcal{H})(0)\|_{H^{s}(\Omega)}. \tag{3.22}\]

Proof.: We begin with \(s=2\). We shall first show that

\[A_{1}(t):=(\varepsilon_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})+(\partial_{t}\dot{\mathcal{H}},\partial_{t}\dot{\mathcal{H}})+(\varepsilon_{1}\dot{\mathcal{E}},\dot{\mathcal{E}})+(\dot{\mathcal{H}},\dot{\mathcal{H}})+(\varepsilon\mathcal{E},\mathcal{E})+(\mathcal{H},\mathcal{H})\geq c(\|(\nabla\times\mathcal{E})_{3}\|_{H^{1}}^{2}+\|\mathcal{E}\|_{L^{2}}^{2}+\|\mathcal{H}\|_{H^{2}}^{2})+O_{\delta}(\|(\mathcal{E},\mathcal{H})\|_{H^{2}}^{2}) \tag{3.24}\]

and

\[A_{1}(t)+\|\rho_{e}(t)\|_{H^{1}}^{2}\lesssim(1+\delta^{2})\|(\mathcal{E},\mathcal{H})\|_{H^{2}}^{2}. \tag{3.25}\]

Note that the first and second time derivatives satisfy

\[\left\{\begin{array}{rl}\varepsilon_{1}\partial_{t}\dot{\mathcal{E}}&=\nabla_{\perp}\dot{\mathcal{H}}-\dot{\varepsilon}_{1}\dot{\mathcal{E}},\\ \partial_{t}\dot{\mathcal{H}}&=-(\nabla\times\dot{\mathcal{E}})_{3},\end{array}\right. \tag{3.26}\]

\[\left\{\begin{array}{rl}\varepsilon_{1}\mathcal{E}^{(3)}&=\nabla_{\perp}\ddot{\mathcal{H}}-2\dot{\varepsilon}_{1}\ddot{\mathcal{E}}-\ddot{\varepsilon}_{1}\dot{\mathcal{E}},\\ \partial_{t}\ddot{\mathcal{H}}&=-(\nabla\times\ddot{\mathcal{E}})_{3}.\end{array}\right. \tag{3.27}\]

For the first term in (3.24), (3.26) and uniform ellipticity of \(\varepsilon_{1}\) give

\[(\varepsilon_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})\geq c\|\nabla_{\perp}\dot{\mathcal{H}}\|_{L^{2}}^{2}-C\|\dot{\varepsilon}_{1}\dot{\mathcal{E}}\|_{L^{2}}^{2}. \tag{3.28}\]

We have \(\dot{\varepsilon}_{1}=O(\mathcal{E}\dot{\mathcal{E}})\), hence \(\|\dot{\varepsilon}_{1}\dot{\mathcal{E}}\|_{L^{2}}^{2}\lesssim\|\mathcal{E}\|_{L^{\infty}}^{2}\|\dot{\mathcal{E}}\|_{L^{4}}^{4}\), and moreover, by Maxwell equations and Sobolev embedding,

\[\|\dot{\mathcal{E}}\|_{L^{4}}^{4}\lesssim\|\mathcal{H}\|_{H^{\frac{3}{2}}}^{4}\lesssim\delta^{2}\|\mathcal{H}\|_{H^{2}}^{2}.\]

We turn to the second term in (3.24):

\[\partial_{t}\dot{\mathcal{H}}=-(\nabla\times\dot{\mathcal{E}})_{3}=-(\nabla\times(\varepsilon_{1}^{-1}\nabla_{\perp}\mathcal{H}))_{3}=(1+O(\mathcal{E}^{2}))\Delta\mathcal{H}+O(\mathcal{E}\partial_{x^{\prime}}\mathcal{E}\partial_{x^{\prime}}\mathcal{H}).
\tag{3.29}\]

For this reason,

\[(\partial_{t}\dot{\mathcal{H}},\partial_{t}\dot{\mathcal{H}})=\|\Delta\mathcal{H}\|_{L^{2}}^{2}+O_{\delta}(\|\Delta\mathcal{H}\|_{L^{2}}^{2})+O(\|\mathcal{E}\|_{L^{\infty}}^{2}\|\partial_{x^{\prime}}\mathcal{E}\|_{L^{4}}^{2}\|\partial_{x^{\prime}}\mathcal{H}\|_{L^{4}}^{2}). \tag{3.30}\]

By Sobolev embedding, we find

\[(\partial_{t}\dot{\mathcal{H}},\partial_{t}\dot{\mathcal{H}})=(1+O(\delta))\|\Delta\mathcal{H}\|_{L^{2}}^{2}+O(\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{\frac{3}{2}}}^{4})\geqslant c\|\Delta\mathcal{H}\|_{L^{2}}^{2}-\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{2}}^{2}.\]

Moreover, again by uniform ellipticity of \(\varepsilon_{1}^{-1}\) for small \(\|\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\) and Maxwell equations,

\[(\varepsilon_{1}\dot{\mathcal{E}},\dot{\mathcal{E}})=(\nabla_{\perp}\mathcal{H},\varepsilon_{1}^{-1}\nabla_{\perp}\mathcal{H})\geqslant c\|\mathcal{H}\|_{H^{1}}^{2},\quad(\dot{\mathcal{H}},\dot{\mathcal{H}})=\|(\nabla\times\mathcal{E})_{3}\|_{L^{2}}^{2}. \tag{3.31}\]

Taking (3.28) - (3.31) together, we find (3.24) to hold. Next, we prove that

\[A_{1}(t)+\|\rho_{e}(t)\|_{H^{1}}^{2}\lesssim(1+\delta^{2})\|(\mathcal{E},\mathcal{H})\|_{H^{2}}^{2}, \tag{3.32}\]

which establishes (3.25). Above we let

\[\rho_{e}(t)=\nabla\cdot(\varepsilon\mathcal{E})=\nabla\cdot\mathcal{E}+\nabla\cdot(|\mathcal{E}|^{2}\mathcal{E})=\nabla\cdot\mathcal{E}+O(\mathcal{E}^{2}\partial_{x^{\prime}}\mathcal{E}).\]

First, we note that by the estimates which established (3.28):

\[(\varepsilon_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})\leqslant C\|\nabla_{\perp}(\nabla\times\mathcal{E})_{3}\|_{L^{2}}^{2}+\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{2}}^{2}.
\tag{3.33}\] By (3.29), Sobolev embedding, and (3.21), we find \[\|\mathcal{\ddot{H}}\|_{L^{2}}^{2} \lesssim\|\mathcal{H}\|_{H^{2}}^{2}+\delta^{2}\|(\mathcal{E}, \mathcal{H})\|_{H^{2}}^{2}+\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{\frac{ 3}{2}}}^{4} \tag{3.34}\] \[\lesssim\|(\mathcal{E},\mathcal{H})\|_{H^{2}}^{2}+\delta^{2}\|( \mathcal{E},\mathcal{H})\|_{H^{2}}^{2}.\] Regarding the charges note that \[\|O(\mathcal{E}^{2}\partial_{x^{\prime}}\mathcal{E})\|_{H^{1}}\lesssim\delta ^{2}\|\mathcal{E}\|_{H^{2}}. \tag{3.35}\] Taking (3.33) - (3.35) together yields (3.32). Furthermore, (3.35) gives \[A_{1}(t)+\|\rho_{e}(t)\|_{H^{1}}^{2}\geqslant c\|(\mathcal{E},\mathcal{H})\| _{H^{2}}^{2} \tag{3.36}\] provided that \(\|(\mathcal{E},\mathcal{H})\|_{H^{\frac{3}{2}}}\leq\delta\ll 1\) and \(\delta\) sufficiently small. This is a consequence of the Helmholtz decomposition Proposition C.2, which is proved in the Appendix: \[\|(\nabla\times\mathcal{E})_{3}\|_{H^{1}}+\|\nabla\cdot\mathcal{E}\|_{H^{1}}+ \|\mathcal{E}\|_{L^{2}}\sim\|\mathcal{E}\|_{H^{2}}.\] Next, we compute \(\partial_{t}A_{1}(t)\) to apply a Gronwall argument. Recall that charges are conserved. 
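The pointwise product rule \(\partial_{t}(\varepsilon e\cdot e)=2\,\partial_{t}(\varepsilon e)\cdot e-\dot{\varepsilon}e\cdot e\), which underlies the following computations of \(\partial_{t}\) of the quadratic forms, can be checked symbolically with scalar stand-ins for \(\varepsilon\) and \(\mathcal{E}\):

```python
import sympy as sp

t = sp.symbols("t")
eps = sp.Function("eps")(t)   # scalar stand-in for epsilon
e = sp.Function("e")(t)       # scalar stand-in for the field

lhs = sp.diff(eps * e**2, t)                               # d/dt (eps e, e)
rhs = 2 * sp.diff(eps * e, t) * e - sp.diff(eps, t) * e**2

assert sp.simplify(lhs - rhs) == 0
```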
We have seen in Proposition 3.3 by integration by parts

\[\partial_{t}((\mathcal{H},\mathcal{H})+(\varepsilon\mathcal{E},\mathcal{E}))=2(\partial_{t}\mathcal{H},\mathcal{H})+2(\partial_{t}(\varepsilon\mathcal{E}),\mathcal{E})-(\dot{\varepsilon}\mathcal{E},\mathcal{E})=-(\dot{\varepsilon}\mathcal{E},\mathcal{E})\lesssim_{\delta}\|\partial_{t}\mathcal{E}\|_{L^{\infty}}\|(\mathcal{E},\mathcal{H})\|_{L^{2}}^{2}.\]

Similarly,

\[\partial_{t}((\varepsilon_{1}\dot{\mathcal{E}},\dot{\mathcal{E}})+(\dot{\mathcal{H}},\dot{\mathcal{H}}))\lesssim\|\partial_{x}\mathcal{E}\|_{L^{\infty}}\|\dot{\mathcal{E}}\|_{L^{2}}^{2}.\]

We compute further for the highest order terms using (3.27) (again we use integration by parts to find cancellation of the terms with the most derivatives):

\[\partial_{t}((\ddot{\mathcal{H}},\ddot{\mathcal{H}})+(\varepsilon_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}}))=2(\partial_{t}\ddot{\mathcal{H}},\ddot{\mathcal{H}})+(\dot{\varepsilon}_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})+2(\varepsilon_{1}\mathcal{E}^{(3)},\ddot{\mathcal{E}})=(\dot{\varepsilon}_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})-2(\ddot{\varepsilon}_{1}\dot{\mathcal{E}},\ddot{\mathcal{E}})-4(\dot{\varepsilon}_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})=-3(\dot{\varepsilon}_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})-2(\ddot{\varepsilon}_{1}\dot{\mathcal{E}},\ddot{\mathcal{E}}).\]

We have

\[|(\dot{\varepsilon}_{1}\ddot{\mathcal{E}},\ddot{\mathcal{E}})|\lesssim_{\delta}\|\partial_{x}\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\|\ddot{\mathcal{E}}\|^{2}_{L^{2}}\]

and \(\ddot{\varepsilon}_{1}=O(\mathcal{E}\ddot{\mathcal{E}})+O(\dot{\mathcal{E}}^{2})\), which implies

\[|(\ddot{\varepsilon}_{1}\dot{\mathcal{E}},\ddot{\mathcal{E}})|\lesssim\|\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{x}\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\|\ddot{\mathcal{E}}\|^{2}_{L^{2}}+\|\ddot{\mathcal{E}}\|_{L^{2}}\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\dot{\mathcal{E}}\|^{2}_{L^{4}}.\]

The
first term is already in suitable form; for the second term observe by Sobolev embedding and (3.24):

\[\delta\|\dot{\mathcal{E}}\|_{L^{\infty}_{x^{\prime}}}\|\ddot{\mathcal{E}}\|_{L^{2}}\|(\mathcal{E},\mathcal{H})\|_{H^{2}}\lesssim\|\dot{\mathcal{E}}\|_{L^{\infty}}(\|\ddot{\mathcal{E}}\|^{2}_{L^{2}}+\delta^{2}\|(\mathcal{E},\mathcal{H})\|^{2}_{H^{2}})\lesssim\|\dot{\mathcal{E}}\|_{L^{\infty}}(A_{1}(t)+\|\rho_{e}(t)\|^{2}_{H^{1}}).\]

This shows

\[\partial_{t}(A_{1}(t)+\|\rho_{e}(t)\|^{2}_{H^{1}})\lesssim_{\delta}\|\partial_{x}(\mathcal{E},\mathcal{H})\|_{L^{\infty}}(A_{1}(t)+\|\rho_{e}(t)\|^{2}_{H^{1}}).\]

Hence, by Gronwall's argument, we find

\[A_{1}(t)+\|\rho_{e}(t)\|^{2}_{H^{1}}\lesssim e^{C\int_{0}^{t}\|\partial_{x}(\mathcal{E},\mathcal{H})(s)\|_{L^{\infty}_{x^{\prime}}}ds}(A_{1}(0)+\|\rho_{e}(0)\|^{2}_{H^{1}}).\]

We use (3.32) and (3.36) to infer (3.22). We turn to the a priori estimate in \(H^{3}\): Presently, we replace \(A_{1}\) by

\[A_{2}(t)=(\varepsilon_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})+(\mathcal{H}^{(3)},\mathcal{H}^{(3)})+A_{1}(t).\]

Record that

\[\left\{\begin{array}{ll}\varepsilon_{1}\mathcal{E}^{(4)}&=\nabla_{\perp}\mathcal{H}^{(3)}-3\dot{\varepsilon}_{1}\mathcal{E}^{(3)}-\varepsilon_{1}^{(3)}\dot{\mathcal{E}}-3\ddot{\varepsilon}_{1}\mathcal{E}^{(2)},\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}\mathcal{H}^{(3)}&=-(\nabla\times\mathcal{E}^{(3)})_{3}.\end{array}\right. \tag{3.37}\]

We shall first show that

\[A_{2}(t)\geq c(\|(\nabla\times\mathcal{E})_{3}\|^{2}_{H^{2}}+\|\mathcal{E}\|^{2}_{L^{2}}+\|\mathcal{H}\|^{2}_{H^{3}})+O_{\delta}(\|(\mathcal{E},\mathcal{H})\|^{2}_{H^{3}}).
\tag{3.38}\] We have \[(\varepsilon_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})=(\nabla_{\perp}\ddot{\mathcal{H}},\varepsilon_{1}^{-1}\nabla_{\perp}\ddot{\mathcal{H}})-2(\dot{\varepsilon}_{1}\mathcal{E}^{(2)},\mathcal{E}^{(3)})-(\ddot{\varepsilon}_{1}\dot{\mathcal{E}},\mathcal{E}^{(3)})\] and like above we shall see that the first term is of leading order: \[(\varepsilon_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})\geq c\|\nabla_{\perp}\ddot{\mathcal{H}}\|^{2}_{L^{2}}+O_{\delta}(\|(\mathcal{E},\mathcal{H})\|^{2}_{H^{3}}). \tag{3.39}\] To this end, we find by Maxwell equations and Sobolev embedding for \(|(\dot{\varepsilon}_{1}\mathcal{E}^{(2)},\mathcal{E}^{(3)})|\): \[|(\dot{\varepsilon}_{1}\mathcal{E}^{(2)},\mathcal{E}^{(3)})|\lesssim\|\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{t}\mathcal{E}\|_{L^{4}}\|\mathcal{E}^{(2)}\|_{L^{4}}\|\mathcal{E}^{(3)}\|_{L^{2}}\lesssim\delta^{2}\|\mathcal{E}^{(2)}\|_{L^{4}}\|\mathcal{E}^{(3)}\|_{L^{2}}. \tag{3.40}\] Furthermore, by boundedness of \(\|\varepsilon_{1}^{-1}\|_{L^{\infty}}\) due to smallness of \(\|\mathcal{E}\|_{L^{\infty}}\), Maxwell equations, and Hölder's inequality we find \[\|\ddot{\mathcal{E}}\|_{L^{4}}\lesssim\|\varepsilon_{1}^{-1}\nabla_{\perp}\dot{\mathcal{H}}\|_{L^{4}}+\|\varepsilon_{1}^{-1}\dot{\varepsilon}_{1}\dot{\mathcal{E}}\|_{L^{4}}\] \[\lesssim\|\partial_{x^{\prime}}^{2}\mathcal{E}\|_{L^{4}}+\|\mathcal{E}\|_{L^{\infty}}\|\partial_{x^{\prime}}\mathcal{H}\|^{2}_{L^{8}}\lesssim\|(\mathcal{E},\mathcal{H})\|_{H^{3}}.\] We have \(\ddot{\varepsilon}_{1}=O(\mathcal{E}\ddot{\mathcal{E}})+O(\dot{\mathcal{E}}^{2})\). \(|(O(\mathcal{E}\ddot{\mathcal{E}})\dot{\mathcal{E}},\mathcal{E}^{(3)})|\) is estimated like in (3.40).
Secondly, by Sobolev embedding and Maxwell equations, we find \[|(\dot{\mathcal{E}}^{3},\mathcal{E}^{(3)})|\lesssim\|\mathcal{E}^{(3)}\|_{L^{2}}\|\dot{\mathcal{E}}\|_{L^{6}}^{3}\] \[\lesssim\|\mathcal{E}^{(3)}\|_{L^{2}}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}\delta^{2}\lesssim\delta^{2}(\|\mathcal{E}^{(3)}\|_{L^{2}}^{2}+\|(\mathcal{E},\mathcal{H})\|_{H^{3}}^{2}).\] This concludes (3.39). In (3.29) we had shown that \[\partial_{t}^{2}\mathcal{H}=(1+O(\mathcal{E}^{2}))\Delta\mathcal{H}+O(\mathcal{E}\partial_{x^{\prime}}\mathcal{E}\partial_{x^{\prime}}\mathcal{H}). \tag{3.41}\] Hence, \[\nabla_{\perp}\partial_{t}^{2}\mathcal{H}=\nabla_{\perp}\Delta\mathcal{H}+O(\mathcal{E}\partial_{x^{\prime}}\mathcal{E}\Delta\mathcal{H})+O((\partial_{x^{\prime}}\mathcal{E})^{2}\partial_{x^{\prime}}\mathcal{H})+O(\mathcal{E}\partial_{x^{\prime}}^{2}\mathcal{E}\partial_{x^{\prime}}\mathcal{H})+O(\mathcal{E}\partial_{x^{\prime}}\mathcal{E}\partial_{x^{\prime}}^{2}\mathcal{H}).\] We compute by Maxwell equations, Hölder's inequality, and Sobolev embedding: \[\|O(\mathcal{E}\partial_{x^{\prime}}\mathcal{E}\Delta\mathcal{H})\|_{L^{2}}\lesssim\|\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{x^{\prime}}\mathcal{E}\|_{L^{4}}\|\Delta\mathcal{H}\|_{L^{4}}\lesssim\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}. \tag{3.42}\] For the second term, we find \[\|O((\partial_{x^{\prime}}\mathcal{E})^{2}\partial_{x^{\prime}}\mathcal{H})\|_{L^{2}_{x^{\prime}}}\lesssim\|\partial_{x^{\prime}}\mathcal{H}\|_{L^{2}}\|\partial_{x^{\prime}}\mathcal{E}\|_{L^{4}}^{2}\lesssim\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}.
\tag{3.43}\] By (3.42), (3.43), and the Cauchy-Schwarz inequality, we find \[\|\nabla_{\perp}\partial_{t}^{2}\mathcal{H}\|_{L^{2}}^{2}=\|\mathcal{H}\|_{H^{3}}^{2}+O(\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}^{2}).\] This shows that \[(\varepsilon_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})\geqslant c\|\mathcal{H}\|_{H^{3}}^{2}+O(\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}^{2}). \tag{3.44}\] By (3.41) again, we obtain \[\partial_{t}^{3}\mathcal{H}=(1+O(\mathcal{E}^{2}))\Delta(\nabla\times\mathcal{E})_{3}+O(\mathcal{E}\varepsilon_{1}^{-1}\partial_{x^{\prime}}\mathcal{H}\Delta\mathcal{H})\] \[\quad+O(\varepsilon_{1}^{-1}\nabla_{\perp}\mathcal{H}\partial_{x^{\prime}}\mathcal{E}\partial_{x^{\prime}}\mathcal{H})+O(\mathcal{E}\partial_{x^{\prime}}(\varepsilon_{1}^{-1}\partial_{x^{\prime}}\mathcal{H})\partial_{x^{\prime}}\mathcal{H})+O(\mathcal{E}\partial_{x^{\prime}}\mathcal{E}\partial_{x^{\prime}}^{2}\mathcal{E}).\] Regarding the error terms, we find by Hölder's inequality, Sobolev embedding, and the smallness condition: \[\|O(\mathcal{E}\varepsilon_{1}^{-1}\partial_{x^{\prime}}\mathcal{H}\Delta\mathcal{H})\|_{L^{2}}\lesssim\|\mathcal{E}\|_{L^{\infty}_{x^{\prime}}}\|\varepsilon_{1}^{-1}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{x^{\prime}}\mathcal{H}\|_{L^{4}}\|\Delta\mathcal{H}\|_{L^{4}} \tag{3.45}\] \[\lesssim\delta^{2}\|\varepsilon_{1}^{-1}\|_{L^{\infty}}\|\mathcal{H}\|_{H^{3}}.\] Secondly, \[\|O(\varepsilon_{1}^{-1}\nabla_{\perp}\mathcal{H}\partial_{x^{\prime}}\mathcal{E}\partial_{x^{\prime}}\mathcal{H})\|_{L^{2}_{x^{\prime}}}\lesssim\|\partial_{x^{\prime}}\mathcal{H}\|_{L^{4}}^{2}\|\partial_{x^{\prime}}\mathcal{E}\|_{L^{2}}\lesssim\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}. \tag{3.46}\] The third and fourth error terms can be treated similarly.
Hence, \[\|\partial_{t}^{3}\mathcal{H}\|_{L^{2}}^{2}\geqslant c\|\Delta(\nabla\times\mathcal{E})_{3}\|_{L^{2}}^{2}+O(\delta^{2}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}^{2}) \tag{3.47}\] by the Cauchy-Schwarz inequality for some \(c>0\), provided that \(\delta\) is chosen small enough. By (3.44), (3.47), and (3.24), we conclude (3.38). In order to establish a lower bound in terms of \(\|\mathcal{E}\|_{H^{3}}\), we use the Helmholtz decomposition in \(H^{3}\): \[\|(\nabla\times\mathcal{E})_{3}\|_{H^{2}}+\|\nabla\cdot\mathcal{E}\|_{H^{2}}+\|\mathcal{E}\|_{L^{2}}\sim\|\mathcal{E}\|_{H^{3}}.\] Like above, we shall add \(\|\rho_{e}(t)\|_{H^{2}}^{2}\) to \(A_{2}\). Again we use that \[\rho_{e}(t)=\nabla\cdot\mathcal{E}+O(\mathcal{E}^{2}\partial_{x^{\prime}}\mathcal{E}).\] Note that \[\|\mathcal{E}^{2}\partial_{x^{\prime}}\mathcal{E}\|_{H^{2}}\lesssim\|\mathcal{E}\|_{L^{\infty}}^{2}\|\mathcal{E}\|_{H^{3}}+\|O(\mathcal{E}\partial_{x^{\prime}}^{2}\mathcal{E}\partial_{x^{\prime}}\mathcal{E})\|_{L^{2}}+\|O((\partial_{x^{\prime}}\mathcal{E})^{3})\|_{L^{2}}\] \[\lesssim\delta^{2}\|\mathcal{E}\|_{H^{3}}.\] Therefore, we find \[A_{2}(t)+\|\rho_{e}(t)\|_{H^{2}}^{2}\geqslant c\|(\mathcal{E},\mathcal{H})\|_{H^{3}}^{2}+O_{\delta}(\|(\mathcal{E},\mathcal{H})\|_{H^{3}}^{2}). \tag{3.48}\] Next, we show the estimate \[\partial_{t}(A_{1}(t)+A_{2}(t)+\|\rho_{e}(t)\|_{H^{2}}^{2})\lesssim B(t)(A_{1}(t)+A_{2}(t)+\|\rho_{e}(t)\|_{H^{2}}^{2}) \tag{3.49}\] for \(B(t)=\|\partial_{x^{\prime}}(\mathcal{E},\mathcal{H})(t)\|_{L^{\infty}_{x^{\prime}}}+\|(\mathcal{E},\mathcal{H})(t)\|_{H^{2}}\) to apply Gronwall's argument. We have already shown \(\partial_{t}(A_{1}(t)+\|\rho_{e}(t)\|_{H^{1}}^{2})\lesssim\|\partial_{x^{\prime}}\mathcal{E}(t)\|_{L^{\infty}}(A_{1}(t)+\|\rho_{e}(t)\|_{H^{1}}^{2})\).
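For the reader's convenience, the version of Gronwall's lemma invoked in this argument can be recorded as follows (a standard statement; the placeholder \(y\) stands for the energy functional under consideration):

```latex
% Differential form of Gronwall's lemma: if y >= 0 is absolutely
% continuous on [0,T] and y'(t) <= C B(t) y(t) for a.e. t, then
\[
y(t)\;\leq\; y(0)\,\exp\Big(C\int_{0}^{t}B(s)\,ds\Big),\qquad 0\leq t\leq T.
\]
% In the present situation it is applied with
% y(t) = A_1(t) + A_2(t) + \|\rho_e(t)\|_{H^2}^2 and B(t) as above.
```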
We compute for the highest order derivatives: \[\partial_{t}((\varepsilon_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})+(\mathcal{H}^{(3)},\mathcal{H}^{(3)}))\] \[=2(\varepsilon_{1}\mathcal{E}^{(4)},\mathcal{E}^{(3)})+2(\mathcal{H}^{(4)},\mathcal{H}^{(3)})+(\dot{\varepsilon}_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})\] \[=-6(\dot{\varepsilon}_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})-2(\varepsilon_{1}^{(3)}\dot{\mathcal{E}},\mathcal{E}^{(3)})-6(\ddot{\varepsilon}_{1}\mathcal{E}^{(2)},\mathcal{E}^{(3)})+(\dot{\varepsilon}_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)}).\] For the justification of \[(\nabla_{\perp}\mathcal{H}^{(3)},\mathcal{E}^{(3)})-((\nabla\times\mathcal{E}^{(3)})_{3},\mathcal{H}^{(3)})=0\] for solutions \((\mathcal{E},\mathcal{H})\in C([0,T];H^{3})\) we refer to [21]. Hence, for (3.49) we shall show that \[|(\dot{\varepsilon}_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})|+|(\varepsilon_{1}^{(3)}\dot{\mathcal{E}},\mathcal{E}^{(3)})|+|(\ddot{\varepsilon}_{1}\mathcal{E}^{(2)},\mathcal{E}^{(3)})|\lesssim B(t)(A_{1}(t)+A_{2}(t)+\|\rho_{e}(t)\|_{H^{2}}^{2}). \tag{3.50}\] The first estimate for (3.50) is trivial: \[|(\dot{\varepsilon}_{1}\mathcal{E}^{(3)},\mathcal{E}^{(3)})|\lesssim_{\delta}\|\partial_{x^{\prime}}(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{x^{\prime}}}\|\mathcal{E}^{(3)}\|_{L^{2}}^{2}.\] Regarding the second estimate for (3.50), we compute \[\dot{\varepsilon}_{1}=O(\mathcal{E}\dot{\mathcal{E}}),\quad\ddot{\varepsilon}_{1}=O(\mathcal{E}\ddot{\mathcal{E}})+O(\dot{\mathcal{E}}^{2}),\quad\varepsilon_{1}^{(3)}=O(\mathcal{E}\mathcal{E}^{(3)})+O(\dot{\mathcal{E}}\mathcal{E}^{(2)}).\] We split \[|(\varepsilon_{1}^{(3)}\dot{\mathcal{E}},\mathcal{E}^{(3)})|\lesssim|(O(\mathcal{E}\mathcal{E}^{(3)})\dot{\mathcal{E}},\mathcal{E}^{(3)})|+|(O(\dot{\mathcal{E}}\mathcal{E}^{(2)})\dot{\mathcal{E}},\mathcal{E}^{(3)})|.
\tag{3.51}\] We have for the first term in (3.51): \[|(O(\mathcal{E}\mathcal{E}^{(3)})\dot{\mathcal{E}},\mathcal{E}^{(3)})|\lesssim\|\mathcal{E}\|_{L^{\infty}}\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\mathcal{E}^{(3)}\|_{L^{2}}^{2}\lesssim\delta\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\mathcal{E}^{(3)}\|_{L^{2}}^{2}.\] For the second term in (3.51) we find \[|(O(\dot{\mathcal{E}}\mathcal{E}^{(2)})\dot{\mathcal{E}},\mathcal{E}^{(3)})|\lesssim\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\mathcal{E}^{(2)}\|_{L^{4}}\|\dot{\mathcal{E}}\|_{L^{4}}\|\mathcal{E}^{(3)}\|_{L^{2}}\] \[\lesssim\delta\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\mathcal{E}^{(2)}\|_{L^{4}}\|\mathcal{E}^{(3)}\|_{L^{2}}.\] Moreover, by \(\mathcal{E}^{(2)}=-\varepsilon_{1}^{-1}\nabla_{\perp}(\nabla\times\mathcal{E})_{3}-\varepsilon_{1}^{-1}\dot{\varepsilon}_{1}\dot{\mathcal{E}}\), Hölder's inequality and Sobolev embedding, we find \[\|\mathcal{E}^{(2)}\|_{L^{4}}\lesssim\|\partial_{x^{\prime}}^{2}\mathcal{E}\|_{L^{4}}+\|\dot{\mathcal{E}}\|_{L^{8}}^{2}\delta\lesssim(1+\delta^{2})\|(\mathcal{E},\mathcal{H})\|_{H^{3}},\] which gives \[\delta\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\mathcal{E}^{(2)}\|_{L^{4}}\|\mathcal{E}^{(3)}\|_{L^{2}}\lesssim\delta\|\dot{\mathcal{E}}\|_{L^{\infty}}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}\|\mathcal{E}^{(3)}\|_{L^{2}}.\] By (3.48) this suffices. Lastly, we estimate \(|(\ddot{\varepsilon}_{1}\ddot{\mathcal{E}},\mathcal{E}^{(3)})|\) to complete the proof of (3.50).
By the previously established estimate for \(\|\mathcal{E}^{(2)}\|_{L^{4}}\), we find \[|(O(\dot{\mathcal{E}}^{2}\ddot{\mathcal{E}}),\mathcal{E}^{(3)})|\lesssim\|\mathcal{E}^{(3)}\|_{L^{2}}\|\dot{\mathcal{E}}\|_{L^{\infty}}\|\dot{\mathcal{E}}\|_{L^{4}}\|\ddot{\mathcal{E}}\|_{L^{4}}\lesssim\delta\|\mathcal{E}^{(3)}\|_{L^{2}}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}\|\dot{\mathcal{E}}\|_{L^{\infty}}.\] Secondly, we find \[|(\mathcal{E}\ddot{\mathcal{E}}^{2},\mathcal{E}^{(3)})|\lesssim\delta\|\ddot{\mathcal{E}}\|_{L^{4}}^{2}\|\mathcal{E}^{(3)}\|_{L^{2}}. \tag{3.52}\] We have \[\|\ddot{\mathcal{E}}\|_{L^{4}}\lesssim\|\Delta\mathcal{E}\|_{L^{4}}+\|\dot{\mathcal{E}}\|_{L^{8}}^{2}\lesssim\|\Delta\mathcal{E}\|_{L^{4}}+\|(\mathcal{E},\mathcal{H})\|_{H^{7/4}}\|\dot{\mathcal{E}}\|_{L^{\infty}}.\] By the Gagliardo-Nirenberg-Ladyzhenskaya inequality, we find \[\|\Delta\mathcal{E}\|_{L^{4}}^{2}\lesssim\|\mathcal{E}\|_{H^{3}}\|\mathcal{E}\|_{H^{2}}.\] Plugging this into (3.52), we obtain \[|(\mathcal{E}\ddot{\mathcal{E}}^{2},\mathcal{E}^{(3)})|\lesssim\delta\|(\mathcal{E},\mathcal{H})\|_{H^{3}}\|(\mathcal{E}(t),\mathcal{H}(t))\|_{H^{2}}\|\mathcal{E}^{(3)}\|_{L^{2}}+\delta\|\partial_{t}\mathcal{E}\|_{L^{\infty}}\|(\mathcal{E},\mathcal{H})\|_{H^{3}}\|\mathcal{E}^{(3)}\|_{L^{2}}.\] This completes the proof of (3.49), and an application of Gronwall's argument yields \[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{3}}\lesssim e^{C\int_{0}^{T}B(s)ds}\|(\mathcal{E},\mathcal{H})(0)\|_{H^{3}}.\] By (3.22), we have \[B(s)\lesssim\|\partial_{x^{\prime}}(\mathcal{E},\mathcal{H})(s)\|_{L^{\infty}_{x^{\prime}}}+e^{\int_{0}^{s}\|\partial_{x^{\prime}}(\mathcal{E},\mathcal{H})(s^{\prime})\|_{L^{\infty}_{x^{\prime}}}ds^{\prime}}\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}},\] which gives \[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{3}}\lesssim e^{C\int_{0}^{T}\|\partial_{x^{\prime}}(\mathcal{E},\mathcal{H})(t)\|_{L^{\infty}_{x^{\prime}}}dt+Te^{\int_{0}^{T}\|\partial_{x^{\prime}}(\mathcal{E},\mathcal{H})(t^{\prime})\|_{L^{\infty}_{x^{\prime}}}dt^{\prime}}\|(\mathcal{E}(0),\mathcal{H}(0))\|_{H^{2}}}\|(\mathcal{E},\mathcal{H})(0)\|_{H^{3}}.\] Now (3.23) is straightforward.

### The three-dimensional case

Next, we extend the arguments to the three-dimensional case: Let \(\varepsilon,\mu\in C^{\infty}(\Omega;\mathbb{R}_{>0})\), for which we suppose that (1.3) and (1.4) hold. We consider the system of equations: \[\left\{\begin{array}{ll}\partial_{t}(\varepsilon\mathcal{E})=\nabla\times\mathcal{H},&\nabla\cdot(\varepsilon\mathcal{E})=\rho_{e},\quad(t,x^{\prime})\in\mathbb{R}\times\Omega;\\ \partial_{t}(\mu\mathcal{H})=-\nabla\times\mathcal{E},&\nabla\cdot(\mu\mathcal{H})=0.\end{array}\right. \tag{3.53}\] We require the boundary conditions: \[[\mathcal{E}\times\nu]_{x^{\prime}\in\partial\Omega}=0,\quad[\nu\cdot\mathcal{B}]_{x^{\prime}\in\partial\Omega}=0. \tag{3.54}\] Local existence of \(H^{3}\)-solutions was discussed in [25, 26]. We show a priori estimates in the time-independent case:

**Proposition 3.6**.: _For \(s\in[0,2]\) the following estimate holds for \(H^{3}\)-solutions to (3.53) under the above assumptions:_ \[\|(\mathcal{E},\mathcal{H})\|_{L^{\infty}H^{s}(\Omega)}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{s}(\Omega)}. \tag{3.55}\]

We refer for the suitable Helmholtz decomposition to Proposition C.3 in Appendix C: \[\|\mathcal{E}\|_{H^{1}(\Omega)}\sim\|\mathcal{E}\|_{H_{curl}(\Omega)}+\|\mathcal{E}\|_{H_{div}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}. \tag{3.56}\]

Proof of Proposition 3.6.: We follow the argument from the two-dimensional case and begin with \(L^{2}\)-estimates. Let \(M(t)=\int_{\Omega}\mathcal{D}\cdot\mathcal{E}+\mathcal{H}\cdot\mathcal{B}\,dx^{\prime}\).
We have \[\frac{dM}{dt}=2\int_{\Omega}\mathcal{E}\cdot\nabla\times\mathcal{H}\,dx^{\prime}-2\int_{\Omega}\mathcal{H}\cdot\nabla\times\mathcal{E}\,dx^{\prime}=0\] with the last equality a consequence of the boundary conditions (after resolving (3.53) on \(\mathbb{R}_{>0}^{3}\)). This yields \(\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}}\sim\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}}\), which is (3.55) for \(s=0\). To prove (3.55) for \(s=1\), we take one time derivative to find that \((\dot{\mathcal{E}},\dot{\mathcal{H}})\) satisfies (3.53). Consequently, \(\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(t)\|_{L^{2}}\sim\|(\dot{\mathcal{E}},\dot{\mathcal{H}})(0)\|_{L^{2}}\), which yields by (3.53) that \(\|(\nabla\times\mathcal{E},\nabla\times\mathcal{H})(t)\|_{L^{2}}\sim\|(\nabla\times\mathcal{E},\nabla\times\mathcal{H})(0)\|_{L^{2}}\). By the Helmholtz decomposition, the defect to \(H^{1}\) is given by \(\|(\nabla\cdot\mathcal{E},\nabla\cdot\mathcal{H})(t)\|_{L^{2}}\) modulo \(L^{2}\). Here we use the divergence conditions: \[\nabla\cdot(\varepsilon\mathcal{E})=(\nabla\varepsilon)\cdot\mathcal{E}+\varepsilon(\nabla\cdot\mathcal{E})=\rho_{e},\quad\nabla\cdot(\mu\mathcal{H})=(\nabla\mu)\cdot\mathcal{H}+\mu(\nabla\cdot\mathcal{H})=0.\] Since \(\rho_{e}\) is a conserved quantity, and we already have an a priori estimate for the \(L^{2}\)-norm, we find \[\|\nabla\cdot\mathcal{E}(t)\|_{L^{2}}\lesssim\|\mathcal{E}(t)\|_{L^{2}}+\|\rho_{e}(t)\|_{L^{2}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}}+\|\rho_{e}(0)\|_{L^{2}}.\] For \(\|\mathcal{H}\|_{H_{div}}\) we can argue likewise. We obtain \[\|(\mathcal{E},\mathcal{H})(t)\|_{H^{1}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{1}}.\] For the proof of (3.55) with \(s=2\), we use that \((\ddot{\mathcal{E}},\ddot{\mathcal{H}})\) solves (3.53).
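The manipulations of the curl-curl terms in this step rest on the standard vector identity, recalled here for convenience (with \(F\) any smooth vector field):

```latex
\[
\nabla\times(\nabla\times F)\;=\;\nabla(\nabla\cdot F)\;-\;\Delta F,
\]
% so that, for instance,
%   -\frac{1}{\varepsilon\mu}\nabla\times(\nabla\times\mathcal{E})
%   = \frac{1}{\varepsilon\mu}\Delta\mathcal{E}
%     - \frac{1}{\varepsilon\mu}\nabla(\nabla\cdot\mathcal{E}),
% with the O(\|\mathcal{E}\|_{H^1}) terms below collecting the
% commutators with the variable coefficients \varepsilon^{-1}, \mu^{-1}.
```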
We have \[\begin{split}\partial_{t}^{2}\mathcal{E}(t)&=\frac{1}{\varepsilon}\nabla\times\partial_{t}\mathcal{H}(t)=-\frac{1}{\varepsilon}\nabla\times\big(\frac{1}{\mu}\nabla\times\mathcal{E}\big)\\ &=\frac{1}{\varepsilon\mu}\Delta\mathcal{E}+O(\|\mathcal{E}\|_{H^{1}})-\frac{1}{\varepsilon\mu}\nabla(\nabla\cdot\mathcal{E}),\end{split} \tag{3.57}\] and from the divergence condition, we find \[\nabla(\nabla\cdot\mathcal{E})(t)=(\nabla\varepsilon^{-1})\rho_{e}(t)+\varepsilon^{-1}\nabla\rho_{e}(t)+O(\|\mathcal{E}(t)\|_{H^{1}}).\] This implies the estimate by conservation of charge and previously established a priori estimates \[\begin{split}\|\mathcal{E}(t)\|_{\dot{H}^{2}}&\lesssim\|\partial_{t}^{2}\mathcal{E}(t)\|_{L^{2}}+\|\mathcal{E}(t)\|_{H^{1}}+\|\nabla(\nabla\cdot\mathcal{E})(t)\|_{L^{2}}\\ &\lesssim\|(\partial_{t}^{2}\mathcal{E},\partial_{t}^{2}\mathcal{H})(0)\|_{L^{2}}+\|(\mathcal{E},\mathcal{H})(0)\|_{H^{1}}+\|\rho_{e}(0)\|_{H^{1}}.\end{split} \tag{3.58}\] Similarly, \[\begin{split}\partial_{t}^{2}\mathcal{H}(t)&=-\frac{1}{\mu}\nabla\times\partial_{t}\mathcal{E}(t)=-\frac{1}{\mu}\nabla\times\big(\frac{1}{\varepsilon}\nabla\times\mathcal{H}\big)\\ &=\frac{1}{\varepsilon\mu}\Delta\mathcal{H}-\frac{1}{\varepsilon\mu}\nabla(\nabla\cdot\mathcal{H})+O(\|\mathcal{H}\|_{H^{1}}),\end{split} \tag{3.59}\] so that by previously established a priori estimates \[\begin{split}\|\mathcal{H}(t)\|_{\dot{H}^{2}}&\lesssim\|\partial_{t}^{2}\mathcal{H}(t)\|_{L^{2}}+\|\mathcal{H}(t)\|_{H^{1}}\\ &\lesssim\|(\partial_{t}^{2}\mathcal{E},\partial_{t}^{2}\mathcal{H})(0)\|_{L^{2}}+\|(\mathcal{E},\mathcal{H})(0)\|_{H^{1}}+\|\nabla\cdot(\mu\mathcal{H})(0)\|_{H^{1}}.\end{split} \tag{3.60}\] By (3.57) and (3.59), we obtain \[\|(\partial_{t}^{2}\mathcal{E},\partial_{t}^{2}\mathcal{H})(0)\|_{L^{2}}\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}}.
\tag{3.61}\] By taking (3.58), (3.60), and (3.61) together with a priori estimates for \((\mathcal{E},\mathcal{H})\) in \(L^{2}\), we conclude \[\begin{split}\|(\mathcal{E},\mathcal{H})(t)\|_{H^{2}}&\lesssim\|(\partial_{t}^{2}\mathcal{E},\partial_{t}^{2}\mathcal{H})(t)\|_{L^{2}}+\|(\mathcal{E},\mathcal{H})(t)\|_{L^{2}}+O(\|(\mathcal{E},\mathcal{H})(t)\|_{H^{1}})\\ &\lesssim\|(\partial_{t}^{2}\mathcal{E},\partial_{t}^{2}\mathcal{H})(0)\|_{L^{2}}+\|(\mathcal{E},\mathcal{H})(0)\|_{L^{2}}+O(\|(\mathcal{E},\mathcal{H})(0)\|_{H^{1}})\\ &\lesssim\|(\mathcal{E},\mathcal{H})(0)\|_{H^{2}}.\end{split}\] The proof of (3.55) is complete for \(s=2\). For non-integer \(s\), we prove the claim by interpolation.

## 4. Preliminaries

In this section we collect facts on pseudo-differential operators, which we rely on in the remainder of the paper. We denote derivatives by \[\partial_{x}^{\alpha}=\partial_{x_{1}}^{\alpha_{1}}\partial_{x_{2}}^{\alpha_{2}}\ldots\partial_{x_{m}}^{\alpha_{m}}\text{ and }D_{\xi}^{\alpha}=\partial_{\xi}^{\alpha}/(i^{|\alpha|})\text{ for }\alpha\in\mathbb{N}_{0}^{m}.\] Recall the Hörmander class of symbols: \[S_{\rho,\delta}^{m}=\{a\in C^{\infty}(\mathbb{R}^{m}\times\mathbb{R}^{m}):|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)|\lesssim_{\alpha,\beta}\langle\xi\rangle^{m-|\beta|\rho+|\alpha|\delta}\}\] with \(m\in\mathbb{R}\), \(0\leq\delta<\rho\leq 1\). The \(L^{p}\)-boundedness of symbols \(a\in S_{1,\delta}^{0}\), \(0\leq\delta<1\), is well-known (cf. [28, Theorem 0.11.A]). We use the following quantization: \[a(x,D)f=(2\pi)^{-m}\int_{\mathbb{R}^{m}}e^{ix\cdot\xi}a(x,\xi)\hat{f}(\xi)d\xi\qquad(f\in\mathcal{S}^{\prime}(\mathbb{R}^{m})).\] We recall the composition of pseudo-differential operators.

**Proposition 4.1** ([28, Prop.
0.3C]).: _Given \(P(x,\xi)\in S_{\rho_{1},\delta_{1}}^{m_{1}}\), \(Q(x,\xi)\in S_{\rho_{2},\delta_{2}}^{m_{2}}\), suppose that_ \[0\leq\delta_{2}<\rho\leq 1\text{ with }\rho=\min(\rho_{1},\rho_{2}).\] _Then, \((P\circ Q)(x,D)\in OPS_{\rho,\delta}^{m_{1}+m_{2}}\) with \(\delta=\max(\delta_{1},\delta_{2})\), and \(P(x,D)\circ Q(x,D)\) satisfies the asymptotic expansion_ \[(P\circ Q)(x,D)=\sum_{\alpha}\frac{1}{\alpha!}(D_{\xi}^{\alpha}P\partial_{x}^{\alpha}Q)(x,D)+R,\] _where \(R:\mathcal{S}^{\prime}\to C^{\infty}\) is a smoothing operator._

The following lemma will be useful:

**Lemma 4.2** ([19, Lemma 2.3]).: _Let \(1\leq p,q\leq\infty\), \(s\geq 0\), and \(a\in C_{x}^{s}C_{c}^{\infty}(\mathbb{R}^{m}\times\mathbb{R}^{m})\) with \(a(x,\xi)=0\) for \(\xi\notin B(0,2)\). Suppose that_ \[\sup_{x\in\mathbb{R}^{m}}\sum_{0\leq|\alpha|\leq m+1}\|D_{\xi}^{\alpha}a(x,\cdot)\|_{L_{\xi}^{1}}\leq C.\] _Then the following estimate holds:_ \[\|a(x,D)f\|_{L^{p}L^{q}}\lesssim C\|f\|_{L^{p}L^{q}}.\]

In the quasilinear case, when the coefficients are merely in \(L_{t}^{2}L_{x^{\prime}}^{\infty}\), we have the following:

**Lemma 4.3**.: _Let \(X=L_{t}^{2}L_{x^{\prime}}^{\infty}\) and \(a\in XC_{c}^{\infty}(\mathbb{R}^{m}\times\mathbb{R}^{m})\) with \(a(x,\xi)=0\) for \(\xi\notin B(0,2)\). Suppose that_ \[\sup_{x\in\mathbb{R}^{m}}\sum_{|\alpha|\leq 2m}\|D_{\xi}^{\alpha}a(x,\cdot)\|_{L_{\xi}^{1}}\leq C.\] _Then the following estimate holds:_ \[\|a(x,D)f\|_{L_{t,x}^{2}}\lesssim_{C}\|a\|_{X}\|f\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}.\]

## 5. Diagonalizing reflected Maxwell equations

The purpose of this section is to reduce the proof of Proposition 2.4 to Strichartz estimates for half-wave equations with metric \(\frac{g^{ij}}{\varepsilon\mu}\).
Here \(\varepsilon,\mu,g^{ij}\in C^{\infty}(\mathbb{R}^{3}_{\geq 0})\) are extended evenly to the full space, introducing a Lipschitz-singularity of co-dimension \(1\). The following is due to Blair-Smith-Sogge [1]:

**Theorem 5.1**.: _Let \(d\geq 2\) and \((g^{ij})_{1\leq i,j\leq d}\subseteq C^{\infty}(\mathbb{R}^{d}_{\geq 0})\) be uniformly elliptic. Let \(u:[0,1]\times\mathbb{R}^{d}\to\mathbb{C}\). Then the following estimate holds:_ \[\|u\|_{L^{p}_{t}([0,1],L^{q}_{x^{\prime}}(\mathbb{R}^{d}))}\lesssim\|u\|_{L^{\infty}_{t}H^{\gamma}(\mathbb{R}^{3})}+\|(i\partial_{t}+D_{\tilde{g}})u\|_{L^{1}_{t}H^{\gamma}}\] _with \(\tilde{g}^{ij}\) denoting the even extension of \(g^{ij}\) and_ \[D_{\tilde{g}}=Op\big(\sum_{i,j=1}^{d}\tilde{g}^{ij}\xi_{i}\xi_{j}\big)^{\frac{1}{2}}\] _provided that \(2\leq p,q\leq\infty\) and \(\gamma\) satisfy_ \[\frac{3}{p}+\frac{2}{q}\leq 1,\quad q<\infty,\quad\gamma=3\big(\frac{1}{2}-\frac{1}{q}\big)-\frac{1}{p}.\]

The reduction to the above proceeds via diagonalization with pseudo-differential operators. However, the symbols are very rough, so extra care is required.

### Littlewood-Paley decomposition and frequency truncation

We begin with a paradifferential decomposition. Recall that \[\mathcal{P}=\begin{pmatrix}\sqrt{g}g^{-1}\varepsilon\partial_{t}&-\nabla\times\\ \nabla\times&\sqrt{g}g^{-1}\mu\partial_{t}\end{pmatrix}.\] In the following we denote \(u=(\mathcal{E},\mathcal{H}):\mathbb{R}\times\mathbb{R}^{3}\to\mathbb{R}^{3}\times\mathbb{R}^{3}\) and omit the tilde for the reflected quantities to lighten the notation. Let \((S_{\lambda})_{\lambda\in 2^{\mathbb{N}_{0}}}\) denote a family of inhomogeneous Littlewood-Paley projections for space-time frequencies and \((S^{\prime}_{\lambda})_{\lambda\in 2^{\mathbb{N}_{0}}}\), \((S^{\tau}_{\lambda})_{\lambda\in 2^{\mathbb{N}_{0}}}\) projections for spatial or temporal frequencies, respectively.
We define \[\varepsilon^{\prime}=\sqrt{g}g^{-1}\varepsilon,\quad\mu^{\prime}=\sqrt{g}g^{-1}\mu,\quad\mathcal{P}_{<\lambda}=\begin{pmatrix}\varepsilon^{\prime}_{<\lambda}\partial_{t}&-\nabla\times\\ \nabla\times&\mu^{\prime}_{<\lambda}\partial_{t}\end{pmatrix} \tag{5.1}\] through spatial frequency truncation: \(\kappa_{<\lambda}=\sum_{\mu\leq\lambda/16}S^{\prime}_{\mu}\kappa\) for \(\kappa\in\{\varepsilon^{\prime},\mu^{\prime}\}\). For the proof of Proposition 2.4 it suffices to prove the following estimates for frequency localized functions for \(2^{\mathbb{N}_{0}}\ni\lambda\gg 1\); we can suppose that \(\lambda\gg 1\) because low frequencies are easily estimated by Bernstein's inequality. Let \(0<\delta<\frac{3}{q}\). \[\|S_{\{|\tau|\sim|\xi^{\prime}|\}}u\|_{L^{p}_{t}L^{q}_{x^{\prime}}}\lesssim\|\langle\partial_{t}\rangle^{\gamma+\delta}u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\langle\partial_{t}\rangle^{\gamma}\mathcal{P}u\|_{L^{2}_{x}}, \tag{5.2}\] \[\|S_{\{|\tau|\gg|\xi^{\prime}|\}}u\|_{L^{p}_{t}L^{q}_{x^{\prime}}}\lesssim\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L^{2}_{t,x^{\prime}}}+\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}\mathcal{P}u\|_{L^{2}_{t,x^{\prime}}}, \tag{5.3}\] \[\|S_{\{|\tau|\ll|\xi^{\prime}|\}}u\|_{L^{p}_{t}L^{q}_{x^{\prime}}}\lesssim\|\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L^{2}_{x}}+\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L^{2}_{x}} \tag{5.4}\] \[\quad+\|\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}+\delta}\mathcal{P}u\|_{L^{2}_{x}}+\|\rho_{e}\|_{L^{\infty}_{t}H^{\gamma-1+\frac{1}{p}+\delta}}.\] In the following we implicitly consider \(u\) compactly supported in \([0,T]\). Strictly speaking, compact support is not preserved by \(S_{\lambda}\), but for \(\lambda\gg 1\) it holds up to Schwartz tails, which are neglected in the following.
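The claim that low frequencies are easily handled can be sketched via Bernstein's inequality (a standard estimate; the chain below is an illustration of the reduction, not part of the original argument):

```latex
% For space-time frequencies |(\tau,\xi')| \lesssim 1, Bernstein's
% inequality upgrades L^2_{x'} to L^q_{x'} at no cost in derivatives:
\[
\|S_{\lesssim 1}u\|_{L^{p}_{t}L^{q}_{x^{\prime}}}
\;\lesssim\;\|S_{\lesssim 1}u\|_{L^{p}_{t}L^{2}_{x^{\prime}}}
\;\lesssim\;T^{\frac{1}{p}}\,\|u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}},
\]
% using that u is (up to Schwartz tails) supported in [0,T] in time.
```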
\(S_{\{|\tau|\sim|\xi^{\prime}|\}}\) denotes a space-time frequency projection to temporal frequencies comparable to spatial frequencies, \(S_{\{|\tau|\gg|\xi^{\prime}|\}}\) a space-time frequency projection to temporal frequencies \(\{|\tau|\gtrsim 1\}\) and spatial frequencies \(\{|\xi^{\prime}|\ll|\tau|\}\). Correspondingly, \(S_{\{|\xi^{\prime}|\gg|\tau|\}}\) denotes a projection for spatial frequencies dominating temporal frequencies. Estimates (5.3) and (5.4) crucially rely on ellipticity of components of \(\mathcal{P}\) after diagonalization. Since we can achieve estimates with regularity \(\gamma-\frac{1}{2}<1\), the commutator estimates for Lipschitz functions are applicable. We give the proof of (5.3) shortly using the ellipticity away from the characteristic surface. The proof of (5.2) is more involved and requires the use of the Strichartz estimates due to Blair-Smith-Sogge. However, if \(\{|\tau|\sim|\xi^{\prime}|\sim 1\}\), we can trade temporal for spatial frequencies.

**Lemma 5.2**.: _Let \(2^{\mathbb{N}_{0}}\ni\lambda\gg 1\), \(2\leq p,q<\infty\), and \(\delta>0\). The estimate_ \[\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\lesssim\lambda^{\gamma}(\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\|\mathcal{P}_{<\lambda}S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^{2}}) \tag{5.5}\] _implies_ \[\|S_{\{|\tau|\sim|\xi^{\prime}|\sim 1\}}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\lesssim\|\langle\partial_{t}\rangle^{\gamma+\delta}u\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\|\langle\partial_{t}\rangle^{\gamma}\mathcal{P}u\|_{L_{x}^{2}}. \tag{5.6}\]

Proof.: Littlewood-Paley decomposition and Minkowski's inequality give for \(2\leq p,q<\infty\) \[\|u\|_{L^{p}L^{q}}\lesssim\big(\sum_{\lambda\gg 1}\|S_{\lambda}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}^{2}\big)^{\frac{1}{2}},\] which we can further decompose almost orthogonally into spatial and temporal frequencies.
Summation of \(\|\langle\partial_{t}\rangle^{\gamma}u\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}\) is clear. Note that the lack of almost orthogonality in \(L_{t}^{\infty}L_{x^{\prime}}^{2}\) leads to the \(\delta\)-loss in derivatives. Now we write \[\mathcal{P}_{<\lambda}S_{\lambda}^{\prime}v=S_{\lambda}^{\prime}\mathcal{P}_{<\lambda}v+[\mathcal{P}_{<\lambda},S_{\lambda}^{\prime}]v\] and note that \[\|S_{\lambda}^{\tau}[\mathcal{P}_{<\lambda},S_{\lambda}^{\prime}]\langle\partial_{t}\rangle^{\gamma}u\|_{L_{x}^{2}}\lesssim\|S_{\lambda}^{\tau}\langle\partial_{t}\rangle^{\gamma}u\|_{L_{x}^{2}}\] because \(\|[\kappa_{<\lambda},S_{\lambda}^{\prime}]\|_{L_{x^{\prime}}^{2}\to L_{x^{\prime}}^{2}}\lesssim\lambda^{-1}\) by a kernel estimate for \(\kappa\in\{\varepsilon^{\prime},\mu^{\prime}\}\). We write \[S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}_{<\lambda}v=S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}v-S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}_{\gg\lambda}v-S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}_{\sim\lambda}v.\] Clearly, \[\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}_{\sim\lambda}v\|_{L_{x}^{2}}\lesssim\|S_{\lambda}^{\tau}v\|_{L_{x}^{2}}\] and similarly, by a fixed-time estimate, \[\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}(S_{\gg\lambda}^{\prime}\varepsilon\,\partial_{t}S_{\gg\lambda}^{\prime}v)\|_{L_{x}^{2}}\lesssim\lambda\|\varepsilon_{\gg\lambda}\|_{L_{x}^{\infty}}\|S_{\lambda}^{\tau}v\|_{L_{x}^{2}}\lesssim\|\partial\varepsilon\|_{L^{\infty}}\|S_{\lambda}^{\tau}v\|_{L_{x}^{2}},\] which estimates the second term.
We remain with \(S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}\langle\partial_{t}\rangle^{\gamma}u\) and conclude \[\|S_{\lambda}^{\tau}\mathcal{P}_{<\lambda}S_{\lambda}^{\prime}\langle\partial_{t}\rangle^{\gamma}u\|_{L_{x}^{2}}\lesssim\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}\mathcal{P}\langle\partial_{t}\rangle^{\gamma}u\|_{L_{x}^{2}}+\|S_{\lambda}^{\tau}\langle\partial_{t}\rangle^{\gamma}u\|_{L_{x}^{2}}.\] This is the commutator argument for the Maxwell operator. After summing the Littlewood-Paley blocks, we obtain (5.6).

**Lemma 5.3**.: _Let \(\lambda,\nu\in 2^{\mathbb{N}_{0}}\), \(\lambda\ll\nu\). Let \(2\leq p,q<\infty\). The estimate_ \[\|S_{\nu}^{\prime}S_{\lambda}^{\tau}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\lesssim\nu^{\gamma-\frac{1}{2}}\|S_{\lambda}^{\tau}\mathcal{P}_{<\nu}S_{\nu}^{\prime}u\|_{L_{x}^{2}}+\nu^{\gamma-1+\frac{1}{p}}(\|\rho_{e\nu}^{\prime}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\|\rho_{m\nu}^{\prime}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}) \tag{5.7}\] _with_ \[\rho_{e\nu}^{\prime}=\nabla\cdot(\varepsilon_{<\nu}^{\prime}S_{\nu}^{\prime}\mathcal{E}),\qquad\rho_{m\nu}^{\prime}=\nabla\cdot(\mu_{<\nu}^{\prime}S_{\nu}^{\prime}\mathcal{H})\] _implies_ \[\|S_{\{|\tau|\ll|\xi^{\prime}|\}}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\lesssim\|\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L_{x}^{2}}+\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L_{x}^{2}}+\|\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}+\delta}\mathcal{P}u\|_{L_{x}^{2}}\] \[\quad+\|\rho_{e}\|_{L_{t}^{\infty}H^{\gamma-1+\frac{1}{p}+\delta}}+\|\rho_{m}\|_{L_{t}^{\infty}H^{\gamma-1+\frac{1}{p}+\delta}} \tag{5.8}\] _for \(\delta>0\)._

Proof.: We have to carry out the summation \[\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\|S_{\lambda}^{\tau}\mathcal{P}_{<\nu}S_{\nu}^{\prime}u\|_{L_{x}^{2}}+\nu^{\gamma-1+\frac{1}{p}}(\|\rho_{e\nu}^{\prime}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\|\rho_{m\nu}^{
\prime}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}).\] For the Maxwell operator, we use that \(\gamma-\frac{1}{2}<1\). First, we note that \[S_{\lambda}^{\tau}\mathcal{P}_{<\nu}S_{\nu}^{\prime}u=\tilde{S}_{\nu}^{\prime}S_{\lambda}^{\tau}\mathcal{P}_{<\nu}S_{\nu}^{\prime}u.\] Above and in the following \(\tilde{S}_{\nu}^{\prime}\) denotes a mildly enlarged spatial frequency projector. By \(\mathcal{P}=\mathcal{P}_{<\nu}+\mathcal{P}_{\sim\nu}+\mathcal{P}_{\gg\nu}\) and \(\tilde{S}_{\nu}^{\prime}\mathcal{P}_{\gg\nu}S_{\nu}^{\prime}=0\), we can write \[\|S_{\lambda}^{\tau}\mathcal{P}_{<\nu}S_{\nu}^{\prime}u\|_{L_{x}^{2}}\leqslant\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}\mathcal{P}S_{\nu}^{\prime}u\|_{L_{x}^{2}}+\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}\mathcal{P}_{\sim\nu}S_{\nu}^{\prime}u\|_{L_{x}^{2}}.\] The latter term is clearly estimated by \[\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}\mathcal{P}_{\sim\nu}S_{\nu}^{\prime}u\|_{L_{x}^{2}}\lesssim\|S_{\nu}^{\prime}u\|_{L_{x}^{2}}.\] For the first term, we write \[\begin{split}\nu^{\gamma-\frac{1}{2}}\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}\mathcal{P}S_{\nu}^{\prime}u\|_{L_{x}^{2}}\lesssim&\nu^{\gamma-\frac{1}{2}}\lambda\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]u\|_{L_{x}^{2}}+\nu^{\gamma-\frac{1}{2}}\lambda\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\mu^{\prime},S_{\nu}^{\prime}]u\|_{L_{x}^{2}}\\ &+\|\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}}S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}\mathcal{P}u\|_{L_{x}^{2}}.\end{split} \tag{5.9}\] Furthermore, \[\begin{split}\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]u\|_{L_{x}^{2}}&=\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]\tilde{S}_{\nu}^{\prime}u\|_{L_{x}^{2}}+\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]S_{\ll\nu}^{\prime}u\|_{L_{x}^{2}}\\
&+\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]S_{\gg\nu}^{\prime}u\|_{L_{x}^{2}}.\end{split} \tag{5.10}\] The estimate of the first term in (5.10) is straightforward by the fixed-time commutator estimate \(\|[\varepsilon^{\prime},S_{\nu}^{\prime}]\|_{L_{x}^{2}\to L_{x}^{2}}\lesssim\nu^{-1}\): \[\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]\tilde{S}_{\nu}^{\prime}u\|_{L_{x}^{2}}\lesssim\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\nu^{-1}\|\tilde{S}_{\nu}^{\prime}u\|_{L_{x}^{2}}\lesssim\|\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L_{x}^{2}}.\] For the second term in (5.10) we note that \[\begin{split}\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]S_{\ll\nu}^{\prime}u\|_{L_{x}^{2}}&\lesssim\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\|\varepsilon_{\sim\nu}^{\prime}S_{\lambda}^{\tau}S_{\ll\nu}^{\prime}u\|_{L_{x}^{2}}\\ &\lesssim\|\partial\varepsilon\|_{L^{\infty}}\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L_{x}^{2}}.\end{split}\] For the third term in (5.10) we obtain similarly \[\begin{split}\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\|S_{\lambda}^{\tau}\tilde{S}_{\nu}^{\prime}[\varepsilon^{\prime},S_{\nu}^{\prime}]S_{\gg\nu}^{\prime}u\|_{L_{x}^{2}}&\lesssim\sum_{\begin{subarray}{c}\nu\gg 1,\\ 1\lesssim\lambda\ll\nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\|S_{\lambda}^{\tau}\varepsilon_{\gg\nu}^{\prime}S_{\gg\nu}^{\prime}u\|_{L_{x}^{2}}\\ &\lesssim\|\partial\varepsilon\|_{L^{\infty}}\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L_{x}^{2}}.\end{split}\] Clearly, the second commutator in (5.9) can be handled likewise. We turn to the charges: Recall that \(\rho_{e}=\nabla\cdot(\varepsilon^{\prime}\mathcal{E})\) with \(\varepsilon^{\prime}=\sqrt{g}g^{-1}\varepsilon\). Since we are working in geodesic normal coordinates, we have \[\varepsilon^{\prime}=\begin{pmatrix}\varepsilon^{\prime}_{11}&\varepsilon^{\prime}_{12}&0\\ \varepsilon^{\prime}_{21}&\varepsilon^{\prime}_{22}&0\\ 0&0&\varepsilon^{\prime}_{33}\end{pmatrix}.\] To carry out the commutator argument, we separate \[\begin{split}\rho^{\prime}_{e\nu}&=\partial_{1}(\varepsilon^{\prime 11}_{<\nu}S^{\prime}_{\nu}\mathcal{E}_{1}+\varepsilon^{\prime 12}_{<\nu}S^{\prime}_{\nu}\mathcal{E}_{2})+\partial_{2}(\varepsilon^{\prime 21}_{<\nu}S^{\prime}_{\nu}\mathcal{E}_{1}+\varepsilon^{\prime 22}_{<\nu}S^{\prime}_{\nu}\mathcal{E}_{2})+\partial_{3}(\varepsilon^{\prime 33}_{<\nu}S^{\prime}_{\nu}\mathcal{E}_{3})\\ &=(\partial_{1}\varepsilon^{\prime 11}_{<\nu})S^{\prime}_{\nu}\mathcal{E}_{1}+(\partial_{1}\varepsilon^{\prime 12}_{<\nu})S^{\prime}_{\nu}\mathcal{E}_{2}+(\partial_{2}\varepsilon^{\prime 21}_{<\nu})S^{\prime}_{\nu}\mathcal{E}_{1}+(\partial_{2}\varepsilon^{\prime 22}_{<\nu})S^{\prime}_{\nu}\mathcal{E}_{2}+(\partial_{3}\varepsilon^{\prime 33}_{<\nu})S^{\prime}_{\nu}\mathcal{E}_{3}\\ &\quad+\varepsilon^{\prime 11}_{<\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}+\varepsilon^{\prime 12}_{<\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{2}+\varepsilon^{\prime 21}_{<\nu}\partial_{2}S^{\prime}_{\nu}\mathcal{E}_{1}+\varepsilon^{\prime 22}_{<\nu}\partial_{2}S^{\prime}_{\nu}\mathcal{E}_{2}+\varepsilon^{\prime 33}_{<\nu}\partial_{3}S^{\prime}_{\nu}\mathcal{E}_{3}\\ &=:\rho^{\prime(1)}_{e\nu}+\rho^{\prime(2)}_{e\nu}.\end{split}\] We can estimate terms with derivative acting on \(\varepsilon^{\prime}\) collected in \(\rho^{\prime(1)}_{e\nu}\) directly by Lipschitz continuity.
For example, \[\nu^{\gamma-1+\frac{1}{p}}\|\partial_{1}\varepsilon^{\prime}_{<\nu}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}\lesssim\nu^{\gamma-\frac{1}{2}-\delta}\|S^{\prime}_{\nu}\mathcal{E}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}.\] The terms with derivative acting on \(\mathcal{E}\) collected in \(\rho^{\prime(2)}_{e\nu}\) are amenable to a commutator argument. Note that \[\nu^{\gamma-1+\frac{1}{p}}\|\varepsilon^{\prime 11}_{<\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}=\nu^{\gamma-1+\frac{1}{p}}\|\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}_{<\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}.\] Since \(\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}_{>\nu}S^{\prime}_{\nu}=0\), we can write \[\nu^{\gamma-1+\frac{1}{p}}\|\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}_{<\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}\leq\nu^{\gamma-1+\frac{1}{p}}\|\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}_{>\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}+\nu^{\gamma-1+\frac{1}{p}}\|\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}.\] The first expression is estimated by \[\nu^{\gamma-1+\frac{1}{p}}\|\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}_{>\nu}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}\lesssim\|\varepsilon^{\prime 11}\|_{L^{\infty}_{x}}\nu^{\gamma-1+\frac{1}{p}}\|S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}},\] which is more than enough.
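The fixed-time commutator bound \(\|[\varepsilon^{\prime},S_{\nu}^{\prime}]\|_{L^{2}\to L^{2}}\lesssim\nu^{-1}\) for Lipschitz coefficients, which drives the estimates above, can be illustrated numerically. The sketch below is a one-dimensional periodic toy model of my own (not the setting of the theorem): it builds a Littlewood-Paley multiplier with a tent-shaped symbol at frequency \(\nu\), the multiplication operator by the Lipschitz function \(a(x)=\sin x\), and measures the spectral norm of their commutator.

```python
import numpy as np

def lp_projector(N, nu):
    # Littlewood-Paley block: tent-shaped Fourier multiplier supported at |k| ~ nu
    k = np.fft.fftfreq(N, d=1.0 / N)  # integer frequencies
    chi = np.maximum(0.0, 1.0 - np.abs(np.log2(np.maximum(np.abs(k), 1e-12) / nu)))
    chi[k == 0] = 0.0
    # assemble the matrix of the Fourier multiplier acting on grid functions
    return np.real(np.fft.ifft(chi[:, None] * np.fft.fft(np.eye(N), axis=0), axis=0))

def commutator_norm(N, nu):
    x = 2 * np.pi * np.arange(N) / N
    M = np.diag(np.sin(x))                   # multiplication by a Lipschitz function
    S = lp_projector(N, nu)
    return np.linalg.norm(M @ S - S @ M, 2)  # spectral norm of [a, S_nu]

N = 256
norms = {nu: commutator_norm(N, nu) for nu in (8, 16, 32)}
```

Up to constants, `nu * norms[nu]` stays bounded while `norms[nu]` itself decays as the frequency doubles, consistent with the \(\nu^{-1}\) operator-norm decay.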
For \(\gamma-1+\frac{1}{p}>0\), we obtain by the Coifman-Meyer estimate for the second term: \[\begin{split}\sum_{\nu\geq 1}\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}}\tilde{S}^{\prime}_{\nu}\varepsilon^{\prime 11}\partial_{1}S^{\prime}_{\nu}\mathcal{E}_{1}\|_{L^{2}_{x}}&\lesssim\sum_{\nu\geq 1}\big{(}\nu^{-\delta}\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\tilde{S}^{\prime}_{\nu}[\varepsilon^{\prime 11},S^{\prime}_{\nu}]\partial_{1}\mathcal{E}_{1}\|_{L^{2}_{x}}\\ &\quad+\nu^{-\delta}\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\tilde{S}^{\prime}_{\nu}(\varepsilon^{\prime 11}\partial_{1}\mathcal{E}_{1})\|_{L^{2}_{x}}\big{)}\\ &\lesssim\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\mathcal{E}\|_{L^{2}_{x}}+\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}(\varepsilon^{\prime 11}\partial_{1}\mathcal{E}_{1})\|_{L^{2}_{x}}.\end{split}\] Let \[\rho^{\prime(2)}_{e}=\varepsilon^{\prime 11}\partial_{1}\mathcal{E}_{1}+\varepsilon^{\prime 12}\partial_{1}\mathcal{E}_{2}+\varepsilon^{\prime 21}\partial_{2}\mathcal{E}_{1}+\varepsilon^{\prime 22}\partial_{2}\mathcal{E}_{2}+\varepsilon^{\prime 33}\partial_{3}\mathcal{E}_{3}.\] We obtain by the previous arguments: \[\sum_{\nu}\nu^{\gamma-1+\frac{1}{p}}\|\rho^{\prime}_{e\nu}\|_{L^{\infty}_{t}L^{2}_{x}}\lesssim\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\mathcal{E}\|_{L^{\infty}_{t}L^{2}_{x}}+\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\rho^{\prime(2)}_{e}\|_{L^{\infty}_{t}L^{2}_{x}}.\] The first term is acceptable.
We estimate the second term by oddness of the function \(\rho^{\prime(2)}_{e}\) switching to the half-space: \[\begin{split}\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\rho^{\prime(2)}_{e}\|_{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{3})}&\lesssim\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\rho^{\prime(2)}_{e}\|_{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{3}_{+})}\\ &\lesssim\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\rho^{\prime}_{e}\|_{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{3}_{+})}+\|\langle D^{\prime}\rangle^{\gamma-1+\frac{1}{p}+\delta}\mathcal{E}\|_{L^{\infty}_{t}L^{2}_{x}(\mathbb{R}^{3}_{+})}.\end{split}\] For the ultimate estimate we used smoothness of the coefficients and invariance of Sobolev functions under multiplication with smooth functions. We remark that the estimate is easier for \(\gamma-1+\frac{1}{p}<0\) because it is not necessary to switch between half-space and full space. The estimate for \(\rho^{\prime}_{m\nu}\) follows along the above lines. After summation of the Littlewood-Paley blocks, we obtain (5.8).

We turn to the proof of (5.3), which does not make use of the diagonalization of \(\mathcal{P}\).

Proof of (5.3).: Let \(1\ll\mu\ll\lambda\) and \[\tilde{\mathcal{P}}=\begin{pmatrix}\partial_{t}&\varepsilon^{\prime-1}\nabla\times\\ -\mu^{\prime-1}\nabla\times&\partial_{t}\end{pmatrix}.\] On \(\{\lambda\sim|\tau|\gg|\xi^{\prime}|\sim\mu\}\), the operator \(\tilde{\mathcal{P}}_{<\mu}\) (obtained by frequency truncation of \(\varepsilon^{\prime-1}\) and \(\mu^{\prime-1}\)) is elliptic and gains one derivative.
We estimate by Bernstein's inequality and ellipticity of \(\tilde{\mathcal{P}}_{<\mu}\) (note that \(\tilde{\mathcal{P}}_{<\mu}\) has Lipschitz coefficients): \[\|S^{\tau}_{\lambda}S^{\prime}_{\mu}u\|_{L^{p}L^{q}} \lesssim\lambda^{\frac{1}{2}-\frac{1}{p}}\mu^{3\left(\frac{1}{2}- \frac{1}{q}\right)}\|S^{\tau}_{\lambda}S^{\prime}_{\mu}u\|_{L^{2}_{t,x}}\] \[\lesssim\lambda^{-\frac{1}{2}-\frac{1}{p}}\mu^{3\left(\frac{1}{2 }-\frac{1}{q}\right)}\|\tilde{\mathcal{P}}_{<\mu}S^{\tau}_{\lambda}S^{\prime} _{\mu}u\|_{L^{2}_{t,x}}\] \[\lesssim\lambda^{-\frac{1}{2}-\frac{1}{p}}\mu^{\frac{1}{p}+\frac {1}{2}}\|S^{\tau}_{\lambda}\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}} \tilde{S}^{\prime}_{\mu}\tilde{\mathcal{P}}_{<\mu}S^{\prime}_{\mu}u\|_{L^{2}_ {t,x}}.\] Above and in the following \(\tilde{S}^{\prime}_{\mu}\) denotes a mildly enlarged frequency projection around frequencies of size \(\mu\). Now we write again \(\tilde{\mathcal{P}}=\tilde{\mathcal{P}}_{<\mu}+\tilde{\mathcal{P}}_{\sim\mu}+ \tilde{\mathcal{P}}_{\gg\mu}\) and note that \[\|S^{\tau}_{\lambda}\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}}\tilde{S}^{ \prime}_{\mu}\tilde{\mathcal{P}}_{\sim\mu}S^{\prime}_{\mu}u\|_{L^{2}_{t,x}} \lesssim\mu^{\gamma-\frac{1}{2}}\|S^{\prime}_{\mu}u\|_{L^{2}_{t,x}}\lesssim \|S^{\tau}_{\lambda}\langle D^{\prime}\rangle^{\gamma-\frac{1}{2}}S^{\prime} _{\mu}u\|_{L^{2}_{t,x}}.\] Like above, \(\tilde{S}^{\prime}_{\mu}\tilde{\mathcal{P}}_{\gg\mu}S^{\prime}_{\mu}=0\) by impossible frequency interaction. 
Summation over \(\mu\) and \(\lambda\) gives the acceptable contribution \[\lesssim\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L^{2}_{ t,x}}.\] For \(\tilde{\mathcal{P}}\) we use the estimate \[\|[\kappa^{\prime},S^{\prime}_{\mu}]\|_{L^{2}_{x^{\prime}}\to L^{2}_{x^{ \prime}}}\lesssim\mu^{-1}.\] We have \[\mu^{\gamma-\frac{1}{2}}\|S^{\tau}_{\lambda}\tilde{S}^{\prime}_{ \mu}[\kappa^{\prime},S^{\prime}_{\mu}]\nabla\times A\|_{L^{2}_{t,x}}\] \[\lesssim\mu^{\gamma-\frac{1}{2}}\|S^{\tau}_{\lambda}\tilde{S}^{ \prime}_{\mu}[\kappa^{\prime},S^{\prime}_{\mu}]S^{\prime}_{\lesssim\mu} \nabla\times A\|_{L^{2}_{t,x}}+\mu^{\gamma-\frac{1}{2}}\|S^{\tau}_{\lambda} \tilde{S}^{\prime}_{\mu}[\kappa^{\prime},S^{\prime}_{\mu}]S^{\prime}_{\gg\mu} \nabla\times A\|_{L^{2}_{t,x}}\] \[\lesssim\mu^{\gamma-\frac{1}{2}}\|S^{\tau}_{\lambda}A\|_{L^{2}_{ t,x}}+\mu^{\gamma-\frac{1}{2}}\|S^{\tau}_{\lambda}\tilde{S}^{\prime}_{\mu}( \kappa^{\prime}_{\gg\mu}S^{\prime}_{\gg\mu}\nabla\times A)\|_{L^{2}_{t,x}}.\] The first term is already acceptable. The second term is rewritten as \[\tilde{S}^{\prime}_{\mu}(\kappa^{\prime}_{\gg\mu}S^{\prime}_{\gg\mu}\partial S ^{\tau}_{\lambda}A)=\tilde{S}^{\prime}_{\mu}\partial(\kappa^{\prime}_{\gg\mu} S^{\prime}_{\gg\mu}S^{\tau}_{\lambda}A)-\tilde{S}^{\prime}_{\mu}(\partial \kappa^{\prime}_{\gg\mu}S^{\prime}_{\gg\mu}S^{\tau}_{\lambda}A).\] For the first term we find \[\|\tilde{S}^{\prime}_{\mu}\partial(\kappa^{\prime}_{\gg\mu}S^{\prime}_{\gg\mu} S^{\tau}_{\lambda}A)\|_{L^{2}_{t,x}}\lesssim\mu\|\kappa^{\prime}_{\gg\mu}\|_{L^{ \infty}_{x^{\prime}}}\|S^{\prime}_{\gg\mu}S^{\tau}_{\lambda}A\|_{L^{2}_{t,x^{ \prime}}}\lesssim\|\partial\kappa^{\prime}\|_{L^{\infty}_{x^{\prime}}}\|S^{ \tau}_{\lambda}A\|_{L^{2}_{t,x}}.\] This yields an acceptable contribution after summation over \(\mu\ll\lambda\) and \(\lambda\). 
Clearly, \[\|\tilde{S}^{\prime}_{\mu}(\partial\kappa^{\prime}_{\gg\mu}S^{\prime}_{\gg\mu} S^{\tau}_{\lambda}A)\|_{L^{2}_{t,x}}\lesssim\|\partial\kappa^{\prime}\|_{L^{ \infty}}\|S^{\tau}_{\lambda}A\|_{L^{2}_{t,x}}.\] This is likewise acceptable. We summarize \[\|S_{\{|\tau|\gg|\xi^{\prime}|\lesssim 1\}}u\|_{L^{p}L^{q}}\lesssim\|\langle \partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}u\|_{L^{2}_{t,x}}+\|\langle \partial_{t}\rangle^{\gamma-\frac{1}{2}+\delta}\mathcal{P}u\|_{L^{2}_{t,x}}. \tag{5.11}\] This completes the proof. With the estimates for different regions in phase space at hand, we can finish the proof of Proposition 2.4. Conclusion of the Proof of Proposition 2.4.: Taking (5.2)-(5.4) together, we find \[\|u\|_{L^{p}L^{q}}\lesssim \|\langle D^{\prime}\rangle^{\gamma}u\|_{L^{\infty}_{t}L^{2}_{x}}+ \|\langle\partial_{t}\rangle^{\gamma+\delta}u\|_{L^{\infty}_{t}L^{2}_{x}}\] \[+\|\langle\partial_{t}\rangle^{\gamma+\delta}\mathcal{P}u\|_{L^{ 2}_{t,x}}+\|\langle D^{\prime}\rangle^{\gamma+\delta}\mathcal{P}u\|_{L^{2}_{t,x}}\] \[+\|\rho_{e}\|_{L^{\infty}_{t}H^{\gamma-1+\frac{1}{p}+\delta}}.\] By applying the estimate to homogeneous solutions, we obtain \[\|u\|_{L^{p}L^{q}}\lesssim\|\langle D^{\prime}\rangle^{\gamma}u\|_{L^{\infty}_ {t}L^{2}_{x}}+\|\langle\partial_{t}\rangle^{\gamma+\delta}u\|_{L^{2}_{t,x}}+ \|\rho_{e}\|_{L^{\infty}_{t}H^{\gamma-1+\frac{1}{p}+\varepsilon}}.\] For homogeneous solutions, we can trade the time derivatives for spatial derivatives and by the energy estimates of Section 3, we obtain \[\|u\|_{L^{p}L^{q}}\lesssim\|\langle D^{\prime}\rangle^{\gamma+\delta}u(0)\|_{ L^{2}_{x}}+\|\rho_{e}\|_{L^{\infty}_{t}H^{\gamma-1+\frac{1}{p}+\delta}}.\] The conclusion follows from Duhamel's formula. The proofs of (5.2) and (5.4) make use of the diagonalization of \(\mathcal{P}_{<\lambda}\) via pseudo-differential operators. This is carried out in the following. 
Let \(h=\big{(}\det(g_{ij})\big{)}^{1/2}\) and denote \(C(\xi^{\prime})_{ij}=-\varepsilon_{ijk}\xi^{\prime}_{k}\). The principal symbol (with rough coefficients) is given by \[p(x,\xi)=i\begin{pmatrix}\xi_{0}hg^{-1}\varepsilon&-C(\xi^{\prime})\\ C(\xi^{\prime})&hg^{-1}\mu\xi_{0}\end{pmatrix}.\] We consider as truncated operator \(\mathcal{P}_{\lambda}\) the following: Let \(g^{-1}=AA^{t}\) denote the factorization into Jacobians (which we also extend such that these are Lipschitz along the boundary). Let \(A_{<\lambda}\) denote the truncation of spatial frequencies of \(A\) to frequencies less than \(\lambda/8\). Let \(h_{<\lambda}=\det(A_{<\lambda})\). We define \[\mathcal{P}_{<\lambda}=\begin{pmatrix}h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda }\varepsilon_{<\lambda}\partial_{t}&-\nabla\times\\ \nabla\times&h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\mu_{<\lambda}\partial_{ t}\end{pmatrix}. \tag{5.12}\] Observe that \(\|(\mathcal{P}-\mathcal{P}_{<\lambda})S_{\lambda}u\|_{L^{2}}\lesssim\|S_{ \lambda}u\|_{L^{2}}\). Note that in \(\rho^{\prime}_{e}\) we can truncate \(h\), \(A\), \(A^{t}\), and \(\varepsilon\) in frequencies because we can write the difference as a telescoping sum \[\begin{split}&\|S_{\lambda}(\nabla\cdot(hAA^{t}\varepsilon \mathcal{E}))-S_{\lambda}\nabla\cdot(h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda} \varepsilon_{<\lambda}\mathcal{E})\|_{L^{2}}\\ &=\|S_{\lambda}\nabla\cdot(h_{>\lambda}AA^{t}\varepsilon\mathcal{E }+hA_{>\lambda}A^{t}\varepsilon\mathcal{E}+\ldots)\|_{L^{2}}.\end{split} \tag{5.13}\] For instance, \[\|S_{\lambda}\nabla\cdot(h_{>\lambda}AA^{t}\varepsilon\mathcal{E})\|_{L^{2}} \lesssim\lambda\|h_{>\lambda}\|_{L^{\infty}}\|A\|_{L^{\infty}}\|A^{t}\|_{L^{ \infty}}\|\varepsilon\|_{L^{\infty}}\|\mathcal{E}\|_{L^{2}}. \tag{5.14}\] After these reductions, we are dealing with symbols in \(S^{1}_{1,1}\), which is a borderline case for symbol composition. 
But the considered symbols \(a\in S^{i}_{1,1}\) actually satisfy \[|\partial_{x}a|\lesssim 1 \tag{5.15}\] because the reflected Jacobians and coefficients are Lipschitz. This suffices for symbol composition to hold to first order. Accordingly, we make the following definition:

**Definition 5.4**.: Let \(k\in\mathbb{N}_{0}\). We define the symbol class \[\tilde{S}^{k}_{1,1}=\{a\in C^{\infty}(\mathbb{R}^{d}\times\mathbb{R}^{d})\,:\,|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)|\lesssim\langle\xi\rangle^{k-|\beta|+(|\alpha|-1)_{+}}\}.\] We have the following:

**Lemma 5.5**.: _Let \(m,n\in\mathbb{R}\), \(a\in\tilde{S}^{m}_{1,1}\), \(b\in\tilde{S}^{n}_{1,1}\). Then, we find the following estimate to hold:_ \[a(x,D)\circ b(x,D)=(ab)(x,D)+E\] _with \(\|E\|_{H^{s+m+n-1}(\mathbb{R}^{d})\to H^{s}(\mathbb{R}^{d})}\lesssim 1\)._

### Diagonalizing the principal symbol

In the following we carry out the formal computation to find suitable conjugation matrices for the operator \(\mathcal{P}_{\lambda}\). The aim is to prove the following proposition:

**Proposition 5.6**.: _Let \(2^{\mathbb{N}}\ni\lambda\gg\lambda_{0}\). 
There is a decomposition of phase space by projections_ \[S^{\prime}_{\lambda}S_{\lambda}=S_{\lambda 1}+S_{\lambda 2}+S_{\lambda 3}\] _such that for every \(i\in\{1,2,3\}\) there are \(\mathcal{M}^{i}_{\lambda}\in OP\tilde{S}^{0}_{1,1}\), \(\mathcal{N}^{i}_{\lambda}\in OP\tilde{S}^{0}_{1,1}\), and \(\mathcal{D}^{i}_{\lambda}\in OP\tilde{S}^{1}_{1,1}\) such that_ \[\mathcal{P}_{\lambda}S_{\lambda i}=\mathcal{M}^{i}_{\lambda}\mathcal{D}^{i}_{ \lambda}\mathcal{N}^{i}_{\lambda}S_{\lambda i}+E^{i}_{\lambda}\] _with \(\|E^{i}_{\lambda}\|_{2\to 2}\lesssim 1\) with implicit constant independent of \(\lambda\)._ Before we turn to the technical details, we carry out a formal diagonalization of \[p(x,\xi)=i\begin{pmatrix}h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\varepsilon_{ <\lambda}\xi_{0}&-C(\xi^{\prime})\\ C(\xi^{\prime})&h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\mu_{<\lambda}\xi_{0} \end{pmatrix}.\] The symbol is in \(\tilde{S}^{1}_{1,1}\). We diagonalize the principal symbol as follows: \[p(x,\xi)\pi(x,\xi)=m(x,\xi)d(x,\xi)n(x,\xi)\pi(x,\xi)\] with \(m,n\in\tilde{S}^{0}_{1,1}\) and \(d\in\tilde{S}^{1}_{1,1}\), and \(\pi\in\tilde{S}^{0}_{1,1}\) denoting a projection to a region in phase space to be determined. In the first step, we write \[\begin{pmatrix}h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\varepsilon_{ <\lambda}\xi_{0}&-C(\xi^{\prime})\\ C(\xi^{\prime})&h_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\mu_{<\lambda}\xi_{0} \end{pmatrix}\] \[=\begin{pmatrix}A_{<\lambda}&0\\ 0&A_{<\lambda}\end{pmatrix}\begin{pmatrix}h_{<\lambda}\varepsilon_{<\lambda} \xi_{0}&-A^{-1}_{<\lambda}C(\xi^{\prime})(A^{t}_{<\lambda})^{-1}\\ A^{-1}_{<\lambda}C(\xi^{\prime})(A^{t}_{<\lambda})^{-1}&h_{<\lambda}\mu_{< \lambda}\xi_{0}\end{pmatrix}\begin{pmatrix}A^{t}_{<\lambda}&0\\ 0&A^{t}_{<\lambda}\end{pmatrix}.\] We recall the following: **Lemma 5.7**.: _Let \(B\in\mathbb{C}^{3\times 3}\). The following identity holds:_ \[B^{t}C(\xi^{\prime})B=C(\text{adB}\cdot\xi^{\prime}). 
\tag{5.16}\] _In the above display \(adB\) denotes the adjugate matrix, i.e.,_ \[\text{adA}=((-1)^{i+j}A_{ji})_{i,j}\] _with \(A_{ji}\) denoting the \((j,i)\)-minor of \(A\)._ This yields by the definition of the adjugate matrix, \(h_{<\lambda}\), and using Cramer's rule \[A^{-1}_{<\lambda}C(\xi^{\prime})(A^{t}_{<\lambda})^{-1}=C(h_{<\lambda}A^{t}_{< \lambda}\xi^{\prime}).\] We write \[\begin{pmatrix}h_{<\lambda}\varepsilon_{<\lambda}\xi_{0}&-A_{<\lambda}^ {-1}C(\xi^{\prime})(A_{<\lambda}^{t})^{-1}\\ A_{<\lambda}^{-1}C(\xi^{\prime})(A_{<\lambda}^{t})^{-1}&h_{<\lambda}\mu_{< \lambda}\xi_{0}\end{pmatrix}\] \[=\begin{pmatrix}\varepsilon_{<\lambda}\xi_{0}&-C(A_{<\lambda}^{t} \xi^{\prime})\\ C(A_{<\lambda}^{t}\xi^{\prime})&\mu_{<\lambda}\xi_{0}\end{pmatrix}\begin{pmatrix} h_{<\lambda}&0\\ 0&h_{<\lambda}\end{pmatrix}\] \[=\begin{pmatrix}\varepsilon_{<\lambda}^{\frac{1}{2}}&0\\ 0&\mu_{<\lambda}^{\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\xi_{0}&-C\big{(} \frac{A_{<\lambda}^{t}\xi^{\prime}}{(\varepsilon_{<\lambda}\mu_{<\lambda})^{ \frac{1}{2}}}\big{)}\\ C\big{(}\frac{A_{<\lambda}^{t}\xi^{\prime}}{(\varepsilon_{<\lambda}\mu_{< \lambda})^{\frac{1}{2}}}\big{)}&\xi_{0}\end{pmatrix}\begin{pmatrix} \varepsilon_{<\lambda}^{\frac{1}{2}}&0\\ 0&\mu_{<\lambda}^{\frac{1}{2}}\end{pmatrix}\begin{pmatrix}h_{<\lambda}&0\\ 0&h_{<\lambda}\end{pmatrix}.\] Hence, we have reduced to diagonalizing \[B=\begin{pmatrix}\xi_{0}&-C(\tilde{\xi}^{\prime})\\ C(\tilde{\xi}^{\prime})&\xi_{0}\end{pmatrix}. \tag{5.17}\] This reflects invariance of pseudo-differential operators under change of coordinates. Since the symbols are very rough, we prefer to carry out the computation directly. In [17, 18] the symbol was diagonalized in the more difficult case of partially anisotropic \(\varepsilon\), i.e., \(\varepsilon\) having possibly two different eigenvalues. In this case, the resulting expressions are fairly complicated. 
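Lemma 5.7 is a pure linear-algebra identity and is easy to sanity-check numerically. The sketch below is my own verification script; it assumes the convention \(C(\xi^{\prime})v=\xi^{\prime}\times v\) (an overall sign change of \(C\) leaves the identity unchanged) and computes the adjugate as \(\operatorname{ad}B=\det(B)\,B^{-1}\) for invertible \(B\).

```python
import numpy as np

def cross_matrix(xi):
    # C(xi) v = xi x v
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

def adjugate(B):
    # classical adjugate: ad(B) = det(B) * B^{-1} for invertible B
    return np.linalg.det(B) * np.linalg.inv(B)

rng = np.random.default_rng(0)
for _ in range(100):
    B = rng.standard_normal((3, 3))
    xi = rng.standard_normal(3)
    lhs = B.T @ cross_matrix(xi) @ B
    rhs = cross_matrix(adjugate(B) @ xi)
    assert np.allclose(lhs, rhs)  # B^t C(xi) B = C(ad(B) xi)
```

The identity follows from \(\langle Bu,\xi\times Bv\rangle=\det[Bu,\xi,Bv]=\det(B)\det[u,B^{-1}\xi,v]=\langle u,(\operatorname{ad}B\,\xi)\times v\rangle\), which the random trials above confirm.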
We take the opportunity to point out a simplification for isotropic \(\varepsilon\) and \(\mu\). Write \(\xi^{\prime}=(\xi_{1},\xi_{2},\xi_{3})\). We begin with computing the characteristic polynomial of \(p/i\) using the block matrix structure: \[q(y)=\begin{vmatrix}y-\xi_{0}&C(\xi^{\prime})\\ -C(\xi^{\prime})&y-\xi_{0}\end{vmatrix}=\big{|}(y-\xi_{0})^{2}1_{3\times 3}+C^{2}(\xi^{\prime})\big{|}.\] Hence, we have reduced to computing the eigenvalues of \(C^{2}(\xi^{\prime})\). Note that \[C^{2}(\xi^{\prime})=\begin{pmatrix}\xi_{2}^{2}+\xi_{3}^{2}&-\xi_{1}\xi_{2}&-\xi_{1}\xi_{3}\\ -\xi_{1}\xi_{2}&\xi_{1}^{2}+\xi_{3}^{2}&-\xi_{2}\xi_{3}\\ -\xi_{1}\xi_{3}&-\xi_{2}\xi_{3}&\xi_{1}^{2}+\xi_{2}^{2}\end{pmatrix}=|\xi^{\prime}|^{2}1_{3\times 3}-\xi\otimes\xi.\] It follows that \[r(\lambda,\xi^{\prime})=\det(\lambda 1_{3\times 3}-C^{2}(\xi^{\prime}))=(\lambda-\|\xi\|^{2})^{2}\lambda.\] This gives for the characteristic polynomial \(q\) \[q(\lambda)=(\lambda-\xi_{0})^{2}(\lambda-(\xi_{0}-\|\xi^{\prime}\|))^{2}(\lambda-(\xi_{0}+\|\xi^{\prime}\|))^{2}.\] We conclude that the diagonalization is given by \[d(x,\xi)=i(\xi_{0},\xi_{0},\xi_{0}-\|\xi^{\prime}\|,\xi_{0}+\|\xi^{\prime}\|,\xi_{0}-\|\xi^{\prime}\|,\xi_{0}+\|\xi^{\prime}\|). \tag{5.18}\] In the following let \(\xi_{i}^{*}=\frac{\xi_{i}}{\|\xi^{\prime}\|}\) for \(i=1,2,3\). Eigenvectors of \(\xi_{0}\) are clearly given by \[\begin{pmatrix}\xi_{1}^{*}\\ \xi_{2}^{*}\\ \xi_{3}^{*}\\ 0\\ 0\\ 0\end{pmatrix},\qquad\begin{pmatrix}0\\ 0\\ 0\\ \xi_{1}^{*}\\ \xi_{2}^{*}\\ \xi_{3}^{*}\end{pmatrix}.\] **Eigenvectors of \(\xi_{0}-\|\xi^{\prime}\|\)**: We use the block matrix structure of \(p(x,\xi)\). Let \(v=(v_{1},v_{2})^{t}\) denote an eigenvector.
We find the system of equations: \[\begin{pmatrix}\|\xi^{\prime}\|&C(\xi^{\prime})\\ -C(\xi^{\prime})&\|\xi^{\prime}\|\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=0.\] Iterating the above in the non-trivial case \(\xi^{\prime}\neq 0\) yields the eigenvector equation for \(v_{1}\): \[\|\xi^{\prime}\|^{2}v_{1}+C^{2}(\xi^{\prime})v_{1}=0.\] For this we find the zero-homogeneous eigenvectors: \[\begin{pmatrix}0\\ -\xi_{3}^{*}\\ \xi_{2}^{*}\end{pmatrix},\quad\begin{pmatrix}\xi_{3}^{*}\\ 0\\ -\xi_{1}^{*}\end{pmatrix},\quad\begin{pmatrix}-\xi_{2}^{*}\\ \xi_{1}^{*}\\ 0\end{pmatrix}. \tag{5.19}\] The system of equations from above yields \[v_{2}=\frac{C(\xi^{\prime})}{\|\xi^{\prime}\|}v_{1}.\] This gives for \(v_{2}\): \[\begin{pmatrix}{\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2}\\ -\xi_{1}^{*}\xi_{2}^{*}\\ -\xi_{1}^{*}\xi_{3}^{*}\end{pmatrix},\qquad\begin{pmatrix}-\xi_{1}^{*}\xi_{2}^{*}\\ {\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2}\\ -\xi_{2}^{*}\xi_{3}^{*}\end{pmatrix},\qquad\begin{pmatrix}-\xi_{1}^{*}\xi_{3}^{*}\\ -\xi_{2}^{*}\xi_{3}^{*}\\ {\xi_{1}^{*}}^{2}+{\xi_{2}^{*}}^{2}\end{pmatrix}.\] **Eigenvectors of \(\xi_{0}+\|\xi^{\prime}\|\)**: Again, we use the block matrix structure of \(p(x,\xi)\), and let \(v=(v_{1},v_{2})^{t}\) denote an eigenvector. This yields the system of equations: \[\begin{pmatrix}-\|\xi^{\prime}\|&C(\xi^{\prime})\\ -C(\xi^{\prime})&-\|\xi^{\prime}\|\end{pmatrix}\begin{pmatrix}v_{1}\\ v_{2}\end{pmatrix}=0.\] We find again for \(v_{1}\) \[C^{2}(\xi^{\prime})v_{1}+\|\xi^{\prime}\|^{2}v_{1}=0,\] and for \(v_{2}\) \[v_{2}=-\frac{C(\xi^{\prime})v_{1}}{\|\xi^{\prime}\|}.\] **Conjugation matrices:** We choose conjugation matrices depending on a non-vanishing direction of \(\xi\). In the following suppose that \(|\xi_{3}^{*}|\gtrsim 1\).
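The spectral computation above can be cross-checked numerically. The following toy verification of mine assumes the convention \(C(\xi^{\prime})v=\xi^{\prime}\times v\); flipping the sign of \(C\) amounts to a blockwise conjugation and leaves the spectrum unchanged. It confirms that the \(6\times 6\) symbol in (5.17) has the eigenvalues \(\xi_{0}\) and \(\xi_{0}\pm\|\xi^{\prime}\|\), each with multiplicity two, as recorded in (5.18).

```python
import numpy as np

def cross_matrix(xi):
    # C(xi) v = xi x v (antisymmetric)
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

def symbol(xi0, xi):
    # B = [[xi0*I, -C], [C, xi0*I]]; real symmetric since C^t = -C
    C = cross_matrix(xi)
    I = np.eye(3)
    return np.block([[xi0 * I, -C], [C, xi0 * I]])

xi0, xi = 0.7, np.array([0.3, -0.5, 1.1])
r = np.linalg.norm(xi)
eigs = np.sort(np.linalg.eigvalsh(symbol(xi0, xi)))
expected = np.sort([xi0 - r, xi0 - r, xi0, xi0, xi0 + r, xi0 + r])
assert np.allclose(eigs, expected)
```

The block structure makes the symbol symmetric, so `eigvalsh` applies; the double multiplicity of each eigenvalue is exactly what allows the pair of eigenvectors per branch used below.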
One choice of conjugation matrices according to (5.18) is given by choosing the first eigenvector in (5.19): \[m_{3}(x,\xi)=\begin{pmatrix}\xi_{1}^{*}&0&0&0&\xi_{3}^{*}&\xi_{3}^{*}\\ \xi_{2}^{*}&0&-\xi_{3}^{*}&-\xi_{3}^{*}&0&0\\ \xi_{3}^{*}&0&\xi_{2}^{*}&\xi_{2}^{*}&-\xi_{1}^{*}&-\xi_{1}^{*}\\ 0&\xi_{1}^{*}&{\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2}&-({\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2})&-\xi_{1}^{*}\xi_{2}^{*}&\xi_{1}^{*}\xi_{2}^{*}\\ 0&\xi_{2}^{*}&-\xi_{1}^{*}\xi_{2}^{*}&\xi_{1}^{*}\xi_{2}^{*}&{\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2}&-({\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2})\\ 0&\xi_{3}^{*}&-\xi_{1}^{*}\xi_{3}^{*}&\xi_{1}^{*}\xi_{3}^{*}&-\xi_{2}^{*}\xi_{3}^{*}&\xi_{2}^{*}\xi_{3}^{*}\end{pmatrix}. \tag{5.20}\] We have the following:

**Lemma 5.8**.: _Let \(m_{3}\) be given as in (5.20). Then,_ \[\det m_{3}(x,\xi)={\xi_{3}^{*}}^{2}. \tag{5.21}\]

Proof.: By elementary column operations, that is adding and subtracting the third and fourth as well as the fifth and sixth column, and then permuting the columns, the determinant reduces by the block matrix structure to the product of two \(3\times 3\) determinants. Evaluating these with \({\xi_{1}^{*}}^{2}+{\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2}=1\) yields (5.21). This finishes the proof.

Likewise, we define \(m_{1}\) and \(m_{2}\) by choosing the non-trivial eigenvectors for \(|\xi_{i}^{*}|\gtrsim 1\), which leads to conjugation matrices with determinant \[\det m_{i}(x,\xi)={\xi_{i}^{*}}^{2}.\] We shall see that for \(|\xi_{3}^{*}|\gtrsim 1\), we can choose the eigenvectors as an orthonormal basis through linear combinations of the above.
Let \[w_{1}=\begin{pmatrix}0\\ -\xi_{3}^{*}\\ \xi_{2}^{*}\\ {\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2}\\ -\xi_{1}^{*}\xi_{2}^{*}\\ -\xi_{1}^{*}\xi_{3}^{*}\end{pmatrix},\qquad w_{3}=\begin{pmatrix}\xi_{3}^{*}\\ 0\\ -\xi_{1}^{*}\\ -\xi_{1}^{*}\xi_{2}^{*}\\ {\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2}\\ \xi_{2}^{*}\xi_{3}^{*}\end{pmatrix}.\] We have \(\|w_{1}\|^{2}=2({\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2})\), \(\|w_{3}\|^{2}=2({\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2})\), and normalize \(w^{\prime}_{i}=w_{i}/\|w_{i}\|\). We compute \[\langle w^{\prime}_{1},w^{\prime}_{3}\rangle=\frac{-\xi_{1}^{*}\xi_{2}^{*}(1+({\xi_{3}^{*}})^{2})}{2({\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2})^{\frac{1}{2}}({\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2})^{\frac{1}{2}}}.\] Now we consider \(\tilde{w}_{3}=w^{\prime}_{3}-\langle w^{\prime}_{1},w^{\prime}_{3}\rangle w^{\prime}_{1}\): \[\tilde{w}_{3}=\frac{1}{\sqrt{2}({\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2})^{\frac{1}{2}}}\begin{pmatrix}\xi_{3}^{*}\\ 0\\ -\xi_{1}^{*}\\ -\xi_{1}^{*}\xi_{2}^{*}\\ {\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2}\\ \xi_{2}^{*}\xi_{3}^{*}\end{pmatrix}-\frac{\langle w^{\prime}_{1},w^{\prime}_{3}\rangle}{\sqrt{2}({\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2})^{\frac{1}{2}}}\begin{pmatrix}0\\ -\xi_{3}^{*}\\ \xi_{2}^{*}\\ {\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2}\\ -\xi_{1}^{*}\xi_{2}^{*}\\ -\xi_{1}^{*}\xi_{3}^{*}\end{pmatrix}.\] Clearly, \(\|\tilde{w}_{3}\|_{2}\gtrsim 1\) for \(|\xi_{3}^{*}|\gtrsim 1\).
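The norm identities and the orthonormalization step can be checked numerically. The sketch below is my own toy check, assuming the convention \(C(\xi^{\prime})v=\xi^{\prime}\times v\), so the second block of each stacked eigenvector is \(C(\xi^{\prime})v_{1}/\|\xi^{\prime}\|\); which branch of \(\xi_{0}\pm\|\xi^{\prime}\|\) this sign convention selects may be opposite to the one in the text, but the norms and the Gram-Schmidt step are unaffected.

```python
import numpy as np

def cross_matrix(xi):
    return np.array([[0.0, -xi[2], xi[1]],
                     [xi[2], 0.0, -xi[0]],
                     [-xi[1], xi[0], 0.0]])

xi = np.array([0.2, 0.3, 0.0])
xi[2] = np.sqrt(1.0 - xi[0] ** 2 - xi[1] ** 2)  # unit vector with |xi_3| ~ 1
C = cross_matrix(xi)

v1 = np.array([0.0, -xi[2], xi[1]])   # first zero-homogeneous eigenvector in (5.19)
v3 = np.array([xi[2], 0.0, -xi[0]])   # second one
w1 = np.concatenate([v1, C @ v1])     # stacked eigenvector (v1, C v1)
w3 = np.concatenate([v3, C @ v3])

# norm identities from the text
assert np.isclose(w1 @ w1, 2 * (xi[1] ** 2 + xi[2] ** 2))
assert np.isclose(w3 @ w3, 2 * (xi[0] ** 2 + xi[2] ** 2))

# Gram-Schmidt: orthonormalize (w1, w3) inside the common eigenspace
w1n = w1 / np.linalg.norm(w1)
w3t = w3 / np.linalg.norm(w3) - (w1n @ w3 / np.linalg.norm(w3)) * w1n
w3t /= np.linalg.norm(w3t)
assert np.isclose(abs(w1n @ w3t), 0.0)

# both remain eigenvectors of the symbol for the same eigenvalue
xi0 = 0.4
B = np.block([[xi0 * np.eye(3), -C], [C, xi0 * np.eye(3)]])
lam = xi0 + 1.0  # the branch selected by this sign convention
assert np.allclose(B @ w1n, lam * w1n)
assert np.allclose(B @ w3t, lam * w3t)
```

Since both vectors lie in the same eigenspace, any linear combination produced by the orthonormalization remains an eigenvector, which is the point of the construction.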
Hence, by renormalizing (and not changing notation for sake of brevity), we let \[\tilde{w}_{3}\to\frac{\tilde{w}_{3}}{\|\tilde{w}_{3}\|_{2}}.\] Similarly, consider \[w_{2}=\begin{pmatrix}0\\ -\xi_{3}^{*}\\ \xi_{2}^{*}\\ -({\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2})\\ \xi_{1}^{*}\xi_{2}^{*}\\ \xi_{1}^{*}\xi_{3}^{*}\end{pmatrix},\qquad w_{4}=\begin{pmatrix}\xi_{3}^{*}\\ 0\\ -\xi_{1}^{*}\\ \xi_{1}^{*}\xi_{2}^{*}\\ -({\xi_{1}^{*}}^{2}+{\xi_{3}^{*}}^{2})\\ -\xi_{2}^{*}\xi_{3}^{*}\end{pmatrix}.\] We compute \[\|w_{2}\|_{2}^{2}=2({\xi_{2}^{*}}^{2}+{\xi_{3}^{*}}^{2}),\qquad\|w_{4}\|_{2}^{2}=2({\xi_{3}^{*}}^{2}+{\xi_{1}^{*}}^{2}),\] which allows for renormalization \(w^{\prime}_{i}=w_{i}/\|w_{i}\|_{2}\). Now we consider \(\tilde{w}_{4}=w^{\prime}_{4}-\langle w^{\prime}_{2},w^{\prime}_{4}\rangle w^{\prime}_{2}\), which yields after an additional renormalization eigenvectors of \(\xi_{0}+\|\xi^{\prime}\|\). We conclude that the matrix \[\tilde{m}_{3}(x,\xi)=\begin{pmatrix}u_{1}&u_{2}&\tilde{w}_{1}&\tilde{w}_{2}&\tilde{w}_{3}&\tilde{w}_{4}\end{pmatrix}\] consists of orthonormal eigenvectors to \(d\) as in (5.18) for \(|\xi_{3}^{*}|\gtrsim 1\). We summarize the accomplished diagonalization: \[\begin{split}p(x,\xi)&=\begin{pmatrix}A_{<\lambda}&0\\ 0&A_{<\lambda}\end{pmatrix}\begin{pmatrix}\varepsilon_{<\lambda}^{\frac{1}{2}}&0\\ 0&\mu_{<\lambda}^{\frac{1}{2}}\end{pmatrix}\tilde{m}_{3}(x,\xi_{0},\tilde{\xi}^{\prime})d(x,\xi_{0},\tilde{\xi}^{\prime})\\ &\quad\times\tilde{m}^{t}_{3}(x,\xi_{0},\tilde{\xi}^{\prime})\begin{pmatrix}\varepsilon_{<\lambda}^{\frac{1}{2}}&0\\ 0&\mu_{<\lambda}^{\frac{1}{2}}\end{pmatrix}\begin{pmatrix}A_{<\lambda}^{t}&0\\ 0&A_{<\lambda}^{t}\end{pmatrix}\qquad\text{with }\tilde{\xi}^{\prime}=\frac{A_{<\lambda}^{t}\xi^{\prime}}{(\varepsilon_{<\lambda}\mu_{<\lambda})^{\frac{1}{2}}}\end{split}\] in the phase space region \(|\tilde{\xi}_{3}^{*}|\gtrsim 1\).
Note that there is always \(i\in\{1,2,3\}\) such that \[|\tilde{\xi}_{i}^{\ast}|\gtrsim 1.\] We define phase-space projection operators by the function \[\pi_{3}(x,\xi)=\chi(\lambda^{-1}\xi)\tilde{\chi}(\lambda^{-1}(A_{<\lambda}^{t} \xi^{\prime})_{3}),\] with \(\chi,\,\tilde{\chi}\in C_{c}^{\infty}\) suitable bump functions. The corresponding projections are denoted by \(S_{\lambda}S_{\lambda 3}\). We let \[N^{3}(x,\xi) =\tilde{m}_{i}^{t}(x,\xi_{0},\tilde{\xi}^{\prime})\begin{pmatrix} \varepsilon_{<\lambda}^{\frac{1}{2}}(x)&0\\ 0&\mu_{<\lambda}^{\frac{1}{2}}(x)\end{pmatrix}\begin{pmatrix}A_{<\lambda}^{t}( x)&0\\ 0&A_{<\lambda}^{t}(x)\end{pmatrix}\] \[\qquad\times\chi(\lambda^{-1}\xi^{\prime})\tilde{\chi}(\lambda^{- 1}(A_{<\lambda}^{t}\xi^{\prime})_{3}),\] and \[D^{3}(x,\xi)=d(x,\xi_{0},\tilde{\xi}^{\prime})\chi(\lambda^{-1}\xi^{\prime}) \tilde{\chi}(\lambda^{-1}(A_{<\lambda}^{t}\xi^{\prime})_{3}),\] and \[M^{3}(x,\xi)=\begin{pmatrix}A_{<\lambda}(x)&0\\ 0&A_{<\lambda}(x)\end{pmatrix}\begin{pmatrix}\varepsilon_{<\lambda}^{\frac{1} {2}}(x)&0\\ 0&\mu_{<\lambda}^{\frac{1}{2}}\end{pmatrix}\tilde{m}_{i}(x,\xi)\chi(\lambda^{- 1}\xi^{\prime})\tilde{\chi}(\lambda^{-1}(A_{<\lambda}^{t}\xi^{\prime})_{3}).\] The corresponding operators are defined by \[\mathcal{M}_{\lambda}^{3}(x,D)=Op(M^{3}(x,\xi)),\;\mathcal{D}_{\lambda}^{3}(x,D)=Op(D^{3}(x,\xi)),\;\mathcal{N}_{\lambda}^{3}(x,D)=Op(N^{3}(x,\xi)).\] By symbol composition, we can harmlessly insert frequency projectors after every factor. This makes the single factors bounded with symbols in \(\tilde{S}_{1,1}^{i}\), \(i\in\{0,1\}\). By Lemma 4.2, the claim follows, and the proof of Proposition 5.6 is complete. ### Conclusion of frequency localized estimate We have shown in Subsection 5.2 that after appropriate localization in phase space, the Maxwell system can be diagonalized to two degenerate and four non-degenerate half-wave equations. The degenerate equations correspond to stationary solutions, possibly induced by charges. 
However, we note that \(\mathcal{P}_{<\lambda}\) defined in (5.1) for the purpose of proving commutator estimates and \(\mathcal{P}_{<\lambda}\) defined in (5.12) for the diagonalization are defined slightly differently. We denote the operator defined in (5.1) by \(\mathcal{P}_{<\lambda}^{(1)}\) and the operator defined in (5.12) by \(\mathcal{P}_{<\lambda}^{(2)}\). Similarly, we denote \(\rho_{e\nu}^{(1)}=\nabla\cdot(\varepsilon_{<\nu}^{\prime}S_{\nu}^{\prime} \mathcal{E})\) and \(\rho_{m\nu}^{(1)}=\nabla\cdot(\mu_{<\nu}^{\prime}S_{\nu}^{\prime}\mathcal{H})\) as well as \(\rho_{e\nu}^{(2)}=\nabla\cdot(h_{<\nu}A_{<\nu}A_{<\nu}^{t}\varepsilon_{<\nu} S_{\nu}^{\prime}\mathcal{E})\) and \(\rho_{m\nu}^{(2)}=\nabla\cdot(h_{<\nu}A_{<\nu}A_{<\nu}^{t}\mu_{<\nu}S_{\nu}^{ \prime}\mathcal{H})\). We have the following lemma, which shows that we can indeed use \(\mathcal{P}_{<\lambda}^{(1)}\) for the commutator estimates and \(\mathcal{P}_{<\lambda}^{(2)}\) for the diagonalization: **Lemma 5.9**.: _The following estimates hold:_ \[\|\mathcal{P}_{<\lambda}^{(2)}S_{\lambda}^{\tau}S_{\lambda}^{\prime }u\|_{L_{x}^{2}} \lesssim\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^{2}}+\| \mathcal{P}_{<\lambda}^{(1)}S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^ {2}}, \tag{5.22}\] \[\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\|S_{\nu}^{\prime}S_{\lambda}^{\tau} \mathcal{P}_{<\nu}^{(2)}u\|_{L_{x}^{2}} \lesssim\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\|S_{\nu}^{\prime}S_{\lambda}^{\tau} \mathcal{P}_{<\nu}^{(1)}u\|_{L_{x}^{2}}+\|\langle\partial_{t}\rangle^{\gamma- \frac{1}{2}+\varepsilon}u\|_{L_{x}^{2}},\] (5.23) \[\|\rho_{e\nu}^{(2)}\|_{L_{x^{\prime}}^{2}} \lesssim\|\rho_{e\nu}^{(1)}\|_{L_{x^{\prime}}^{2}}+\|S_{\nu}^{ \prime}\mathcal{E}\|_{L_{x^{\prime}}^{2}},\] (5.24) \[\|\rho_{m\nu}^{(2)}\|_{L_{x^{\prime}}^{2}} \lesssim\|\rho_{m\nu}^{(1)}\|_{L_{x^{\prime}}^{2}}+\|S_{\nu}^{
\prime}\mathcal{H}\|_{L_{x^{\prime}}^{2}}. \tag{5.25}\] Proof.: We begin with the proof of (5.22). By the triangle inequality, we find \[\|\mathcal{P}_{<\lambda}^{(2)}S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^ {2}}\lesssim\|(\mathcal{P}-\mathcal{P}_{<\lambda}^{(2)})S_{\lambda}^{\tau}S_{ \lambda}^{\prime}u\|_{L_{x}^{2}}+\|(\mathcal{P}-\mathcal{P}_{<\lambda}^{(1)})S_ {\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^{2}}+\|S_{\lambda}^{\tau}S_{ \lambda}^{\prime}u\|_{L_{x}^{2}}.\] For the error estimates we note that \[\|(\mathcal{P}-\mathcal{P}_{<\lambda}^{(1)})S_{\lambda}^{\tau}S_{ \lambda}^{\prime}u\|_{L_{x}^{2}} \lesssim(\|\varepsilon_{\gtrsim\lambda}^{\prime}\|_{L_{x^{\prime}}^{ \infty}}+\|\mu_{\gtrsim\lambda}^{\prime}\|_{L_{x^{\prime}}^{\infty}})\lambda\|S_{ \lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^{2}}\] \[\lesssim(\|\partial\varepsilon^{\prime}\|_{L_{x^{\prime}}^{ \infty}}+\|\partial\mu^{\prime}\|_{L_{x^{\prime}}^{\infty}})\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L_{x}^{2}}.\] Similarly, we have by the telescoping sum argument, (5.13), and (5.14): \[\|(\mathcal{P}-\mathcal{P}^{(2)}_{<\lambda})S^{\tau}_{\lambda}S^{\prime}_{ \lambda}u\|_{L^{2}_{x}}\lesssim\|S^{\tau}_{\lambda}S^{\prime}_{\lambda}u\|_{L^ {2}_{x}}.\] This finishes the proof of (5.22).
We turn to the proof of (5.23), for which we write again \[\|S^{\prime}_{\nu}S^{\tau}_{\lambda}\mathcal{P}^{(2)}_{<\nu}u\|_{L^{2}_{x}} \lesssim\|S^{\prime}_{\nu}S^{\tau}_{\lambda}(\mathcal{P}^{(2)}_{<\nu}- \mathcal{P})u\|_{L^{2}_{x}}+\|S^{\prime}_{\nu}S^{\tau}_{\lambda}(\mathcal{P}^ {(1)}_{<\nu}-\mathcal{P})u\|_{L^{2}_{x}}+\|S^{\prime}_{\nu}S^{\tau}_{\lambda} \mathcal{P}^{(1)}_{<\nu}u\|_{L^{2}_{x}}.\] We find \[\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\|S^{\prime}_{\nu}S^{\tau}_{\lambda} (\mathcal{P}^{(1)}_{<\nu}-\mathcal{P})u\|_{L^{2}_{x}} \lesssim\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\lambda\nu^{\gamma-\frac{1}{2}}\|S^{\tau}_{\lambda}S^{\prime }_{\nu}\kappa^{\prime}_{\gtrsim\nu}u\|_{L^{2}_{x}}\] \[\lesssim\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\lambda\nu^{\gamma-\frac{1}{2}}(\|\varepsilon^{\prime}_{ \gtrsim\nu}\|_{L^{\infty}_{x^{\prime}}}+\|\mu^{\prime}_{\gtrsim\nu}\|_{L^{ \infty}_{x^{\prime}}})\|S^{\tau}_{\lambda}u\|_{L^{2}_{x}}\] \[\lesssim\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\lambda\nu^{\gamma-\frac{3}{2}}(\|\partial\varepsilon^{ \prime}\|_{L^{\infty}_{x^{\prime}}}+\|\partial\mu^{\prime}\|_{L^{\infty}_{x^ {\prime}}})\|S^{\tau}_{\lambda}u\|_{L^{2}_{x}}\] \[\lesssim\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\varepsilon}u \|_{L^{2}_{x}}.\] Similarly, by the telescoping sum argument, we obtain \[\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\|S^{\prime}_{\nu}S^{\tau}_{\lambda }(\mathcal{P}^{(2)}_{<\nu}-\mathcal{P})u\|_{L^{2}_{x}}\] \[\lesssim\sum_{\begin{subarray}{c}1\leqslant\lambda\ll\nu,\\ \nu\end{subarray}}\nu^{\gamma-\frac{1}{2}}\lambda\|(hAA^{t}\varepsilon-h_{<\nu} A_{<\nu}A^{t}_{<\nu}\varepsilon_{<\nu})S^{\tau}_{\lambda}\mathcal{E}\|_{L^{2}_{x}} \lesssim\|\langle\partial_{t}\rangle^{\gamma-\frac{1}{2}+\varepsilon}u\|_{L^{2}_{x }}.\] We turn to the proof of (5.24).
We estimate \[\|\nabla\cdot(\varepsilon^{\prime}_{\gtrsim\nu}S^{\prime}_{\nu}\mathcal{E}) \|_{L^{2}_{x^{\prime}}}\lesssim\|\partial\varepsilon^{\prime}\|_{L^{\infty}_{x ^{\prime}}}\|S^{\prime}_{\nu}\mathcal{E}\|_{L^{2}_{x^{\prime}}}+\nu\|\varepsilon ^{\prime}_{\gtrsim\nu}\|_{L^{\infty}_{x^{\prime}}}\|S^{\prime}_{\nu}\mathcal{E }\|_{L^{2}_{x^{\prime}}},\] and by the telescoping sum argument, \[\|\nabla\cdot(h_{<\nu}A_{<\nu}A^{t}_{<\nu}\varepsilon_{<\nu}S^{ \prime}_{\nu}\mathcal{E})-\nabla\cdot(hAA^{t}\varepsilon S^{\prime}_{\nu} \mathcal{E})\|_{L^{2}_{x^{\prime}}}\] \[=\|\nabla\cdot(C^{(1)}_{>\nu}C^{(2)}\dots C^{(k)})S^{\prime}_{\nu }\mathcal{E}\|_{L^{2}_{x^{\prime}}}\] \[\lesssim\|\partial C\|_{L^{\infty}_{x^{\prime}}}\|S^{\prime}_{\nu }\mathcal{E}\|_{L^{2}_{x^{\prime}}}+\|C^{(1)}_{>\nu}\|_{L^{\infty}_{x^{\prime}}} \|C^{(2)}\|_{L^{\infty}_{x^{\prime}}}\dots\|C^{(k)}\|_{L^{\infty}_{x^{\prime}}} \nu\|S^{\prime}_{\nu}\mathcal{E}\|_{L^{2}_{x^{\prime}}}\lesssim\|S^{\prime}_{\nu} \mathcal{E}\|_{L^{2}_{x^{\prime}}}.\] Hence, (5.24) is a consequence of the triangle inequality. The proof of (5.25) is carried out _mutatis mutandis_.
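Here and above we repeatedly use the standard Bernstein-type bound for high-frequency truncations of a Lipschitz function (a routine computation, recorded for the reader's convenience):
\[\|\varepsilon^{\prime}_{\gtrsim\nu}\|_{L^{\infty}_{x^{\prime}}}\leqslant\sum_{\mu\gtrsim\nu}\|\varepsilon^{\prime}_{\mu}\|_{L^{\infty}_{x^{\prime}}}\lesssim\sum_{\mu\gtrsim\nu}\mu^{-1}\|\partial\varepsilon^{\prime}\|_{L^{\infty}_{x^{\prime}}}\lesssim\nu^{-1}\|\partial\varepsilon^{\prime}\|_{L^{\infty}_{x^{\prime}}},\]
which turns the factor \(\nu\|\varepsilon^{\prime}_{\gtrsim\nu}\|_{L^{\infty}_{x^{\prime}}}\) into \(\|\partial\varepsilon^{\prime}\|_{L^{\infty}_{x^{\prime}}}\).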
We use this to finish the proof of Proposition 2.4 by showing the following estimates: \[\|S^{\tau}_{\lambda}S^{\prime}_{\lambda}u\|_{L^{p}L^{q}} \lesssim\lambda^{\gamma}(\|S^{\tau}_{\lambda}S^{\prime}_{\lambda }u\|_{L^{\infty}_{t}L^{2}_{x}}+\|\mathcal{P}^{(2)}_{<\lambda}S^{\tau}_{\lambda}S ^{\prime}_{\lambda}u\|_{L^{2}_{t,x}}), \tag{5.26}\] \[\|S^{\prime}_{\nu}S^{\tau}_{\lambda}u\|_{L^{p}L^{q}} \lesssim\nu^{\gamma-\frac{1}{2}}\|S^{\prime}_{\nu}S^{\tau}_{\lambda }\mathcal{P}^{(2)}_{<\nu}u\|_{L^{2}_{t,x}}\] (5.27) \[\quad+\nu^{\gamma-1+\frac{1}{p}}(\|\rho^{(2)}_{e\nu}\|_{L^{\infty}_ {t}L^{2}_{x}}+\|\rho^{(2)}_{m\nu}\|_{L^{\infty}_{t}L^{2}_{x}}).\] To use the diagonalization, we need the following: **Lemma 5.10**.: _For \(i\in\{1,2,3\}\) and \(\lambda\gtrsim 1\), we find the following estimates to hold:_ \[\|S_{\lambda i}u\|_{L^{p}L^{q}} \lesssim\|\mathcal{N}^{i}_{\lambda}S_{\lambda i}u\|_{L^{p}L^{q}}+ \lambda^{\gamma-\frac{1}{2}}\|S_{\lambda i}u\|_{L^{2}},\] \[\|S_{\lambda i}u\|_{L^{2}_{t,x}} \lesssim\|\mathcal{M}^{i}_{\lambda}S_{\lambda i}u\|_{L^{2}_{t,x}}.\] Proof.: For the proof of the first estimate, we observe for the composed symbols of \(\mathcal{M}^{i}_{\lambda}\) and \(\mathcal{N}^{i}_{\lambda}\): \[\begin{split}&\begin{pmatrix}A_{<\lambda}&0\\ 0&A_{<\lambda}\end{pmatrix}\begin{pmatrix}\varepsilon^{\frac{1}{2}}_{<\lambda}&0 \\ 0&\mu^{\frac{1}{2}}_{<\lambda}\end{pmatrix}\tilde{m}_{i}(x,\xi_{0},\tilde{\xi}^{ \prime})\tilde{m}^{t}_{i}(x,\xi_{0},\tilde{\xi}^{\prime})\\ &\times\begin{pmatrix}\varepsilon^{\frac{1}{2}}_{<\lambda}&0\\ 0&\mu^{\frac{1}{2}}_{<\lambda}\end{pmatrix}\begin{pmatrix}A^{t}_{<\lambda}&0 \\ 0&A^{t}_{<\lambda}\end{pmatrix}=\begin{pmatrix}\varepsilon_{<\lambda}A_{< \lambda}A^{t}_{<\lambda}&0\\ 0&\mu_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\end{pmatrix}.\end{split}\] Hence, we find \[\mathcal{M}^{i}_{\lambda}\mathcal{N}^{i}_{\lambda}S_{\lambda i}=\begin{pmatrix} \varepsilon_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}&0\\
0&\mu_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\end{pmatrix}S_{\lambda i}+R_{i}( x,D)\] with \(\|R_{i}(x,D)\|_{L^{2}\to L^{2}}\lesssim\lambda^{-1}\). This allows us to estimate \[\begin{split}\|S_{\lambda i}u\|_{L^{p}L^{q}}&\lesssim\| \begin{pmatrix}\varepsilon_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}&0\\ 0&\mu_{<\lambda}A_{<\lambda}A^{t}_{<\lambda}\end{pmatrix}S_{\lambda i}u\|_{L^ {p}L^{q}}\\ &\lesssim\|\mathcal{M}^{i}_{\lambda}\mathcal{N}^{i}_{\lambda}S_{ \lambda i}u\|_{L^{p}L^{q}}+\|R^{i}(x,D)S_{\lambda i}u\|_{L^{p}L^{q}}\\ &\lesssim\|\mathcal{N}^{i}_{\lambda}S_{\lambda i}u\|_{L^{p}L^{q}}+ \lambda^{\gamma-\frac{1}{2}}\|S_{\lambda i}u\|_{L^{2}}\end{split}\] by Minkowski's inequality and Sobolev embedding. For the proof of the second estimate, we argue similarly \[\|S_{\lambda i}u\|_{L^{2}_{t,x}}\lesssim\|(1+R_{i})S_{\lambda i}u\|_{L^{2}_{t,x}}=\|\mathcal{N}^{i}_{\lambda}\mathcal{M}^{i}_{\lambda}S_{\lambda i}u\|_{L^ {2}_{t,x}}\lesssim\|\mathcal{M}^{i}_{\lambda}S_{\lambda i}u\|_{L^{2}_{t,x}}.\] The proof is complete. We can finally show (5.26) and (5.27): Proof of (5.26).: We split \(S^{\tau}_{\lambda}S^{\prime}_{\lambda}u=\sum_{i=1}^{3}S^{\tau}_{\lambda}S_{ \lambda i}u\) with \(S^{\tau}_{\lambda}S_{\lambda i}u\) being amenable to the diagonalization of \(\mathcal{P}\) provided by \(\mathcal{M}^{i}_{\lambda}\) and \(\mathcal{N}^{i}_{\lambda}\). 
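The step \(\|S_{\lambda i}u\|_{L^{2}_{t,x}}\lesssim\|(1+R_{i})S_{\lambda i}u\|_{L^{2}_{t,x}}\) uses that \(1+R_{i}\) is invertible by a Neumann series for large frequencies (a one-line justification, spelled out here):
\[\|(1+R_{i})^{-1}\|_{L^{2}\to L^{2}}\leqslant\sum_{k\geqslant 0}\|R_{i}\|_{L^{2}\to L^{2}}^{k}\leqslant\frac{1}{1-C\lambda^{-1}}\leqslant 2\quad\text{for }\lambda\gtrsim 1.\]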
We write \[\|S^{\tau}_{\lambda}S_{\lambda i}u\|_{L^{p}L^{q}}\lesssim\|S^{\tau}_{\lambda} \mathcal{M}^{i}_{\lambda}\mathcal{N}^{i}_{\lambda}S_{\lambda i}u\|_{L^{p}L^{q }}+\|S^{\tau}_{\lambda}R(x,D^{\prime})S^{\prime}_{\lambda}u\|_{L^{p}L^{q}}.\] Since \(R(x,D^{\prime})\) is smoothing of order \(-1\), we can use Sobolev embedding to find \[\begin{split}\|S^{\tau}_{\lambda}R(x,D^{\prime})S^{\prime}_{ \lambda}u\|_{L^{p}L^{q}}&\lesssim\lambda^{\frac{1}{2}-\frac{1}{p}} \lambda^{3\left(\frac{1}{2}-\frac{1}{q}\right)-1}\|S^{\tau}_{\lambda}S^{\prime} _{\lambda}u\|_{L^{2}_{t,x}}\\ &\lesssim\lambda^{-\varepsilon}\|\langle D^{\prime}\rangle^{ \gamma}S^{\prime}_{\lambda}u\|_{L^{2}_{t,x}},\end{split}\] which is acceptable. By Lemma 5.10, we have \[\|S^{\tau}_{\lambda}\mathcal{M}^{i}_{\lambda}\mathcal{N}^{i}_{\lambda}S_{ \lambda i}u\|_{L^{p}L^{q}}\lesssim\|S^{\tau}_{\lambda}\mathcal{N}^{i}_{ \lambda}S_{\lambda i}u\|_{L^{p}L^{q}}.\] We estimate the components \(\|[S^{\tau}_{\lambda}\mathcal{N}^{i}_{\lambda}S_{\lambda i}u]_{j}\|_{L^{p}L^{q}}\) separately. The degenerate components \([\mathcal{D}_{\lambda}]_{jj}\), \(j=1,2\), are elliptic. 
This yields by Sobolev embedding the estimate: \[\begin{split}\|[\mathcal{N}^{i}_{\lambda}S^{\tau}_{\lambda}S^{ \prime}_{\lambda}u]_{j}\|_{L^{p}_{t}L^{q}_{x}}&\lesssim\lambda^{ \gamma-\frac{1}{p}}\|[\mathcal{N}^{i}_{\lambda}S^{\tau}_{\lambda}S^{\prime}_{ \lambda}u]_{j}\|_{L^{2}_{t,x}}\\ &\lesssim\lambda^{\gamma-1+\frac{1}{p}}\|[\mathcal{D}_{\lambda} \mathcal{N}^{i}_{\lambda}S^{\tau}_{\lambda}S_{\lambda i}u]_{j}\|_{L^{2}_{t,x}} \\ &\lesssim\lambda^{\gamma}\|S^{\tau}_{\lambda}\mathcal{D}_{\lambda} \mathcal{N}^{i}_{\lambda}S_{\lambda i}u\|_{L^{2}_{t,x}}.\end{split}\] Another application of Lemma 5.10 and Proposition 5.6 yields \[\begin{split}\lambda^{\gamma}\|S^{\tau}_{\lambda}\mathcal{D}_{ \lambda}\mathcal{N}^{i}_{\lambda}S_{\lambda i}u\|_{L^{2}_{t,x}}& \lesssim\lambda^{\gamma}\|S^{\tau}_{\lambda}\mathcal{M}^{i}_{ \lambda}\mathcal{D}_{\lambda}\mathcal{N}^{i}_{\lambda}S_{\lambda i}u\|_{L^{2}_{t,x}}\\ &\lesssim\lambda^{\gamma}\|S^{\tau}_{\lambda}\mathcal{D}_{\lambda} u\|_{L^{2}_{t,x}}+\lambda^{\gamma}\|S^{\tau}_{\lambda}u\|_{L^{2}_{t,x}}.\end{split}\] The non-degenerate components \(j=3,\ldots,6\) are estimated by [1, Eq. (2.1)]: \[\|S_{\lambda}^{\tau}\mathcal{N}_{\lambda}^{i}S_{\lambda i}u\|_{L^{p}L^{q}} \lesssim\lambda^{\gamma}(\|S_{\lambda}^{\tau}\mathcal{N}_{\lambda}^{i}S_{ \lambda i}u\|_{L^{\infty}_{t}L^{2}_{x}}+\|S_{\lambda}^{\tau}\mathcal{D}_{ \lambda}\mathcal{N}_{\lambda}^{i}S_{\lambda i}u\|_{L^{2}_{t,x}}).\] By another application of Lemma 5.10 and Proposition 5.6, we find \[\|S_{\lambda}^{\tau}\mathcal{N}_{\lambda}^{i}S_{\lambda i}u\|_{L^{p}L^{q}} \lesssim\lambda^{\gamma}(\|S_{\lambda}^{\tau}S_{\lambda}^{\prime}u\|_{L^{ \infty}_{t}L^{2}_{x}}+\|S_{\lambda}^{\tau}\mathcal{D}_{\lambda}S_{\lambda}u\|_ {L^{2}_{t,x}}).\] We passed from \(S_{\lambda i}\) to \(S_{\lambda}^{\prime}\) above by first order symbol composition. This finishes the proof.
Proof of (5.27).: In the phase space region \(\{|\tau|\ll|\xi^{\prime}|\}\cap\{|\xi^{\prime}|\gtrsim 1\}\), we see that after diagonalization, the operator \(\mathcal{P}\) is elliptic up to the charges. Let \(\lambda\sim|\tau|\ll|\xi^{\prime}|\sim\nu\). We make an additional localization in phase space: \(S_{\nu}^{\prime}u=\sum_{i=1}^{3}S_{\nu i}u\). We estimate \[\|S_{\lambda}^{\tau}S_{\nu i}u\|_{L^{p}L^{q}}\lesssim\|S_{\lambda}^{\tau} \mathcal{M}_{\nu}^{i}\mathcal{N}_{\nu}^{i}S_{\nu i}u\|_{L^{p}L^{q}}+\|S_{ \lambda}^{\tau}R(x,D^{\prime})S_{\nu i}u\|_{L^{p}L^{q}}\] with \(R(x,D^{\prime})\) smoothing of order \(-1\). We can use Sobolev embedding to find \[\|S_{\lambda}^{\tau}R(x,D^{\prime})S_{\nu i}u\|_{L^{p}L^{q}}\lesssim\lambda^ {\frac{1}{2}-\frac{1}{p}}\nu^{3\left(\frac{1}{2}-\frac{1}{q}\right)-1}\|S_{ \lambda}^{\tau}S_{\nu}^{\prime}u\|_{L^{2}_{t,x}}\lesssim\|\langle D^{\prime} \rangle^{\gamma}S_{\nu}^{\prime}u\|_{L^{2}_{t,x}}.\] By Lemma 5.10, \[\|S_{\lambda}^{\tau}\mathcal{M}_{\nu}^{i}\mathcal{N}_{\nu}^{i}S_{\nu i}u\|_{L^ {p}L^{q}}\lesssim\|S_{\lambda}^{\tau}\mathcal{N}_{\nu}^{i}S_{\nu i}u\|_{L^{p} L^{q}}.\] For the components \([\mathcal{N}_{\nu}^{i}S_{\nu i}u]_{j}\), \(j=1,2\), we use Sobolev embedding and the definition of the charges. For this purpose, recall the symbol of \(\mathcal{N}_{\nu}^{i}\).
With \(\tilde{\xi}^{\prime}=A_{<\nu}^{t}\xi^{\prime}\), we find for \(v\in\mathbb{C}^{6}\), \(v=(v_{1},v_{2})^{t}\), with \(v_{i}\in\mathbb{C}^{3}\): \[[\tilde{m}_{i}^{t}(x,\xi_{0},\tilde{\xi}^{\prime})\begin{pmatrix}\varepsilon^{ \frac{1}{2}}_{<\nu}&0\\ 0&\mu^{\frac{1}{2}}_{<\nu}\end{pmatrix}\begin{pmatrix}A_{<\nu}^{t}&0\\ 0&A_{<\nu}^{t}\end{pmatrix}v]_{1}=\frac{(\xi^{\prime})^{t}}{\mu^{\frac{1}{2}}_{ <\nu}\|\xi^{\prime}\|}A_{<\nu}A_{<\nu}^{t}v_{1}.\] Moreover, \[[\tilde{m}_{i}^{t}(x,\xi_{0},\tilde{\xi}^{\prime})\begin{pmatrix}\varepsilon^{ \frac{1}{2}}_{<\nu}&0\\ 0&\mu^{\frac{1}{2}}_{<\nu}\end{pmatrix}\begin{pmatrix}A_{<\nu}^{t}&0\\ 0&A_{<\nu}^{t}\end{pmatrix}v]_{2}=\frac{(\xi^{\prime})^{t}}{\varepsilon^{ \frac{1}{2}}_{<\nu}\|\xi^{\prime}\|}A_{<\nu}A_{<\nu}^{t}v_{2}.\] Consequently, we can write \[[\mathcal{N}_{\nu}^{i}S_{\nu}u]_{1}=\frac{1}{h_{<\nu}\varepsilon_{<\nu}\mu^{\frac{ 1}{2}}_{<\nu}}\frac{1}{|\nabla_{x^{\prime}}|}\nabla\cdot(h_{<\nu}\varepsilon _{<\nu}A_{<\nu}A_{<\nu}^{t}\mathcal{E})+R_{1}(x,D)\mathcal{E}\] with \(\|R_{1}\|_{L^{2}\to L^{2}}\lesssim\nu^{-1}\). Therefore, the estimate for the first component follows from Sobolev embedding: \[\|[\mathcal{N}_{\nu}^{i}S_{\nu i}u]_{1}\|_{L^{p}L^{q}}\lesssim\nu^{\gamma-1+ \frac{1}{p}}\big{(}\|S_{\nu}^{\prime}\rho_{e}\|_{L^{\infty}L^{2}}+\|S_{\nu}^{ \prime}u\|_{L^{\infty}L^{2}}\big{)}.\] Similarly, \[[\mathcal{N}_{\nu}^{i}S_{\nu i}u]_{2}=\frac{1}{\varepsilon^{\frac{1}{2}}_{<\nu }h_{<\nu}\mu^{\frac{1}{2}}_{<\nu}}\frac{1}{|\nabla_{x^{\prime}}|}\nabla \cdot(h_{<\nu}\mu_{<\nu}A_{<\nu}A_{<\nu}^{t}\mathcal{H})+R_{2}(x,D)\mathcal{H}\] with \(\|R_{2}\|_{L^{2}\to L^{2}}\lesssim\nu^{-1}\).
By the definition of \(\rho^{\prime}_{m\nu}\), another Sobolev embedding yields \[\|[\mathcal{N}_{\nu}^{i}S_{\nu i}u]_{2}\|_{L^{p}L^{q}}\lesssim\nu^{\gamma-1+\frac{1}{p}}\big{(}\|S_{\nu}^{\prime}\rho_{m}\|_{L^{\infty}L^{2}}+\|S_{\nu}^{\prime}u\|_{L^{\infty}L^{2}}\big{)}.\] For the components \(i=3,\ldots,6\), \([\mathcal{D}_{\nu}]_{ii}\) is elliptic: \[\|S_{\lambda}^{\tau}\mathcal{N}_{\nu}S_{\nu}^{\prime}u\|_{L^{p}L^{q}}\lesssim\nu^ {3\left(\frac{1}{2}-\frac{1}{q}\right)}\lambda^{\frac{1}{2}-\frac{1}{p}}\nu^{-1} \|S_{\lambda}^{\tau}\mathcal{D}_{\nu}^{i}S_{\nu i}^{\prime}u\|_{L^{2}_{t,x}}.\] Consequently, we obtain \[\|S_{\lambda}^{\tau}\mathcal{N}_{\nu}S_{\nu}^{\prime}u\|_{L^{p}L^{q}}\lesssim\nu^ {3\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p}-\frac{1}{2}+\varepsilon} \big{(}\frac{\lambda}{\nu}\big{)}^{\frac{1}{2}-\frac{1}{p}}\nu^{-\varepsilon} \|S_{\lambda}^{\tau}\mathcal{D}_{\nu}^{i}[\mathcal{N}_{\nu}S_{\nu}^{\prime}u] _{i}\|_{L^{2}_{t,x}}.\] By another application of Lemma 5.10 and Proposition 5.6, we conclude the proof.

## 6. Diagonalizing reflected Maxwell equations in two dimensions

This section is devoted to the proof of Strichartz estimates in the two-dimensional case. We want to reduce to previously established results for half-wave equations with structured Lipschitz coefficients. We have already reduced to Proposition 2.9 in Section 2, which states that it suffices to prove Strichartz estimates for the extended fields in geodesic coordinates \(\tilde{u}=(\tilde{\mathcal{E}},\tilde{\mathcal{H}})\) close to the boundary: \[\|\tilde{u}\|_{L^{p}_{T}L^{q}_{x^{\prime}}}\lesssim\|\tilde{u}\|_{L^{\infty}_{ T}H^{\gamma+\delta}}+\|\tilde{\mathcal{J}}_{e}\|_{L^{2}_{T}H^{\gamma+\delta}}+\| \tilde{\rho}_{e}\|_{L^{\infty}_{T}H^{\gamma-1+\frac{1}{p}+\delta}}\] for \(p,q\geq 2\), \(q<\infty\) satisfying \[\frac{3}{p}+\frac{1}{q}\leq\frac{1}{2},\quad\gamma=2\big{(}\frac{1}{2}-\frac{ 1}{q}\big{)}-\frac{1}{p},\quad 0<\delta<\frac{1}{2}.\] We omit the tildes in the following for the extended quantities to lighten the notation.
For the diagonalization we can rely on results from [19, 18]. Interestingly, in the two-dimensional case, no symmetry assumptions on the permittivity are required (the permeability is scalar anyway) for a diagonalization with \(L^{p}\)-bounded multipliers to hold. Thus, we simply write \(\varepsilon\) and \(\mu\) again for the permittivity and permeability decorated with the cometric and \(\sqrt{g}\) to arrive at the Maxwell operator: \[\mathcal{P}=\begin{pmatrix}\partial_{t}(\varepsilon^{11}\cdot)&0&-\partial_{2 }\\ 0&\partial_{t}(\varepsilon^{22}\cdot)&\partial_{1}\\ -\partial_{2}&\partial_{1}&\partial_{t}(\mu\cdot)\end{pmatrix}.\] The principal symbol of \(\mathcal{P}\) with rough coefficients is given by \[p(x,\xi) = i\begin{pmatrix}\xi_{0}\varepsilon^{11}&0&-\xi_{2}\\ 0&\xi_{0}\varepsilon^{22}&\xi_{1}\\ -\xi_{2}&\xi_{1}&\xi_{0}\mu\end{pmatrix}\] \[= i\begin{pmatrix}\xi_{0}&0&-\xi_{2}/\mu\\ 0&\xi_{0}&\xi_{1}/\mu\\ -\xi_{2}\varepsilon_{11}&\xi_{1}\varepsilon_{22}&\xi_{0}\end{pmatrix}\begin{pmatrix} \varepsilon^{11}&0&0\\ 0&\varepsilon^{22}&0\\ 0&0&\mu\end{pmatrix}.\] On the level of the equation, the above factorization corresponds to rewriting the equation in terms of \((\mathcal{D},\mathcal{B})\) instead of \((\mathcal{E},\mathcal{H})\). It turns out that this makes it easier to find the conjugation matrices.
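The factorization can be verified entrywise; here the convention \(\varepsilon_{ii}=(\varepsilon^{ii})^{-1}\) (lower indices for inverse entries, which the factorization implicitly uses) is assumed. For instance, for the third row:
\[\begin{pmatrix}-\xi_{2}\varepsilon_{11}&\xi_{1}\varepsilon_{22}&\xi_{0}\end{pmatrix}\begin{pmatrix}\varepsilon^{11}&0&0\\ 0&\varepsilon^{22}&0\\ 0&0&\mu\end{pmatrix}=\begin{pmatrix}-\xi_{2}&\xi_{1}&\xi_{0}\mu\end{pmatrix},\]
matching the third row of the first expression for \(p(x,\xi)/i\).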
For the proof of Proposition 2.9 it suffices to show the following estimate for frequency localized functions for \(1\ll\lambda\in 2^{\mathbb{N}_{0}}\): **Proposition 6.1**.: _The following dyadic estimate holds:_ \[\|S_{\lambda}^{\prime}S_{\lambda}u\|_{L^{p}_{T}L^{q}_{x^{\prime}}(\mathbb{R}^ {2})}\lesssim\lambda^{\gamma}(\|S_{\lambda}S_{\lambda}^{\prime}u\|_{L^{\infty }_{T}L^{2}_{x^{\prime}}}+\|\mathcal{P}_{\lambda}S_{\lambda}S_{\lambda}^{\prime} u\|_{L^{2}_{T,x^{\prime}}})+\lambda^{\gamma-1+\frac{1}{p}}\|\rho^{\prime}_{e \lambda}\|_{L^{\infty}_{T}L^{2}_{x^{\prime}}} \tag{6.1}\] _with \(\rho^{\prime}_{e\lambda}=\nabla\cdot(\varepsilon_{<\lambda}S_{\lambda}^{\prime }\mathcal{E})\)._ (6.1) handles the contribution of the phase space region \(\{|\tau|\lesssim|\xi^{\prime}|\}\). The commutator arguments to remove the frequency localization are easier than in three dimensions because \(\gamma<1\), and are thus omitted. The estimate for \(\{|\tau|\gg|\xi^{\prime}|\}\) follows from ellipticity of \(\mathcal{P}\) in this region in phase space and is carried out like in three dimensions. Note also that we do not distinguish between definitions of \(\mathcal{P}_{\lambda}^{(1)}\) or \(\mathcal{P}_{\lambda}^{(2)}\) like in Section 5 because we actually do not use the internal structure of \(\varepsilon^{\prime}=\sqrt{g}g^{-1}\varepsilon\). ### Diagonalizing the principal symbol We use the diagonalization established in [19] (see also [18, Lemma 2.2]) to show the following: **Proposition 6.2**.: _Let \(2^{\mathbb{N}}\ni\lambda\gg\lambda_{0}\). There are operators \(\mathcal{M}_{\lambda}\in OP\tilde{S}_{1,1}^{0}\), \(\mathcal{N}_{\lambda}\in OP\tilde{S}_{1,1}^{0}\), and \(\mathcal{D}_{\lambda}\in OP\tilde{S}_{1,1}^{1}\) such that_ \[\mathcal{P}_{\lambda}S_{\lambda}S_{\lambda}^{\prime}=\mathcal{M}_{\lambda} \mathcal{D}_{\lambda}\mathcal{N}_{\lambda}+E_{\lambda}\] _with \(\|E_{\lambda}\|_{L^{2}\to L^{2}}\lesssim 1\) and implicit constant independent of \(\lambda\).
The principal symbols are given by_ \[m(x,\xi) =\begin{pmatrix}\varepsilon_{22}\xi_{1}^{*}&-\xi_{2}^{*}/\mu& \xi_{2}^{*}/\mu\\ \varepsilon_{11}\xi_{2}^{*}&\xi_{1}^{*}/\mu&-\xi_{1}^{*}/\mu\\ 0&-1&-1\end{pmatrix},\] \[n(x,\xi) =\begin{pmatrix}\mu^{-1}\xi_{1}^{*}&\mu^{-1}\xi_{2}^{*}&0\\ \frac{-\xi_{2}^{*}\varepsilon_{11}}{2}&\frac{\xi_{1}^{*}\varepsilon_{22}}{2}&- \frac{1}{2}\\ \frac{\xi_{2}^{*}\varepsilon_{11}}{2}&\frac{-\xi_{1}^{*}\varepsilon_{22}}{2}&- \frac{1}{2}\end{pmatrix}\begin{pmatrix}\varepsilon^{11}&0&0\\ 0&\varepsilon^{22}&0\\ 0&0&\mu\end{pmatrix},\] \[d(x,\xi) =i\text{diag}(\xi_{0},\xi_{0}-\|\xi\|_{\varepsilon^{\prime}},\xi _{0}+\|\xi\|_{\varepsilon^{\prime}})\] _with \(\|\xi\|_{\varepsilon^{\prime}}^{2}=\langle\xi,\mu^{-1}\det(\varepsilon)^{-1} \varepsilon\xi\rangle\), \(\xi^{*}=\xi/\|\xi\|_{\varepsilon^{\prime}}\). All coefficients in the above definitions are frequency truncated at \(\lambda\)._ The diagonalization is substantially easier than in three dimensions because it does not require an additional localization in phase space. ### Conclusion of the proof To finish the proof of Theorem 1.2 like in Section 5, we have to check that the contribution of the charges is ameliorated like before: **Proposition 6.3**.: _With the notations from Proposition 6.2, the following estimate holds:_ \[\|\mathcal{N}_{\lambda}S_{\lambda}u\|_{L^{p}_{t}L^{q}_{x^{\prime}}}\lesssim \lambda^{\gamma}(\|\mathcal{N}_{\lambda}S_{\lambda}u\|_{L^{2}_{x}}+\|\mathcal{D }_{\lambda}\mathcal{N}_{\lambda}S_{\lambda}u\|_{L^{2}_{x}})+\lambda^{\gamma-1 +\frac{1}{p}}\|\rho_{e\lambda}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}. \tag{6.2}\] Proof.: We show (6.2) componentwise. 
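As a quick consistency check of the eigenvalues appearing in \(d\) (a direct computation, again assuming the convention \(\varepsilon_{ii}=(\varepsilon^{ii})^{-1}\)), the determinant of the first factor in the factorization of \(p(x,\xi)\) above is
\[\det\begin{pmatrix}\xi_{0}&0&-\xi_{2}/\mu\\ 0&\xi_{0}&\xi_{1}/\mu\\ -\xi_{2}\varepsilon_{11}&\xi_{1}\varepsilon_{22}&\xi_{0}\end{pmatrix}=\xi_{0}\Big{(}\xi_{0}^{2}-\frac{\xi_{1}^{2}\varepsilon_{22}+\xi_{2}^{2}\varepsilon_{11}}{\mu}\Big{)}=\xi_{0}(\xi_{0}-\|\xi\|_{\varepsilon^{\prime}})(\xi_{0}+\|\xi\|_{\varepsilon^{\prime}}),\]
since \(\frac{\xi_{1}^{2}\varepsilon_{22}+\xi_{2}^{2}\varepsilon_{11}}{\mu}=\frac{\xi_{1}^{2}\varepsilon^{11}+\xi_{2}^{2}\varepsilon^{22}}{\mu\det(\varepsilon)}=\|\xi\|_{\varepsilon^{\prime}}^{2}\). This matches the product of the diagonal entries of \(d(x,\xi)/i\).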
For the first component we have to use the divergence condition: We have \[[n(x,\xi)]_{11} =\frac{\xi_{1}\varepsilon^{11}}{\mu\|\xi\|_{\varepsilon^{\prime}} },\quad[n(x,\xi)]_{12}=\frac{\xi_{2}\varepsilon^{22}}{\mu\|\xi\|_{\varepsilon^ {\prime}}},\] \[[n(x,\xi)]_{13} =0.\] This gives \[[\mathcal{N}_{\lambda}S_{\lambda}u]_{1}=\frac{1}{\mu|\nabla_{\varepsilon^{ \prime}}|}[\nabla\cdot(\varepsilon S_{\lambda}\mathcal{E})]+R_{1}(x,D)\mathcal{E}\] with \(\|R_{1}\|_{L^{2}\to L^{2}}\lesssim\lambda^{-1}\). This yields the estimate for the first component by Sobolev embedding: \[\|[\mathcal{N}_{\lambda}S_{\lambda}u]_{1}\|_{L^{p}_{t}L^{q}_{x^{\prime}}} \lesssim\lambda^{\gamma-1+\frac{1}{p}}(\|S_{\lambda}\rho_{e}\|_{L^{\infty}_{t} L^{2}_{x^{\prime}}}+\|S_{\lambda}u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}).\] The non-degenerate components \(i=2,3\) are estimated by Theorem 5.1: \[\|[\mathcal{N}_{\lambda}S_{\lambda}u]_{i}\|_{L^{p}_{t}L^{q}_{x^{\prime}}}\lesssim \lambda^{\gamma}(\|S_{\lambda}u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\mathcal{ D}^{i}_{\lambda}[\mathcal{N}_{\lambda}S_{\lambda}u]_{i}\|_{L^{2}_{x}}).\] The proof is complete. We record the corresponding result of Lemma 5.10 to complete the proof of Theorem 1.2. **Lemma 6.4**.: _For \(\lambda\gg 1\), the following estimates hold:_ \[\|S_{\lambda}u\|_{L^{p}_{t}L^{q}_{x}} \lesssim\|\mathcal{N}_{\lambda}S_{\lambda}u\|_{L^{p}_{t}L^{q}_{x} }+\lambda^{\gamma-\frac{1}{2}}\|S_{\lambda}u\|_{L^{2}_{t,x}},\] \[\|S_{\lambda}u\|_{L^{2}_{t,x}} \lesssim\|\mathcal{M}_{\lambda}S_{\lambda}u\|_{L^{2}_{t,x}}.\] The lemma is proved like in the previous section.

## 7. Improved local well-posedness for the Kerr system in two dimensions

This section is devoted to the proof of the following theorem. **Theorem 7.1**.: _Let \(\Omega\subseteq\mathbb{R}^{2}\) be a smooth domain with compact boundary and \(s\in(11/6,2]\).
Then the Kerr system in two dimensions_ \[\left\{\begin{array}{rcll}\partial_{t}(\varepsilon\mathcal{E})&=&\nabla_{ \perp}\mathcal{H},&[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega}&=&0,\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}\mathcal{H}&=&-(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{ E}_{1}),&\text{tr}_{\partial\Omega}(\rho_{e})&=&0\end{array}\right. \tag{7.1}\] _with \(\varepsilon(\mathcal{E})=1+|\mathcal{E}|^{2}\) and \((\mathcal{E},\mathcal{H})(0)=(\mathcal{E}_{0},\mathcal{H}_{0})\in\mathcal{H}_ {0}^{s}(\Omega)\) is locally well-posed provided that \(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{s}}\leq\delta\ll 1\) and \(\|\rho_{e}(0)\|_{H^{\tilde{s}}}\leq D<\infty\) for some \(\tilde{s}>\frac{13}{12}\). This means there is \(T=T(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{s}},D)\) such that the solution \((\mathcal{E},\mathcal{H})\) to (7.1) exists for \(0\leq t\leq T\) and for initial data \((\mathcal{E}^{i},\mathcal{H}^{i})\in\mathcal{H}_{0}^{s}(\Omega)\), \(i=1,2\), with \(\|(\mathcal{E}^{i},\mathcal{H}^{i})\|_{H^{s}}\leq\delta\) and \(\|\rho_{e}^{i}(0)\|_{H^{\tilde{s}}}\leq D\) we have_ \[\sup_{t\in[0,T]}\|(\mathcal{E}^{1},\mathcal{H}^{1})(t)-(\mathcal{E}^{2}, \mathcal{H}^{2})(t)\|_{H^{s}(\Omega)}\to 0\] _as \(\|(\mathcal{E}^{1},\mathcal{H}^{1})(0)-(\mathcal{E}^{2},\mathcal{H}^{2})(0)\|_{H^{s }}\to 0\)._ Spitz proved continuous dependence in \(\mathcal{H}^{3}(\Omega)\) in three dimensions: Recall that this refers to data in \(H^{3}(\Omega)\) satisfying the compatibility conditions up to second order derived from the boundary condition. In Appendix A we establish local well-posedness in \(\mathcal{H}^{3}(\Omega)\): **Theorem 7.2**.: _Let \(\Omega\subseteq\mathbb{R}^{2}\) be a smooth domain with compact boundary. Then (7.1) is locally well-posed in \(\mathcal{H}^{3}(\Omega)\).
This means there is \(T=T(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{3}(\Omega)})\) such that solutions \((\mathcal{E}^{i},\mathcal{H}^{i})\), \(i=1,2\), exist for \(0\leq t\leq T\) and depend continuously on the initial data: We have_ \[\sup_{t\in[0,T]}\|(\mathcal{E}^{1},\mathcal{H}^{1})(t)-(\mathcal{E}^{2}, \mathcal{H}^{2})(t)\|_{H^{3}(\Omega)}\to 0\] _for \(\|(\mathcal{E}^{1},\mathcal{H}^{1})(0)-(\mathcal{E}^{2},\mathcal{H}^{2})(0)\|_{ H^{3}(\Omega)}\to 0\)._ In the following we take the local existence of these (sufficiently smooth) solutions for granted and want to examine the behavior in rougher topologies. The argument to show local well-posedness for \(11/6<s\leq 2\) proceeds in three steps:

1. We require estimates for solutions \[\|u(t)\|_{H^{s}(\Omega)}\lesssim_{\delta,T}e^{\int_{0}^{T}\|\partial_{x}\mathcal{E }(\tau)\|_{L^{\infty}(\Omega)}d\tau}\|u(0)\|_{H^{s}(\Omega)} \tag{7.2}\] for \(s\in[0,2]\). These were already proved in Section 3. By using Strichartz estimates, these give a priori estimates for \(s\in(\frac{11}{6},2]\) (see Proposition 7.3).
2. We prove Lipschitz-continuous dependence in \(L^{2}\) for initial data in \(\mathcal{H}^{s}(\Omega)\), \(s\in(\frac{11}{6},2]\) (see Proposition 7.5).
3. We show continuous, but not uniformly continuous, dependence via frequency envelopes (Subsection 7.3). Here we use a regularization which respects the compatibility conditions. This is facilitated by working in \(\mathcal{H}^{s}_{0}(\Omega)\) and would be more delicate in \(\mathcal{H}^{s}(\Omega)\). This is the only step in the proof which uses that the initial data are in the smaller space \(\mathcal{H}^{s}_{0}(\Omega)\).

We note that for \(s>2\) we have \(\|\partial_{x}\mathcal{E}\|_{L^{\infty}(\Omega)}\lesssim\|\mathcal{E}\|_{H^{s }(\Omega)}\) by Sobolev embedding, and Strichartz estimates are not required. For this reason, we shall only prove Theorem 7.1 as formulated for \(s\in(11/6,2]\).
### A priori control of solutions via Strichartz estimates In this subsection we show the following: **Proposition 7.3**.: _Let \(11/6<s\leqslant 2\) and \(\tilde{s}>\frac{13}{12}\). Then there is \(\delta>0\) such that for an \(\mathcal{H}^{3}(\Omega)\)-solution to (7.1) with \(\|u(0)\|_{H^{s}(\Omega)}\leqslant\delta\) we have_ \[\sup_{t\in[0,T]}\|u(t)\|_{H^{s}(\Omega)}\lesssim\|u(0)\|_{H^{s}(\Omega)} \tag{7.3}\] _with \(T=T(\|u(0)\|_{H^{s}(\Omega)},\|\rho_{e}(0)\|_{H^{\tilde{s}}})\)._ We first argue how the proof is finished with Strichartz estimates at hand: _Conclusion of the proof with Strichartz estimates._ By finite speed of propagation, it suffices to prove the claim in charts. The interior of \(\Omega\) can be handled like in \(\mathbb{R}^{2}\) (see [19]). It suffices to prove (7.3) in a chart and written in geodesic coordinates. To this end, let \(\sup_{t\in[0,T]}\|u(t)\|_{H^{s}(\Omega)}\leqslant\tilde{\delta}\ll 1\). We use a bootstrap argument based on the estimates: \[\|\partial_{x}u\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}} \leqslant C_{1}(\tilde{\delta},T,s_{1})(\|u\|_{L^{\infty}_{T}H^{s_ {1}}}+\|\rho_{e}(0)\|_{H^{\tilde{s}}}),\quad\frac{11}{6}<s_{1}\leqslant 2, \tag{7.4}\] \[\sup_{t\in[0,T]}\|u(t)\|_{H^{s}(\Omega)} \leqslant C(\tilde{\delta},T)e^{C_{2}(\tilde{\delta})\int_{0}^{T}\| \partial_{x}u(t^{\prime})\|_{L^{\infty}(\Omega)}dt^{\prime}}\|u(0)\|_{H^{s}( \Omega)},\quad s\in[0,2],\] (7.5) \[\sup_{t\in[0,T]}\|u(t)\|_{H^{3}(\Omega)} \leqslant C(\tilde{\delta},T)e^{C_{3}(\tilde{\delta},T)e^{C\int_{0}^{T} \|\partial_{x}u(t^{\prime})\|_{L^{\infty}(\Omega)}dt^{\prime}}(\|u(0)\|_{H^{2} (\Omega)}+1)}\|u(0)\|_{H^{3}(\Omega)}. \tag{7.6}\] The crucial Strichartz estimate (7.4) will be proved below. The energy estimate (7.5) was established in the proof of Proposition 3.3. We need (7.6) to argue that the \(H^{3}\)-solutions exist for the same time as the solutions at lower regularity. Recall that (7.5) and (7.6) require a smallness condition, which is ensured by \(\tilde{\delta}\ll 1\).
Once we control the \(H^{s}\)-norm via a continuity argument and keep it small, the assumptions on the \(H^{s}\)-norm will be satisfied (this requires smallness of the initial data). Moreover, we can choose the constants uniform in \(\tilde{\delta}\) and \(T\), if these parameters are bounded. Taking (7.4) and (7.5) together, we find \[\|\partial_{x}u\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\Omega)}\leqslant C_{1}Ce^ {C_{2}\|\partial_{x}u\|_{L^{1}_{T}L^{\infty}_{x^{\prime}}}}\big{(}\|u(0)\|_{H^ {s}(\Omega)}+\|\rho_{e}(0)\|_{H^{\tilde{s}}}\big{)} \tag{7.7}\] for \(\frac{11}{6}<s\leqslant 2\) and \(\tilde{s}>\frac{13}{12}\). We argue as follows: Suppose that the \(H^{3}\)-solution exists on \([0,\tau]\). By continuity of \(\partial_{x}u\) in \(L^{\infty}(\Omega)\) (recall that \(u\in C([0,\tau],H^{3}(\Omega))\) and use Sobolev embedding), there is \(T_{0}\) such that \(\|\partial_{x}u\|_{L^{4}_{T_{0}}L^{\infty}_{x^{\prime}}(\Omega)}\leqslant eC_{ 1}C(\|u(0)\|_{H^{s}(\Omega)}+\|\rho_{e}(0)\|_{H^{\tilde{s}}})\). We want to extend this badly quantified time interval to \([0,T_{\max}]\), \(T_{\max}=T(\|u(0)\|_{H^{s}(\Omega)},\|\rho_{e}(0)\|_{H^{\tilde{s}}})\) (the time moreover depends on \(\Omega\), but this is suppressed in the following), such that \[T_{\max}^{3/4}C_{2}C_{1}C(\|u(0)\|_{H^{s}(\Omega)}+\|\rho_{e}(0)\|_{H^{\tilde{s}}}) \leqslant\frac{1}{8}.\] Suppose that \(T_{0}<T_{\max}\) (otherwise we are done). Then we have for \(T_{0}\) actually the improved estimate \[\|\partial_{x}u\|_{L^{4}_{T_{0}}L^{\infty}_{x^{\prime}}} \leqslant C_{1}Ce^{C_{2}T_{0}^{\frac{3}{4}}CC_{1}(\|u_{0}\|_{H^{s }(\Omega)}+\|\rho_{e}(0)\|_{H^{\tilde{s}}})}(\|u(0)\|_{H^{s}(\Omega)}+\|\rho_{e}( 0)\|_{H^{\tilde{s}}})\] \[\leqslant C_{1}Ce^{\frac{1}{8}}(\|u(0)\|_{H^{s}(\Omega)}+\|\rho_{e}( 0)\|_{H^{\tilde{s}}}).\] Moreover, by finiteness of \(\|\partial_{x}u\|_{L^{1}_{T_{0}}L^{\infty}_{x^{\prime}}}\) we have that \(\tau\geqslant T_{0}\).
This allows us to continue up to a time \(T_{1}\) such that

\[\|\partial_{x}u\|_{L^{4}_{T_{1}}L^{\infty}_{x^{\prime}}}\leqslant 2C_{1}C(\|u(0)\|_{H^{s}(\Omega)}+\|\rho_{e}(0)\|_{H^{\tilde{s}}}).\]

This can be bootstrapped up to the time \(T_{\max}=T(\|u(0)\|_{H^{s}(\Omega)},\|\rho_{e}(0)\|_{H^{\tilde{s}}})\) and gives the estimate

\[\|\partial_{x}u\|_{L^{4}_{T_{\max}}L^{\infty}_{x^{\prime}}(\Omega)}\lesssim(\|u(0)\|_{H^{s}(\Omega)}+\|\rho_{e}(0)\|_{H^{\tilde{s}}}).\]

By (7.6), we also infer existence of solutions in \(H^{3}\) for the same time. The a priori estimate is immediate from (7.5).

The Strichartz estimates for \((\mathcal{E},\mathcal{H})\) in (7.4) play the key role in the argument. We resolve the Kerr system

\[\left\{\begin{array}{rlclcl}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H},&&\nabla\cdot(\varepsilon\mathcal{E})&=\rho_{e},&&(t,x^{\prime})&\in\mathbb{R}\times\Omega,\\ \partial_{t}\mathcal{H}&=-(\nabla\times\mathcal{E})_{3},&&[\mathcal{E}_{||}]_{x^{\prime}\in\partial\Omega}&=0,&&(\mathcal{E},\mathcal{H})(0)&=(\mathcal{E}_{0},\mathcal{H}_{0})\in\mathcal{H}^{s}(\Omega)\end{array}\right.\]

in geodesic coordinates as follows: Let \(\varphi:\Omega\to\mathbb{R}^{2}_{>0}\) denote the change of coordinates \(x^{\prime}=\varphi(x)\) and \(J(x^{\prime})=\frac{\partial\varphi}{\partial x}(\varphi^{-1}(x^{\prime}))\) the Jacobian. 
The change of coordinates from Section 2 reads

\[\mathcal{E}^{\prime}(x^{\prime})=(J^{t})^{-1}\mathcal{E}(\varphi^{-1}(x^{\prime})),\quad\mathcal{H}^{\prime}(x^{\prime})=\mathcal{H}(\varphi^{-1}(x^{\prime})),\quad\varepsilon^{\prime}=\frac{J\varepsilon J^{t}}{\det J}.\]

The cometric is given by

\[g^{-1}=JJ^{t}=\begin{pmatrix}g^{1}&0\\ 0&g^{2}\end{pmatrix}.\]

The change of coordinates leads us to

\[\left\{\begin{array}{rlclcl}\partial_{t}(\varepsilon^{\prime}\mathcal{E}^{\prime})&=\nabla_{\perp}\mathcal{H}^{\prime},&&\frac{1}{\sqrt{g}}\nabla\cdot(\varepsilon^{\prime}\mathcal{E}^{\prime})&=\rho_{e}^{\prime},&&(t,x^{\prime})&\in\mathbb{R}\times\mathbb{R}^{2}_{>0},\\ \partial_{t}(\mu_{1}\mathcal{H}^{\prime})&=-(\nabla\times\mathcal{E}^{\prime})_{3},&&[\mathcal{E}^{\prime}_{1}]_{x^{\prime}_{2}=0}&=0,&&(\mathcal{E}^{\prime},\mathcal{H}^{\prime})(0)&=(\mathcal{E}^{\prime}_{0},\mathcal{H}^{\prime}_{0}).\end{array}\right.\]

We compute

\[\mu_{1}=\sqrt{g},\quad\varepsilon^{\prime}=\frac{J(1+|\mathcal{E}|^{2})J^{t}}{\det J}=\frac{g^{-1}(1+\langle\mathcal{E}^{\prime},JJ^{t}\mathcal{E}^{\prime}\rangle)}{\det J}=\frac{g^{-1}(1+\langle\mathcal{E}^{\prime},g^{-1}\mathcal{E}^{\prime}\rangle)}{\det J}.\]

We extend the system to \(\mathbb{R}^{2}\) by reflecting \(\mathcal{E}^{\prime}_{1}\) oddly and \(\mathcal{E}^{\prime}_{2}\) and \(\mathcal{H}^{\prime}\) evenly (according to the boundary conditions). Moreover, the coefficients of \(g\) are extended evenly. Consequently, \(\varepsilon^{\prime}\) is extended evenly. It will be important to work in non-divergence form, to which end we compute

\[\partial_{t}(\varepsilon^{\prime}\mathcal{E}^{\prime})=\varepsilon_{1}\partial_{t}\mathcal{E}^{\prime}\text{ with }\varepsilon_{1}=\sqrt{g}(\varepsilon^{\prime}+2(g^{-1}\mathcal{E}^{\prime})\otimes(g^{-1}\mathcal{E}^{\prime})).\]

Note that \(\varepsilon_{1}\) is still diagonal at \(x_{2}=0\). The diagonal components are reflected evenly, the off-diagonal ones oddly. 
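The quadratic correction \(2(g^{-1}\mathcal{E}^{\prime})\otimes(g^{-1}\mathcal{E}^{\prime})\) in \(\varepsilon_{1}\) stems from the chain rule applied to the Kerr nonlinearity; in flat coordinates (\(J=\mathrm{Id}\)) the computation reduces to

\[\partial_{t}\big{(}(1+|\mathcal{E}|^{2})\mathcal{E}\big{)}=(1+|\mathcal{E}|^{2})\partial_{t}\mathcal{E}+2(\mathcal{E}\cdot\partial_{t}\mathcal{E})\mathcal{E}=\big{(}(1+|\mathcal{E}|^{2})\mathrm{Id}+2\,\mathcal{E}\otimes\mathcal{E}\big{)}\partial_{t}\mathcal{E}.\]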
Importantly, we observe that for \((\mathcal{E},\mathcal{H})\in C_{t}H^{3}\) we have \(\partial_{x}\varepsilon_{1}\in L^{2}_{t}L^{\infty}_{x^{\prime}}\) by Sobolev embedding. Using Strichartz estimates for coefficients \(\partial_{x}\varepsilon_{1}\in L^{2}_{t}L^{\infty}_{x^{\prime}}\) and energy estimates, we will prove bounds depending on \(\|u(0)\|_{H^{s}}\). It is non-trivial to verify

\[\|\partial_{x}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\leq C(\|(\mathcal{E},\mathcal{H})\|_{H^{s}(\Omega)}),\]

which is carried out below. We let

\[P_{1}=\begin{pmatrix}\partial_{t}(\varepsilon_{1}\cdot)&-\nabla_{\perp}\\ (\nabla\times\cdot)_{3}&\partial_{t}(\mu\cdot)\end{pmatrix}\text{ with }\partial_{x}(\varepsilon_{1},\mu)\in L^{2}_{t}L^{\infty}_{x^{\prime}},\text{ and }\rho^{\prime}_{e}=\nabla\cdot(\varepsilon_{1}\mathcal{E}^{\prime}). \tag{7.8}\]

The analysis of [19] provides us with the following Strichartz estimates; see Corollary B.4 in Appendix B:

**Proposition 7.4** (Strichartz estimates for anisotropic permittivity in two dimensions).: _Let \(2\leq p,q\leq\infty\), \(\frac{2}{p}+\frac{1}{q}\leq\frac{1}{2}\), \(\rho=2\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p}\), \(P_{1}\) like in (7.8), and \(\delta>0\). 
Then the following estimate holds:_ \[\|\langle D^{\prime}\rangle^{-\rho-\frac{1}{3p}-\delta}(\mathcal{ E}^{\prime},\mathcal{H}^{\prime})\|_{L^{p}_{T}L^{q}_{x}}\lesssim_{T,\delta,\| \partial(\varepsilon_{1},\mu_{1})\|_{L^{2}_{T}L^{\infty}_{x}}}\|(\mathcal{E} ^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}L^{2}_{x}}+\|P_{1}(\mathcal{ E}^{\prime},\mathcal{H}^{\prime})\|_{L^{1}_{T}L^{2}_{x}}\\ +\|\langle D^{\prime}\rangle^{-1+\frac{1}{p}}\rho^{\prime}_{e}\|_ {L^{\infty}_{T}L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{-1+\frac{1}{p} }\partial_{t}\rho^{\prime}_{e}\|_{L^{1}_{T}L^{2}_{x^{\prime}}}.\] Since the argument is essentially contained in [19] and Appendix B up to the change of variables \(\mathcal{D}=\varepsilon_{1}\mathcal{E}\), \(\mathcal{B}=\mu_{1}\mathcal{H}\), we shall be brief. Sketch of Proof of Proposition 7.4.: The idea is to show the estimates first for coefficients \(\varepsilon\) with \(\partial_{x}^{2}\varepsilon\in L^{1}_{t}L^{\infty}_{x}\) and then use paradifferential truncation. In the following we omit \({}^{\prime}\) to lighten the notations. After standard reductions, which are detailed in [19], we find that it suffices to prove: \[\lambda^{-\rho}\|S_{\lambda}u\|_{L^{p}_{T}L^{q}_{x^{\prime}}}\lesssim\|S_{ \lambda}u\|_{L^{\infty}_{T}L^{2}_{x^{\prime}}}+\|P_{1}^{\lambda}S_{\lambda}u\|_ {L^{2}_{x}}+\lambda^{-1+\frac{1}{p}}\|S^{\prime}_{\lambda}\rho_{e}\|_{L^{ \infty}_{t}L^{2}_{x^{\prime}}}. \tag{7.9}\] Above we require \(\lambda\gtrsim 1\) with Fourier support of \(\varepsilon\) contained in \(\{|\xi|\leq\lambda^{\frac{1}{2}}\}\) and \(u\) essentially supported in the unit cube. 
Moreover, we can suppose that \(\|\partial_{x}^{2}\varepsilon\|_{L^{1}_{T}L^{\infty}_{x}}\lesssim 1\), that the coefficients of \(P_{1}^{\lambda}\) are truncated at frequencies \(\lambda^{\frac{1}{2}}\), and we suppose by an elliptic estimate away from the characteristic surface that the support of the space-time Fourier transform of \(u\) is in \(\{|\xi_{0}|\lesssim|(\xi_{1},\xi_{2})|\}\). The estimate (7.9) is then proved by diagonalization with pseudo-differential operators: We factorize the principal symbol \[\tilde{p}(x,\xi) =\begin{pmatrix}i\xi_{0}\varepsilon^{11}&i\xi_{0}\varepsilon^{12}&-i \xi_{2}\\ i\xi_{0}\varepsilon^{21}&i\xi_{0}\varepsilon^{22}&i\xi_{1}\\ i\xi_{2}&-i\xi_{1}&i\xi_{0}\mu\end{pmatrix}\] \[=\begin{pmatrix}i\xi_{0}&0&-i\frac{\xi_{2}}{\mu}\\ 0&i\xi_{0}&i\frac{\xi_{1}}{\mu}\\ i(\varepsilon_{11}\xi_{2}-\varepsilon_{12}\xi_{1})&i(\varepsilon_{21}\xi_{2}- \varepsilon_{22}\xi_{1})&i\xi_{0}\end{pmatrix}\begin{pmatrix}\varepsilon^{11} &\varepsilon^{12}&0\\ \varepsilon^{21}&\varepsilon^{22}&0\\ 0&0&\mu\end{pmatrix}\] \[=:p(x,\xi)(\varepsilon\otimes\mu).\] The symbol \(p(x,\xi)\) was diagonalized in [19] (see also Appendix B) as \[p(x,\xi)=m(x,\xi)\mathrm{diag}(i\xi_{0},i(\xi_{0}+\|\xi^{\prime}\|_{\varepsilon ^{\prime}}),i(\xi_{0}-\|\xi^{\prime}\|_{\varepsilon^{\prime}}))m^{-1}(x,\xi)\] with quantizations of \(m\) and \(m^{-1}\) giving bounded operators in \(L^{p}L^{q}\). We can write \[P_{1}^{\lambda}=P^{\lambda}(\varepsilon_{\leqslant\lambda^{\frac{1}{2}}} \otimes\mu_{\leqslant\lambda^{\frac{1}{2}}})+A,\quad\|A\|_{L^{2}\to L^{2}} \lesssim 1. 
\tag{7.10}\] We define new variables \(S_{\lambda}(\mathcal{D},\mathcal{H})=(\varepsilon_{\leqslant\lambda^{\frac{1 }{2}}}\otimes\mu_{\leqslant\lambda^{\frac{1}{2}}})S_{\lambda}(\mathcal{E}, \mathcal{H})\), to which we apply Theorem B.3: \[\lambda^{-\rho}\|S_{\lambda}(\mathcal{D},\mathcal{H})\|_{L^{p}L^ {q}} \lesssim\|S_{\lambda}(\mathcal{D},\mathcal{H})\|_{L^{\infty}_{T}L^{2}_{x^{ \prime}}}+\|P^{\lambda}S_{\lambda}(\mathcal{D},\mathcal{H})\|_{L^{2}_{x}}\] \[\quad+\lambda^{-1+\frac{1}{p}}\|S^{\prime}_{\lambda}S_{\lambda}( \partial_{1}\mathcal{D}_{1}+\partial_{2}\mathcal{D}_{2})\|_{L^{\infty}_{t}L^{ 2}_{x^{\prime}}}.\] By (7.10) and straight-forward error estimates, we find \[\lambda^{-\rho}\|S_{\lambda}(\mathcal{E},\mathcal{H})\|_{L^{p}L^{q}} \lesssim\|S_{\lambda}(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{T}L^{2}_{x^{ \prime}}}+\|\tilde{P}_{\leqslant\lambda^{\frac{1}{2}}}S_{\lambda}(\mathcal{E},\mathcal{H})\|_{L^{2}_{x}}+\lambda^{-1+\frac{1}{p}}\|S^{\prime}_{\lambda} \rho_{e}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}.\] We are ready to conclude the proof of Proposition 7.3 by applying the Strichartz estimates. Proof of (7.4) via Proposition 7.4.: It suffices to prove (7.4) in geodesic coordinates. Using Maxwell equations, we have \[\|\partial_{t}(\mathcal{E},\mathcal{H})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}} \lesssim_{|\mathcal{E}|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}}}\|\partial_ {x^{\prime}}(\mathcal{E},\mathcal{H})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}.\] Since we can control the \(L^{\infty}_{x^{\prime}}\)-norm of \((\mathcal{E},\mathcal{H})\) by Sobolev embedding, it suffices to show an estimate for the spatial derivatives of \((\mathcal{E},\mathcal{H})\). We apply Strichartz estimates due to Proposition 7.4 to \((\mathcal{E},\mathcal{H})\) resolved in geodesic normal coordinates and extended appropriately to the full space (see above). Recall that the extended fields are denoted with \((\mathcal{E}^{\prime},\mathcal{H}^{\prime})\). 
We aim for the estimate:

\[\|\langle D^{\prime}\rangle(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\lesssim_{T,\alpha}(1+T^{\kappa}C(\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{\alpha+1}},\|\partial_{x^{\prime}}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}))\times\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{\alpha+1}}+T^{\frac{1}{4}}\|\rho_{e}(0)\|_{H^{\tilde{s}}}. \tag{7.11}\]

Applying Proposition 7.4 requires uniform ellipticity of \(\varepsilon_{1}\) and an estimate of \(\|\partial_{x}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\) with \(T=T(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{\mathcal{H}^{s}(\Omega)})\).

_Control of \(\|\partial_{x}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\) and uniform ellipticity of \(\varepsilon_{1}\):_ We require an estimate \(\|\partial_{x}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\leqslant C(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{s}(\Omega)})\) for \(\frac{11}{6}<s\leqslant 2\) and \(T=T(\|(\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{s}(\Omega)})\). 
By the bootstrap assumption it suffices to prove that there is \(\kappa>0\) such that

\[\|\partial_{x}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\lesssim T^{\kappa}C(\|\partial_{x^{\prime}}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})},\ \|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{s}}).\]

The time derivative of \(\varepsilon_{1}\) is handled by Sobolev embedding, Holder's inequality, and the Maxwell equations:

\[\|\partial_{t}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\lesssim T^{\frac{1}{4}}\|\mathcal{E}^{\prime}\|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}}\|\partial_{t}\mathcal{E}^{\prime}\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}\lesssim T^{\frac{1}{4}}\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{s}}\|\partial_{x^{\prime}}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}.\]

Similarly, we find for the spatial derivatives:

\[\|\partial_{x^{\prime}}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\lesssim T^{\frac{1}{2}}\|\mathcal{E}^{\prime}\|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}}^{2}+\|\partial_{x^{\prime}}\mathcal{E}^{\prime}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\|\mathcal{E}^{\prime}\|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\lesssim T^{\frac{1}{2}}\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{s}}^{2}+T^{\frac{1}{4}}\|\partial_{x^{\prime}}\mathcal{E}^{\prime}\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{s}}.\]

Ellipticity of \(\varepsilon_{1}\) follows from recalling \(g^{-1}=JJ^{t}\) and writing

\[\varepsilon_{1} =\sqrt{g}(J(1+|\mathcal{E}^{\prime}|_{g^{-1}}^{2})J^{t}+2J(J^{t}\mathcal{E}^{\prime})(\mathcal{E}^{\prime})^{t}JJ^{t})\] \[=\sqrt{g}(J((1+\langle J^{t}\mathcal{E}^{\prime},J^{t}\mathcal{E
}^{\prime}\rangle)+2(J^{t}\mathcal{E}^{\prime})(J^{t}\mathcal{E}^{\prime})^{t })J^{t}).\] Hence, uniform ellipticity follows by uniform invertibility of \(J\), and uniform ellipticity of \(1+|\mathcal{E}|^{2}+2\mathcal{E}\otimes\mathcal{E}\). The uniform invertibility of \(J\) can clearly be assumed in one chart and uniform ellipticity of \(1+|\mathcal{E}|^{2}+2\mathcal{E}\otimes\mathcal{E}\) is given since the eigenvalues are \(1+3|\mathcal{E}|^{2}\) and \(1+|\mathcal{E}|^{2}\), and \(\|\mathcal{E}\|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}}\lesssim\|\mathcal{E} \|_{L^{\infty}_{T}H^{s}}\). _Applying Strichartz estimates and switching to non-divergence form:_ \[\tilde{P}_{1}=\begin{pmatrix}\varepsilon_{1}\partial_{t}&-\nabla_{\perp}\\ (\nabla\times\cdot)_{3}&\mu\partial_{t}\end{pmatrix},\quad\eta(x,D)\mathcal{E }^{\prime}=\sum_{i,j=1}^{2}(\varepsilon_{1})_{ij}\partial_{i}\mathcal{E}^{ \prime}_{j}.\] We apply Strichartz estimates from Proposition 7.4 with \(P_{1}\) to find \[\|\langle D^{\prime}\rangle^{-\alpha}(\mathcal{E}^{\prime}, \mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}\lesssim_{\alpha,T} \|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}L^{2}_{x^{ \prime}}}+\|P_{1}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{2}_{T}L^{2}_ {x^{\prime}}}\] \[\quad+\|\langle D^{\prime}\rangle^{-\frac{3}{4}}\nabla\cdot( \varepsilon_{1}\mathcal{E}^{\prime})\|_{L^{2}_{T}L^{2}_{x^{\prime}}}\] for \(\alpha>\frac{5}{6}\), \(\|\partial_{x}\varepsilon_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\leq C<\infty\). 
Noting that \(\nabla\cdot(\varepsilon_{1}\mathcal{E}^{\prime})=(\partial_{x^{\prime}} \varepsilon_{1})\mathcal{E}^{\prime}+\eta(x,D)\mathcal{E}^{\prime}\) and applying Holder's inequality gives \[\|\langle D^{\prime}\rangle^{-\alpha}(\mathcal{E}^{\prime}, \mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})} \lesssim_{T,\alpha} \|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}L^{ 2}_{x^{\prime}}}+\|\tilde{P}_{1}(\mathcal{E}^{\prime},\mathcal{H}^{\prime}) \|_{L^{2}_{T}L^{2}_{x^{\prime}}} \tag{7.12}\] \[+\|\langle D^{\prime}\rangle^{-\frac{3}{4}}\eta(x,D)\mathcal{E}^{ \prime}\|_{L^{2}_{T}L^{2}_{x^{\prime}}}.\] _Commutator estimates:_ Finally, we use commutator estimates to find an estimate for \(\|\langle D^{\prime}\rangle(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{ 4}_{T}L^{\infty}_{x^{\prime}}}\). We apply (7.12) to \(\langle D^{\prime}\rangle^{\alpha+1}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\) to find \[\|\langle D^{\prime}\rangle(\mathcal{E}^{\prime},\mathcal{H}^{ \prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\lesssim_{T,\alpha} \|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{\alpha+1}}+\|P _{1}(\langle D^{\prime}\rangle^{\alpha+1}(\mathcal{E}^{\prime},\mathcal{H}^{ \prime}))\|_{L^{2}_{T}L^{2}_{x^{\prime}}}\] \[\quad+\|\langle D^{\prime}\rangle^{-\frac{3}{4}}\eta(x,D)\langle D ^{\prime}\rangle^{\alpha+1}\mathcal{E}^{\prime}\|_{L^{2}_{T}L^{2}_{x^{\prime}}}.\] We note that \[\mu(\partial_{t}\langle D^{\prime}\rangle^{\alpha+1}\mathcal{H}^{ \prime}+(\nabla\times\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{\prime})_{3}\] \[=\mu(\partial_{t}\langle D^{\prime}\rangle^{\alpha+1}\mathcal{H}^{ \prime}+\mu^{-1}(\nabla\times\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{ \prime})_{3})\] \[=\mu(\langle D^{\prime}\rangle^{\alpha+1}(\partial_{t}\mathcal{H}^{ \prime}+\mu^{-1}(\nabla\times\mathcal{E}^{\prime})_{3}))+\mu[\mu^{-1},\langle D 
^{\prime}\rangle^{\alpha+1}](\nabla\times\mathcal{E}^{\prime})_{3}\] \[=\mu[\mu^{-1},\langle D^{\prime}\rangle^{\alpha+1}](\nabla\times \mathcal{E}^{\prime})_{3}.\] The ultimate estimate follows from \(P_{1}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})=0\). For the second term we write \[\varepsilon_{1}\partial_{t}\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{ \prime}-\nabla_{\perp}\langle D^{\prime}\rangle^{\alpha+1}\mathcal{H}^{\prime} =\varepsilon_{1}\langle D^{\prime}\rangle^{\alpha+1}(\partial_{t} \mathcal{E}^{\prime}-\varepsilon_{1}^{-1}\nabla_{\perp}\mathcal{H}^{\prime}) -\varepsilon_{1}[\langle D^{\prime}\rangle^{\alpha+1},\varepsilon_{1}^{-1}] \nabla_{\perp}\mathcal{H}.\] We infer from the Kato-Ponce commutator estimate (see [15]) and \(\|(\varepsilon_{1},\mu)\|_{L^{\infty}_{x^{\prime}}}\lesssim_{|\mathcal{E}^{ \prime}|_{L^{\infty}_{x^{\prime}}}}1\), \[\|P_{1}(\langle D^{\prime}\rangle^{\alpha+1}(\mathcal{E}^{\prime},\mathcal{H}^{\prime}))\|_{L^{2}_{x^{\prime}}}\] \[\lesssim\|[\mu^{-1},\langle D^{\prime}\rangle^{\alpha+1}]( \nabla\times\mathcal{E}^{\prime})_{3}\|_{L^{2}_{x^{\prime}}}+\|[\langle D^{ \prime}\rangle^{\alpha+1},\varepsilon_{1}^{-1}]\nabla_{\perp}\mathcal{H}^{ \prime}\|_{L^{2}_{x^{\prime}}}\] \[\lesssim\|\partial_{x^{\prime}}\mu^{-1}\|_{L^{\infty}_{x^{\prime }}}\|\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{\prime}\|_{L^{2}_{x^{ \prime}}}+\|\langle D^{\prime}\rangle^{\alpha+1}\mu^{-1}\|_{L^{2}_{x^{\prime} }}\|(\nabla\times\mathcal{E}^{\prime})_{3}\|_{L^{\infty}_{x^{\prime}}}\] \[\quad+\|\partial_{x^{\prime}}\varepsilon_{1}^{-1}\|_{L^{\infty}_{ x^{\prime}}}\|\langle D^{\prime}\rangle^{\alpha+1}\mathcal{H}^{\prime}\|_{L^{2}_{x^{ \prime}}}+\|\langle D^{\prime}\rangle^{\alpha+1}\varepsilon_{1}^{-1}\|_{L^{2}_ {x^{\prime}}}\|\nabla_{\perp}\mathcal{H}^{\prime}\|_{L^{\infty}_{x^{\prime}}}.\] We have \(\|\partial_{x^{\prime}}\varepsilon_{1}^{-1}\|_{L^{\infty}_{x^{\prime}}} \lesssim C(\|\mathcal{E}\|_{L^{\infty}_{x^{\prime}}},\|\partial_{x^{\prime}} 
\mathcal{E}\|_{L^{\infty}_{x^{\prime}}})\) and therefore, by Sobolev embedding and Holder's inequality, there is \(\kappa>0\) such that

\[\|\partial_{x^{\prime}}\varepsilon_{1}^{-1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\lesssim T^{\kappa}C(\|\mathcal{E}^{\prime}\|_{L^{\infty}_{T}H^{s}},\|\partial_{x^{\prime}}\mathcal{E}^{\prime}\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}).\]

Secondly, by Moser estimates, ellipticity of \(\varepsilon_{1}\), and the fractional Leibniz rule we have

\[\|\langle D^{\prime}\rangle^{\alpha+1}(\varepsilon_{1}^{-1})\|_{L^{2}_{x^{\prime}}}\lesssim_{|\mathcal{E}^{\prime}|_{L^{\infty}_{x^{\prime}}}}\|\langle D^{\prime}\rangle^{\alpha+1}\varepsilon_{1}\|_{L^{2}_{x^{\prime}}}.\]

We write the components of \(\varepsilon_{1}\) as products of \(g_{1}\), \(\sqrt{g_{1}}\), and \(\mathcal{E}^{\prime}_{1}\), \(\mathcal{E}^{\prime}_{2}\). Recall that \((\varepsilon_{1})_{11}\), \((\varepsilon_{1})_{22}\) are reflected evenly, whereas \((\varepsilon_{1})_{21}\), \((\varepsilon_{1})_{12}\) are reflected oddly due to their internal structure. 
We use continuity of \(\mathrm{ext}_{N}\) and \(\mathrm{ext}_{D}\) to estimate for \(\alpha+1\leq 2\):

\[\|(\varepsilon_{1})_{ij}\|_{H^{\alpha+1}(\mathbb{R}^{2})}\lesssim\|(\varepsilon_{1})_{ij}\|_{H^{\alpha+1}(\mathbb{R}^{2}_{>0})}.\]

On the half-space, we can use smoothness of \(g\) and invariance of Sobolev spaces under multiplication with smooth functions to find

\[\|(\varepsilon_{1})_{ij}\|_{H^{\alpha+1}(\mathbb{R}^{2}_{>0})}\lesssim\sum_{m,n}\|\mathcal{E}^{\prime}_{m}\mathcal{E}^{\prime}_{n}\|_{H^{\alpha+1}(\mathbb{R}^{2}_{>0})}\lesssim\sum_{m,n}\|\mathcal{E}^{\prime}_{m}\mathcal{E}^{\prime}_{n}\|_{H^{\alpha+1}(\mathbb{R}^{2})}\lesssim\sum_{m,n}\|\mathcal{E}^{\prime}_{m}\|_{H^{\alpha+1}(\mathbb{R}^{2})}\|\mathcal{E}^{\prime}_{n}\|_{H^{\alpha+1}(\mathbb{R}^{2})}.\]

In the penultimate estimate, we switched back to the full space by appropriate extension of the components of \(\mathcal{E}^{\prime}\). In the ultimate estimate, we used that \(H^{s}(\mathbb{R}^{2})\) is a Banach algebra for \(s>1\). The second estimate

\[\|\langle D^{\prime}\rangle^{\alpha+1}\mu^{-1}\|_{L^{2}_{x^{\prime}}}\lesssim\|\langle D^{\prime}\rangle^{\alpha+1}g\|_{L^{2}(\mathbb{R}^{2}_{>0})}\lesssim 1\]

follows likewise by switching back to the half space and assuming that \(g\in C^{\infty}_{c}(\mathbb{R}^{2}_{>0})\). Here we use that we are dealing with a compact boundary. 
Consequently,

\[\big{\|}\|\langle D^{\prime}\rangle^{\alpha+1}\mu^{-1}\|_{L^{2}_{x^{\prime}}}\|(\nabla\times\mathcal{E}^{\prime})_{3}\|_{L^{\infty}_{x^{\prime}}}\big{\|}_{L^{2}_{T}}\lesssim T^{\frac{1}{4}}\|\partial_{x^{\prime}}\mathcal{E}^{\prime}\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}},\]
\[\big{\|}\|\langle D^{\prime}\rangle^{\alpha+1}(\varepsilon_{1}^{-1})\|_{L^{2}_{x^{\prime}}}\|\nabla_{\perp}\mathcal{H}^{\prime}\|_{L^{\infty}_{x^{\prime}}}\big{\|}_{L^{2}_{T}}\lesssim_{|\mathcal{E}^{\prime}|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}}}T^{\frac{1}{4}}\|\mathcal{E}^{\prime}\|_{L^{\infty}_{T}H^{\alpha+1}}^{2}\|\partial_{x^{\prime}}\mathcal{H}^{\prime}\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}.\]

Hence, for \(T\) small enough we can absorb these contributions into the left-hand side to find

\[\|\langle D^{\prime}\rangle(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}\lesssim_{T,\alpha}\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{\alpha+1}}+\|\langle D^{\prime}\rangle^{-\frac{3}{4}}\eta(x,D)\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{\prime}\|_{L^{2}_{T}L^{2}_{x^{\prime}}}.\]

It remains to find an estimate of \(\eta(x,D)\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{\prime}\) in terms of the charges and the lower order terms \(\|\langle D^{\prime}\rangle^{\alpha+1}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}L^{2}_{x^{\prime}}}\). 
We write \[\eta(x,D)\langle D^{\prime}\rangle^{\alpha+1}\mathcal{E}^{\prime}=\langle D^{ \prime}\rangle^{\alpha+1}\eta(x,D)\mathcal{E}^{\prime}+[\varepsilon_{1}, \langle D^{\prime}\rangle^{\alpha+1}]\partial_{x^{\prime}}\mathcal{E}^{\prime}.\] The commutator \[\|\langle D^{\prime}\rangle^{-\frac{3}{4}}[\varepsilon_{1},\langle D^{\prime }\rangle^{\alpha+1}]\partial_{x^{\prime}}\mathcal{E}^{\prime}\|_{L^{2}_{T}L^{ 2}_{x^{\prime}}}\lesssim\|[\varepsilon_{1},\langle D^{\prime}\rangle^{ \alpha+1}]\partial_{x^{\prime}}\mathcal{E}^{\prime}\|_{L^{2}_{T}L^{2}_{x^{ \prime}}}\] can be handled like above with the Kato-Ponce commutator estimate. We need to relate \(\eta(x,D)\mathcal{E}^{\prime}\) to the charges \(\nabla\cdot(\varepsilon\mathcal{E}^{\prime})=\frac{1}{\sqrt{g}}\nabla\cdot( \sqrt{g}g^{-1}(1+|\mathcal{E}^{\prime}|^{2}_{g^{-1}})\mathcal{E}^{\prime})\). A straight-forward computation yields \[\nabla\cdot(\sqrt{g}g^{-1}(1+|\mathcal{E}^{\prime}|^{2}_{g^{-1}})\mathcal{E}^ {\prime})=\eta(x,D)\mathcal{E}^{\prime}+O(\partial g(\mathcal{E}^{\prime})^{ 3}).\] Therefore, \[\|\langle D^{\prime}\rangle^{\alpha+\frac{1}{4}}\eta(x,D)\mathcal{E}^{\prime} \|_{L^{\infty}_{T}L^{2}_{x^{\prime}}}\lesssim\|\rho_{e}(0)\|_{H^{\alpha+\frac{ 1}{4}}}+\|\langle D^{\prime}\rangle^{\alpha+\frac{1}{4}}(\partial g(\mathcal{ E}^{\prime})^{3})\|_{L^{\infty}_{T}L^{2}_{x^{\prime}}}.\] We split the second term into monomials of \(\partial g\) and \(\mathcal{E}^{\prime}\) \[\partial g(\mathcal{E}^{\prime})^{3}=\sum_{k,i_{1},i_{2},i_{3}}(\partial g)_{ k}\mathcal{E}^{\prime}_{i_{1}}\mathcal{E}^{\prime}_{i_{2}}\mathcal{E}^{\prime}_{i_{3}},\] which gives a decomposition into even or odd functions. 
By continuity of the respective extension for \(\alpha+\frac{1}{2}\leqslant 2\), we find

\[\|(\partial g)_{k}\mathcal{E}^{\prime}_{i_{1}}\mathcal{E}^{\prime}_{i_{2}}\mathcal{E}^{\prime}_{i_{3}}\|_{L^{\infty}_{T}H^{\alpha+\frac{1}{4}}(\mathbb{R}^{2})}\lesssim\|(\partial g)_{k}\mathcal{E}^{\prime}_{i_{1}}\mathcal{E}^{\prime}_{i_{2}}\mathcal{E}^{\prime}_{i_{3}}\|_{L^{\infty}_{T}H^{\alpha+\frac{1}{4}}(\mathbb{R}^{2}_{>0})}\lesssim\|\mathcal{E}^{\prime}_{i_{1}}\mathcal{E}^{\prime}_{i_{2}}\mathcal{E}^{\prime}_{i_{3}}\|_{L^{\infty}_{T}H^{\alpha+\frac{1}{4}}(\mathbb{R}^{2}_{>0})}.\]

Here we used again invariance of Sobolev spaces under multiplication with smooth functions. Like above, we extend again to the full space to apply the fractional Leibniz rule and Sobolev embedding:

\[\|\mathcal{E}^{\prime}_{i_{1}}\mathcal{E}^{\prime}_{i_{2}}\mathcal{E}^{\prime}_{i_{3}}\|_{L^{\infty}_{T}H^{\alpha+\frac{1}{4}}(\mathbb{R}^{2}_{>0})}\lesssim\prod_{j=1}^{3}\|\mathcal{E}^{\prime}_{i_{j}}\|_{L^{\infty}_{T}H^{\alpha+\frac{1}{4}}(\mathbb{R}^{2})}\lesssim\|\mathcal{E}^{\prime}\|^{3}_{L^{\infty}_{T}H^{\alpha+1}}.\]

We conclude for \(\frac{5}{6}<\alpha\leqslant 1\) the existence of \(\kappa>0\) such that

\[\|\langle D^{\prime}\rangle(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}(\mathbb{R}^{2})}\lesssim_{T,\alpha}(1+T^{\kappa}C(\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{\alpha+1}},\|\partial_{x^{\prime}}(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{4}_{T}L^{\infty}_{x^{\prime}}}))\|(\mathcal{E}^{\prime},\mathcal{H}^{\prime})\|_{L^{\infty}_{T}H^{\alpha+1}}+T^{\frac{1}{4}}\|\rho_{e}(0)\|_{H^{\alpha+\frac{1}{4}}}.\]

This finishes the proof letting \(\alpha\to\frac{5}{6}\) and setting \(s=\alpha+1\).

### \(L^{2}\)-Lipschitz continuous dependence

We turn to \(L^{2}\)-Lipschitz continuous dependence for initial data at higher regularities. 
**Proposition 7.5**.: _Let \(s\in(\frac{11}{6},2]\), \(u^{1}_{0}\), \(u^{2}_{0}\in\mathcal{H}^{3}(\Omega)\) with \(\|u^{i}_{0}\|_{H^{s}(\Omega)}\leqslant\delta\ll 1\), \(i=1,2\), and suppose that for some \(\tilde{s}>\frac{13}{12}\) and \(D>0\) we have \(\|\rho^{i}_{e}(0)\|_{H^{\tilde{s}}}\leqslant D\); denote the corresponding solutions to (7.1) by \(u^{i}\). Then the following estimate holds:_

\[\|u^{1}-u^{2}\|_{L^{\infty}_{T}L^{2}(\Omega)}\leq C(\|u^{i}_{0}\|_{H^{s}(\Omega)},D)\|u^{1}_{0}-u^{2}_{0}\|_{L^{2}(\Omega)}.\]

Proof.: We analyze the equation satisfied by the difference of solutions \(v=u^{1}-u^{2}=(\mathcal{E}_{\Delta},\mathcal{H}_{\Delta})\):

\[\left\{\begin{array}{cc}\partial_{t}(\tilde{\varepsilon}\mathcal{E}_{\Delta})&=\nabla_{\perp}\mathcal{H}_{\Delta},\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\\ \partial_{t}\mathcal{H}_{\Delta}&=-(\nabla\times\mathcal{E}_{\Delta})_{3},\quad[(\mathcal{E}_{\Delta})_{||}]_{x^{\prime}\in\partial\Omega}=0\end{array}\right.\]

with \(\tilde{\varepsilon}=1+\mathcal{E}_{2}^{t}\mathcal{E}_{1}+|\mathcal{E}_{1}-\mathcal{E}_{2}|^{2}+\mathcal{E}_{1}\otimes\mathcal{E}_{2}+\mathcal{E}_{2}\otimes\mathcal{E}_{1}\). Hence, the integration-by-parts argument from the proof of Proposition 3.3 (we proved Proposition 3.3 for the diagonal Kerr permittivity, but it is straight-forward to obtain the below estimate for possibly off-diagonal \(\tilde{\varepsilon}\)) yields

\[\|v(t)\|_{L^{2}(\Omega)}\leqslant e^{C(\tilde{\delta})\int_{0}^{t}\|\partial_{t}\tilde{\varepsilon}(s)\|_{L^{\infty}(\Omega)}ds}\|v(0)\|_{L^{2}(\Omega)}\]

with \(\tilde{\delta}=\|u^{1}\|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}(\Omega)}+\|u^{2}\|_{L^{\infty}_{T}L^{\infty}_{x^{\prime}}(\Omega)}\). 
By the proof of Proposition 7.3 and Minkowski's inequality, we obtain for \(T=T(\|u^{i}_{0}\|_{H^{s}(\Omega)})\), \(\frac{11}{6}<s\leqslant 2\), and provided that \(\tilde{\delta}\) is small enough, the following for some \(\kappa>0\):

\[\|\partial_{t}\tilde{\varepsilon}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\Omega)}\lesssim_{\tilde{\delta}}\|\dot{u}_{1}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\Omega)}+\|\dot{u}_{2}\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}(\Omega)}\lesssim T^{\kappa}(\|u_{1}(0)\|_{H^{s}(\Omega)}+\|u_{2}(0)\|_{H^{s}(\Omega)}+D).\]

The proof is complete.

### Proof of continuous dependence via frequency envelopes

In this section we want to extend the data-to-solution mapping for (7.1) from \(\mathcal{H}^{3}(\Omega)\) to \(\mathcal{H}^{s}(\Omega)\) for \(\frac{11}{6}<s\leqslant 2\) under a boundedness condition on the charges. We shall use frequency envelopes, for which a regularization is required that is consistent with the compatibility conditions (in order to use the local existence provided in \(\mathcal{H}^{3}(\Omega)\)). The compatibility conditions for the Kerr nonlinearity are computed in the proof of Theorem A.1. In geodesic coordinates the boundary conditions are given by

\[[\mathcal{E}_{1}]_{x^{\prime}_{2}=0} =0, \tag{7.13}\] \[=0, \tag{7.14}\] \[=0. \tag{7.15}\]

For this reason (7.13) and (7.15) are automatically satisfied for \(\mathcal{E}_{0}\in\overline{C^{\infty}_{c}(\Omega)^{2}}^{|\cdot|_{H^{s}}}\), and \(\mathcal{H}\) satisfies Neumann boundary conditions in \(H^{3}(\Omega)\). We suppose in the following, by density and compactness of \(\partial\Omega\), that in geodesic coordinates

\[\operatorname{dist}(\operatorname{supp}(\mathcal{E}_{0}),\{x^{\prime}_{2}=0\})>0.\]

The components are extended to the full space by reflection: \(\mathcal{E}_{1}\) is reflected oddly, and \(\mathcal{E}_{2}\), \(\mathcal{H}\) are reflected evenly. 
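The compatibility of these odd/even reflections with mollification by a symmetric kernel can be checked numerically in one variable; the grid, profile, and mollifier below are illustrative choices (not taken from the text), sketching why a symmetric mollifier preserves the boundary condition.

```python
import numpy as np

# Grid symmetric about the interface {x = 0}; x[400 - m] = -x[400 + m] exactly.
dx = 0.01
x = (np.arange(801) - 400) * dx

# Model for the oddly reflected component E_1: an odd, rapidly decaying profile.
f = np.sin(x) * np.exp(-x**2)

# Even, nonnegative mollifier with unit mass, supported in |x| <= 1/n (here n = 8).
n = 8
psi = np.maximum(0.0, 1.0 - (n * x) ** 2)
psi /= psi.sum() * dx

# Discrete mollification f_n = f * psi_n.
f_n = np.convolve(f, psi, mode="same") * dx

# Convolving an odd function with an even kernel yields an odd function,
# so the trace at the interface stays zero: the analogue of [E_1]_{x=0} = 0.
assert np.allclose(f_n, -f_n[::-1], atol=1e-12)
assert abs(f_n[400]) < 1e-12
```

The same computation with an even profile (modeling \(\mathcal{E}_2\) or \(\mathcal{H}\)) shows that evenness is preserved as well.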
We regularize the components as follows:

\[f_{n}(x^{\prime}_{1},x^{\prime}_{2})=\int_{\mathbb{R}^{2}}f(x^{\prime}_{1}-y_{1},x^{\prime}_{2}-y_{2})n^{2}\psi(ny_{1})\psi(ny_{2})dy_{1}dy_{2}\]

with \(\psi\in C^{\infty}_{c}(\mathbb{R})\), which is symmetric at the origin, \(\psi\geqslant 0\), and \(\int_{\mathbb{R}}\psi(y)dy=1\). This regularization preserves the boundary conditions (for \(\mathcal{E}_{0n}\) we have to choose \(n\) large enough) and clearly \((\mathcal{E}_{0n},\mathcal{H}_{0n})\in H^{k}(\mathbb{R}^{2})^{3}\) for any \(k\in\mathbb{N}\). The regularization with parameter \(N\in 2^{\mathbb{N}}\) corresponds to a frequency truncation at frequencies \(N\). However, presently this truncation is not sharp in frequency space, but comes with a Schwartz tail; sharp frequency truncation does not preserve the boundary conditions. Now, for \(N\in 2^{\mathbb{N}_{0}}\), we let

\[P_{\leqslant N}\mathcal{E}_{i}=\mathcal{E}_{iN},\quad P_{\leqslant N}\mathcal{H}=\mathcal{H}_{N},\quad P_{N}=P_{\leqslant 2N}-P_{\leqslant N/2}.\]

We introduce the notation for \(M,N\in 2^{\mathbb{N}_{0}}\):

\[\big{[}\frac{M}{N}\big{]}=\min\big{(}\frac{M}{N},\frac{N}{M}\big{)}.\]

We can now define frequency envelopes for the problem at hand:

**Definition 7.6**.: \((c_{N})_{N\in 2^{\mathbb{N}_{0}}}\in\ell^{2}\) is called a frequency envelope for a function \(u\in H^{s}(\Omega)\), if it has the following properties:

* Energy bound: \[\|P_{N}u\|_{H^{s}}\leqslant c_{N}.\] (7.16)
* Slowly varying: For a suitable choice of \(\delta>0\) we have for all \(J,K\in 2^{\mathbb{N}_{0}}\): \[\frac{c_{K}}{c_{J}}\lesssim\max\big{(}\frac{J}{K},\frac{K}{J}\big{)}^{\delta}.\] (7.17)

The envelopes are called sharp, if they also satisfy

\[\|u\|_{H^{s}}^{2}\approx\sum_{N}c_{N}^{2}.\]

Sharp frequency envelopes exist for any \(\delta>0\). Indeed, we can start taking \(\widetilde{c}_{N}=N^{s}\|P_{N}u\|_{L^{2}}\) as an ansatz and define (cf. 
[10])

\[c_{N}=\sup_{M}\big{[}\frac{M}{N}\big{]}^{\delta}\widetilde{c}_{M}.\]

Indeed, (7.16) is clearly satisfied, while (7.17) follows from

\[c_{K}=\sup_{M}\big{[}\frac{M}{K}\big{]}^{\delta}\widetilde{c}_{M}\leqslant\sup_{M}\big{[}\frac{M}{J}\big{]}^{\delta}\widetilde{c}_{M}\times\max\big{(}\frac{K}{J},\frac{J}{K}\big{)}^{\delta}=c_{J}\max\big{(}\frac{K}{J},\frac{J}{K}\big{)}^{\delta}. \tag{7.18}\]

Finally, we have

\[c_{N}^{2}=\sup_{M}\big{[}\frac{M}{N}\big{]}^{2\delta}\widetilde{c}_{M}^{2}\leqslant\sum_{M}\big{[}\frac{M}{N}\big{]}^{2\delta}\widetilde{c}_{M}^{2},\]

which implies

\[\sum_{N}c_{N}^{2}\leqslant C_{\delta}\sum_{N}\widetilde{c}_{N}^{2}\approx\|u\|_{H^{s}}^{2}.\]

Lastly, the above definition of the frequency envelope satisfies the additional compactness property:

**Lemma 7.7**.: _Assume that the sequence \((u_{n})_{n\in\mathbb{N}_{0}}\) converges to \(u\) in \(H^{s}\). Then for any \(\delta>0\) the frequency envelopes defined by_

\[c_{K,n}=\sup_{M}\big{[}\frac{M}{K}\big{]}^{\delta}\widetilde{c}_{M,n},\qquad\widetilde{c}_{M,n}=\|P_{M}u_{n}\|_{H^{s}}\]

_converge in \(\ell^{2}\) to \((c_{K})\)._

Proof.: We observe that

\[|c_{K,n}-c_{K}|=\big{|}\sup_{M}\big{[}\frac{M}{K}\big{]}^{\delta}\widetilde{c}_{M,n}-\sup_{M}\big{[}\frac{M}{K}\big{]}^{\delta}\widetilde{c}_{M}\big{|}\leqslant\sup_{M}\big{[}\frac{M}{K}\big{]}^{\delta}|\widetilde{c}_{M,n}-\widetilde{c}_{M}|,\]

and the right-hand side tends to \(0\) in \(\ell^{2}_{K}\) as \(n\to\infty\).

We recall the following:

_Regularization:_ Let \(u_{0}\in H^{s}(\tilde{\Omega})\) and \((c_{N})_{N\in 2^{\mathbb{N}_{0}}}\) be a sharp frequency envelope for \(u_{0}\) in \(H^{s}\). 
Then, we have for \(u_{0}^{N}=P_{\leqslant N}u_{0}\) the following bounds: * Uniform bounds: \[\|P_{K}u_{0}^{N}\|_{H^{s}}\lesssim c_{K}.\] (7.19) * High frequency bounds: If \(0<\delta<j\), we have \[\|u_{0}^{N}\|_{H^{s+j}}\lesssim N^{j}c_{N}.\] (7.20) By the slowly varying property of frequency envelopes (7.17): \[\|u_{0}^{N}\|_{H^{s+j}}^{2}\lesssim\sum_{K\leqslant N}K^{2j}\|P_{K}u_{0}\|_{H^{s}}^{2}\leq\sum_{K\leqslant N}K^{2j}c_{K}^{2}\leq\sum_{K\leqslant N}K^{2(j-\delta)}N^{2\delta}c_{N}^{2}\sim N^{2j}c_{N}^{2}.\] * Difference bounds: \[\|u_{0}^{2N}-u_{0}^{N}\|_{L^{2}}\lesssim N^{-s}c_{N}.\] (7.21) * Limit as \(N\to\infty\): \(u_{0}=\lim_{N\to\infty}u_{0}^{N}\) in \(H^{s}\). Presently, the above properties are relevant for \(\frac{11}{6}<s<s+j\leq 3\) (for a proper choice of \(\delta>0\)). The regularized initial data give rise to a family of solutions in \(H^{3}\) by the local existence result in Theorem 7.2. _Uniform bounds:_ Proposition 7.3 yields a time interval of length \[T=T(\|u_{0}\|_{H^{s}},\|\rho_{e}\|_{H^{\frac{13}{12}+\epsilon}}),\] on which the solution exists, with \(T\) independent of the regularization parameter \(N\), provided that we can prove a uniform bound for the regularized charges. We show the following: **Lemma 7.8**.: _Let \(\tilde{s}=\frac{13}{12}+\epsilon\) with \(\epsilon\) chosen small enough.
We have the following estimate:_ \[\|\rho_{eN}\|_{H^{\tilde{s}}}\lesssim\|\mathcal{E}\|_{H^{\tilde{s}+\frac{1}{2}}}^{3}+\|\rho_{e}\|_{H^{\tilde{s}}}.\] Proof.: To carry out commutator estimates, we change to non-divergence form: \[\rho_{eN} =\frac{1}{\sqrt{g}}\partial_{1}(\sqrt{g}g_{1}\varepsilon_{N}\mathcal{E}_{1N})+\frac{1}{\sqrt{g}}\partial_{2}(\sqrt{g}g_{2}\varepsilon_{N}\mathcal{E}_{2N})\] \[=\frac{1}{\sqrt{g}}\partial_{1}(\sqrt{g}g_{1}(1+g_{1}|\mathcal{E}_{1N}|^{2}+g_{2}|\mathcal{E}_{2N}|^{2})\mathcal{E}_{1N})\] \[\quad+\frac{1}{\sqrt{g}}\partial_{2}(\sqrt{g}g_{2}(1+g_{1}|\mathcal{E}_{1N}|^{2}+g_{2}|\mathcal{E}_{2N}|^{2})\mathcal{E}_{2N})\] \[=\tilde{\varepsilon}_{ijN}\partial_{j}\mathcal{E}_{iN}+O(\partial g\mathcal{E}^{3}).\] The second term is clearly of lower order. By the fractional Leibniz rule we find \[\|\langle D^{\prime}\rangle^{\tilde{s}}(\tilde{\varepsilon}_{ijN}\partial_{j}\mathcal{E}_{iN})\|_{L^{2}(\mathbb{R}^{2})}\lesssim\|(\langle D^{\prime}\rangle^{\tilde{s}}\tilde{\varepsilon}_{ijN})\partial_{j}\mathcal{E}_{iN}\|_{L^{2}(\mathbb{R}^{2})}+\|\tilde{\varepsilon}_{ijN}\langle D^{\prime}\rangle^{\tilde{s}}\partial_{j}\mathcal{E}_{iN}\|_{L^{2}(\mathbb{R}^{2})}.\] The first term is acceptable by Hölder's inequality and Sobolev embedding, as \[\|(\langle D^{\prime}\rangle^{\tilde{s}}\tilde{\varepsilon}_{ijN})\partial_{j}\mathcal{E}_{iN}\|_{L^{2}(\mathbb{R}^{2})}\lesssim\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}\|_{L^{2}}^{2}\|\langle D^{\prime}\rangle^{\frac{3}{2}}\mathcal{E}\|_{L^{2}}.\] We turn to the second term: \[\|\tilde{\varepsilon}_{ijN}\langle D^{\prime}\rangle^{\tilde{s}}\partial_{j}\mathcal{E}_{iN}\|_{L^{2}}\leq\|(1-P_{\leqslant N})\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{iN}\|_{L^{2}}+\|\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{iN}\|_{L^{2}}.\] We compute for the first term \[\|(1-P_{\leq
N})\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{iN}\|_{L^{2}} \lesssim\|(1-P_{\leq N})\tilde{\varepsilon}_{ij}\|_{L^{4}}N\|\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{iN}\|_{L^{4}}\] \[\lesssim\|\partial_{x}(1-P_{\leq N})\tilde{\varepsilon}_{ij}\|_{L^{4}}\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}_{i}\|_{L^{2}}\] \[\lesssim\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}\|_{L^{2}}^{3}.\] Regarding the second term, we find \[\|\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{iN}\|_{L^{2}}\leq\|P_{\leq N}(\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{i})\|_{L^{2}}+\|[P_{\leq N},\tilde{\varepsilon}_{ij}]\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{i}\|_{L^{2}}.\] The first term is estimated by Hölder's inequality and Sobolev embedding as \[\|P_{\leq N}(\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{i})\|_{L^{2}} \leq\|\tilde{\varepsilon}_{ij}\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{i}\|_{L^{2}}\] \[\lesssim\|\langle D^{\prime}\rangle^{\tilde{s}}(\tilde{\varepsilon}_{ij}\partial_{j}\mathcal{E}_{i})\|_{L^{2}}+\|(\langle D^{\prime}\rangle^{\tilde{s}}\tilde{\varepsilon}_{ij})\partial_{j}\mathcal{E}_{i}\|_{L^{2}}\] \[\lesssim\|\rho_{e}\|_{H^{\tilde{s}}}+\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}\|_{L^{2}}^{3}.\] For the second term we introduce an additional Littlewood-Paley decomposition. Here \(P_{K}^{\prime}\) denotes the usual frequency localization in \(\mathbb{R}^{2}\).
We find \[[P_{\leq N},\tilde{\varepsilon}_{ij}]\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{i}=\sum_{M_{1}\ll M_{2}}[P_{\leq N},P_{M_{1}}^{\prime}\tilde{\varepsilon}_{ij}]\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}P_{M_{2}}^{\prime}\mathcal{E}_{i}+\sum_{M_{1}\gtrsim M_{2}}[P_{\leq N},P_{M_{1}}^{\prime}\tilde{\varepsilon}_{ij}]\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}P_{M_{2}}^{\prime}\mathcal{E}_{i}.\] The contribution of the second term is estimated as above by Hölder's inequality and distributing derivatives: \[\big{\|}\sum_{M_{1}\gtrsim M_{2}}[P_{\leq N},P_{M_{1}}^{\prime}\tilde{\varepsilon}_{ij}]\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}P_{M_{2}}^{\prime}\mathcal{E}_{i}\big{\|}_{L^{2}} \lesssim\sum_{M_{1}\gtrsim M_{2}}\|P_{M_{1}}^{\prime}\tilde{\varepsilon}_{ij}\|_{L^{4}}M_{2}\|\langle D^{\prime}\rangle^{\tilde{s}}\mathcal{E}_{i}\|_{L^{4}}\] \[\lesssim\|\langle D^{\prime}\rangle^{\frac{3}{2}+\epsilon}\tilde{\varepsilon}_{ij}\|_{L^{2}}\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}_{i}\|_{L^{2}}\] \[\lesssim\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}\|_{L^{2}}^{3}.\] We turn to the first term, for which we can suppose that \(M_{2}\lesssim N\) since \(P_{\leq N}P_{M_{2}}^{\prime}\) is smoothing for \(M_{2}\gg N\). By a standard kernel estimate, we find \[\|[P_{\leq N},P_{M_{1}}^{\prime}\tilde{\varepsilon}_{ij}]\partial_{j}\langle D^{\prime}\rangle^{\tilde{s}}P_{M_{2}}^{\prime}\mathcal{E}_{i}\|_{L^{2}}\lesssim N^{-1}M_{2}^{\frac{1}{2}}\|\partial_{x}P_{M_{1}}^{\prime}\tilde{\varepsilon}_{ij}\|_{L^{2}}\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}\|_{L^{2}},\] and summation over \(M_{1}\) and \(M_{2}\lesssim N\) bounds this contribution by \(\|\langle D^{\prime}\rangle^{\tilde{s}+\frac{1}{2}}\mathcal{E}\|_{L^{2}}^{3}\). The proof is complete.
We can now come back to the proof of the continuity of the flow. Proposition 7.5 yields \(L^{2}\)-bounds on the difference: * High frequency bounds for solutions (from (7.20)): \[\|u^{N}\|_{C([0,T],H^{s+j})}\lesssim N^{j}c_{N}\qquad\big{(}\frac{11}{6}<s<s+j\leq 3\big{)},\] * Difference bounds (from (7.21)) \[\|u^{2N}-u^{N}\|_{C([0,T],L^{2})}\lesssim N^{-s}c_{N}.\] Interpolation gives \[\|u^{2N}-u^{N}\|_{C([0,T],H^{m})}\lesssim c_{N}N^{-(s-m)}\] for \(0\leq m\leq 3\). By the \(L^{2}\)-bound and the telescoping sum \[u-u^{N}=\sum_{M\geq N}u^{2M}-u^{M},\] we obtain the estimate \[\|u-u^{N}\|_{C_{T}L^{2}}\lesssim N^{-s}.\] This yields convergence in \(L^{2}\). Moreover, the frequencies of \(u^{2N}-u^{N}\) decay rapidly off \(N\). Indeed, let \(M\ll N\). Then, we find by the \(L^{2}\)-estimate \[\|P_{M}(u^{2N}-u^{N})\|_{C_{T}H^{s}}\lesssim M^{s}\|u^{2N}-u^{N}\|_{L^{\infty}_{T}L^{2}}\lesssim\big{(}\frac{M}{N}\big{)}^{s}c_{N}.\] For \(M\gg N\), we can use the high frequency bounds for the solutions: \[M^{j}\|P_{M}(u^{2N}-u^{N})\|_{C_{T}H^{s}}\sim\|P_{M}(u^{2N}-u^{N})\|_{C_{T}H^{s+j}}\lesssim N^{j}c_{N}.\] Therefore, \[\|P_{M}(u^{2N}-u^{N})\|_{C_{T}H^{s}}\lesssim\big{(}\frac{N}{M}\big{)}^{j}c_{N},\] and as a consequence, \[\|P_{M}(u^{2N}-u^{N})\|_{C_{T}H^{s}}\lesssim\big{[}\frac{M}{N}\big{]}^{\kappa}c_{N},\qquad\kappa=\min(j,s),\] which implies \[\|P_{M}(u-u^{N})\|_{C_{T}H^{s}} \lesssim\sum_{P\geq N}\|P_{M}(u^{2P}-u^{P})\|_{C_{T}H^{s}}\lesssim\sum_{P\geq N}\big{[}\frac{P}{M}\big{]}^{\kappa}c_{P}=\sum_{P}\big{[}\frac{P}{M}\big{]}^{\kappa}c_{P}1_{P\geq N}.\] By Schur's Lemma, the operator with kernel \(K(M,P)=\big{[}\frac{M}{P}\big{]}^{\kappa}\) is bounded on \(\ell^{2}(2^{\mathbb{N}_{0}})\), which implies \[\|u-u^{N}\|_{C_{T}H^{s}}^{2}\leq C\sum_{M}\|P_{M}(u-u^{N})\|_{C_{T}H^{s}}^{2}\leq C\sum_{P\geq N}c_{P}^{2}\to_{N\to+\infty}0. \tag{7.22}\] This also proves convergence in \(H^{s}\) to \(u\in C_{T}H^{s}\).
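For completeness, the boundedness claimed via Schur's Lemma can be checked directly: the kernel is summable uniformly in each variable. Writing \(M=2^{m}\), \(P=2^{p}\) with \(m,p\in\mathbb{N}_{0}\),

```latex
\sup_{M\in 2^{\mathbb{N}_0}}\sum_{P\in 2^{\mathbb{N}_0}}\Big[\frac{M}{P}\Big]^{\kappa}
 = \sup_{m\geq 0}\sum_{p\geq 0} 2^{-\kappa|m-p|}
 \leq \sum_{j\in\mathbb{Z}} 2^{-\kappa|j|}
 = \frac{1+2^{-\kappa}}{1-2^{-\kappa}} < \infty \qquad (\kappa>0),
```

and by symmetry of the kernel the same bound holds with the roles of \(M\) and \(P\) exchanged, so Schur's test gives boundedness on \(\ell^{2}(2^{\mathbb{N}_{0}})\) with operator norm at most \((1+2^{-\kappa})/(1-2^{-\kappa})\).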
The proof of continuous dependence follows from this estimate and Lemma 7.7. Let \(u_{0,n}\to u_{0}\) in \(H^{s}\). Denote by \(c_{N,n}\) a family of frequency envelopes satisfying the conclusions of Lemma 7.7. Then, since the sequence \(u_{0,n}\) is bounded in \(H^{s}\), we get for \(T\) fixed \[\|u_{n}-u_{n}^{N}\|_{C_{T}H^{s}}^{2}\leq C\sum_{P\geq N}c_{P,n}^{2},\qquad\|u-u^{N}\|_{C_{T}H^{s}}^{2}\leq C\sum_{P\geq N}c_{P}^{2}.\] By Lemma 7.7, for any \(\epsilon>0\) we can choose \(N>0\) and \(n_{0}\) such that \[\forall n\geq n_{0}:\,\|u_{n}-u_{n}^{N}\|_{C_{T}H^{s}}\leq\epsilon,\quad\|u-u^{N}\|_{C_{T}H^{s}}\leq\epsilon.\] Now, writing \[\|u_{n}-u\|_{C_{T}H^{s}} \leq\|u_{n}-u_{n}^{N}\|_{C_{T}H^{s}}+\|u_{n}^{N}-u^{N}\|_{C_{T}H^{s}}+\|u^{N}-u\|_{C_{T}H^{s}}\leq 2\epsilon+\|u_{n}^{N}-u^{N}\|_{C_{T}H^{s}},\] since the initial data \(u_{0,n}^{N}\) and \(u_{0}^{N}\) are bounded in \(H^{s+j}\) (\(N\) fixed, \(n\to+\infty\)), we get from interpolation between the a priori estimate in \(H^{s+j}\) and the difference bounds in \(L^{2}\) that \[\lim_{n\to+\infty}\|u_{n}^{N}-u^{N}\|_{C_{T}H^{s}}=0.\] The proof of Theorem 7.1 is complete. ## Appendix A Local well-posedness for Maxwell equations in two dimensions at high regularity In the following we prove local well-posedness of the Maxwell system with Kerr nonlinearity in two dimensions. Let \(\Omega\subseteq\mathbb{R}^{2}\) be a smooth domain with compact boundary. We consider the Kerr system \[\left\{\begin{array}{ll}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H},\quad[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega}=0,\\ \partial_{t}\mathcal{H}&=-(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}),\quad(t,x^{\prime})\in\mathbb{R}\times\Omega\end{array}\right.\] (A.1) with \(\nabla_{\perp}=(\partial_{2},-\partial_{1})\) and \(\varepsilon=1+|\mathcal{E}|^{2}\). Moreover, we shall prove that the solution satisfies finite speed of propagation.
To this end, we perceive the two-dimensional case as a projection of the three-dimensional case. This allows us to apply the local well-posedness theory established by Spitz [25] and to transfer the finite speed of propagation from the three-dimensional to the two-dimensional case. We remark that a simpler version of the detailed argument likewise yields local well-posedness in \(\mathcal{H}^{3}(\Omega)\) in the autonomous case: \[\left\{\begin{array}{ll}\partial_{t}(\varepsilon\mathcal{E})&=\nabla_{\perp}\mathcal{H},\quad(t,x^{\prime})\in\mathbb{R}\times\Omega,\quad[\mathcal{E}\wedge\nu]_{x^{\prime}\in\partial\Omega}=0,\\ \partial_{t}(\mu\mathcal{H})&=-(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1})\end{array}\right.\] (A.2) with \(\varepsilon,\mu\in C^{\infty}(\Omega;\mathbb{R}_{>0})\), which satisfy the ellipticity conditions: \[\exists\lambda,\Lambda>0:\forall x^{\prime}\in\Omega:\lambda\leq\kappa(x^{\prime})\leq\Lambda,\quad\kappa\in\{\varepsilon,\mu\}.\] We show the following: **Theorem A.1**.: _There is \(\delta>0\) such that (A.1) is locally well-posed in \(\mathcal{H}^{3}(\Omega)\) for initial data \(\|(\mathcal{E},\mathcal{H})_{0}\|_{H^{3}(\Omega)}\leq\delta\). This means there is \(T=T(\|(\mathcal{E},\mathcal{H})_{0}\|_{H^{3}(\Omega)},\Omega)\) such that solutions \((\mathcal{E}^{i},\mathcal{H}^{i})\), \(i=1,2\), exist for \(0<t\leq T\) and depend continuously on the initial data: We have_ \[\|(\mathcal{E}^{1},\mathcal{H}^{1})(t)-(\mathcal{E}^{2},\mathcal{H}^{2})(t)\|_{H^{s}(\Omega)}\to 0\] _for \(\|(\mathcal{E}^{1},\mathcal{H}^{1})(0)-(\mathcal{E}^{2},\mathcal{H}^{2})(0)\|_{H^{s}(\Omega)}\to 0\)._ The proof is carried out in the following steps: * We extend the two-dimensional system to three dimensions by introducing a cylindrical tangent direction.
* We find the compatibility conditions for the Kerr nonlinearity in two and three dimensions and see that the cylindrical extension satisfies the compatibility conditions if the compatibility conditions in two dimensions are satisfied. * Finally, we recover the solutions to the two-dimensional system by restricting the solutions of the cylindrical extension and check the regularity. _Extension to the three-dimensional case._ We consider the non-compact cylinder \(\tilde{\Omega}=\Omega\times\mathbb{R}\), which is still smooth and whose boundary can be covered by finitely many charts. This makes the extended Maxwell system on \(\tilde{\Omega}\) still amenable to Spitz's local well-posedness theory: \[\left\{\begin{array}{ll}\partial_{t}(\varepsilon\mathcal{\tilde{E}})&=\nabla\times\mathcal{\tilde{H}},\quad[\mathcal{\tilde{E}}\wedge\nu]_{x^{\prime}\in\partial\tilde{\Omega}}=0,\quad[\mathcal{\tilde{H}}\cdot\nu]_{x^{\prime}\in\partial\tilde{\Omega}}=0,\\ \partial_{t}\mathcal{\tilde{H}}&=-\nabla\times\mathcal{\tilde{E}},\quad(t,x^{\prime})\in\mathbb{R}\times\tilde{\Omega}.\end{array}\right.\] (A.3) Let the two-dimensional initial data be given by \((\mathcal{E}_{0},\mathcal{H}_{0})\in\mathcal{H}^{3}(\Omega)\). We extend the initial data using a smooth cut-off in the cylindrical direction. Let \(\varphi\in C^{\infty}_{c}(\mathbb{R})\) with \(\varphi(x^{\prime})=1\) for \(|x^{\prime}|\leq R\) and \(\operatorname{supp}\varphi\subseteq B(0,2R)\).
The extended data is given by \[\tilde{\mathcal{E}}_{0}(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3})=\varphi(x^{\prime}_{3})\begin{pmatrix}\mathcal{E}_{01}(x^{\prime}_{1},x^{\prime}_{2})\\ \mathcal{E}_{02}(x^{\prime}_{1},x^{\prime}_{2})\\ 0\end{pmatrix},\qquad\tilde{\mathcal{H}}_{0}(x^{\prime}_{1},x^{\prime}_{2},x^{\prime}_{3})=\varphi(x^{\prime}_{3})\begin{pmatrix}0\\ 0\\ \mathcal{H}_{0}(x^{\prime}_{1},x^{\prime}_{2})\end{pmatrix}.\] (A.4) For \((\mathcal{E}_{0},\mathcal{H}_{0})\in H^{3}(\Omega)\) we clearly have \((\tilde{\mathcal{E}}_{0},\tilde{\mathcal{H}}_{0})\in H^{3}(\tilde{\Omega})\). _Compatibility conditions in two and three dimensions._ We turn to the compatibility conditions. First, we record the compatibility conditions in two dimensions by changing to geodesic coordinates: \[\left\{\begin{array}{rl}\partial_{t}(\tilde{\varepsilon}\mathcal{E})&=\sqrt{g}^{-1}g\nabla_{\perp}\mathcal{H},\quad\tilde{\varepsilon}=1+\langle\mathcal{E},g^{-1}\mathcal{E}\rangle,\quad g^{-1}=\begin{pmatrix}g^{1}&0\\ 0&1\end{pmatrix},\\ \partial_{t}\mathcal{H}&=-\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}),\quad(t,x^{\prime})\in\mathbb{R}\times\mathbb{R}_{>0}^{2}.\end{array}\right.\] (A.5) The tangential direction in \(\mathbb{R}_{>0}^{2}=\{(x^{\prime}_{1},x^{\prime}_{2})\in\mathbb{R}^{2}:x^{\prime}_{2}>0\}\) is \(e_{1}\). The boundary condition is given by \([\mathcal{E}_{1}]_{x^{\prime}_{2}=0}=0\). Since \(\partial_{t}\) is a tangential derivative, we obtain \[\partial_{t}(\tilde{\varepsilon}\mathcal{E}_{1})=\sqrt{g}^{-1}g^{1}\partial_{2}\mathcal{H}\Rightarrow[\partial_{2}\mathcal{H}]_{x^{\prime}_{2}=0}=0.\] This means that the first order compatibility conditions are Neumann boundary conditions for \(\mathcal{H}\).
We take an additional time derivative in geodesic coordinates to find the second order compatibility condition: \[0=[\partial_{t}^{2}(\tilde{\varepsilon}\mathcal{E})_{1}]_{x^{\prime}_{2}=0}=[\partial_{2}(\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))]_{x^{\prime}_{2}=0}.\] We turn to the nonlinear compatibility conditions for the Kerr nonlinearity in three dimensions. The conditions are computed again in geodesic normal coordinates with cometric given by \[g^{-1}=\begin{pmatrix}g^{11}&g^{12}&0\\ g^{21}&g^{22}&0\\ 0&0&1\end{pmatrix}.\] The equations are given by \[\left\{\begin{array}{rl}\partial_{t}((1+\langle\mathcal{E},g^{-1}\mathcal{E}\rangle)\mathcal{E})&=\sqrt{g}^{-1}g\nabla\times\mathcal{H},\\ \partial_{t}\mathcal{H}&=-\sqrt{g}^{-1}g\nabla\times\mathcal{E}.\end{array}\right.\] (A.6) The boundary conditions (zeroth order compatibility conditions) read \[[\mathcal{E}_{1}]_{x^{\prime}_{3}=0}=[\mathcal{E}_{2}]_{x^{\prime}_{3}=0}=[\mathcal{H}_{3}]_{x^{\prime}_{3}=0}=0.\] (A.7) We compute the compatibility conditions of first order by taking (A.6) and (7.13) together: \[[\partial_{2}\mathcal{H}_{3}-\partial_{3}\mathcal{H}_{2}]=0,\quad[\partial_{3}\mathcal{H}_{1}-\partial_{1}\mathcal{H}_{3}]=0.\] Since \(\partial_{1}\) and \(\partial_{2}\) are tangential derivatives, it follows that we have Neumann boundary conditions for \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\): \[[\partial_{3}\mathcal{H}_{1}]=[\partial_{3}\mathcal{H}_{2}]=0.\] (A.8) The time derivative of the third component \(\mathcal{H}_{3}\) yields no additional condition on \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\).
For the second order compatibility conditions, we take two time derivatives: \[\partial_{t}^{2}(\sqrt{g}g^{-1}\varepsilon\mathcal{E})=\nabla\times(\sqrt{g}^{-1}g(\nabla\times\mathcal{E})).\] Expanding gives \[\partial_{t}^{2}(\sqrt{g}g^{-1}\varepsilon\mathcal{E})_{1} =\partial_{2}(\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))\] \[\quad-\partial_{3}(\sqrt{g}^{-1}(g_{21}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{22}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3}))),\] \[\partial_{t}^{2}(\sqrt{g}g^{-1}\varepsilon\mathcal{E})_{2} =\partial_{3}(\sqrt{g}^{-1}(g_{11}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{12}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))\] \[\quad-\partial_{1}(\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1})).\] The left-hand side vanishes at the boundary. Moreover, \(\partial_{1}\) and \(\partial_{2}\) are tangential derivatives, which means that for \(i=1,2\) \[[\partial_{i}(\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_{2}\mathcal{E}_{1}))]_{x_{3}^{\prime}=0}=0.\] We find that the second order compatibility conditions are given by \[[\partial_{3}(\sqrt{g}^{-1}(g_{21}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{22}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))]_{x_{3}^{\prime}=0}=0,\] \[[\partial_{3}(\sqrt{g}^{-1}(g_{11}(\partial_{2}\mathcal{E}_{3}-\partial_{3}\mathcal{E}_{2})+g_{12}(\partial_{3}\mathcal{E}_{1}-\partial_{1}\mathcal{E}_{3})))]_{x_{3}^{\prime}=0}=0.\] In addition, differentiating (A.8) in time, we find \[[\partial_{3}\partial_{t}\mathcal{H}_{1}]=[\partial_{3}^{2}\mathcal{E}_{1}]-[\partial_{1}\partial_{3}\mathcal{E}_{3}]=[\partial_{3}^{2}\mathcal{E}_{1}]=0.\] #### Relating the solutions in two and three dimensions To relate the boundary conditions in two and three dimensions after cylindrical extension, we note that we can extend the geodesic coordinates in two dimensions \[g^{-1}=\begin{pmatrix}g^{1}&0\\ 0&1\end{pmatrix}\] such that \(e_{1}\) denotes the tangential and \(e_{2}\) the normal
direction trivially as \[\tilde{g}^{-1}=\begin{pmatrix}g^{1}&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\] such that \(e_{3}\) denotes the second tangential direction. The boundary conditions in geodesic coordinates in two dimensions are given by \[[\mathcal{E}_{1}]_{x_{2}^{\prime}=0} =0,\] \[[\partial_{2}\mathcal{H}]_{x_{2}^{\prime}=0} =0,\] (A.9) \[[\partial_{2}(\sqrt{g}^{-1}(\partial_{1}\mathcal{E}_{2}-\partial_ {2}\mathcal{E}_{1}))]_{x_{2}^{\prime}=0} =0.\] In three dimensions we find \[[\mathcal{\tilde{E}}_{1}]_{x_{2}^{\prime}=0} =[\mathcal{\tilde{E}}_{3}]_{x_{2}^{\prime}=0}=[\mathcal{\tilde{H }}_{2}]_{x_{2}^{\prime}=0}=0,\] \[[\partial_{2}\mathcal{\tilde{H}}_{1}]_{x_{2}^{\prime}=0} =[\partial_{2}\mathcal{\tilde{H}}_{3}]_{x_{2}^{\prime}=0}=0,\] (A.10) \[[\partial_{2}(\sqrt{g}^{-1}(\partial_{1}\mathcal{\tilde{E}}_{2}- \partial_{2}\mathcal{\tilde{E}}_{1}))]_{x_{2}^{\prime}=0} =0,\quad[\partial_{2}(\sqrt{g}^{-1}g_{1}(\partial_{2}\mathcal{ \tilde{E}}_{3}-\partial_{3}\mathcal{\tilde{E}}_{2}))]_{x_{2}^{\prime}=0}=0.\] It turns out that the conditions in three dimensions either follow from the conditions in two dimensions or from cylindrical extension. Indeed, \[[\mathcal{\tilde{E}}_{1}]_{x_{2}^{\prime}=0}=[\partial_{2}(\sqrt{g}^{-1}( \partial_{1}\mathcal{\tilde{E}}_{2}-\partial_{2}\mathcal{\tilde{E}}_{1}))]_{x _{2}^{\prime}=0}=0\] is immediate from (A.9). Since \(e_{3}\) is the cylindrical direction, we find by the definition of the extended fields \[[\mathcal{\tilde{E}}_{3}]_{x_{2}^{\prime}=0}=[\mathcal{\tilde{H }}_{2}]_{x_{2}^{\prime}=0}=[\partial_{2}\mathcal{\tilde{H}}_{1}]_{x_{2}^{\prime} =0}=[\partial_{2}\mathcal{\tilde{H}}_{3}]_{x_{2}^{\prime}=0}=0,\] \[[\partial_{2}(\sqrt{g}^{-1}g_{1}(\partial_{2}\mathcal{\tilde{E}}_{3 }-\partial_{3}\mathcal{\tilde{E}}_{2}))]_{x_{2}^{\prime}=0}=0.\] This yields local-in-time solutions to (A.3) by applying [25, Theorem 5.3]. 
Next, we argue that the solutions \((\tilde{\mathcal{E}},\tilde{\mathcal{H}})\) in \(\Omega\times\mathbb{R}\) to (A.3) yield solutions to (A.1). By finite speed of propagation there is \(T=T(\tilde{\Omega})=T(\Omega)\) such that for \(0\leq t\leq T\) \[\tilde{\mathcal{E}}_{3}(t,x_{1}^{\prime},x_{2}^{\prime},0)=\tilde{\mathcal{H} }_{1}(t,x_{1}^{\prime},x_{2}^{\prime},0)=\tilde{\mathcal{H}}_{2}(t,x_{1}^{ \prime},x_{2}^{\prime},0)=0.\] Secondly, we argue that \(\tilde{\mathcal{E}}_{1}\), \(\tilde{\mathcal{E}}_{2}\), and \(\tilde{\mathcal{H}}_{3}\) do not depend on the cylindrical coordinate \(x_{3}^{\prime}\) for \(0\leq t\leq T\). For the components \(i=1,2\) of \(\tilde{\mathcal{E}}\) this follows from (A.3): \[\partial_{t}\tilde{\mathcal{H}}_{1}=0=\partial_{2}\tilde{\mathcal{E}}_{3}- \partial_{3}\tilde{\mathcal{E}}_{2} \Rightarrow\ \partial_{3}\tilde{\mathcal{E}}_{2}=0,\] \[\partial_{t}\tilde{\mathcal{H}}_{2}=0=\partial_{1}\tilde{\mathcal{E}}_{3}- \partial_{3}\tilde{\mathcal{E}}_{1}\ \Rightarrow\ \partial_{3}\tilde{\mathcal{E}}_{1}=0.\] For \(\tilde{\mathcal{H}}\), we observe that \(\nabla\cdot\tilde{\mathcal{H}}_{0}(x_{1}^{\prime},x_{2}^{\prime},0)=0\). 
It follows from the time evolution that \(\nabla\cdot\tilde{\mathcal{H}}(t,x_{1}^{\prime},x_{2}^{\prime},0)=0\), and consequently, for \(0<t\leq T\) we find \[\partial_{1}\tilde{\mathcal{H}}_{1}+\partial_{2}\tilde{\mathcal{H}}_{2}+\partial_{3}\tilde{\mathcal{H}}_{3}=0\Rightarrow\partial_{3}\tilde{\mathcal{H}}_{3}=0.\] Hence, we retrieve local-in-time solutions to (A.1) by restricting solutions to (A.3): \[(\mathcal{E}_{1},\mathcal{E}_{2},\mathcal{H})(t,x_{1}^{\prime},x_{2}^{\prime})=(\tilde{\mathcal{E}}_{1}(t,x_{1}^{\prime},x_{2}^{\prime},0),\tilde{\mathcal{E}}_{2}(t,x_{1}^{\prime},x_{2}^{\prime},0),\tilde{\mathcal{H}}_{3}(t,x_{1}^{\prime},x_{2}^{\prime},0)).\] The restriction is well-defined by Sobolev embedding, which yields that \((\tilde{\mathcal{E}},\tilde{\mathcal{H}})(t)\in C^{1,\frac{1}{2}}(\tilde{\Omega})\). It remains to check that the solutions obtained from restriction are in \(H^{3}(\Omega)\). To this end, we note that by independence of \(x_{3}^{\prime}\) for \(0\leq t\leq T\) and \(x_{3}^{\prime}\in[-\varepsilon,\varepsilon]\): \[(\tilde{\mathcal{E}}_{1},\tilde{\mathcal{E}}_{2},\tilde{\mathcal{H}}_{3})(t,x_{1}^{\prime},x_{2}^{\prime},x_{3}^{\prime})=(\tilde{\mathcal{E}}_{1},\tilde{\mathcal{E}}_{2},\tilde{\mathcal{H}}_{3})(t,x_{1}^{\prime},x_{2}^{\prime},0).\] Hence, \((\tilde{\mathcal{E}},\tilde{\mathcal{H}})(t)\in H^{3}(\tilde{\Omega})\) implies that \((\mathcal{E},\mathcal{H})(t)\in H^{3}(\Omega)\). ## Appendix B Strichartz estimates for Maxwell equations in the full space revisited ### Strichartz estimates for Maxwell equations in three dimensions Here we show Strichartz estimates away from the boundary based on Strichartz estimates in the full space.
Recall that we can find \(T\) small enough such that \((\mathcal{E},\mathcal{H})(t)\) within \(\Omega^{\text{int}}=\{x\in\Omega:d(x)>\varepsilon/2\}\) only depends on initial data in the interior \(\tilde{\Omega}^{\text{int}}=\{x\in\Omega:d(x)>\varepsilon/4\}\), and the solution does not reach the boundary for times \(t\leq T\). We prove that \[\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{T}L^{q}(\Omega^{\text{int}})}\lesssim\| (\mathcal{E}_{0},\mathcal{H}_{0})\|_{H^{s}(\Omega)}+\|\rho_{e}(0)\|_{H^{s-1+ \frac{1}{p}}}.\] (B.1) A difficulty in applying the results from [17] arises as these are formulated in terms of \(\mathcal{D}=\varepsilon\mathcal{E}\) and \(\mathcal{B}=\mu\mathcal{H}\) and with negative derivatives on the left-hand side, which requires the use of commutator estimates to find Strichartz estimates like in (B.1). **Theorem B.1** ([17, Theorem 1.3]).: _Let \(\varepsilon_{1},\mu_{1}\in C^{1}(\mathbb{R}\times\mathbb{R}^{3};\mathbb{R})\) and define \(\varepsilon=\text{diag}(\varepsilon_{1},\varepsilon_{1},\varepsilon_{1}): \mathbb{R}\times\mathbb{R}^{3}\to\mathbb{R}^{3\times 3}\), \(\mu=\text{diag}(\mu_{1},\mu_{1},\mu_{1}):\mathbb{R}\times\mathbb{R}^{3}\to \mathbb{R}^{3\times 3}\) be matrix-valued functions, which satisfy (1.3) and \(\partial_{x}^{2}\varepsilon\in L^{1}_{t}L^{\infty}_{x^{\prime}}\) and \(\partial_{x}^{2}\mu\in L^{1}_{t}L^{\infty}_{x^{\prime}}\). 
Let \((s,p,q)\) be wave Strichartz admissible in three dimensions, i.e.,_ \[2\leq p\leq\infty,\;2\leq q<\infty,\quad\frac{2}{p}+\frac{2}{q}\leq 1,\quad(p,q)\neq(2,\infty),\quad s=3\big{(}\frac{1}{2}-\frac{1}{q}\big{)}-\frac{1}{p}.\] _Let \(u=(u_{1},\ldots,u_{6})=(u^{(1)},u^{(2)}):\mathbb{R}\times\mathbb{R}^{3}\to\mathbb{R}^{3}\times\mathbb{R}^{3}\), and_ \[\tilde{P}=\begin{pmatrix}\partial_{t}&-\nabla\times(\mu^{-1}\cdot)\\ \nabla\times(\varepsilon^{-1}\cdot)&\partial_{t}\end{pmatrix}.\] _The following estimate holds:_ \[\begin{split}\||D^{\prime}|^{-s}u\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}})}&\lesssim\nu^{\frac{1}{p}}\|u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\nu^{-\frac{1}{p^{\prime}}}\|\tilde{P}(x,D)u\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\\ &\quad+T^{\frac{1}{p}}\big{(}\||D^{\prime}|^{-1+\frac{1}{p}}(\nabla\cdot u^{(1)},\nabla\cdot u^{(2)})\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}\\ &\quad+\||D^{\prime}|^{-1+\frac{1}{p}}\partial_{t}(\nabla\cdot u^{(1)},\nabla\cdot u^{(2)})\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\big{)},\end{split}\] (B.2) _whenever the right-hand side is finite, provided that \(\nu\geq 1\), and \(T\|\partial_{x}^{2}\varepsilon\|_{L^{1}_{t}L^{\infty}_{x^{\prime}}}+T\|\partial_{x}^{2}\mu\|_{L^{1}_{t}L^{\infty}_{x^{\prime}}}\leq\nu^{2}\)._ For the present application, we can fix \(T\) and \(\nu\) such that the solutions stay away from the boundary (using finite speed of propagation).
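For orientation, a representative admissible triple can be read off from the conditions above: taking \(p=q\) forces \(p\geq 4\), and \(p=q=4\) saturates the scaling line,

```latex
p=q=4:\quad \frac{2}{p}+\frac{2}{q}=\frac{1}{2}+\frac{1}{2}=1,\qquad
 s=3\Big(\frac{1}{2}-\frac{1}{4}\Big)-\frac{1}{4}=\frac{1}{2},
```

so \((s,p,q)=(\tfrac{1}{2},4,4)\) is admissible, while \(p=\infty\), \(q=2\) gives \(s=3(\tfrac{1}{2}-\tfrac{1}{2})-0=0\), corresponding to the \(L^{\infty}_{t}L^{2}_{x^{\prime}}\) energy bound.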
We have by Theorem B.1 \[\begin{split}\|\langle D^{\prime}\rangle^{-s}u\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}}(\mathbb{R}^{3}))}&\lesssim_{T,\varepsilon,\mu}\|u\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\tilde{P}_{1}(x,D)u\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\\ &\quad+\||D^{\prime}|^{-1+\frac{1}{p}}(\nabla\cdot u^{(1)},\nabla\cdot u^{(2)})\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}\\ &\quad+\||D^{\prime}|^{-1+\frac{1}{p}}\partial_{t}(\nabla\cdot u^{(1)},\nabla\cdot u^{(2)})\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\end{split}\] (B.3) with \[\tilde{P}_{1}=\begin{pmatrix}\partial_{t}&-\mu^{-1}\nabla\times\\ \varepsilon^{-1}\nabla\times&\partial_{t}\end{pmatrix}\] by Hölder's inequality. We apply (B.3) to \(u=\langle D^{\prime}\rangle^{s}(\varepsilon\mathcal{E},\mu\mathcal{H})\) to find \[\begin{split}\|(\varepsilon\mathcal{E},\mu\mathcal{H})\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}}(\mathbb{R}^{3}))}&\lesssim_{T,\varepsilon,\mu}\|\langle D^{\prime}\rangle^{s}(\varepsilon\mathcal{E},\mu\mathcal{H})\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\tilde{P}_{1}(\langle D^{\prime}\rangle^{s}(\varepsilon\mathcal{E},\mu\mathcal{H}))\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\\ &\quad+\|\langle D^{\prime}\rangle^{s}|D^{\prime}|^{-1+\frac{1}{p}}\rho_{em}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{s}|D^{\prime}|^{-1+\frac{1}{p}}\partial_{t}\rho_{em}\|_{L^{1}_{t}L^{2}_{x^{\prime}}}\end{split}\] with \(\rho_{em}=(\nabla\cdot(\varepsilon\mathcal{E}),\nabla\cdot(\mu\mathcal{H}))\).
By uniform ellipticity of \(\varepsilon\) and \(\mu\), we find \[\|(\varepsilon\mathcal{E},\mu\mathcal{H})\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}})}\gtrsim\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}})},\] and by the fractional Leibniz rule, we obtain \[\|\langle D^{\prime}\rangle^{s}(\varepsilon\mathcal{E},\mu\mathcal{H})\|_{L^{2}_{x^{\prime}}}\lesssim_{\|(\varepsilon,\mu)\|_{C^{[s]+1}_{x}}}\|\langle D^{\prime}\rangle^{s}(\mathcal{E},\mathcal{H})\|_{L^{2}_{x^{\prime}}}.\] Moreover, the charge terms are already in suitable form. By commutator estimates we shall argue that \[\|\tilde{P}_{1}\langle D^{\prime}\rangle^{s}(\varepsilon\mathcal{E},\mu\mathcal{H})\|_{L^{2}_{x^{\prime}}}\lesssim\|\langle D^{\prime}\rangle^{s}P(\varepsilon\mathcal{E},\mu\mathcal{H})\|_{L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{s}(\mathcal{E},\mathcal{H})\|_{L^{2}_{x^{\prime}}}.\] (B.4) Above, \(P\) denotes the Maxwell operator for \((\mathcal{E},\mathcal{H})\): \[P=\begin{pmatrix}\partial_{t}(\varepsilon\cdot)&-\nabla\times\\ \nabla\times&\partial_{t}(\mu\cdot)\end{pmatrix}.\] To prove (B.4), we use the fractional Leibniz rule (cf. [7]) \[\|\langle D^{\prime}\rangle^{\rho}(fg)\|_{L^{2}_{x^{\prime}}}\lesssim\|\langle D^{\prime}\rangle^{\rho}f\|_{L^{2}_{x^{\prime}}}\|g\|_{L^{\infty}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{\rho}g\|_{L^{2}_{x^{\prime}}}\|f\|_{L^{\infty}_{x^{\prime}}}\quad(\rho\geq 0)\] and the following elementary commutator estimate: **Lemma B.2**.: _Let \(X=B^{1}_{\infty,2}(\mathbb{R}^{d})\cap C^{0,1}(\mathbb{R}^{d})\).
The following estimate holds:_ \[\|[\varepsilon,\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}u\|_{L^{2}_{x^{\prime}}(\mathbb{R}^{d})}\lesssim\|\varepsilon\|_{X}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}_{x^{\prime}}(\mathbb{R}^{d})}\text{ for }0<\rho\leq 1,\] (B.5) \[\|[\varepsilon,\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}u\|_{L^{2}_{x^{\prime}}(\mathbb{R}^{d})}\lesssim\|\varepsilon\|_{B^{\rho}_{\infty,2}(\mathbb{R}^{d})}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}(\mathbb{R}^{d})}\text{ for }\rho>1.\] (B.6)

Proof.: We use a Littlewood-Paley decomposition: \[\|[\varepsilon,\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}u\|_{L^{2}_{x^{\prime}}}^{2}=\sum_{N\geq 1}\|P_{N}[\varepsilon,\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}u\|_{L^{2}_{x^{\prime}}}^{2}.\] We write \[P_{N}[\varepsilon,\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}u=P_{N}[\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}\tilde{P}_{N}u+P_{N}[\varepsilon_{\sim N},\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}P_{<N/8}u+P_{N}[\varepsilon_{\gtrsim N},\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}P_{\gtrsim N}u.\] (B.7) We estimate the first term in (B.7): Rewrite \[P_{N}[\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\tilde{P}_{N}\partial_{x^{\prime}}u=P_{N}\partial_{x^{\prime}}[\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\tilde{P}_{N}u-P_{N}[\partial_{x^{\prime}}\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\tilde{P}_{N}u.\] (B.8) The second term in (B.8) is directly estimated by \[\|P_{N}[\partial_{x^{\prime}}\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\tilde{P}_{N}u\|_{L^{2}_{x^{\prime}}}\lesssim N^{\rho}\|\partial_{x^{\prime}}\varepsilon\|_{L^{\infty}_{x^{\prime}}}\|\tilde{P}_{N}u\|_{L^{2}_{x^{\prime}}}.\] For the first term in (B.8) we write \[\|P_{N}\partial_{x^{\prime}}[\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\tilde{P}_{N}u\|_{L^{2}_{x^{\prime}}}\lesssim N\|P_{N}[\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}\tilde{\tilde{P}}_{N}]\tilde{P}_{N}u\|_{L^{2}_{x^{\prime}}}.\] Let \(K_{N}\) denote the kernel of \(\langle D^{\prime}\rangle^{\rho}\tilde{\tilde{P}}_{N}\): \[\langle D^{\prime}\rangle^{\rho}\tilde{\tilde{P}}_{N}f(x)=\int_{\mathbb{R}^{d}}K_{N}(x-y)f(y)dy.\] We have the pointwise kernel estimate \[|K_{N}(x)|\lesssim N^{\rho}N^{d}(1+N|x|)^{-M}\text{ for any }M\geq 1.\] (B.9) This follows from \(K_{N}(x)=\int e^{ix\cdot\xi}a(N^{-1}\xi)\langle\xi\rangle^{\rho}d\xi\) with \(a\in C^{\infty}_{c}(B(0,4)\backslash B(0,1/4))\), rescaling, and non-stationary phase. Therefore, by the mean-value theorem, \[|\varepsilon_{<N/8}(x)K_{N}(x-y)-K_{N}(x-y)\varepsilon_{<N/8}(y)|\lesssim|x-y||K_{N}(x-y)|\,\|\partial_{x^{\prime}}\varepsilon\|_{L^{\infty}_{x^{\prime}}}.\] An application of Young's inequality gives \[\|P_{N}[\varepsilon_{<N/8},\langle D^{\prime}\rangle^{\rho}]\tilde{P}_{N}u\|_{L^{2}_{x^{\prime}}}\lesssim N^{\rho-1}\|\partial_{x^{\prime}}\varepsilon\|_{L^{\infty}_{x^{\prime}}}\|\tilde{P}_{N}u\|_{L^{2}_{x^{\prime}}}.\] This shows (B.5) and (B.6) in the considered cases by square summation. We turn to the second term in (B.7), where we distinguish \(0<\rho\leq 1\) and \(\rho>1\). We have for \(0<\rho\leq 1\) \[\|P_{N}[\varepsilon_{\sim N},\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}P_{<N/8}u\|_{L^{2}_{x^{\prime}}}\lesssim N\|\varepsilon_{\sim N}\|_{L^{\infty}_{x^{\prime}}}\|\langle D^{\prime}\rangle^{\rho}P_{<N/8}u\|_{L^{2}_{x^{\prime}}}\lesssim N\|\varepsilon_{\sim N}\|_{L^{\infty}}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}_{x^{\prime}}}\] with straightforward square summation. 
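For the reader's convenience, the rescaling behind the kernel bound (B.9) used above can be made explicit; the following carries out the steps already indicated in the text (rescaling and non-stationary phase), without tracking constants:

```latex
% Substituting \xi = N\eta in the kernel representation:
K_N(x) = \int_{\mathbb{R}^d} e^{i x\cdot\xi}\, a(N^{-1}\xi)\,\langle\xi\rangle^{\rho}\, d\xi
       = N^{d}\int_{\mathbb{R}^d} e^{i N x\cdot\eta}\, a(\eta)\,\langle N\eta\rangle^{\rho}\, d\eta.
% On supp(a) \subseteq B(0,4)\setminus B(0,1/4) we have |\eta| \sim 1, hence
% \langle N\eta\rangle^{\rho} \sim N^{\rho} and
% |\partial_{\eta}^{\alpha}\big(a(\eta)\langle N\eta\rangle^{\rho}\big)| \lesssim_{\alpha} N^{\rho}.
% For N|x| \geq 1, M-fold integration by parts in \eta gains a factor (N|x|)^{-M}:
|K_N(x)| \lesssim_M N^{d+\rho}\,(N|x|)^{-M},
% while for N|x| \leq 1 the trivial bound |K_N(x)| \lesssim N^{d+\rho} applies.
% Combining both regimes yields |K_N(x)| \lesssim_M N^{\rho} N^{d} (1+N|x|)^{-M}, i.e., (B.9).
```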
For \(\rho>1\), we obtain \[\|P_{N}[\varepsilon_{\sim N},\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}P_{<N/8}u\|_{L^{2}_{x^{\prime}}}\lesssim N^{\rho}\|\varepsilon_{\sim N}\|_{L^{\infty}_{x^{\prime}}}\|\partial_{x^{\prime}}P_{<N/8}u\|_{L^{2}_{x^{\prime}}}.\] It remains to estimate the third term in (B.7), which is rewritten as \[P_{N}[\varepsilon_{\gtrsim N},\langle D^{\prime}\rangle^{\rho}]\partial_{x^{\prime}}P_{\gtrsim N}u=P_{N}\partial_{x^{\prime}}[\varepsilon_{\gtrsim N},\langle D^{\prime}\rangle^{\rho}]P_{\gtrsim N}u-P_{N}[\partial_{x^{\prime}}\varepsilon_{\gtrsim N},\langle D^{\prime}\rangle^{\rho}]P_{\gtrsim N}u.\] (B.10) We write the first term in (B.10) as \[P_{N}\partial_{x^{\prime}}[\varepsilon_{\gtrsim N},\langle D^{\prime}\rangle^{\rho}]P_{\gtrsim N}u=P_{N}\partial_{x^{\prime}}(\varepsilon_{\gtrsim N}\langle D^{\prime}\rangle^{\rho}P_{\gtrsim N}u)-P_{N}\partial_{x^{\prime}}\langle D^{\prime}\rangle^{\rho}(\varepsilon_{\gtrsim N}P_{\gtrsim N}u).\] (B.11) For the first term in (B.11) we find by the Cauchy-Schwarz inequality \[\sum_{N\geq 1}\|P_{N}\partial_{x^{\prime}}(\varepsilon_{\gtrsim N}\langle D^{\prime}\rangle^{\rho}P_{\gtrsim N}u)\|_{L^{2}}^{2}\lesssim\sum_{N\geq 1}N^{2}\sum_{M\gtrsim N}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}.\] (B.12) Indeed, we have \[\|\sum_{M\gtrsim N}\varepsilon_{M}\langle D^{\prime}\rangle^{\rho}P_{M}u\|_{L^{2}}^{2}\lesssim\big\|\big(\sum_{M\gtrsim N}|\varepsilon_{M}|^{2}\big)^{\frac{1}{2}}\big(\sum_{M\gtrsim N}|P_{M}\langle D^{\prime}\rangle^{\rho}u|^{2}\big)^{\frac{1}{2}}\big\|_{L^{2}}^{2}\lesssim\big\|\big(\sum_{M\gtrsim N}|\varepsilon_{M}|^{2}\big)^{\frac{1}{2}}\big\|_{L^{\infty}}^{2}\big\|\big(\sum_{M\gtrsim N}|P_{M}\langle D^{\prime}\rangle^{\rho}u|^{2}\big)^{\frac{1}{2}}\big\|_{L^{2}}^{2}\lesssim\big(\sum_{M\gtrsim N}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\big)\|P_{\gtrsim N}\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}.\] We conclude the estimate (B.12) by changing the order of summation: \[\lesssim\sum_{M\gtrsim 1}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\sum_{1\lesssim N\lesssim M}N^{2}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}\lesssim\sum_{M\gtrsim 1}M^{2}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}\lesssim\|\varepsilon\|_{B_{\infty,2}^{1}}^{2}\|u\|_{H^{\rho}}^{2}.\] For the second term in (B.11) we find \[\|P_{N}\partial_{x^{\prime}}\langle D^{\prime}\rangle^{\rho}(\varepsilon_{\gtrsim N}P_{\gtrsim N}u)\|_{L^{2}}\lesssim N^{\rho+1}\|P_{N}\big(\sum_{M\gtrsim N}\varepsilon_{M}P_{M}u\big)\|_{L^{2}}\lesssim N^{\rho+1}\|\big(\sum_{M\gtrsim N}|\varepsilon_{M}|^{2}\big)^{\frac{1}{2}}\big(\sum_{M\gtrsim N}|P_{M}u|^{2}\big)^{\frac{1}{2}}\|_{L^{2}}\lesssim N^{\rho+1}\big(\sum_{M\gtrsim N}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\big)^{\frac{1}{2}}\|P_{\gtrsim N}u\|_{L^{2}}.\] For this reason we have \[\sum_{N\geq 1}\|P_{N}\partial_{x^{\prime}}\langle D^{\prime}\rangle^{\rho}(\varepsilon_{\gtrsim N}P_{\gtrsim N}u)\|_{L^{2}}^{2}\lesssim\sum_{N\geq 1}N^{2\rho}N^{2}\sum_{M\gtrsim N}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\|P_{\gtrsim N}u\|_{L^{2}}^{2}\lesssim\sum_{N\geq 1}N^{2}\sum_{M\gtrsim N}\|\varepsilon_{M}\|_{L^{\infty}}^{2}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}\lesssim\|\varepsilon\|_{B_{\infty,2}^{1}}^{2}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}.\] We turn to the second term in the high-high interaction (B.10), which is written as \[P_{N}[\partial_{x^{\prime}}\varepsilon_{\gtrsim N},\langle D^{\prime}\rangle^{\rho}]P_{\gtrsim N}u=P_{N}(\partial_{x^{\prime}}\varepsilon_{\gtrsim N})(\langle D^{\prime}\rangle^{\rho}P_{\gtrsim N}u)-P_{N}\langle D^{\prime}\rangle^{\rho}(\partial_{x^{\prime}}\varepsilon_{\gtrsim N})P_{\gtrsim N}u.\] (B.13) For the first term in (B.13) we have by Plancherel's theorem and the Cauchy-Schwarz inequality \[\sum_{N}\|P_{N}(\partial_{x^{\prime}}\varepsilon_{\gtrsim N})(\langle D^{\prime}\rangle^{\rho}P_{\gtrsim N}u)\|_{L^{2}}^{2}=\|\sum_{M\gtrsim 1}(\partial_{x^{\prime}}\varepsilon_{M})(\langle D^{\prime}\rangle^{\rho}P_{M}u)\|_{L^{2}}^{2}\leq\|\big(\sum_{M}|\partial_{x^{\prime}}\varepsilon_{M}|^{2}\big)^{\frac{1}{2}}\big(\sum_{M}|\langle D^{\prime}\rangle^{\rho}P_{M}u|^{2}\big)^{\frac{1}{2}}\|_{L^{2}}^{2}\leq\|\big(\sum_{M}|\partial_{x^{\prime}}\varepsilon_{M}|^{2}\big)^{\frac{1}{2}}\|_{L^{\infty}}^{2}\|\big(\sum_{M}|\langle D^{\prime}\rangle^{\rho}P_{M}u|^{2}\big)^{\frac{1}{2}}\|_{L^{2}}^{2}.\] We use the characterization of BMO by the Littlewood-Paley square function and the embedding \(L^{\infty}\hookrightarrow BMO\) to find that this is \[\lesssim\|\partial_{x^{\prime}}\varepsilon\|_{L^{\infty}}^{2}\|\langle D^{\prime}\rangle^{\rho}u\|_{L^{2}}^{2}.\] The second term in (B.13) is better behaved than the first term and can be estimated similarly. This finishes the proof. 

We now turn to the proof of (B.4): For the time derivatives there is no commutator, but we have \[\mu^{-1}\nabla\times(\langle D^{\prime}\rangle^{s}(\mu\mathcal{H}))=[\mu_{1}^{-1},\langle D^{\prime}\rangle^{s}]\nabla\times(\mu\mathcal{H})+\langle D^{\prime}\rangle^{s}(\mu^{-1}\nabla\times(\mu\mathcal{H})).\] Furthermore, \(\mu^{-1}\nabla\times(\mu\mathcal{H})=O(\mu^{-1}(\partial\mu)\mathcal{H})+\nabla\times\mathcal{H}\). In both steps, we use that \(\mu=\mu_{1}1_{3\times 3}\). 
By Lemma B.2 and the fractional Leibniz rule, we find \[\|\mu^{-1}\nabla\times(\langle D^{\prime}\rangle^{s}(\mu\mathcal{H}))-\langle D^{\prime}\rangle^{s}\nabla\times\mathcal{H}\|_{L^{2}_{x^{\prime}}}\lesssim_{\|\mu\|_{C^{[s]+1}_{x}}}\|\langle D^{\prime}\rangle^{s}\mathcal{H}\|_{L^{2}_{x^{\prime}}}.\] This also shows the corresponding estimate \[\|\varepsilon^{-1}\nabla\times(\langle D^{\prime}\rangle^{s}(\varepsilon\mathcal{E}))-\langle D^{\prime}\rangle^{s}\nabla\times\mathcal{E}\|_{L^{2}_{x^{\prime}}}\lesssim_{\|\varepsilon\|_{C^{[s]+1}_{x}}}\|\langle D^{\prime}\rangle^{s}\mathcal{E}\|_{L^{2}_{x^{\prime}}}.\] We conclude (B.3), which implies \[\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}})}\lesssim_{T,\varepsilon,\mu}\|\langle D^{\prime}\rangle^{s}(\mathcal{E},\mathcal{H})\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{s}P(\mathcal{E},\mathcal{H})\|_{L^{1}_{t}L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{s}|D^{\prime}|^{-1+\frac{1}{p}}\rho_{em}\|_{L^{\infty}_{t}L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{s}|D^{\prime}|^{-1+\frac{1}{p}}\partial_{t}\rho_{em}\|_{L^{1}_{t}L^{2}_{x^{\prime}}}.\] (B.14) We apply this to the homogeneous solution \((\mathcal{E},\mathcal{H})\), which remains in the interior up to time \(T>0\), and find, using \(P(\mathcal{E},\mathcal{H})=0\) and \(\partial_{t}\rho_{em}=0\): \[\|(\mathcal{E},\mathcal{H})\|_{L^{p}_{t}(0,T;L^{q}_{x^{\prime}})}\lesssim\|\langle D^{\prime}\rangle^{s}(\mathcal{E},\mathcal{H})(0)\|_{L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{s-1+\frac{1}{p}}\rho_{em}(0)\|_{L^{2}_{x^{\prime}}}.\] Note that we used the trivial estimate \[\||D^{\prime}|^{-1+\frac{1}{p}}\langle D^{\prime}\rangle^{s}P_{\lesssim 1}\nabla\cdot\mathcal{E}(0)\|_{L^{2}_{x^{\prime}}}\lesssim\|\mathcal{E}(0)\|_{L^{2}_{x^{\prime}}}\] to recast the homogeneous derivatives in (B.14) for low frequencies as inhomogeneous derivatives. 
This finishes the proof of Strichartz estimates for the interior part. 

### Strichartz estimates for Maxwell equations in two dimensions

The purpose of this section is to show Strichartz estimates for Maxwell equations with rough coefficients in the full space in two dimensions, which are suitable for the arguments of this paper. The following is the analog of Theorem B.1 in two dimensions:

**Theorem B.3**.: _Let \(\varepsilon_{ij},\mu_{1}\in C^{1}(\mathbb{R}\times\mathbb{R}^{2};\mathbb{R})\) for \(i,j=1,2\) such that \((\varepsilon_{ij})_{i,j=1,2}\) satisfies (1.16), and \(\partial_{x}^{2}\varepsilon_{ij}\in L_{t}^{1}L_{x^{\prime}}^{\infty}\), \(\partial_{x}^{2}\mu_{1}\in L_{t}^{1}L_{x^{\prime}}^{\infty}\). Let \((s,p,q)\) be wave Strichartz admissible in two dimensions, i.e.,_ \[2\leqslant p\leqslant\infty,\quad 2\leqslant q<\infty,\quad\frac{2}{p}+\frac{1}{q}\leqslant\frac{1}{2},\quad s=2\big(\frac{1}{2}-\frac{1}{q}\big)-\frac{1}{p}.\] _Let \(u=(u_{1},u_{2},u_{3})=(u^{(1)},u^{(2)}):\mathbb{R}\times\mathbb{R}^{2}\to\mathbb{R}^{2}\times\mathbb{R}\), and_ \[\tilde{P}=\begin{pmatrix}\partial_{t}&0&-\partial_{2}(\mu_{1}\cdot)\\ 0&\partial_{t}&\partial_{1}(\mu_{1}\cdot)\\ \partial_{1}(\varepsilon_{21}\cdot)-\partial_{2}(\varepsilon_{11}\cdot)&\partial_{1}(\varepsilon_{22}\cdot)-\partial_{2}(\varepsilon_{12}\cdot)&\partial_{t}\end{pmatrix}.\] _The following estimate holds:_ \[\||D^{\prime}|^{-s}u\|_{L_{t}^{p}(0,T;L_{x^{\prime}}^{q})}\lesssim\nu^{\frac{1}{p}}\|u\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\nu^{-\frac{1}{p^{\prime}}}\|\tilde{P}(x,D)u\|_{L_{t}^{1}L_{x^{\prime}}^{2}}+T^{\frac{1}{p}}\big(\||D^{\prime}|^{-1+\frac{1}{p}}\nabla\cdot u^{(1)}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\||D^{\prime}|^{-1+\frac{1}{p}}\partial_{t}\nabla\cdot u^{(1)}\|_{L_{t}^{1}L_{x^{\prime}}^{2}}\big),\] (B.15) _whenever the right-hand side is finite, provided that \(\nu\geqslant 1\) and \(T\sum_{i,j}\|\partial_{x}^{2}\varepsilon_{ij}\|_{L_{t}^{1}L_{x^{\prime}}^{\infty}}+T\|\partial_{x}^{2}\mu_{1}\|_{L_{t}^{1}L_{x^{\prime}}^{\infty}}\leqslant\nu^{2}\)._

A simplified variant of the estimate (B.15) was proved in [19, Theorem 1.3] with \(\mu_{1}\equiv 1\) and an inferior estimate for the charges. For the proof, we revisit the analysis of [19] and improve and generalize it using arguments from [17, 18], but we shall be brief to avoid repetition. 

Proof.: In the first step we reduce to the frequency-localized estimate \[\lambda^{-s}\|S_{\lambda}u\|_{L^{p}L^{q}}\lesssim\|S_{\lambda}u\|_{L^{\infty}L^{2}}+\|\tilde{P}S_{\lambda}u\|_{L^{2}}+\lambda^{-1+\frac{1}{p}}\|S_{\lambda}\tilde{\rho}_{e}\|_{L_{x}^{2}}\] for \(\lambda\gtrsim 1\), where \(u\) is essentially supported in the unit cube and its space-time Fourier transform is supported in \(\{|\xi_{0}|\lesssim|(\xi_{1},\xi_{2})|\}\). Moreover, we denote \(\tilde{\rho}_{e}=\nabla\cdot u^{(1)}\). This is carried out like in [19, Section 3.4]. Now we frequency-truncate the operator \(\tilde{P}\) to coefficients whose space-time Fourier transform is supported in \(B(0,\lambda^{\frac{1}{2}})\). 
We let \[\tilde{P}^{\lambda}=\begin{pmatrix}\partial_{t}&0&-\partial_{2}(\mu_{1}^{ \lambda^{\frac{1}{2}}}\cdot)\\ 0&\partial_{t}&\partial_{1}(\mu_{1}^{\lambda^{\frac{1}{2}}}\cdot)\\ -\partial_{2}(\varepsilon_{11}^{\lambda^{\frac{1}{2}}}\cdot)+\partial_{1}( \varepsilon_{21}^{\lambda^{\frac{1}{2}}}\cdot)&\partial_{1}(\varepsilon_{22}^ {\lambda^{\frac{1}{2}}}\cdot)-\partial_{2}(\varepsilon_{12}^{\lambda^{\frac{1 }{2}}}\cdot)&\partial_{t}\end{pmatrix}.\] It suffices to show \[\lambda^{-s}\|S_{\lambda}u\|_{L^{p}L^{q}}\lesssim\|S_{\lambda}u\|_{L^{\infty}L^ {2}}+\|\tilde{P}^{\lambda}S_{\lambda}u\|_{L^{2}}+\lambda^{-1+\frac{1}{p}}\|S_ {\lambda}\tilde{\rho}_{e}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}.\] (B.16) \(\tilde{P}^{\lambda}\) was diagonalized with pseudo-differential operators for \(\mu_{1}\equiv 1\) in [19, Section 3.1] as \[\tilde{P}^{\lambda}=\mathcal{M}_{\lambda}\mathcal{D}_{\lambda}\mathcal{N}_{ \lambda}+E_{\lambda}\text{ with }\|E_{\lambda}\|_{L_{x}^{2}\to L_{x}^{2}}\lesssim 1.\] (B.17) In [18] the diagonalization was carried out in the constant-coefficient case for \(\mu_{1}=\mu^{-1}\neq 0\). This will determine the principal symbols of the operators in (B.17). 
The principal symbol of \(\tilde{P}^{\lambda}\) reads (omitting the frequency truncation and \(x\)-dependence to lighten notations): \[p(x,\xi)=i\begin{pmatrix}\xi_{0}&0&-\xi_{2}\mu_{1}\\ 0&\xi_{0}&\xi_{1}\mu_{1}\\ \xi_{1}\varepsilon_{12}-\xi_{2}\varepsilon_{11}&\xi_{1}\varepsilon_{22}-\xi_ {2}\varepsilon_{12}&\xi_{0}\end{pmatrix}.\] We let \[\|\xi^{\prime}\|_{\varepsilon^{\prime}}^{2}=\langle\xi^{\prime},\mu_{1}\det( \varepsilon)^{-1}\varepsilon\xi^{\prime}\rangle,\quad\varepsilon=((\varepsilon _{ij})_{i,j})^{-1},\quad\xi^{*}=\xi^{\prime}/\|\xi^{\prime}\|_{\varepsilon^{ \prime}}.\] The following diagonalization holds for almost all \(\xi^{\prime}\in\mathbb{R}^{2}\) by [18, Lemma 2.2]: \(p(x,\xi)=m(x,\xi^{\prime})d(x,\xi)m^{-1}(x,\xi^{\prime})\) with \[m(x,\xi^{\prime}) =\begin{pmatrix}\varepsilon_{22}\xi_{1}^{*}-\varepsilon_{12}\xi_ {2}^{*}&-\xi_{2}^{*}\mu_{1}&\xi_{2}^{*}\mu_{1}\\ \varepsilon_{11}\xi_{2}^{*}-\varepsilon_{21}\xi_{1}^{*}&\xi_{1}^{*}\mu_{1}&- \xi_{1}^{*}\mu_{1}\\ 0&-1&-1\end{pmatrix},\] \[m^{-1}(x,\xi^{\prime}) =\begin{pmatrix}\mu_{1}\xi_{1}^{*}&\mu_{1}\xi_{2}^{*}&0\\ \frac{\xi_{1}^{*}\varepsilon_{21}-\xi_{2}^{*}\varepsilon_{11}}{2}&\frac{ \varepsilon_{22}\xi_{1}^{*}-\varepsilon_{21}\xi_{2}^{*}}{2}&-\frac{1}{2}\\ \frac{\xi_{2}^{*}\varepsilon_{11}-\xi_{1}^{*}\varepsilon_{12}}{2}&\frac{\xi_{ 2}^{*}\varepsilon_{12}-\xi_{1}^{*}\varepsilon_{22}}{2}&-\frac{1}{2}\end{pmatrix},\] and \(d(x,\xi)=i\mathrm{diag}(\xi_{0},\xi_{0}-\|\xi^{\prime}\|_{\varepsilon^{\prime} },\xi_{0}+\|\xi^{\prime}\|_{\varepsilon^{\prime}})\). The error estimates for the diagonalization with the standard quantization can be proved like in [19, Section 3.3]; see also [17, Section 3.2] for a simplification of arguments. In the following we sketch the conclusion of the proof with the diagonalization at hand. 
By \(\mathcal{M}_{\lambda}\mathcal{N}_{\lambda}S_{\lambda}S_{\lambda}^{\prime}=S_{ \lambda}S_{\lambda}^{\prime}+O_{L_{x}^{2}\to L_{x}^{2}}(\lambda^{-1})\) and Sobolev embedding, we obtain \[\lambda^{-\rho}\|S_{\lambda}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\lesssim\lambda^ {-\rho}\|\mathcal{N}_{\lambda}S_{\lambda}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}+\|S_ {\lambda}u\|_{L_{x}^{2}}.\] (B.18) Moreover, by Fourier support of \(u\), we have \(S_{\lambda}u=S_{\lambda}\tilde{S}_{\lambda}^{\prime}u\) and straight-forward estimates for pseudo-differential operators (cf. Section 4) \[\begin{split}\lambda^{-\rho}\|[\mathcal{N}_{\lambda}S_{\lambda}u] _{1}\|_{L_{t}^{p}L_{x^{\prime}}^{q}}&\lesssim\lambda^{-(\rho+1)}\| \nabla\cdot(\tilde{S}_{\lambda}^{\prime}u)\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\\ &\lesssim\lambda^{-1+\frac{1}{p}}\|\nabla\cdot\tilde{S}_{\lambda} ^{\prime}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\\ &\lesssim\lambda^{-1+\frac{1}{p}}\|\nabla\cdot\tilde{S}_{\lambda} ^{\prime}u\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}.\end{split}\] (B.19) This controls \([\mathcal{N}_{\lambda}S_{\lambda}u]_{1}\) in terms of the charges. \([\mathcal{N}_{\lambda}S_{\lambda}u]_{i}\) are estimated like in [19] with the estimates for (rough) half-wave equations. This yields \[\lambda^{-s}\|S_{\lambda}u\|_{L_{t}^{p}L_{x^{\prime}}^{q}}\lesssim\|S_{ \lambda}u\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}+\|\mathcal{D}_{\lambda} \mathcal{N}_{\lambda}S_{\lambda}u\|_{L_{x}^{2}}+\lambda^{-1+\frac{1}{p}}\|S_ {\lambda}\tilde{\rho}_{e}\|_{L_{t}^{\infty}L_{x^{\prime}}^{2}}.\] The proof of (B.16) can be concluded now by another error estimate \[\mathcal{N}_{\lambda}\mathcal{M}_{\lambda}S_{\lambda}S_{\lambda}^{\prime}=S_{ \lambda}S_{\lambda}^{\prime}+O_{L_{x}^{2}\to L_{x}^{2}}(\lambda^{-1}),\] and invoking (B.17). We obtain the following corollary by paradifferential truncation (cf. [19, Corollary 1.7]): **Corollary B.4**.: _Let notations be like in Theorem B.3. 
Assume that \(\|\partial_{x}\varepsilon\|_{L^{2}_{T}L^{\infty}_{x^{\prime}}}\lesssim 1\). Then the solution \(u\) to_ \[\left\{\begin{array}{rl}\tilde{P}(x,D)u&=f,\\ u(0)&=u_{0},\end{array}\right.\qquad\nabla\cdot u^{(1)}=\tilde{\rho}_{e},\] _satisfies_ \[\|\langle D^{\prime}\rangle^{-\alpha}u\|_{L^{p}(0,T;L^{q}(\mathbb{R}^{2}))}\lesssim_{T,\alpha}\|u_{0}\|_{L^{2}(\mathbb{R}^{2})}+\|f\|_{L^{1}(0,T;L^{2}(\mathbb{R}^{2}))}+\|\langle D^{\prime}\rangle^{-1+\frac{2}{3p}}\tilde{\rho}_{e}(0)\|_{L^{2}_{x^{\prime}}}+\|\langle D^{\prime}\rangle^{-1+\frac{2}{3p}}\partial_{t}\tilde{\rho}_{e}\|_{L^{1}(0,T;L^{2})}.\]

## Appendix C Helmholtz decompositions

In this appendix we collect facts on Helmholtz decompositions. Let \(\mathcal{E}:\mathbb{R}^{d}\supseteq\Omega\to\mathbb{R}^{d}\) denote a sufficiently smooth vector field with \(d\in\{2,3\}\). The question is under which assumptions on the domain \(\Omega\subseteq\mathbb{R}^{d}\), the vector field \(\mathcal{E}\), and \(s\geq 0\) the following equivalence of norms holds: \[\|\mathcal{E}\|_{H^{s+1}(\Omega)}\sim\|\nabla\times\mathcal{E}\|_{H^{s}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}.\] Suitable results for connected bounded domains with smooth boundary were proved in [4, Chapter IX, §1]. We shall see how these results extend to domains with compact boundary. We have the following for \(d=2\):

**Proposition C.1** ([4, Proposition 6', p. 237]).: _Let \(k\in\mathbb{N}_{0}\), and let \(\Omega\) be a connected bounded open set in \(\mathbb{R}^{2}\) with smooth boundary. 
Then,_ \[H^{k+1}(\Omega)^{2}=\{\mathcal{E}\in L^{2}(\Omega):\;(\nabla\times\mathcal{E})_{3}\in H^{k}(\Omega),\;\nabla\cdot\mathcal{E}\in H^{k}(\Omega),\;\mathcal{E}\wedge\nu\big|_{\partial\Omega}\in H^{k+\frac{1}{2}}(\partial\Omega)^{2}\}\] _and correspondingly,_ \[\|\mathcal{E}\|_{H^{k+1}(\Omega)}\sim\|(\nabla\times\mathcal{E})_{3}\|_{H^{k}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{k}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}.\] (C.1)

From this we deduce the following for domains with compact boundary:

**Proposition C.2**.: _Let \(\Omega\subseteq\mathbb{R}^{2}\) be an open set with compact smooth boundary and \(s\geq 0\). Then, for \(\mathcal{E}\in H^{s+1}(\Omega)^{2}\) with \([\mathcal{E}\times\nu]_{x^{\prime}\in\partial\Omega}=0\) we have the equivalence of norms:_ \[\|\mathcal{E}\|_{H^{s+1}(\Omega)}\sim\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}.\] (C.2)

Proof.: It suffices to show the claim for \(s\in\mathbb{N}_{0}\) and connected \(\Omega\), as the norms disentangle by disjointness of the supports. Indeed, for \(\Omega=\bigcup_{i}\Omega_{i}\) denoting the decomposition into connected components, we have \(H^{s}(\Omega)=\bigoplus_{i}H^{s}(\Omega_{i})\). Moreover, the estimate \[\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}\lesssim\|\mathcal{E}\|_{H^{s+1}(\Omega)}\] is immediate. We turn to the reverse inequality, which follows from a localization argument and Proposition C.1. Let \((U_{i},\varphi_{i})_{i=1,\ldots,n}\) be bounded charts which cover a neighbourhood of the boundary, and let \((U_{0},\varphi_{0}=\mathrm{id})\) be the trivial chart for the interior. Let \(\sum_{i=0}^{n}\psi_{i}=1_{\Omega}\) be a partition of unity of \(\Omega\) with \(\operatorname{supp}(\psi_{i})\subseteq U_{i}\). 
We have for \(i=1,\ldots,n\) by Proposition C.1: \[\|\mathcal{E}\psi_{i}\|_{H^{s+1}}\sim\|(\nabla\times(\psi_{i}\mathcal{E}))_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot(\psi_{i}\mathcal{E})\|_{H^{s}}+\|\psi_{i}\mathcal{E}\|_{L^{2}},\] and for \(i=0\): \[\|\mathcal{E}\psi_{0}\|_{H^{s+1}(\Omega)}=\|\mathcal{E}\psi_{0}\|_{H^{s+1}(\mathbb{R}^{2})}\sim\|(\nabla\times(\mathcal{E}\psi_{0}))_{3}\|_{H^{s}(\mathbb{R}^{2})}+\|\nabla\cdot(\mathcal{E}\psi_{0})\|_{H^{s}(\mathbb{R}^{2})}+\|\mathcal{E}\psi_{0}\|_{L^{2}(\mathbb{R}^{2})}=\|(\nabla\times(\mathcal{E}\psi_{0}))_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot(\mathcal{E}\psi_{0})\|_{H^{s}(\Omega)}+\|\mathcal{E}\psi_{0}\|_{L^{2}(\Omega)}.\] Here we used that (C.2) holds for \(\Omega=\mathbb{R}^{2}\), as can readily be verified by changing to Fourier space, and that \(\operatorname{dist}(\operatorname{supp}(\mathcal{E}\psi_{0}),\partial\Omega)>0\), which allows us to change back and forth between \(H^{s}(\Omega)\) and \(H^{s}(\mathbb{R}^{2})\) for the involved functions. Furthermore, for \(i=0,\dots,n\) we have \[\|(\nabla\times(\mathcal{E}\psi_{i}))_{3}\|_{H^{s}(\Omega)}\leqslant\|\psi_{i}(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|(\partial\psi_{i})\mathcal{E}\|_{H^{s}(\Omega)}.\] Moreover, \[\|\psi_{i}(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}\leqslant C\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}\] follows from \(\partial_{x^{\prime}}^{\alpha}\psi_{i}\in C_{c}^{\infty}(\Omega)\) for \(|\alpha|\geqslant 1\) and \(\|\psi_{i}(\nabla\times\mathcal{E})_{3}\|_{L^{2}(\Omega)}\leqslant\|(\nabla\times\mathcal{E})_{3}\|_{L^{2}(\Omega)}\), because \(|\psi_{i}(x)|\leqslant 1\) for any \(x\in\Omega\). 
Since \(\partial_{x^{\prime}}\psi_{i}\in C_{c}^{\infty}(\Omega)\), we have \[\|(\partial_{x^{\prime}}\psi_{i})\mathcal{E}\|_{H^{s}(\Omega)}\lesssim\|\mathcal{E}\|_{H^{s}(\Omega)}.\] We proved \[\|(\nabla\times(\mathcal{E}\psi_{i}))_{3}\|_{H^{s}(\Omega)}\lesssim\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{H^{s}(\Omega)}\leqslant C\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\varepsilon\|\mathcal{E}\|_{H^{s+1}(\Omega)}+C_{\varepsilon}\|\mathcal{E}\|_{L^{2}(\Omega)}.\] By the same arguments it holds that \[\|\nabla\cdot(\mathcal{E}\psi_{i})\|_{H^{s}(\Omega)}\leqslant C\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\varepsilon\|\mathcal{E}\|_{H^{s+1}(\Omega)}+C_{\varepsilon}\|\mathcal{E}\|_{L^{2}(\Omega)}.\] Hence, we can conclude \[\|\mathcal{E}\|_{H^{s+1}(\Omega)}\leqslant\sum_{i=0}^{n}\|\mathcal{E}\psi_{i}\|_{H^{s+1}(\Omega)}\leqslant C\sum_{i=0}^{n}\big(\|(\nabla\times(\mathcal{E}\psi_{i}))_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot(\mathcal{E}\psi_{i})\|_{H^{s}(\Omega)}+\|\psi_{i}\mathcal{E}\|_{L^{2}(\Omega)}\big)\leqslant C(n+1)\big(\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}\big)+C(n+1)\varepsilon\|\mathcal{E}\|_{H^{s+1}(\Omega)}+C_{\varepsilon}\|\mathcal{E}\|_{L^{2}(\Omega)}.\] Choosing \(\varepsilon=\frac{1}{2(n+1)C}\), we have proved that \[\|\mathcal{E}\|_{H^{s+1}(\Omega)}\lesssim\|(\nabla\times\mathcal{E})_{3}\|_{H^{s}(\Omega)}+\|\nabla\cdot\mathcal{E}\|_{H^{s}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}.\] For \(d=3\), one can argue as in Proposition C.2 to extend the results due to Dautray-Lions [4, Proposition 6', p. 237] for connected bounded domains with smooth boundary to three-dimensional domains with compact and smooth boundary. We record the following, which suffices for the purposes of this paper:

**Proposition C.3**.: _Let \(\Omega\subseteq\mathbb{R}^{3}\) be a smooth domain with compact boundary. 
Let \(\mathcal{E}\in H^{3}(\Omega;\mathbb{R}^{3})\) be a vector field. Suppose that either the tangential components satisfy Dirichlet boundary conditions and the normal component satisfies Neumann boundary conditions, or vice versa. Then the following estimate holds:_ \[\|\mathcal{E}\|_{H^{1}(\Omega)}\sim\|\mathcal{E}\|_{H_{curl}(\Omega)}+\|\mathcal{E}\|_{H_{div}(\Omega)}+\|\mathcal{E}\|_{L^{2}(\Omega)}.\] (C.3)

## Acknowledgements

R.S. acknowledges financial support by the German Research Foundation (DFG) - Project-Id 258734477 - SFB 1173. The second author would like to thank the Institut de Mathématique d'Orsay, where much of this research was carried out in spring 2022, for its kind hospitality, and Roland Schnaubelt (KIT) for helpful discussions on Maxwell equations on domains and for putting the results into context.
---

**arXiv:2307.12526** · Rethinking Medical Report Generation: Disease Revealing Enhancement with Knowledge Graph
Yixin Wang, Zihao Lin, Haoyu Dong · 2023-07-24 · [http://arxiv.org/abs/2307.12526v1](http://arxiv.org/abs/2307.12526v1)
# Rethinking Medical Report Generation: Disease Revealing Enhancement with Knowledge Graph

###### Abstract

Knowledge Graph (KG) plays a crucial role in Medical Report Generation (MRG) because it reveals the relations among diseases and thus can be utilized to guide the generation process. However, constructing a comprehensive KG is labor-intensive and its applications on the MRG process are under-explored. In this study, we establish a complete KG on chest X-ray imaging that includes 137 types of diseases and abnormalities. Based on this KG, we find that the current MRG data sets exhibit a long-tailed problem in disease distribution. To mitigate this problem, we introduce a novel augmentation strategy that enhances the representation of disease types in the tail-end of the distribution. We further design a two-stage MRG approach, where a classifier is first trained to detect whether the input images exhibit any abnormalities. The classified images are then independently fed into two transformer-based generators, namely, "disease-specific generator" and "disease-free generator" to generate the corresponding reports. To enhance the clinical evaluation of whether the generated reports correctly describe the diseases appearing in the input image, we propose diverse sensitivity (DS), a new metric that checks whether generated diseases match ground truth and measures the diversity of all generated diseases. Results show that the proposed two-stage generation framework and augmentation strategies improve DS by a considerable margin, indicating a notable reduction in the long-tailed problem associated with under-represented diseases.

_Keywords:_ Machine Learning, Knowledge Graph

## 1 Introduction

Chest radiography is one of the most common and effective imaging examinations used in clinical practice for diagnosing diseases and evaluating health risks. 
The obtained images generally require medical reports with comprehensive interpretation written by qualified physicians or pathologists, which can be time-consuming and requires expertise. With advances in deep learning (DL) algorithms, automatic medical report generation (MRG) has been widely explored and has achieved significant performance (Jing et al., 2018; Wang et al., 2018; Xue et al., 2018; Li et al., 2018; Boag et al., 2020; Chen et al., 2020; Liu et al., 2021; Wang et al., 2021; Chen et al., 2021; Liu et al., 2019; Wang et al., 2022; Yang et al., 2023). These DL-based systems analyze the chest images and automatically generate a descriptive report outlining the findings. However, these methods are primarily designed to optimize the performance of matching generated N-grams to ground-truth reports, rather than aligning generated medical attributes, _i.e._, abnormalities or diseases, with the actual reports, which is more important when assessing the clinical utility of a generation algorithm. While some researchers (Irvin et al., 2019; Harzig et al., 2019; Zhang et al., 2020) propose disease labeling tools or build disease knowledge graphs to aid in evaluating the reports, their KGs contain limited disease types and they only consider report-level n-gram matching accuracy, which is a coarse reflection of the medical attributes. To address these problems, we construct a large KG with 137 types of chest diseases based on two widely used chest X-ray data sets, IU-Xray (Demner-Fushman et al., 2016) and MIMIC-CXR (Johnson et al., 2019) (see Section 2 for details). Utilizing the diseases from this KG, a rule-based criterion is adopted to perform a detailed statistical analysis of the diseases and abnormalities appearing in IU-Xray. 
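The rule-based criterion is only described at a high level in this excerpt. The following is a minimal sketch of how such sentence-level labeling could work; the keyword lists, the negation cues, and the helper name `label_sentence` are illustrative assumptions, and only the idea of matching KG disease keywords and the common/uncommon split (the occurrence threshold of 20) come from the text:

```python
# Hypothetical sketch of rule-based sentence labeling against KG keywords.
# Negation cues and keyword lists are illustrative, not taken from the paper.
NEGATION_CUES = ("no ", "without ", "free of ", "negative for ")

def label_sentence(sentence, kg_keywords, common_keywords):
    """Classify one report sentence as 'd_free', 'd_com', or 'd_tail'."""
    s = sentence.lower()
    found = [k for k in kg_keywords if k in s]
    # Discard findings that are explicitly negated, e.g. "no pleural effusion".
    found = [k for k in found if not any(cue + k in s for cue in NEGATION_CUES)]
    if not found:
        return "d_free"  # sentence reports a normal result
    # 'd_com': at least one common disease (>20 occurrences); 'd_tail' otherwise.
    return "d_com" if any(k in common_keywords for k in found) else "d_tail"
```

With labels of this kind, sentence counts such as those in Figure 1(a) follow by aggregating over all report sentences.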
As depicted in Figure 1(a), across all reports in the data set, the frequency of sentences indicating normal results (no diseases or abnormalities) is three times greater than that of sentences indicating the presence of at least one disease or abnormality. Moreover, the number of sentences with common diseases (occurrences greater than 20) is almost 4 times that of sentences with uncommon diseases (occurrences less than 20). The frequency of occurrence for each disease keyword is further highlighted in Figure 1(b), which exhibits a long-tailed distribution of the disease classes in the data set. In the original training data (dark blue bars), only three diseases appear more than 100 times and 65.7% of diseases appear fewer than 10 times in IU-Xray, which shows that several common diseases dominate while rarer ones are under-represented. In response, we design a two-stage generation approach to reduce the bias towards generating "disease-free" reports instead of "disease-specific" reports, i.e., reports that contain at least one disease or abnormality. We further alleviate the long-tailed distribution issue by expanding the distribution of the disease classes through a designed disease augmentation strategy. According to our statistics on the augmented training data (light blue bars in Figure 1(b)), the overall frequency of uncommon diseases in the original data set increases from 37.6% to 55.5%, while the common diseases see a decrease in overall frequency from 62.4% to 44.5%. Moreover, when evaluating generated reports, more emphasis should be placed on clinical-efficacy information. Previous works employ the commonly used N-gram evaluation metrics from image captioning tasks, such as BLEU-N (Papineni et al., 2002). However, these metrics do not necessarily reflect the clinical quality of the diagnostic reports, such as the accuracy of the specific diseases. 
In our experiments, as well as in previous studies (Harzig et al., 2019), it has been observed that with an imbalanced data set, models tend to achieve the highest BLEU score when generating repetitive sentences that most frequently appear in the training set. Li et al. (2021) argue that the quality of medical reports largely depends on the accurate detection of positive disease keywords. Therefore, they employ several human evaluations as additional measurements. Nevertheless, implementing this evaluation requires significant expert effort and is prone to subjectivity and variability. Based on the KG, we propose a new evaluation metric, diverse sensitivity (DS), to assess the model's ability to generate reports containing specific diseases, concentrating more on clinically relevant text. Our KG and codes will be available at [https://github.com/Wangyixinxin/MRG-KG](https://github.com/Wangyixinxin/MRG-KG). Our contributions are as follows:

* A complete knowledge graph with 8 disease categories and 137 diseases or abnormalities of chest radiographs is built based on accurate and detailed disease classification.
* A novel augmentation strategy is proposed to address the long-tailed problems in chest X-ray data sets.
* An effective two-stage MRG approach is designed to separately handle normal and abnormal images, generating texts more specific to the identified diseases.
* A KG-based evaluation metric, DS, is further proposed to assess the quality of generated reports, prioritizing the accuracy of disease-relevant attributes.

## 2 Knowledge Graph

Starting from (Zhang et al., 2020), several works have demonstrated the effectiveness of KG on chest report generation (Li et al., 2019; Liu et al., 2021; Zhang et al., 2020). The existing KG, which includes the most common diseases or abnormalities (Zhang et al., 2020), consists of 7 organs with 18 corresponding diseases, along with "normal" and "other findings".
However, this KG lacks comprehensiveness as it omits many common diseases such as "calcification", "spine degenerative", and "lung consolidation". The restriction in disease types places a limitation on the model's capacity to learn about the relationships between diseases, resulting in a lack of clinical depth. For example, lung opacity can be divided into categories like "nodular opacity", "lobe opacity", and "hilar opacity". Besides, identical abnormalities can appear in different organs, such as "lung opacity", "diaphragm opacity" and "airspace opacity". Lastly, the current KG does not account for several rare diseases or anomalies, leaving them unclassified. To overcome these limitations, we extend the knowledge graph by adding more diseases based on IU-Xray (Demner-Fushman et al., 2016) and MIMIC-CXR (Johnson et al., 2019). Figure 2 depicts a partial representation of our proposed knowledge graph. In our work, we retain the current seven organ categories while supplementing them with additional diseases. We also introduce another new category "other", which contains abnormalities, such as "tube" and "sternotomy", that do not belong to any of the seven organs.

Figure 1: Illustration of counts of labeled sentences and disease keywords in IU-Xray. Part (a) shows the count of sentences that have common diseases (d_com), uncommon diseases (d_tail), or do not have diseases (d_free). Part (b) shows parts of distributions of diseases and abnormalities in original (dark blue bars) / augmented (light blue bars) training data. The lower / upper numbers are the occurrence counts of specific disease keywords in original / augmented training data.

Figure 2: An illustration of our proposed knowledge graph, which contains "normal" and 8 disease categories including 7 organs and an "other" category. Each category further branches out into its corresponding specific diseases.
While constructing this KG, we also take into account the synonyms and variations of each specific disease, leading to a comprehensive representation of 137 disease types. These will be leveraged in our training approach (See Section 3) and evaluation metrics (See Section 4). Based on the knowledge graph, we build a rule-based criterion to classify diagnostic reports. Firstly, each word is replaced by its canonical synonym through a pre-defined synonym pool. Then, each sentence in the report is labeled with a concatenation of "disease-organ" pairs if it includes keywords from the KG, or with the "normal" class otherwise. For example, the sentence "there are low lung volumes with broncho-vascular crowding" will be labeled as "bronchovascular crowding-lung-low volume-lung". A report is labeled as "disease-free" if all its sentences are labeled as "normal"; otherwise it is marked as "disease-specific". These report labels will be utilized to train a classifier in our proposed two-stage generation approach.

## 3 Two-Stage Generation Approach

Figure 1 illustrates an imbalanced distribution within the IU-Xray dataset between the numbers of sentences that indicate the presence or absence of diseases, along with a long-tail issue in the disease distributions. To address these issues, we propose two solutions: firstly, a novel two-stage pipeline including an image classifier and two identical generation networks, trained with "disease-free" and "disease-specific" data separately (Section 3.1); secondly, a disease-specific augmentation strategy to alleviate the imbalanced distribution of disease data (Section 3.2).

### Training and Inference Stage

To address the dominance of normal findings in the data, we propose a two-stage approach. During the training phase, we leverage the available ground-truth reports to segregate the training data into the two defined classes, _i.e._, "disease-free" and "disease-specific".
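The resulting two-stage control flow — classify first, then generate with the matching model — can be sketched at a stub level; the toy classifier and generators below are illustrative placeholders, not our trained ResNet101 and R2Gen-based models:

```python
# Stage 1: the classifier routes the image; Stage 2: the matching
# generator produces the report.  All three models are illustrative stubs.
def two_stage_inference(image, classifier, disease_free_gen, disease_specific_gen):
    if classifier(image) == "disease-specific":
        return disease_specific_gen(image)
    return disease_free_gen(image)

# Toy stand-ins: this "classifier" flags images whose mean intensity is high.
toy_classifier = lambda img: ("disease-specific"
                              if sum(img) / len(img) > 0.5 else "disease-free")
free_gen = lambda img: "the lungs are clear. the heart size is normal."
specific_gen = lambda img: "there is opacity in the right lung."
```

In this sketch, `two_stage_inference([0.9, 0.8], toy_classifier, free_gen, specific_gen)` routes to the disease-specific generator, while a low-intensity input is routed to the disease-free one.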
Following this strategy, the images corresponding to each report, paired with their respective labels, are leveraged to train an image classifier, ResNet101 (He et al., 2016), with standard cross-entropy loss to detect if an input image contains diseases. In parallel, we employ two generative models for report generation: a "disease-free generator" and a "disease-specific generator", each trained on data from their respective classes. Both generators utilize the same architectural design based on R2Gen (Chen et al., 2020), one of the most popular approaches for MRG. Specifically, given a radiology image as an input, a visual extractor is trained to extract related features. Subsequently, a transformer encoder and a transformer decoder, both consisting of a multi-head self-attention and a multi-head cross-attention module, are further employed to generate long reports. During the inference stage, a two-stage approach is adopted, where an input image is first fed to the image classifier to distinguish whether it contains any disease or abnormality, and then the corresponding generator is chosen to generate the diagnostic report in the second stage. Although the two-stage strategy can improve the ability of the generator to specifically generate "disease-specific" reports, there is still an inherent challenge of data imbalance which biases the model towards producing reports of the most dominant diseases found in the training data. With our disease KG, we further propose a novel data augmentation method to mitigate the disease imbalance issue.

### Disease-Specific Augmentation

The first step of our augmentation strategy is to create a key-value pool of disease sentences, where the keys represent sentence labels (See Section 2) which are a concatenation of "diseases-organs" pairs such as "opacity-lung", and values include all unique-format sentences under this label such as "The lung is opacity" and "This patient has lung opacity".
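The substitution idea over such a key-value pool can be sketched as follows (illustrative only; the full strategy additionally filters labels by a count interval and iterates from the least frequent label, as detailed next in this section):

```python
# Substitute each disease sentence with every alternative format from the
# pool.  With n unique formats and one source report per format, this
# yields n * (n - 1) additional reports.
def augment(reports, pool, label, labeler):
    """reports: lists of sentences; pool[label]: unique sentence formats."""
    augmented = []
    for report in reports:
        for i, sentence in enumerate(report):
            if labeler(sentence) != label:
                continue  # only substitute sentences under the target label
            for alt in pool[label]:
                if alt != sentence:
                    augmented.append(report[:i] + [alt] + report[i + 1:])
    return augmented
```

With two unique formats for "opacity-lung" and two source reports each containing one of them, the sketch produces two additional reports, matching the n*(n-1) count.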
We define the label count as the number of unique-format sentences for each sentence label. A higher label count indicates more sentence variations that describe that label, making it easier to perform disease augmentation through random substitution. Therefore, we define a count interval [5, 100] by omitting sentence labels with a label count of less than 5 or more than 100. Starting from the label with the fewest unique-format sentences in this interval, we first find all diagnostic reports that contain sentences under this sentence label. For each report, we substitute the sentence under this sentence label with another format from the key-value pool and repeat this operation for all reports. For example, if 5 distinct sentences belong to a particular label, the proposed augmentation strategy will generate \(5\times(5-1)=20\) additional reports. Given that a report might contain multiple disease sentences, this augmentation process could inadvertently boost the frequency of various diseases concurrently. To moderate this undesired effect, we update the statistics of disease labels after each round of augmentation and find the next least frequent disease that has not been augmented before. Figure 1(b) indicates that the applied augmentation strategy successfully evens out the distribution of diseases, especially in reducing the long-tail problem. It is noted that although the augmentation strategy increases the occurrences of all types of diseases, it prioritizes the occurrence of diseases in the tailed population.

## 4 Evaluation Metric

Common evaluation metrics, including BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), etc., fail to consider whether the generated reports describe the diseases appearing in the input image. The accurate description of disease keywords is the main criterion for radiologists to decide whether to use the generated reports.
Therefore, based on our proposed KG, we introduce a new evaluation metric, Diverse Sensitivity (DS), that evaluates whether the diseases identified in the ground-truth report are also accurately depicted in the generated report. Firstly, we consider a generated report to be correct _iff_ it depicts at least one disease that appears in the ground truth. The Sensitivity (Sen.) is defined as \(Sen.=\frac{TP}{TP+FN}\), where \(TP\) and \(FN\) stand for true positive and false negative respectively. However, due to the long-tailed disease distribution, the network is able to achieve a high sensitivity if all the generated reports contain the most common disease. Thus, we propose a different metric, Diversity (Div.), to account for the variability during generation. Div. is defined as the ratio between the number of uniquely generated disease types and the number of total disease types. Lastly, DS is the harmonic mean of Sen. and Div., _i.e._, \(DS=2\times\frac{Sen.\times Div.}{Sen.+Div.}\). Since DS focuses on evaluating "disease-specific" sentences, we also introduce the Diagnostic Odds Ratio (DOR), similar to the concept defined in (Glas et al., 2003). This complementary metric evaluates the model's ability to generate correct "disease-free" reports. Formally, \(DOR=\frac{TP\times TN}{FP\times FN}\), where \(TN\) and \(FP\) stand for true negative and false positive respectively. DS and DOR are considered jointly to evaluate the clinical efficacy of a generation model.

## 5 Experiment

### Datasets and Implementation

In the experiments, we adopt IU-Xray (Demner-Fushman et al., 2016), which consists of 7,470 chest X-ray images with 3,955 radiology reports. Each report is paired with two associated images, a frontal and a lateral view. This dataset is split into train/validation/test sets by 7:1:2, following R2Gen (Chen et al., 2020). Model selection is based on the best DS score on the validation set and we report its performance on the test set.
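Given per-report sets of disease keywords extracted with the KG, the metrics defined in Section 4 can be computed directly; a minimal sketch (function and variable names are ours):

```python
# Sketch of Sen., Div., DS, and DOR over per-report disease-keyword sets.
# A report counts as a true positive if it mentions at least one disease
# present in the ground truth; an empty ground-truth set is "disease-free".
def ds_and_dor(pred_diseases, gt_diseases, n_disease_types):
    tp = fn = tn = fp = 0
    generated_types = set()
    for pred, gt in zip(pred_diseases, gt_diseases):
        generated_types |= pred
        if gt:                      # ground truth is disease-specific
            if pred & gt:
                tp += 1
            else:
                fn += 1
        else:                       # ground truth is disease-free
            if pred:
                fp += 1
            else:
                tn += 1
    sen = tp / (tp + fn) if tp + fn else 0.0
    div = len(generated_types) / n_disease_types
    ds = 2 * sen * div / (sen + div) if sen + div else 0.0
    dor = (tp * tn) / (fp * fn) if fp * fn else float("inf")
    return ds, dor
```

As in the paper, a model that only ever emits disease-free reports would score zero TP (zero DS), while one that only emits disease reports has zero TN and hence a zero DOR.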
For a fair comparison, we keep all experimental settings consistent with those used in R2Gen (Chen et al., 2020).

### Comparison Results

#### 5.2.1 Quantitative Results.

We evaluate the effectiveness of our proposed method as compared to R2Gen (Chen et al., 2020) using DS and DOR. We also include Sensitivity (Sen.) and Diversity (Div.) in our comparison for reference. As shown in Table 1, our method achieves a DS score of \(0.1902\) and a DOR score of \(0.5138\), outperforming R2Gen by a large margin. This improvement in these two clinically relevant metrics implies greater applicability of our method in real-world clinical settings. We further investigate the effect of augmentation and the two-stage generation process in Table 1. The observed decrease in DOR to 0.4223 and DS to 0.1634, when the low-frequency diseases are not augmented, emphasizes the significance of ensuring a balanced disease distribution in the data set. Given our objective to improve the correct generation of disease sentences, we compare the two-stage generation model to its "disease-specific generator". Although the "disease-specific generator" achieves a higher DS, it can only generate reports with diseases, leading to zero TN and thus a zero score on DOR. Such a model is not clinically useful as it lacks the ability to distinguish between normal and disease images. Introducing a classifier can alleviate this problem, but it gives rise to another issue of having false negative predictions, which is beyond the scope of this paper. Lastly, we show the discrepancy between the common metric and the proposed one in the last row by recording a second R2Gen model that is selected based on the best BLEU-4 score. Although this BLEU-based model gains an impressive BLEU-4 score of \(0.1656\) (the leading performance under this metric on IU-Xray, not presented in Table 1 for clarity) in our experiments, it achieves close to zero in both DS and sensitivity.
This implies that the majority of the generated reports do not align with the actual diseases, reducing their usefulness in a clinical setting.

| Method | DOR | DS | Sen. | Div. |
| --- | --- | --- | --- | --- |
| R2Gen (Chen et al., 2020) | 0.2911 | 0.1523 | 0.0932 | 0.4153 |
| Two-Stage + Aug. (Ours) | **0.5138** | **0.1902** | 0.1220 | 0.4305 |
| Two-Stage | 0.4223 | 0.1634 | 0.1034 | 0.3898 |
| Disease-Specific Only | 0 | 0.1955 | 0.1305 | 0.3898 |
| R2Gen* (Chen et al., 2020) | 0.4366 | 0.0324 | 0.0186 | 0.1220 |

Table 1: Comparison results. R2Gen* means the best model under BLEU.

#### 5.2.2 Qualitative Results.

Figure 3 provides a qualitative analysis that demonstrates the clinical efficacy of our methods and metrics. The generated reports reveal an important finding: the best R2Gen model, when selected based on the BLEU-4 metric (referred to as R2Gen (BLEU-4)), fails to generate disease-specific sentences, disregarding clinically relevant information. In contrast, when selecting models using our proposed DS metric (referred to as R2Gen (DS)), the chosen R2Gen model performs much better, indicating its ability to generate disease-specific sentences and emphasizing the need for a more clinically relevant evaluation metric. Moreover, our two-stage generation approach, incorporating our augmentation strategy based on the DS metric, denoted as "Ours (DS)", effectively tackles the long-tailed issue by successfully capturing rare abnormalities such as "interstitial opacity" and "edema". The accurate descriptions of diseases generated by our approach, which align with the keywords in our knowledge graph (referred to as "Disease Keyword"), further validate the utility of our approach.
## 6 Conclusion

In this paper, we present the construction of a comprehensive knowledge graph focusing on chest X-ray images to uncover disease relationships, and investigate the significance of disease mentions in the medical report generation task. We propose a two-stage generation approach and a KG-based augmentation strategy to mitigate the challenges associated with imbalanced data sets. The KG developed in this study can be extended and utilized by other researchers. Furthermore, a novel evaluation metric is devised, leveraging the information captured in the KG to measure clinical relevance. This work serves as a catalyst for future exploration of clinical efficacy in medical report generation.

Figure 3: Qualitative comparison on abnormal cases. Only our method based on the DS evaluation metric successfully generates the correct disease mentions. Highlighted words can be accurately captured by "Disease Keywords" from our KG.
2305.03252
HeteroEdge: Addressing Asymmetry in Heterogeneous Collaborative Autonomous Systems
Gathering knowledge about surroundings and generating situational awareness for IoT devices is of utmost importance for systems developed for smart urban and uncontested environments. For example, a large-area surveillance system is typically equipped with multi-modal sensors such as cameras and LIDARs and is required to execute deep learning algorithms for action, face, behavior, and object recognition. However, these systems face power and memory constraints due to their ubiquitous nature, making it crucial to optimize data processing, deep learning algorithm input, and model inference communication. In this paper, we propose a self-adaptive optimization framework for a testbed comprising two Unmanned Ground Vehicles (UGVs) and two NVIDIA Jetson devices. This framework efficiently manages multiple tasks (storage, processing, computation, transmission, inference) on heterogeneous nodes concurrently. It involves compressing and masking input image frames, identifying similar frames, and profiling devices to obtain boundary conditions for optimization. Finally, we propose and optimize a novel parameter split-ratio, which indicates the proportion of the data required to be offloaded to another device while considering the networking bandwidth, busy factor, memory (CPU, GPU, RAM), and power constraints of the devices in the testbed. Our evaluations captured while executing multiple tasks (e.g., PoseNet, SegNet, ImageNet, DetectNet, DepthNet) simultaneously, reveal that executing 70% (split-ratio=70%) of the data on the auxiliary node minimizes the offloading latency by approx. 33% (18.7 ms/image to 12.5 ms/image) and the total operation time by approx. 47% (69.32s to 36.43s) compared to the baseline configuration (executing on the primary node).
Mohammad Saeid Anwar, Emon Dey, Maloy Kumar Devnath, Indrajeet Ghosh, Naima Khan, Jade Freeman, Timothy Gregory, Niranjan Suri, Kasthuri Jayaraja, Sreenivasan Ramasamy Ramamurthy, Nirmalya Roy
2023-05-05T02:43:16Z
http://arxiv.org/abs/2305.03252v1
# HeteroEdge: Addressing Asymmetry in Heterogeneous Collaborative Autonomous Systems

###### Abstract

Gathering knowledge about surroundings and generating situational awareness for autonomous systems is of utmost importance for systems developed for smart urban and uncontested environments. For example, a large-area surveillance system is typically equipped with multi-modal sensors such as cameras and LIDARs and is required to execute deep learning algorithms for action, face, behavior, and object recognition. However, these systems are subjected to power and memory limitations due to their ubiquitous nature. As a result, optimizing how the sensed data is processed, fed to the deep learning algorithms, and how the model inferences are communicated is critical. In this paper, we consider a testbed comprising two Unmanned Ground Vehicles (UGVs) and two NVIDIA Jetson devices and posit a self-adaptive optimization framework that is capable of navigating the workload of multiple tasks (storage, processing, computation, transmission, inference) collaboratively across multiple heterogeneous nodes simultaneously. The self-adaptive optimization framework involves compressing and masking the input image frames, identifying similar frames, and profiling the devices for various tasks to obtain the boundary conditions for the optimization framework. Finally, we propose and optimize a novel parameter _split-ratio_, which indicates the proportion of the data required to be offloaded to another device while considering the networking bandwidth, busy factor, memory (CPU, GPU, RAM), and power constraints of the devices in the testbed.
Our evaluations, captured while executing multiple tasks (e.g., PoseNet, SegNet, ImageNet, DetectNet, DepthNet) simultaneously, reveal that executing 70% of the data on the _auxiliary node_ (_split-ratio_ = 70%) minimizes the offloading latency by \(\approx\) 33% (18.7 ms/image to 12.5 ms/image) and the total operation time by \(\approx\) 47% (69.32s to 36.43s) compared to the baseline configuration (executing on the _primary node_).

Collaborative Systems, Deep Edge Intelligence, Autonomous Systems

## I **Introduction**

In recent years, autonomous systems such as unmanned aerial or ground vehicles have become very popular in various applications, such as surveillance, photography and videography, mapping and surveying, agriculture, environmental monitoring, search and rescue, and delivery services. Unmanned vehicles equipped with sensors such as cameras, lidar, radar, and GPS allow us to collect data and perform tasks (e.g., object detection, scene detection, and many more) in various environments. Performing these tasks on the onboard computational unit (especially with advanced deep learning models) is constrained by the limited power supply, eventually affecting the autonomous systems' operation and safety [1]. Several recent studies have investigated the impact of operational time and power supply of autonomous systems [2, 3, 4], and suggest that the operational capacity of the systems is severely affected due to the execution of onboard sub-systems (e.g., navigation unit, cameras, communication systems) [4]. In addition to these sub-systems, accommodating Deep Neural Network (DNN) algorithms (usually power-, memory-, and computation-hungry) to perform the tasks for situational awareness further strains the already limited power availability [5].
Furthermore, as some of the systems' operations, such as navigation and communication, are more important for the safety of the expensive autonomous systems, optimizing the DNNs would be essential to conserve the limited available power. One approach to conserving power when running DNNs is to offload the inference task to a remote device (a cloud server or a device connected to the same network) with surplus power and computational capability [6]. However, such a solution is impacted by network availability, reliability, low bandwidth, and latency [7] caused by the quality of communication links and the distance between the _primary node_ (the device that will offload data) and the _auxiliary node_ (either a remote server or an edge device on the same network that can share the workload of the primary node). Limitations of prior research on offloading include offloading only to homogeneous devices (e.g., MASA [8]) or smartphones [9], and the high cost of remote cloud services [10]. For scenarios such as situational awareness by autonomous systems, we hypothesize that leveraging another device within the system would mitigate latency and help conserve power compared to expensive cloud services. Keeping these discussions in mind, the overarching objective of this paper is to optimize and schedule tasks by offloading the data from a busy _primary node_ to a relatively idle _auxiliary node_. To address this objective, we propose the following contributions. **(i) Data-Driven Resource-Aware Offloading Framework.** This framework optimizes the system parameters, such as the processing complexity of the task, memory utilization, bandwidth, and power availability, to decide whether the _primary node_ should offload a portion of the data to an _auxiliary node_. Besides, we have introduced a novel parameter called _split-ratio_, which helps us efficiently offload the data to the _auxiliary node_.
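The trade-off governed by the _split-ratio_ can be illustrated with a simple makespan model, assuming the two nodes process their shares in parallel; the per-image times below are hypothetical stand-ins, not our measured values:

```python
# Illustrative makespan model for the split-ratio r: the primary node
# processes its (1 - r) share locally while the auxiliary node processes
# the offloaded r share (transfer + remote compute) in parallel.
def makespan(r, n_images, t_primary, t_auxiliary, t_offload):
    local = (1 - r) * n_images * t_primary             # time on the primary node
    remote = r * n_images * (t_auxiliary + t_offload)  # transfer + remote compute
    return max(local, remote)

def best_split_ratio(n_images, t_primary, t_auxiliary, t_offload, steps=20):
    """Grid-search r in [0, 1] for the smallest makespan."""
    candidates = [i / steps for i in range(steps + 1)]
    return min(candidates,
               key=lambda r: makespan(r, n_images, t_primary, t_auxiliary, t_offload))
```

With toy numbers where the primary node is roughly three times slower per image than the auxiliary node plus transfer, the model favors offloading about 70% of the batch, which matches the intuition behind the optimum we observe empirically.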
Our analysis indicates that offloading with a _split-ratio_ of 0.7-0.8 enhances the performance of task execution at the expense of increased power and memory usage (_primary + auxiliary node_). **(ii) Testbed Development for System Evaluation.** A system of two Nvidia Jetson devices and two UGVs was designed to evaluate the optimization framework. The devices were equipped with an MQTT-based publisher-subscriber protocol to share the _auxiliary node's_ system parameters with the _primary node_ and offload the data to the _auxiliary node_. To show the effectiveness of our proposed optimization framework, we assess its performance on DNN applications with multiple data modalities, including posture estimation (identifying human postures like standing, sitting, or lying down), semantic segmentation (classifying each pixel in an image by the object it belongs to, providing a detailed understanding of the scene), and object detection. **(iii) Data Compression for Enhanced Optimization Performance.** As the data size grows, offloading data becomes more expensive. As a result, a frame compression and masking technique was leveraged to eliminate similar frames and extract the object of interest from the data to be offloaded, thereby reducing inference time and communication overhead and eventually enhancing overall performance in data-intensive environments. **(iv) Simulating Real-World Scenarios to Evaluate Offloading Strategy.** In real-world autonomous system operations, the individual nodes would be in motion, suggesting that the distance between the nodes can affect the network parameters. As a result, we simulate a scenario where the nodes are constantly in motion. As the distance increases, offloading latency rises, prompting questions about when to stop offloading images.
This exploration offers valuable insights, such as understanding the distance-offloading-latency relationship, identifying offloading thresholds for efficiency, and optimizing image offloading strategies in dynamic environments, ultimately improving performance in real-world scenarios.

## II Related work

**Edge inference and Offloading of multiple concurrent DNN tasks.** Inferencing multiple DNN tasks concurrently is a critical feature for any autonomous system's real-time operation. Motivated by this, the authors of Heimdall [5] developed a mobile GPU coordination platform for emerging augmented reality applications in which frame rates decrease and inference latency increases significantly due to multi-DNN GPU contention. It is designed with a pseudo-preemption mechanism that (i) breaks down the multi-DNN workload into smaller units and (ii) prioritizes and flexibly schedules simultaneous GPU tasks. Additionally, in BAND [11], the authors develop a mobile-based inference system. BAND dynamically enables the creation of DNN execution plans and schedules DNNs on processors according to stated scheduling goals. In contrast, we consider a broader range of edge devices and application scenarios in this work to eventually minimize bandwidth consumption and latency. **Task-based Scheduling**: Task-based scheduling plays a vital role in optimizing and reducing inference latency with respect to the assigned tasks. The authors of [12] propose a novel approach to the constraint optimization problem and suggest a greedy heuristic for choosing the best subset of concurrent applications within the constrained fidelity and resource budget. Additionally, LaLaRAND [13] is a real-time layer-level DNN scheduling framework that enables CPU/GPU scheduling of individual DNN layers with fine-grained CPU/GPU allocation schemes.
This work tackles the schedulability of real-time DNN tasks, the asymmetric nature of DNN task execution on CPU and GPU, and the lack of task-based CPU/GPU-aware allocation schemes. In contrast, our work involves running multiple DNN applications concurrently by splitting data across different nodes, considering each device's capabilities and task requirements. **Frame-based compression techniques.** Prior research has shown that optical flow can be utilized to estimate object motion across multiple camera views, enabling the system to track objects moving between cameras [14]. This information helps schedule video frame processing, minimizing latency and prioritizing relevant frames to meet real-time video analytics demands. On the other hand, AdaMask [15] presents an adaptive frame masking approach for efficient video streaming and processing in edge computing environments, focusing on lower communication overhead and accelerated DNN inference. However, both [14] and [15] aim at compressing frames acquired from static cameras. In contrast, we consider both static and mobile autonomous devices to optimize image offloading strategies, enhancing efficiency and effectiveness in various real-world scenarios.

## III System Overview

Our system architecture comprises a device profiler and an online scheduler, as shown in Fig. 1. The input is the image data, which is split according to resource availability. We consider three important parts in designing the framework (Fig. 1): frame masking, the profiling engine, and optimization. This involves devising an efficient frame masking solution to ensure optimal performance during offloading between edge devices and creating a profiling engine that accurately evaluates the primary and auxiliary nodes' performance, considering memory, power, and inference time while adapting to dynamic conditions like UGV movement.
An optimization framework is also required to identify optimal split ratios for offloading decisions within specified bounds, resulting in a comprehensive and adaptive solution for multi-DNN systems.

### HeteroEdge _Components_

**Device profiler.** In order to make memory- and power-aware scheduling decisions, we analyze and monitor the performance of both devices, with a focus on key metrics such as device memory, power usage, inference time, and network latency. By gathering this information, we are able to evaluate the resource availability of each device in real time and make informed decisions on the allocation of processing tasks. **Task scheduler.** In our optimization framework, we incorporated a task scheduler that intelligently manages task offloading in a multi-node environment. By gathering profiling data about the primary and auxiliary devices, it assesses resource availability and the multi-DNN workload. The task scheduler then ascertains whether the primary node requires offloading and calculates the optimal data-split ratio for efficient resource utilization. Acting as a smart decision-making system, it distributes workloads based on resource availability and performance capabilities, leading to reduced inference time, less memory utilization, and enhanced overall system efficiency. There are some difficulties in designing an efficient offloading framework for multi-DNN execution systems in resource-constrained edge environments. This framework must address device heterogeneity, resource constraints, performance variability, and energy efficiency while adapting to dynamic conditions like varying velocities and distances. In moving conditions, the offloading latency may vary due to changes in distance between devices, leading to potential inefficiencies. Processing large image data in edge environments poses challenges, as increased data transmission consumes more bandwidth and results in higher latency.
Additionally, processing larger images requires more computational power, straining resource-constrained devices. To tackle this issue, we introduce a frame compression technique, specifically frame masking, which is crucial for ensuring optimal performance during offloading. Frame masking reduces the image data size, minimizing bandwidth consumption, lowering latency, and improving energy efficiency.

### _System Assumptions_

In this context, we assume that resource-constrained devices have limited processing power and memory resources compared to more powerful servers or computers. When running multiple DNN models simultaneously, higher energy consumption and potential performance degradation are expected. Model inference latency may also be a concern when executing multiple DNN models concurrently on these devices. Additionally, scalability concerns arise due to a UGV's limited capacity to handle an increasing number of DNN models or growing complexity. The system assumes a variety of UGVs with different processing capabilities, memory capacities, and energy consumption profiles. The UGVs are connected through a communication network that allows for data offloading and communication between devices. The profiling engine can accurately measure performance metrics like memory utilization, power consumption, and inference time. The system assumes that the UGVs may be in motion, causing the distance between them to change and affecting communication latency.

## IV _HeteroEdge_ Profiling Engine

In our proposed system, the individual nodes continuously monitor system variables under multi-DNN workloads to identify optimal collaborative configurations. We first describe the testbed setup we used in profiling multi-DNN workloads across heterogeneous systems and then present quantitative insights from profiling various device and network attributes.
### _Testbed setup_ We consider a network consisting of two heterogeneous edge platforms akin to a pair of autonomous systems with heterogeneous resources: (i) a low-resource Jetson Nano with a quad-core ARM Cortex-A57 MPCore processor, 4GB of LPDDR4 memory, and a 128-core NVIDIA Maxwell GPU, and (ii) a Jetson Xavier embedded with an octa-core NVIDIA Carmel ARM v8.2 CPU, 8GB LPDDR5, and a 512-core Volta GPU. In all our experiments, we assume that the lower-end device (i.e., the Nano) constantly monitors system parameters to offload its workload to the more powerful device, for executing multiple DNNs for downstream applications. Fig. 2 shows the experimental setup of our testbed. While the Jetson Xavier device was positioned in a fixed location, UGVs mounted with Jetson Nanos were moved at different angles and velocities for emulating various mobility conditions. We adopted a publisher-subscriber architecture [16] (specifically, the Message Queuing Telemetry Transport (MQTT) [17] protocol) for message passing between the two devices.
Fig. 1: Optimization framework overview: frame compression technique and task scheduling process.
Fig. 2: Experimental setup for testbed: (a) UGV setup for 2-meter distance (b) 6-meter distance (c) 10-meter distance (d) static device (Nano-Xavier) setup.
### _Device Profiling_ _HeteroEdge_ profiling engine runs on both primary and auxiliary nodes to continuously log memory utilization, power consumption, and inference time for both devices. While in our experiments we consider the low-resource device (i.e., Jetson Nano) as the primary node and Jetson Xavier as the auxiliary node for simplicity, in reality, all nodes in the network can assume primary and auxiliary roles. _HeteroEdge_ uses Jetson Stats [18] to measure memory utilization and average power consumption.
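On the testbed these readings come from Jetson Stats; as a platform-neutral illustration (the function names and the toy workload below are our own, not part of _HeteroEdge_), the same per-task time and memory logging can be sketched with Python's standard library:

```python
import time
import tracemalloc

def profile_task(task, *args):
    """Run one task and log inference time and peak memory allocated,
    standing in for the on-device Jetson Stats readings."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = task(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, {"inference_time_s": elapsed, "peak_mem_bytes": peak}

# Toy workload standing in for a DNN inference call.
def fake_inference(n):
    return sum(i * i for i in range(n))

out, metrics = profile_task(fake_inference, 100_000)
```

On a Jetson, the memory and power fields would instead be read from Jetson Stats while the task runs.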
**DNN Workloads.** As _HeteroEdge_ is designed for autonomous systems that are required to run multiple concurrent compute-intensive tasks, we run two exemplar tasks, namely semantic segmentation and posture estimation, using a multiprocessing pool. _HeteroEdge_ utilizes the Nvidia Jetson Inference Library [19] for profiling the various DNN models in order to optimize offloading decisions. **Split Ratio (\(r\)).** In our work, we propose the **split ratio**, which represents the proportion of images offloaded to the auxiliary node. It ranges from 0 (all images processed locally) to 1 (all images offloaded to the auxiliary). The optimal split ratio maximizes the collaborative system's throughput while minimizing resource consumption. Notations \(T_{1}\), \(P_{1}\), and \(M_{1}\) represent the operation time, power, and memory usage of the auxiliary node, while \(T_{2}\), \(P_{2}\), and \(M_{2}\) represent those of the primary node. \(Offlatency\) refers to the network latency resulting from offloading images. In Table I, we report the measured performance of the two devices in processing a batch of 100 images, under various configurations, with \(r\) ranging from 0 to 1. As anticipated, we observe that while the overall power consumption is comparable between the nodes, the processing latency is significantly lower for the auxiliary device for the same workload. For example, at \(r=0.5\), while the processing time on the primary (\(\approx\) 28.35 seconds) is double that of the auxiliary (\(\approx\) 13.88 seconds), the power consumption is comparable at 5.63 W and 5.42 W. At the same time, we also note that the offloading latency varies only minimally (between 0 and 1.56 seconds) with \(r\), supporting our premise for intelligent offloading. ### _Network Profiling_ Furthermore, we investigated the network latency under two different network configurations, specifically WiFi on two frequency bands, 2.4 GHz and 5 GHz. In Fig.
3, we plot the latency (\(y\)-axis) (a) for different sizes of images, (b) various split ratios, and (c) distance between the primary and auxiliary devices, on the \(x\)-axis. We note that the higher band offers lower latencies, and we observe increasing latencies with both increasing split ratios as well as distances. In the next section, we describe the _HeteroEdge_ solver which takes into account the profiled variables to output an optimal collaborative system for vision tasks. ## V _HeteroEdge_ Solver We design the _HeteroEdge_ solver such that it dynamically adjusts data-splitting ratios based on available resources, optimizing the collective throughput and resource utilization. This cost-effective and scalable solution is applicable to various edge computing scenarios, addressing resource limitations and energy constraints, while allowing efficient processing of large data volumes in a distributed manner. In the following subsections, we describe the steps in devising the optimization framework. ### _Latency and Energy Modeling_ In Table II, we list the variables used in the optimization formulation. #### V-A1 Execution Period \(I\) denotes the input size of the computation task of the offloading device. \(N\) denotes the number of CPU cycles needed to execute one bit of input computation data. Therefore, \(C_{cpu}=NI\) denotes the cycles needed to finish the selected computation task. So, the execution latency can be expressed as \(T_{exec}=\frac{C_{cpu}}{S}\), where \(S\) is the computation speed of the device, measured in cycles per second. We model the power consumption of the CPU as \(P=\mu(S)^{3}\), as in [20]. Thus, the energy consumption per cycle is \(\mu(S)^{2}\).
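These relations can be checked numerically; a minimal sketch in which the coefficient \(\mu\), the workload size, and the clock speed are all hypothetical values:

```python
# Execution-period model: C_cpu = N*I, T_exec = C_cpu/S, P = mu*S^3.
# All numeric values are hypothetical, chosen only to illustrate the formulas.
MU = 1e-27               # chip-dependent coefficient (assumed)
N_CYCLES_PER_BIT = 800   # CPU cycles per bit of input data (assumed)

def execution_model(input_bits, speed_hz):
    c_cpu = N_CYCLES_PER_BIT * input_bits   # total cycles, C_cpu = N * I
    t_exec = c_cpu / speed_hz               # execution latency, T_exec = C_cpu / S
    power = MU * speed_hz ** 3              # CPU power, P = mu * S^3
    energy_per_cycle = MU * speed_hz ** 2   # energy per cycle, mu * S^2
    return t_exec, power, energy_per_cycle

t, p, e_cycle = execution_model(input_bits=8e6, speed_hz=1.5e9)
```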
The energy consumption for deep model processing is then \(E_{exec}=C_{cpu}\mu(S)^{2}\), where \(\mu\) is a coefficient depending on the chip architecture. We assume that the computing speed of the server's CPU is limited to \(S^{max}\), so we may have \(0\leq S\leq S^{max}\). Considering that \(r\) is the split ratio, \[E_{exec}=E_{1}r+E_{2}(1-r)\] \[T_{exec}=T_{1}r+T_{2}(1-r)\] Here, \(E_{1}\) and \(E_{2}\) are the execution energies of the two nodes participating in completing one task. Similarly, \(T_{1}\) and \(T_{2}\) are the execution times of the two devices for a specific split ratio. \begin{table} \begin{tabular}{|l|l|} \hline Notation & Meaning \\ \hline \(I\) & Input size of computational task \\ \hline \(N\) & Number of CPU cycles needed to execute one bit of computation data \\ \hline \(C_{cpu}\) & Total cycles needed to finish a specific task \\ \hline \(T_{exec}\) & Total execution time \\ \hline \(S\) & Computation speed of a client device \\ \hline \(E_{exec}\) & Total execution energy \\ \hline \(B\) & Transmission bandwidth \\ \hline \(D_{R}\) & Data rate \\ \hline \(T_{o}\) & Offloading latency \\ \hline \(r\) & Split ratio \\ \hline \(T_{s}\) & Time required to run ratio offloading code \\ \hline \(E_{o}\) & Offloading energy \\ \hline \(E_{s}\) & Energy required to run ratio offloading code \\ \hline \end{tabular} \end{table} TABLE II: Important Notation & Meaning. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline **r (split ratio)** & \(T_{1}\) **(Xavier) (s)** & \(P_{1}\) **(Xavier) (W)** & \(M_{1}\) **(Xavier) (\%)** & **1-r** & \(T_{2}\) **(Nano) (s)** & \(T_{3}\) **(Offloading) (s)** & \(P_{2}\) **(Nano) (W)** & \(M_{2}\) **(Nano) (\%)** \\ \hline 0 & 0 & 0.95 & 10.2 & 1 & 68.34 & 0 & 5.89 & 69.82 \\ \hline 0.3 & 8.45 & 4.59 & 36.67 & 0.7 & 39.03 & 0.43 & 5.35 & 63.77 \\ \hline 0.5 & 13.88 & 5.42 & 45.61 & 0.5 & 28.35 & 0.89 & 5.63 & 52.54 \\ \hline 0.7 & 16.64 & 5.73 & 51.23 & 0.3 & 19.54 & 1.25 & 4.75 & 45.58 \\ \hline 0.8 & 17.24 & 6.17 & 56.96 & 0.2 & 13.34 & 1.44 & 4.48 & 40.34 \\ \hline 1 & 19.001 & 6.38 & 59.37 & 0 & 0 & 1.56 & 0.77 & 16 \\ \hline \end{tabular} \end{table} TABLE I: Profiling results from the testbed for the semantic segmentation and posture estimation models. Fig. 3: MQTT latency for (a) different network bands & image sizes, (b) different split ratios, & (c) different distances with differing velocities of UGVs. #### V-A2 Offloading Period If \(B\) is the transmission bandwidth, \(d\) is the distance between the two devices, \(u\) is the path loss exponent, and \(N_{0}\) is the Gaussian noise power, the transmission data rate for the offloading task can be found using the Shannon-Hartley theorem [21]. \[D_{R}=B\ \log_{2}(1+\frac{d^{-u}P_{t}}{N_{0}})\] Here, \(P_{t}\) is the transmission power of the device during offloading. If the medium is lossless, then we can put \(u=0\). Then, the offloading latency is \(T_{o}=\frac{C}{D_{R}}\), where \(C\) depends on the selected split ratio. The total latency is given by \(T=T_{exec}+T_{o}+T_{s}\). The energy required to run the split-ratio selection code is \(E_{s}=P_{k}T_{s}\), where \(P_{k}\) is the power rating of the device that runs the solver code. The offloading energy requirement can be divided into two parts, \[E_{o}=T_{o}(P_{t}+P_{r})\] Here, \(P_{t}\) is the power required by the sender device and \(P_{r}\) is the power drawn while receiving the sent data. The total energy can then be expressed as \(E=E_{exec}+E_{s}+E_{o}\). #### V-A3 Solver As our objective is to make the system both memory- and energy-aware while minimizing the latency (i.e., improving the overall throughput), we have derived a relation between energy and memory during execution time.
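Putting the latency and energy components together, the cost of one offloading decision can be sketched as follows; reading the offloading energy as \(E_{o}=T_{o}(P_{t}+P_{r})\), with every numeric parameter a hypothetical placeholder rather than a testbed measurement:

```python
import math

def data_rate(bandwidth_hz, distance_m, path_loss_exp, tx_power_w, noise_w):
    # Shannon-Hartley: D_R = B * log2(1 + d^-u * P_t / N_0)
    snr = (distance_m ** -path_loss_exp) * tx_power_w / noise_w
    return bandwidth_hz * math.log2(1 + snr)

def total_cost(bits_offloaded, t_exec, e_exec, t_s, p_solver, p_tx, p_rx,
               bandwidth_hz, distance_m, path_loss_exp=2.0, noise_w=1e-9):
    d_r = data_rate(bandwidth_hz, distance_m, path_loss_exp, p_tx, noise_w)
    t_o = bits_offloaded / d_r      # offloading latency, T_o = C / D_R
    e_s = p_solver * t_s            # solver-code energy, E_s = P_k * T_s
    e_o = t_o * (p_tx + p_rx)       # offloading energy (send + receive)
    return t_exec + t_o + t_s, e_exec + e_s + e_o   # total T and total E

T, E = total_cost(bits_offloaded=4e7, t_exec=14.0, e_exec=80.0,
                  t_s=0.05, p_solver=5.0, p_tx=0.1, p_rx=0.1,
                  bandwidth_hz=20e6, distance_m=4.0)
```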
Specifically, we have considered the quadratic relation between energy and required memory during execution. While solving for optimization, we can use the variable substitution approach, and we can work with the real part only of the following equation for a sub-optimal solution. \[T=r(T_{1}+T_{3})+(1-r)T_{2}\] Here, \(T_{1}\) is the operation time for the Jetson Xavier and \(T_{2}\) is the operation time for the Jetson Nano. \(T_{3}\) is the round-trip time for image transfer. \[T_{1}=a_{1}r^{2}+a_{2}r+c_{1} \tag{1}\] \[T_{2}=b_{1}(1-r)^{2}+b_{2}(1-r)+c_{2}\] \[E_{1}=a_{1}r^{3}+a_{2}r^{2}+a_{3}r+c_{1} \tag{2}\] \[E_{2}=b_{1}(1-r)^{3}+b_{2}(1-r)^{2}+b_{3}(1-r)+c_{2}\] \[M_{1}=a_{1}r^{2}+a_{2}r+c_{1} \tag{3}\] \[M_{2}=b_{1}(1-r)^{2}+b_{2}(1-r)+c_{2}\] Here, \(E_{1}\) and \(E_{2}\) are the energy consumption for the Jetson Xavier and Nano, and \(M_{1}\) and \(M_{2}\) are the memory needed for the Jetson Xavier and Nano, respectively. The values of the coefficients \(a_{1}\), \(a_{2}\), \(a_{3}\), \(b_{1}\), \(b_{2}\), \(b_{3}\) can be found through curve fitting with some experimental values. Problem formulation: \[\min(T)\] To calculate the distance between the UGVs, we employ the following equation: \[d=(V_{\text{primary}}+V_{\text{auxiliary}})\times t\] This equation calculates the distance, \(d\), between two UGVs based on their velocities, \(V_{\text{primary}}\) and \(V_{\text{auxiliary}}\), and a given time interval, \(t\). The equation takes into account the relative motion of the UGVs, and the distance increases as the UGVs move apart during the time interval. The relationship between latency (\(L\)) and distance (\(d\)) between the two UGVs is modeled using the following equation, obtained through curve fitting: \[L=a_{1}\times d^{2}-a_{2}\times d+a_{3}\] This equation represents the time delay (latency) in sending images from one UGV to another. As the distance, \(d\), between the devices increases, the latency, \(L\), also increases, affecting the efficiency of the offloading process.
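Under these fitted forms, the feasible-minimum search over \(r\) can be sketched with a simple grid, standing in for a full nonlinear solve; every coefficient below is a hypothetical placeholder for curve-fit output, not a measured value:

```python
# Evaluate the fitted latency model T(r) = r*(T1(r)+T3) + (1-r)*T2(r)
# over candidate split ratios and keep the feasible minimum.
# All coefficients are made-up stand-ins for curve-fit values.
def t1(r): return 2.0 * r**2 + 16.0 * r + 1.0               # auxiliary time fit
def t2(r): return 10.0 * (1 - r)**2 + 55.0 * (1 - r) + 3.0  # primary time fit
def m1(r): return 30.0 * r**2 + 25.0 * r + 10.0             # auxiliary memory fit (%)
def m2(r): return 20.0 * (1 - r)**2 + 35.0 * (1 - r) + 16.0 # primary memory fit (%)

T3 = 1.2        # round-trip image-transfer time (assumed constant here)
MEM_CAP = 80.0  # per-device memory bound (%), assumed

def total_time(r):
    return r * (t1(r) + T3) + (1 - r) * t2(r)

candidates = [i / 100 for i in range(101)]
feasible = [r for r in candidates if m1(r) <= MEM_CAP and m2(r) <= MEM_CAP]
best_r = min(feasible, key=total_time)
```

On the testbed this search is delegated to a nonlinear solver, with the memory and power bounds entering as constraints rather than a filter.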
When the latency meets or exceeds the threshold, \(\beta\), the system stops sending data: \[\text{If }L\geq\beta,\text{ stop sending data}\] This approach ensures that the offloading process is adapted to the dynamic changes in UGV motion, resulting in more efficient resource utilization and improved overall performance. ### _Implementation_ We use Python's GEKKO library [22] to solve our optimization problems. The objective function, variables, and constraints are all specified as described earlier, and the problem is solved with a nonlinear optimization solver (the IPOPT solver [23]). ``` Input: IPs of connected devices \(n\); memory profiling of the nodes \(M\); inference time of device 1, \(T_{1}\); inference time of device 2, \(T_{2}\); round-trip time \(T_{3}\) Output: the split ratio \(r\) for optimal operation time 1: On the primary node: 2: Calculate the device availability factor \(\lambda\) based on the memory of both devices. **Compute** the coefficients \(a_{1}\), \(a_{2}\), \(b_{1}\), \(b_{2}\), \(c_{1}\), \(c_{2}\) from equations 1-3 using curve fitting 3: if \(M_{1},M_{2}\geq\lambda\) and latency \(L\leq\beta\) then 4: Assign the constraints from equation 3 on the following objective: \[T=r(T_{1}+T_{3})+(1-r)T_{2}\] 5: Check battery capacity and available UGV power (equations 5 and 6): \[P_{available}=E_{available}/\left(\left(1-k\right)\left(t_{dnn}+t_{drive}\right)/3600\right)\] proceed if \(P_{available}\geq E_{max}\) 6: Solve the formulated problem for the given constraints using the Interior Point Optimizer method 7: Send the derived amount of data to the subscriber node ``` **Algorithm 1** Algorithm for Split Ratio Selection ## VI **Frame-level Compression** To further optimize performance, _HeteroEdge_ teases out _regions of interest_ in images prior to running downstream DNN inferences (e.g., pose detection, image segmentation, etc.). This step ensures that the network latency from offloading them to an auxiliary node, when feasible, is also lowered.
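The effect of isolating regions of interest on the bytes to be offloaded can be illustrated with a toy frame: zeroing out background pixels makes the frame far more compressible before transmission. The synthetic frame pattern and the hand-made rectangular mask below are our own stand-ins for a real image and a detector-produced mask:

```python
import zlib

W, H = 64, 64
# Synthetic grayscale frame (one byte per pixel).
frame = bytes((x * 7 + y * 13) % 256 for y in range(H) for x in range(W))

# Binary mask: 1 inside a small region of interest, 0 elsewhere.
def in_roi(x, y):
    return 1 if (20 <= x < 40 and 20 <= y < 40) else 0

# Element-wise multiplication of mask and frame keeps only the ROI pixels.
masked = bytes(p * in_roi(i % W, i // W) for i, p in enumerate(frame))

# Fraction of offloaded bytes saved after lossless compression.
saved = 1 - len(zlib.compress(masked)) / len(zlib.compress(frame))
```

In the real pipeline the mask comes from the object detector, and the masked frame rather than the raw one is published over MQTT.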
_HeteroEdge_ first uses a state-of-the-art object detection model that generates binary masks where pixels with _detected objects_ are denoted by bit 1, and 0 elsewhere. Element-wise multiplication of the binary mask with the original image returns a _compressed image_ (see Fig. 4) which isolates objects of interest and eliminates extraneous backgrounds. To demonstrate the savings in bandwidth utilization and inference times, we conduct microbenchmark experiments using two downstream DNN models. We generated a virtual environment in the Gazebo simulator [24], producing 3100 images with a total of 9 common object classes, such as persons and vehicles, present. First, we use the faster-RCNN object detector [25] for generating compressed images, which are then fed to exemplar downstream DNNs: semantic segmentation (SegNet [26]) and posture detection (PoseNet [27]) models, respectively. Figure 4 shows illustrative examples of the output on the compressed frames for the two tasks. Overall, we observe a 13% reduction (on a Jetson Nano device) in the total computational time, corresponding to a savings in bandwidth of up to 28% (i.e., from 8 MB down to 5.8 MB through compression) if the images were to be offloaded. While we observe only a 2% drop in inference accuracy from compression, the astute reader will note that an imperfect object detector model may create artifacts for downstream computer vision tasks; we defer an in-depth study as future work. ## VII **Evaluation** This section presents our in-depth evaluation of the proposed optimization framework on the Gazebo-based dataset described in Section VI. ### _Deriving constraints for optimization_ We derive the constraints for the optimization presented in equation 4, which provide the limitations and requirements of the different resources that must be satisfied during the optimization process.
We consider deriving constraints for the important resources, i.e., memory, power, and inference time, for optimizing the task offloading process. From the experiments performed on the primary node, we obtain the base processing time for running multiple models on a single device. The total processing time for 100 images was found to be 68.34 seconds, as shown in Table I. We got 200 output images from the two models for 100 image inputs.
Fig. 4: (a) Original frame (b) Compressed frame & input (c) Results from compressed frame for Pose Estimation (d) Results from compressed frame for Semantic Segmentation.
We use this as the baseline for inference time: the optimized inference time must be less than 68.34 seconds overall. As we described in Section V, we use curve fitting to analyze the relationship between inference time and data-splitting ratio, memory and splitting ratio, and power and splitting ratio, which correspond to equations 1, 2, and 3. This enables us to predict the inference time for different splitting ratios and allows us to identify the optimal data-splitting ratio that will provide the lowest inference time while considering memory and power constraints for the task offloading process. Fig. 5(a) and Fig. 5(b) show the time, memory, and power for different split ratios obtained by our proposed _HeteroEdge_ solver. From the solver, the best value of the split ratio is 70%, within our desired memory and power constraints. The total inference time for this split ratio is 17.72 seconds for the Xavier (70 images) and 16.79 seconds for the Nano (30 images). On average, for the two models and 200 outcomes, it takes a total of 34.51 seconds. ### _Empirical Evaluation_ We perform empirical evaluations under static and dynamic conditions for our scenarios, discussed in the two case studies below. **Case-1:** In this case, two UGVs are positioned at a fixed distance from each other, and their velocities are the same, leading to no relative movement between them.
Since they are only 4 meters apart in our experiment, the communication overhead and latency remain constant throughout the evaluation. With stable MQTT communication, the offloading of images from the primary UGV to the auxiliary UGV experiences a consistent latency based on their fixed distance. This constant latency allows an effective offloading process without additional communication overhead due to varying distances. The optimization framework can then be evaluated under this static condition to understand its performance in a controlled environment with minimal variations in latency. The performance metrics for different ratios tested on the real-time testbed under the static condition are consistent with the optimization results from the solver, as illustrated in Table III. Here, \(T_{1}+T_{2}\) represents the total operation time for both the Jetson Nano and Xavier, and \(T_{3}\) stands for the offloading latency for UGVs under the static condition, at a distance of 4 meters from each other. From the results, we notice there is a slight change in offloading latency with split ratios. In this case, offloading latency increases only with higher split ratios, since the distance between the UGVs is constant. After using the IPOPT optimization solver, we estimate that offloading 70% of the images to the more powerful Xavier is the optimal option given our specific memory and power constraints. We then evaluated the results in real-time systems (Table III), testing different split ratios on the testbed, and obtained results that support our optimization framework. Overall, this demonstrates the efficacy of our framework in memory- and power-aware task offloading, as well as the potential for further optimizations. **Case-2:** In this case, the two UGVs are in motion with different velocities and/or directions, resulting in a dynamic distance between them.
As the distance between the UGVs changes over time, the communication overhead and latency for offloading images from the primary to the auxiliary node also vary. Due to the dynamic nature of this scenario, the MQTT communication may experience fluctuations in latency based on the changing distance between the UGVs. This can lead to challenges in the optimization framework, as the offloading process may need to account for variations in communication overhead and latency. The optimization framework can be evaluated under this dynamic condition to understand its performance in a real-world environment with changing distances between the UGVs. This will help identify any potential issues and adjustments that may be needed to improve the framework's adaptability to varying communication conditions.
Fig. 5: Optimized results for different split ratios. (a) Total time and (b) memory usage for different split ratios and power usage for both devices, (c) Computational time for both devices changes with split ratio r.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **r (split ratio)** & \(T_{3}\) **(Offloading latency) (s)** & \(P_{1}\) **(Xavier) (W)** & \(M_{1}\) **(Xavier) (\%)** & **1-r** & \(T_{1}+T_{2}\) **(s)** & \(P_{2}\) **(Nano) (W)** & \(M_{2}\) **(Nano) (\%)** \\ \hline 0.2 & 0.67 & 4.87 & 32.09 & 0.8 & 55.38 & 6.96 & 75.12 \\ \hline 0.35 & 1.23 & 5.12 & 41.56 & 0.65 & 51.89 & 6.11 & 70.17 \\ \hline 0.45 & 1.98 & 5.78 & 49.55 & 0.55 & 42.87 & 6.24 & 65.66 \\ \hline 0.5 & 2.34 & 5.57 & 50.09 & 0.5 & 43.09 & 5.69 & 54.65 \\ \hline 0.6 & 2.90 & 6.35 & 53 & 0.4 & 39.45 & 5.88 & 57.77 \\ \hline 0.7 & 3.23 & 6.03 & 59.56 & 0.3 & 36.43 & 5.17 & 47.13 \\ \hline 0.8 & 3.55 & 6.34 & 63.45 & 0.2 & 34.90 & 5.35 & 43.34 \\ \hline 0.9 & 3.56 & 7.12 & 69.09 & 0.1 & 28.23 & 4.89 & 40.11 \\ \hline \end{tabular} \end{table} TABLE III: Results from the real-time system for the static condition.
In Fig. 6 we record the total operation time (\(T_{1}+T_{2}\)) for both
UGVs and the offloading latency \(T_{3}\) for varying distances between the UGVs. In this setup, we use three different split ratios, 30%, 70%, and the worst case of 100%, and the velocities of the primary and auxiliary UGVs are \(V_{primary}=1\) m/s and \(V_{auxiliary}=3\) m/s, respectively. The evaluation results reveal a positive correlation between the distance among the UGVs and the offloading latency. As the distance increases, the offloading latency also rises, affecting the optimization system's effectiveness. For instance, at a distance of 26 meters, the average offloading latency from the primary node to the auxiliary node is 13.9 seconds, which increases the communication overhead and compromises the system's performance. In order to address this challenge, we propose an offloading latency threshold. The system can effectively track the offloading latency during operation. If the latency surpasses the threshold, the primary node stops offloading images to the auxiliary node and searches for a more suitable split ratio lower than the previous one. If the search for an optimal split ratio within the bounds is unsuccessful, the primary node performs all processing tasks locally. This adaptive approach maintains optimal performance by minimizing communication overhead and avoiding excessive latency. ### _Evaluation with Model Heterogeneity_ In order to thoroughly evaluate the performance of _HeteroEdge_, it is critical to validate it with a diverse range of models that represent different use cases and applications. To accomplish this, we selected models that represent different types of computational requirements, from object detection to image classification and depth estimation. As exemplars, we selected the computer vision models ImageNet [28] for image classification, DetectNet [29] for object localization, and DepthNet [30] for monocular depth estimation.
We deployed each model, including the previous two models PoseNet and SegNet, on our _HeteroEdge_ testbed and ran them concurrently while measuring key performance metrics such as total operation time, power consumption, and resource utilization. Table IV depicts the results from multiple DNN models simultaneously running on the primary and auxiliary nodes. We tested with 100 images; when the split ratio is zero, the entire processing is carried out on the primary node alone, as shown in Table IV. We observe that the primary node sends 50% and 70% of the images to the auxiliary node for split ratios of 0.5 and 0.7, respectively. It is worthwhile to note that there is an additional runtime on the primary node for object detection and mask generation. We record on average 3-4 ms of latency per image with a lightweight faster-RCNN model [25]. We notice that the total operating time is lower (on average by 9%) in the case of masked frames compared to the original frames, though there was a notable change in power consumption and memory utilization.
TABLE IV: Total operation time (\(T_{1}\) (Xavier) + \(T_{2}\) (Nano), in seconds) for each application and DNN model at split ratios \(r=0\), 0.5, and 0.7, with original and masked images.
Fig. 7(a) presents a slight increase in power consumption, which is on average 4-5% more compared to the baseline where all processing is done locally at split ratio = 0. On the contrary, Fig. 7(b) depicts a significant reduction in memory usage compared to the baseline memory usage of \(\approx\) 72.23% at split ratio = 0. For example, for a 70% split ratio, both devices use an average of 47% of memory, which is almost a 34% decrease compared to the baseline configuration. ## VIII **Conclusion & Future work** In conclusion, our optimization work focused on reducing the latency of DNN model inference by offloading image processing to more powerful computing devices. We proposed frame-wise optimization and a split-ratio metric to determine the proportion of images to offload, and used a solver to determine the optimal split ratio based on the memory and power constraints of the UGV. Overall, our work provides a practical solution for reducing DNN model inference latency on resource-constrained devices.
Our results show that offloading with MQTT and dynamic adjustment of the split ratio based on available power can further reduce latency and improve performance. In this work, we utilize a primary-auxiliary node setting, which is hierarchical offloading in nature. In future work, we want to extend the current research to consider a star topology for offloading tasks. In a star topology, a central node (the "hub") manages the communication and coordination among multiple edge devices (the "spokes"), allowing for more efficient resource allocation and data sharing. ## Acknowledgment This work has been partially supported by NSF CAREER Award #1750936 and U.S. Army Grant #W911NF2120076.
2310.07282
An Analysis on Large Language Models in Healthcare: A Case Study of BioBERT
This paper conducts a comprehensive investigation into applying large language models, particularly BioBERT, in healthcare. It begins with a thorough examination of previous natural language processing (NLP) approaches in healthcare, shedding light on the limitations and challenges these methods face. Following that, this research explores the path that led to the incorporation of BioBERT into healthcare applications, highlighting its suitability for addressing the specific requirements of tasks related to biomedical text mining. The analysis outlines a systematic methodology for fine-tuning BioBERT to meet the unique needs of the healthcare domain. This approach includes various components, including the gathering of data from a wide range of healthcare sources, data annotation for tasks like identifying medical entities and categorizing them, and the application of specialized preprocessing techniques tailored to handle the complexities found in biomedical texts. Additionally, the paper covers aspects related to model evaluation, with a focus on healthcare benchmarks and functions like biomedical natural language processing, question answering, clinical document classification, and medical entity recognition. It explores techniques to improve the model's interpretability and validates its performance compared to existing healthcare-focused language models. The paper thoroughly examines ethical considerations, particularly patient privacy and data security. It highlights the benefits of incorporating BioBERT into healthcare contexts, including enhanced clinical decision support and more efficient information retrieval. Nevertheless, it acknowledges the impediments and complexities of this integration, encompassing concerns regarding data privacy, transparency, resource-intensive requirements, and the necessity for model customization to align with diverse healthcare domains.
Shyni Sharaf, V. S. Anoop
2023-10-11T08:16:35Z
http://arxiv.org/abs/2310.07282v2
# An Analysis on Large Language Models in Healthcare: A Case Study of BioBERT ###### Abstract This paper conducts a comprehensive investigation into applying large language models, particularly BioBERT, in healthcare. It begins with a thorough examination of previous natural language processing (NLP) approaches in healthcare, shedding light on the limitations and challenges these methods face. Following that, this research explores the path that led to the incorporation of BioBERT into healthcare applications, highlighting its suitability for addressing the specific requirements of tasks related to biomedical text mining. The analysis outlines a systematic methodology for fine-tuning BioBERT to meet the unique needs of the healthcare domain. This approach includes various components, including the gathering of data from a wide range of healthcare sources, data annotation for tasks like identifying medical entities and categorizing them, and the application of specialized preprocessing techniques tailored to handle the complexities found in biomedical texts. Additionally, the paper covers aspects related to model evaluation, with a focus on healthcare benchmarks and functions like biomedical natural language processing, question answering, clinical document classification, and medical entity recognition. It explores techniques to improve the model's interpretability and validates its performance compared to existing healthcare-focused language models. The paper thoroughly examines ethical considerations, particularly patient privacy and data security. It highlights the benefits of incorporating BioBERT into healthcare contexts, including enhanced clinical decision support and more efficient information retrieval.
Nevertheless, it acknowledges the impediments and complexities of this integration, encompassing concerns regarding data privacy, integrity, bias mitigation, transparency, resource-intensive requirements, and the necessity for model customization to align with diverse healthcare domains. Keywords: Large language models, Healthcare, BioBERT, Health informatics, Natural Language Processing ## 1 Introduction NLP has evolved to become the LLM (Large Language Model). In the 1940s, after World War II, people realized the importance of translation between languages and wanted to create a machine that could perform automatic translation. Early NLP systems were rule-based systems that humans manually programmed with rules for processing language. These systems often had many limitations in handling complex language and could be easily deceived by unexpected inputs. In the early 2000s, statistical NLP models began to emerge. These models were trained on large text datasets and learned to predict the next word in a sequence based on preceding words. Statistical NLP models exhibited greater robustness compared to rule-based systems Wang et al. (2023) and could handle a wider range of language tasks. By the mid-2010s, deep learning models started to revolutionize NLP. These models, based on artificial neural networks, had the capability to learn intricate patterns from data. Deep learning models Lavanya and Sasikala (2021) quickly outperformed statistical NLP models on tasks such as machine translation, text summarization, and question answering. In 2017, the Transformer La Quatra and Cagliero (2022), a deep learning model designed for processing sequential data, such as text, marked a significant milestone. It achieved state-of-the-art results across numerous NLP tasks and soon became the standard architecture for training LLMs. This shift led to the emergence of natural language processing (NLP) as we know it today. NLP has radically transformed into the era of large language models (LLMs).
LLMs are trained on massive text datasets, sometimes containing hundreds of billions or even trillions of words. This extensive training enables LLMs to understand the intricate patterns and relationships inherent in language. As a result, LLMs have fundamentally altered how we interact with and harness the power of language. Popular LLMs like GPT-3.5, GPT-4, PaLM, Cohere, LaMDA, Llama, and others have revolutionized our interaction with data by redefining the boundaries of language understanding and generation. Natural Language Processing (NLP) is a branch of artificial intelligence (AI) and computational linguistics that facilitates interaction between computers and humans through natural language. LLMs Reddy [2023] process vast amounts of textual data, learn the underlying patterns, and generate contextually relevant human-like text. This technology has become a driving force in transforming healthcare and biomedical applications. In this article, we conduct a comparative analysis of the diverse applications of LLMs in the healthcare and biomedical domains. We explore how LLMs are reshaping the landscape by offering innovative solutions to long-standing challenges. Current healthcare and biomedical systems often operate inefficiently, have limited access to relevant information, and involve cumbersome documentation processes. LLMs can address these challenges by providing rapid, context-aware responses to medical queries, extracting valuable insights from unstructured data, and automating clinical documentation. The major contributions of this article are as follows:

* Conducts a detailed evaluation of the existing prominent state-of-the-art large language models introduced in the healthcare domain.
* Taking BioBERT as a reference pre-trained language model, we examine its applications in healthcare.
* Discusses the prominence of BioBERT in downstream clinical natural language processing tasks in detail.
* Outlines the challenges with LLMs in healthcare and presents future research directions.

### An overview of LLM components

* **Input Text**: Initially, the LLM receives raw text as input. This text can be in sentences, paragraphs, or documents.
* **Tokenization**: The input text is divided into individual tokens. Tokens can be words, subwords, or characters, depending on the LLM's tokenization scheme. This step breaks the text down into manageable units for processing.
* **Word Embeddings**: Each token is transformed into a high-dimensional vector through word embeddings. These vectors capture the token's meaning and context. Word embeddings are learned during the model's training on a vast amount of text data.
* **Transformer Layers**: The embedded vectors are passed through multiple transformer layers. Each transformer layer consists of two main components:
  * Multi-Head Self-Attention: This component weighs the importance of each token in relation to the others, capturing dependencies and context.
  * Feedforward Neural Networks: Complex transformations are applied to the vectors, enhancing the model's understanding of patterns and relationships.
* **Output Layer**: After processing through the transformer layers, the output is fed into a linear layer. This linear layer generates a probability distribution that spans the model's vocabulary. It estimates the probability of different words or word sequences following the input text.
* **Probability Estimation**: The probability distribution generated in the previous step is used for various tasks such as language generation, text completion, and question answering.
* **Training and Fine-Tuning**: LLMs are initially trained on a large text corpus to learn embeddings and model parameters. Fine-tuning can follow, where the model is further trained on task-specific data to adapt its language understanding to the specific task or domain.
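The flow above can be traced end-to-end in a deliberately tiny sketch. Everything here is a toy assumption — a three-word vocabulary, hand-picked 2-d embeddings, a single attention head with identity projections — meant only to show how tokenization, embeddings, attention, and the output layer fit together:

```python
import math

# Toy vocabulary with hand-picked 2-d "embeddings" (illustrative values only).
EMB = {"patient": [1.0, 0.0], "has": [0.5, 0.5], "fever": [0.0, 1.0]}

def tokenize(text):
    # Whitespace tokenization; real LLMs use subword schemes (BPE/WordPiece).
    return text.lower().split()

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(vectors):
    # Single-head scaled dot-product attention with identity Q/K/V projections.
    d = len(vectors[0])
    out = []
    for q in vectors:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in vectors]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(d)])
    return out

def next_token_distribution(text):
    # Tokenize -> embed -> attend -> score vocabulary -> normalize to probabilities.
    vecs = [EMB[t] for t in tokenize(text)]          # assumes in-vocabulary words
    ctx = self_attention(vecs)[-1]                   # context vector of last token
    vocab = list(EMB)
    logits = [sum(c * e for c, e in zip(ctx, EMB[w])) for w in vocab]
    return dict(zip(vocab, softmax(logits)))

probs = next_token_distribution("patient has fever")
```

Running this yields a probability distribution over the toy vocabulary that sums to 1; a real LLM performs the same steps over tens of thousands of subword tokens with learned weights and many stacked layers.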
## 2 NLP for Healthcare Applications

LLMs have emerged as a transformative technology in healthcare, enabling an extensive range of applications, from clinical decision support to medical data analysis. LLMs allow healthcare professionals to harness the power of language data for improved patient care, research, and administrative tasks.

* **Medical Question Answering**: LLMs can answer medical questions, providing quick and accurate responses. This application aids healthcare professionals in rapidly accessing medical knowledge and information.
* **Electronic Health Record (EHR) Analysis**: LLMs can analyze unstructured text in electronic health records, extracting valuable insights about patient histories, diagnoses, treatments, and clinical notes. This supports clinical decision-making and research.
* **Clinical Documentation**: LLMs can assist healthcare providers in generating clinical notes, reports, and documentation. This streamlines the documentation process, allowing clinicians to focus more on patient care.
* **Medical Imaging**: LLMs can assist in medical image interpretation by generating natural language descriptions of images. This can improve communication between radiologists and referring physicians Wang et al. [2023b], Rao et al. [2023].
* **Clinical Decision Support**: LLMs can provide context-aware information to support clinical decisions. They can recommend treatment options, predict patient outcomes, and identify potential risks Rao et al. [2023].
* **Healthcare Communication**: LLMs can improve doctor-patient communication by offering language translation services, ensuring effective communication in multilingual healthcare settings Yunxiang et al. [2023].
* **Patient Engagement**: LLMs can be used in chatbots and virtual assistants Ray [2023] to engage with patients, answer their healthcare queries, and provide health-related information and guidance.
Healthcare professionals can use NLP to extract relevant information from patient records, such as medical history, medication allergies, and previous diagnoses, enabling the creation of personalized treatment plans and early identification of high-risk patients for disease prevention.

* **Enhancing Medical Research**: NLP can also analyze large amounts of medical data to identify patterns and trends Hao et al. [2018], helping researchers develop new treatments and therapies.
* **Improving Clinical Trials**: NLP algorithms can sift through large amounts of data and extract information relevant to a clinical trial. NLP helps clinical trials Chen et al. [2020] by finding the right participants faster and more cheaply through patient data analysis, improving efficiency and reducing time and cost.
* **Improving Digital Health Records**: NLP can make digital patient records more accurate and complete. These records hold information about a person's health history and treatments. NLP helps doctors get the right details from these health records Costea [2020] so they can make better decisions for patient care.
* **Supports Medical Practitioners**: NLP makes many everyday tasks of health professionals easier Demner-Fushman et al. [2009]. For instance, it finds possible issues with medicines, helps doctors adjust treatment plans, and helps doctors write notes faster, saving time and reducing errors so they can spend more time caring for patients. NLP also aids in extracting information from medical literature, helping healthcare professionals Henwood and Lister [2007] stay current with the latest research and best practices.

## 3 Language Models and Healthcare

Large Language Models (LLMs) are one of the most exciting areas of Artificial Intelligence (AI) research, with the ability to process and generate human-like text for various healthcare applications. The more data these models are trained on, the more accurate their predictions become.
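As a concrete, deliberately simplistic illustration of the record-extraction idea above, the sketch below pulls medication mentions and doses out of a clinical note with hand-written rules. The drug list and dose pattern are invented for the example; production systems would use a trained clinical NER model (such as a fine-tuned BioBERT) rather than rules:

```python
import re

# Toy rule-based extractor for medication mentions in a clinical note.
# KNOWN_DRUGS and DOSE are illustrative assumptions, not a clinical resource.
KNOWN_DRUGS = {"metformin", "lisinopril", "aspirin"}
DOSE = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|mcg|g)\b", re.IGNORECASE)

def extract_medications(note):
    found = []
    for sentence in re.split(r"[.;]\s*", note):
        words = {w.strip(",").lower() for w in sentence.split()}
        drugs = sorted(words & KNOWN_DRUGS)          # drugs named in this sentence
        dose = DOSE.search(sentence)                 # first dose mention, if any
        for drug in drugs:
            found.append({"drug": drug, "dose": dose.group(0) if dose else None})
    return found

note = "Patient reports taking Metformin 500 mg daily. Allergic to aspirin."
meds = extract_medications(note)
```

Even this toy version shows why context matters: it happily reports aspirin as a medication although the note records an allergy, which is exactly the kind of distinction trained models are needed for.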
The most widely used LLMs include GPT-3, BERT, and RoBERTa Liu et al. [2019], which are trained on billions of words, so these models can understand linguistic structure and generate text. Once a model is trained, it can be fine-tuned for a specific task. The applications of LLMs Hao et al. [2018] in healthcare have many different aspects and have the power to bring about significant positive changes in various fields. These technologies offer real-time assistance to healthcare professionals by helping them diagnose diseases and choose the right treatments with fewer errors. Predictive analytics in healthcare can use data to predict disease outbreaks and enhance healthcare delivery efficiency. The significance of studying LLM applications in healthcare lies in their versatility. LLMs should be used in healthcare in a collaborative and verified way to ensure responsible and effective use, ultimately improving patient care. This means the use of LLMs should be carefully monitored and evaluated, thereby identifying potential problems or risks. LLMs are a powerful tool with the potential to revolutionize healthcare.

### Benefits of LLMs in Healthcare:

* **Improved Support for Clinical Decisions**: LLMs assist healthcare providers in decision-making by providing access to a vast amount of medical knowledge and up-to-date research. They can quickly suggest potential diagnoses, treatment options, or relevant research articles. LLMs can make diagnoses and data handling more accurate, thereby improving the quality of outcomes and patient care.
* **Efficient Information Retrieval**: LLMs can swiftly retrieve relevant medical knowledge, research articles, and patient-specific information from large volumes of text, reducing the time and cost of locating critical information while improving accuracy and reliability.
* **Clarification of Medical Tests**: LLMs can help clarify medical tests by analyzing the results, providing valuable information and helping to detect abnormalities. This reduces the time and cost of interpreting results and improves their accuracy and reliability.
* **Searching for Potential Clinical Trials**: LLMs can identify clinical trials by analyzing a patient's current condition, medical history, and treatment plans. This improves efficiency and effectiveness and can surface potentially lifesaving treatments.

### Limitations and Challenges of Using LLMs in Healthcare

* **Data Privacy and Security**: Integrating LLMs into healthcare must proceed cautiously to safeguard highly sensitive healthcare data. Ensuring data privacy and security, along with compliance with regulations such as HIPAA, is paramount to prevent the potentially severe consequences of data breaches.
* **Bias and Fairness**: LLMs trained on biased data may produce biased or unfair results in healthcare applications. This can lead to disparities in care, misdiagnoses, or unfair allocation of resources.
* **Lack of Transparency**: LLMs often operate as "black boxes," making it challenging to understand their decision-making processes. This lack of transparency can hinder trust among healthcare professionals and patients.
* **Quality Control**: Ensuring the quality and accuracy of information generated or retrieved by LLMs is crucial. Erroneous information or recommendations could harm patients or mislead healthcare providers.
* **Ethical Concerns**: Using LLMs in healthcare raises ethical concerns, such as the potential for technology to replace human interaction in patient care, leading to depersonalized medicine.
* **Resource Intensiveness**: Developing, fine-tuning, and maintaining LLMs for healthcare can be resource-intensive in terms of computational power, data annotation, and expert oversight.
* **Generalization Challenges**: LLMs may struggle to generalize to specific healthcare domains, specialties, or rare conditions if not adequately fine-tuned. Customization may be necessary.

## 4 Related Studies

In this section, we review relevant articles that have explored the integration of LLMs in healthcare applications. The articles showcase the significant impact of large language models on various healthcare-related tasks, such as biomedical text mining, medical image interpretation, medical question answering, and processing electronic health records. They also highlight the need for careful evaluation and consideration of limitations when applying these models in clinical settings. The strengths include improved task performance and potential benefits for healthcare. However, the resource-intensive nature of such models and potential challenges in fine-tuning for specific healthcare applications should be considered. N. Kang et al. (2013) primarily focus on evaluating the performance of the MetaMap and Peregrine tools used for biomedical concept normalization. The study investigates the usefulness of rule-based NLP modules as an adjunct to dictionary-based concept normalization in the biomedical field, evaluated on the Arizona Disease Corpus. S. A. Hasan et al. Hasan and Farri (2019) discuss the application of deep learning (DL) techniques in clinical natural language processing (CNLP), emphasizing the use of DL models for various clinical applications. Deep learning-driven clinical NLP applications include diagnostic inferencing, biomedical article retrieval, clinical paraphrase generation, adverse drug event detection, and medical image caption generation. J. Lee et al. (2020) introduced BioBERT, a pre-trained biomedical language representation model tailored for biomedical text mining. BioBERT's training involved a substantial biomedical text corpus.
This model excelled in various tasks such as named entity recognition, relation extraction, and question answering, achieving state-of-the-art performance across the board. Shin et al. (2020) contributed to the field with BioMegatron, a larger pre-trained biomedical language model aimed at biomedical text mining, analogous to BioBERT. Differing in scale, BioMegatron was trained on an even more extensive corpus of biomedical text and exhibited state-of-the-art performance in tasks such as entity recognition, relation extraction, and question answering. Additionally, X. Yang et al. (2022) presented GatorTron, a substantial clinical language model created for processing and interpreting electronic health records (EHRs). With extensive scaling in model parameters and training data, GatorTron significantly improved performance across clinical NLP tasks, offering potential enhancements in healthcare-related natural language processing; it was evaluated on five clinical NLP tasks: clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). K. Singhal et al. (2022) explored the encoding of clinical knowledge using large language models. They demonstrated that training LLMs on extensive clinical text enabled them to accurately answer questions related to clinical concepts, showcasing their potential for encoding clinical knowledge. They proposed a robust framework for human evaluation of model responses, incorporating factors such as factuality, precision, potential harm, and bias into the assessment process. PaLM and its instruction-tuned variant, Flan-PaLM, were evaluated using MultiMedQA. Wang et al. (2023) presented ChatCAD, a large language model approach designed for interactive computer-aided diagnosis (CAD) in medical image analysis.
Trained on a dataset featuring medical images and their accompanying text descriptions, ChatCAD demonstrated the ability to accurately diagnose diseases from images, aiding radiologists in their diagnoses. S. Reddy et al. Reddy (2023) introduced a framework for evaluating the translational value of Large Language Models (LLMs) in healthcare. This framework was a comprehensive tool for assessing LLMs' performance in healthcare applications. It was motivated by the observation that current evaluations assess only the NLP performance of LLMs and not the models' functional, utility, and ethical aspects as they apply to healthcare, and it recommends that governance aspects of LLMs in healthcare be addressed. H. Zhang et al. Zhang et al. (2023) unveiled HuatuoGPT, a specialized LLM tailored for medical consultation. By leveraging data from ChatGPT and real-world doctors, HuatuoGPT was fine-tuned to provide clinical advice and support to patients. This unique approach improved its performance, achieving state-of-the-art results in medical consultation tasks. K. Singhal et al. Singhal et al. (2023) introduced Med-PaLM 2, an LLM designed for expert-level medical question answering. This model achieved remarkable results in medical question-answering tasks, with a score of 67.2% on the MedQA dataset, highlighting its potential for delivering high-precision performance in medical question answering.

## 5 An Analysis of LLMs in Healthcare - A Case Study of BioBERT

In this section, we delve into the methodologies and outcomes of the aforementioned articles. We assess how LLMs are employed to address healthcare challenges and explore their impact on various aspects of the healthcare industry.

\begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Author & Model & Methodology \\ \hline N. Kang et al. Kang et al. (2013) & Rule-based NLP & A rule-based NLP module used to enhance the performance of MetaMap and Peregrine. \\ S. A. Hasan et al.
Hasan and Farri (2019) & Deep Learning & Addresses the challenges posed by clinical documents, including acronyms, nonstandard clinical jargon, inconsistent document structure, and privacy concerns. \\ J. Lee et al. Lee et al. (2020) & BioBERT & Pre-training on large-scale biomedical corpora; outperforms BERT and other models in biomedical text-mining tasks. \\ HC Shin et al. Shin et al. (2020) & BioMegatron & Empirical study on factors affecting domain-specific language models; pre-training on a larger domain corpus. \\ X. Yang et al. Yang et al. (2022) & GatorTron & Developing a large clinical language model, scaling up the number of parameters and training data. \\ K. Singhal et al. Singhal et al. (2022) & MultiMedQA, PaLM, Flan-PaLM, Med-PaLM & MultiMedQA benchmark, human evaluation of model answers, instruction prompt tuning. \\ Wang et al. Wang et al. (2023) & ChatGPT, CAD networks & Integrating LLMs with CAD networks, enhancing output with natural language text. \\ S. Reddy et al. Reddy (2023) & – & Discusses the potential use of Large Language Models (LLMs) in healthcare. Highlights concerns related to misinformation and data falsification. Proposes a framework for evaluation, including human assessments. \\ H. Zhang et al. Zhang et al. (2023) & HuatuoGPT & Leveraging distilled data from ChatGPT and real-world data from doctors for medical consultation, reward model training. \\ K. Singhal et al. Singhal et al. (2023) & Med-PaLM 2 & Improving upon Med-PaLM with base LLM improvements, medical domain fine-tuning, and prompting strategies. \\ \hline \hline \end{tabular} \end{table} Table 1: Some state-of-the-art approaches using LLMs and related techniques in healthcare NLP

The paper "BioBERT: a pre-trained biomedical language representation model for biomedical text mining" by J. Lee et al.
(2020) investigates how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. In this article, we explore the possibility of fine-tuning BioBERT for the healthcare domain, which can be a valuable endeavor given its success in biomedical text-mining tasks. The methodology below outlines the steps and considerations for fine-tuning BioBERT for healthcare-specific tasks, emphasizing the importance of domain expertise, data quality, and ethical considerations in developing robust and reliable healthcare language models. To adapt BioBERT for healthcare applications, the following methodology can be considered:

* **Data Collection**: Gather a comprehensive and diverse dataset from healthcare and biomedical sources. This dataset should include electronic health records (EHRs) Yang et al. (2022), medical literature, clinical notes, medical imaging reports, and other relevant sources. Annotate the data for various healthcare-related tasks, such as medical entity recognition (e.g., disease names, medications, procedures), medical text classification (e.g., diagnosis prediction, disease classification), and medical question-answering Singhal et al. (2023).
* **Pre-processing**: Prepare the data by cleaning and formatting it for training. This may involve standardizing medical terms, removing duplicates, correcting errors or inconsistencies in the data, and handling missing values. Then customize tokenization to accommodate the unique vocabulary and structure of biomedical and clinical texts. Clinical text data often contains specialized vocabulary and structure, so it is important to use a customized tokenizer for this type of data; specialized tokenizers may be needed to handle medical terminology, abbreviations, and symbols. Some common tokenizers for biomedical and clinical text data include:
  * The BioBERT tokenizer.
This tokenizer is based on the BERT tokenizer but has been customized to handle medical terminology and abbreviations.
  * The MedTokenizer. This tokenizer is specifically designed for biomedical text data.
  * The SciBERT tokenizer. This tokenizer is designed for scientific text data, which includes biomedical text.

Figure 1: Overall architecture for BioBERT Pre-training

BioBERT is a pre-trained language model trained on a massive dataset of biomedical text. The pre-trained weights represent the model's understanding of the general structure and semantics of biomedical text. Design a set of downstream tasks specific to healthcare. Many different downstream tasks can be performed using BioBERT. Some common tasks include:

* **Medical Entity Recognition:** This task involves identifying and extracting medical entities from text. These entities can include diseases, medications, and medical procedures.
* **Medical Text Classification:** In this task, the text is categorized into different healthcare-related categories. Examples of categories include diagnosis, prognosis, and treatment.
* **Disease Prediction:** This task involves predicting the likelihood of a patient having a particular disease.
* **Medical Question-Answering:** This task involves answering questions about medical topics based on text.

Fine-tune BioBERT on these tasks using the annotated healthcare dataset. Fine-tuning is the process of adjusting the model's weights to improve performance on a specific task; this is done by feeding the model the annotated healthcare dataset and letting it learn from it. Apply appropriate loss functions: a loss function measures the model's performance on the task and is used to update the model's weights during fine-tuning. Incorporate transfer learning techniques: transfer learning involves using a model trained on one task to enhance the performance of a model on a distinct task. This can be achieved by initializing the new model with the pre-trained model's weights.
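To make the tokenizer customization concrete, here is a minimal WordPiece-style greedy longest-match tokenizer of the kind BERT-family models use. The tiny vocabulary is invented for illustration — it is not BioBERT's actual vocabulary — but it shows how a domain-specific subword inventory keeps medical terms decomposable instead of unknown:

```python
# Minimal WordPiece-style tokenizer: greedy longest-match with "##" marking
# word-internal continuation pieces. VOCAB is a made-up toy vocabulary.
VOCAB = {"cardio", "##myopathy", "##vascular", "hyper", "##tension", "[UNK]"}

def wordpiece(word, vocab=VOCAB):
    tokens, start = [], 0
    while start < len(word):
        end, match = len(word), None
        while end > start:                       # try longest substring first
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece             # word-internal pieces get "##"
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return ["[UNK]"]                     # no decomposition found
        tokens.append(match)
        start = end
    return tokens

tokens = wordpiece("cardiomyopathy")   # -> ["cardio", "##myopathy"]
```

With a general-domain vocabulary, a term like "cardiomyopathy" would likely fall back to `[UNK]` or a long string of meaningless fragments; a biomedical vocabulary is what lets the model keep its meaning-bearing parts.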
Conduct experiments on hyperparameters. Hyperparameters represent the configuration settings of the machine learning algorithm and significantly influence the model's performance. Common hyperparameters to explore encompass:

* Adjust the learning rate, which dictates the magnitude of weight updates in each training iteration.
* Vary the batch size, determining the quantity of samples utilized for weight updates in each training iteration.
* Modify the number of epochs, specifying how many times the model passes over the training data.

### Evaluation Metrics:

Assess the fine-tuned BioBERT model across a range of healthcare-related benchmarks and tasks. These include biomedical NLP tasks, medical question-answering, clinical document classification, medical entity recognition, generating discharge summaries, interpreting medical records, and providing medical advice.

Figure 2: Overall architecture for BioBERT Fine-tuning

* **F1 score:** The F1 score is the harmonic mean of precision and recall; it measures the accuracy and completeness of the model's predictions. The F1 score is a good metric for tasks such as medical entity recognition and text classification.
* **Accuracy:** Accuracy is the percentage of predictions that the model gets correct.
* **Precision:** Precision is the percentage of positive predictions that are actually positive.
* **Recall:** Recall is the percentage of actual positives that are predicted as positive.
* **AUC:** AUC is the area under the receiver operating characteristic curve. It measures the model's ability to distinguish between positive and negative examples. AUC is a good metric for tasks such as medical question-answering and disease prediction.
* **C-index:** The C-index measures the model's ability to predict patient survival.
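The core metrics above follow directly from the confusion counts, as the pure-Python sketch below shows for a hypothetical binary disease-classification task (1 = disease present); in practice libraries such as scikit-learn provide the same computations:

```python
# Precision, recall, F1, and accuracy from raw label/prediction pairs.
def classification_metrics(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)        # harmonic mean of the two
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Toy example: 3 correct predictions out of 5.
m = classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

For the toy example, tp=2, fp=1, fn=1, tn=1, giving precision, recall, and F1 of 2/3 and accuracy of 0.6 — a useful reminder that accuracy and F1 can agree or diverge depending on class balance.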
### Model Interpretability:

To enhance the interpretability of a fine-tuned BioBERT model, employ the following techniques:

* **Analyze the model's predictions:** Examine the model's predictions and comprehend their rationale. This involves inspecting the features the model uses for making predictions and scrutinizing the attention weights assigned to various parts of the text.
* **Utilize visualization techniques:** Make the model's predictions more comprehensible through graphical representations. Employ heat maps to visualize attention weights or other visualization methods to elucidate how the model generates predictions.
* **Leverage explainability tools:** Utilize explainability tools designed to elucidate how a machine learning model arrives at its predictions. These tools reveal the features employed by the model for prediction and provide insight into the significance of each feature.

### Validation and Testing

To validate the performance of a fine-tuned BioBERT model for healthcare tasks, consider the following actions.

* Compare the model's performance with that of other existing biomedical models like BioMegatron Shin et al. (2020), GatorTron Yang et al. (2022), and clinical language models Singhal et al. (2022). Use the same evaluation metrics and datasets to determine the best-performing model based on these metrics.
* Experiment with hyperparameters, recognizing that these settings can significantly influence the model's performance. Conduct experiments with different hyperparameters to identify the optimal configuration for the specific task.
* Validate the model on external healthcare datasets or benchmarks to assess its generalizability and robustness. The model should demonstrate strong performance on previously unseen datasets.

When validating the performance of a fine-tuned BioBERT model for healthcare tasks, also consider the following factors:

* The size and quality of the training dataset.
* The specific task for which the model is being evaluated.
* The choice of evaluation metrics.
* The clinical requirements that the model aims to address.

### Deployment and Integration:

To deploy and integrate a fine-tuned BioBERT Lee et al. (2020) model into healthcare applications and systems, take the following actions:

* Apply regularization techniques to prevent overfitting, a potential issue when training the model on a limited dataset. Overfitting occurs when the model captures noise in the data rather than the underlying patterns. Regularization discourages the model from learning non-generalizable patterns.
* Augment the dataset by artificially increasing its size. Employ techniques such as image translation, text generation, and synthetic data creation to enhance the dataset. Data augmentation bolsters the model's performance by increasing its resilience to noise and variations in the data.
* Integrate the model into the application or system, making it accessible for making predictions or recommendations. Embed the model within the application or system or provide an API for seamless access.
* Ensure compliance with relevant healthcare regulations and privacy standards during the model's deployment. This is crucial for safeguarding patient privacy and promoting responsible model usage. Be aware that healthcare regulations and privacy standards can vary between regions.

While deploying and integrating a fine-tuned BioBERT model into healthcare applications, consider the following:

* Evaluate the model's performance on a held-out dataset to ensure its effectiveness on new data.
* Continuously monitor the model's performance to confirm it meets expectations.
* Regularly update the model to account for changes in the data.

### Continuous Improvement:

Continuously update and fine-tune the model in response to new healthcare data availability or evolving clinical requirements.
* Seek feedback from healthcare professionals, leveraging their expertise in the field for model improvement. Use their insights to identify areas where the model underperforms or to uncover new potential applications.
* Fine-tune the model using newly acquired healthcare data, applying the same training process employed in the model's initial training phase.
* Experiment with various hyperparameters to optimize the model's performance for the specific task.
* Apply regularization techniques to prevent overfitting, a concern that may arise when training the model on a limited dataset.
* Enhance the model's robustness by employing data augmentation techniques, making it more resilient to noise and data variations.
* Continually monitor the model's performance to ensure it meets expectations. If performance deteriorates, consider fine-tuning or updating it with fresh data.

### Documentation and Accessibility:

Comprehensively document the fine-tuned BioBERT model, including pre-trained weights and code, and make it accessible to the healthcare and research community. Provide comprehensive documentation, code, and model checkpoints in formats such as a technical paper, a blog post, and a GitHub repository. This approach will expand accessibility to a broader audience.

### Ethical Considerations:

To ensure that the fine-tuned model addresses ethical concerns related to patient privacy and data security, and that it avoids inadvertently revealing sensitive patient information in compliance with healthcare regulations like HIPAA, the following ethical considerations should be incorporated when using a fine-tuned BioBERT model:

* **Respecting Patient Privacy:** Users must refrain from utilizing the model to access or disclose sensitive patient information, including patient names, medical records, and insurance details.
* **Enhancing Data Security:** The model should be safeguarded against unauthorized access and use.
This entails implementing measures like encryption and access control.
* **Mitigating Bias:** Efforts should be made to prevent bias against any particular group of people. This can be achieved by employing a balanced dataset and avoiding discriminatory features.
* **Ensuring Transparency:** The model must be transparent and interpretable. Users should have the capacity to comprehend how the model operates and how it generates its predictions.
* **Establishing Accountability:** Developers and users of the model bear responsibility for its actions. They are obligated to ensure the model's safe and responsible use.

## 6 Discussion

Based on our analysis of the selected works, LLMs have the potential to revolutionize healthcare. They can introduce novel approaches to enhance clinical decision-making, facilitate information retrieval, and enable more natural language interaction. We have explored the potential benefits and limitations of integrating these language models into healthcare applications. BioBERT's primary strength resides in its capacity to comprehend and process intricate biomedical and clinical texts. Its pre-training on an extensive corpus of biomedical literature provides it with a robust foundation to accurately interpret medical terminologies, abbreviations, and concepts. Such capability proves indispensable in the healthcare context, where specialized language prevails. Moreover, BioBERT can undergo fine-tuning for specific applications, encompassing medical entity recognition, text classification, disease prediction, and question-answering. This adaptability empowers healthcare professionals to harness the model's capabilities across a broad spectrum of clinical and administrative functions.

### Advantages of using BioBERT for healthcare applications

BioBERT offers improved clinical decision support, representing one of its most promising applications.
Healthcare providers can utilize the model to swiftly access current medical knowledge, research articles, and patient records. This empowers them to render more informed decisions regarding diagnosis, treatment, and patient care, enhancing patient outcomes. BioBERT significantly enhances information retrieval efficiency from electronic health records (EHRs) and other clinical documents. Its ability to process and analyze extensive text data aids healthcare professionals in promptly accessing patient-specific information, thereby reducing the risk of overlooking critical data. The model's natural language processing capabilities make it accessible to healthcare professionals, even those without technical backgrounds. This promotes more effective communication between healthcare providers and technology, enhancing user experience and adoption. However, we must recognize and tackle the challenges linked to deploying BioBERT in healthcare: 1. Data Privacy and Security: Healthcare data is highly sensitive and falls under stringent privacy regulations. To ensure BioBERT's compliance with these regulations, such as HIPAA in the United States, it is crucial to prevent data breaches and safeguard patient information. 2. Bias and Fairness: BioBERT, like other language models, can inherit biases present in the training data. This bias can lead to disparities in healthcare if not carefully mitigated. Developing techniques to identify and rectify bias in healthcare-specific contexts is essential. 3. Lack of Transparency: Interpreting BioBERT's decisions can be challenging due to its complex architecture. Efforts to make the model more transparent and explainable are necessary to build trust among healthcare professionals. 4. Quality Control: Ensuring the quality and accuracy of information generated or retrieved by BioBERT is paramount. Erroneous information or recommendations could have serious consequences in clinical settings. 5. 
Resource Intensiveness: Developing, fine-tuning, and maintaining BioBERT for healthcare can be resource-intensive, requiring substantial computational power, data annotation, and expert oversight. 6. Generalization Challenges: BioBERT may struggle to generalize to specific healthcare domains, specialties, or rare conditions if not adequately fine-tuned. Customization may be necessary to achieve optimal performance.

BioBERT holds immense potential for revolutionizing healthcare applications by improving information retrieval, clinical decision support, and natural language interaction. However, its deployment in the healthcare sector must be accompanied by stringent measures to address privacy concerns, mitigate bias, ensure transparency, maintain data quality, allocate resources effectively, and fine-tune for specific healthcare contexts. With careful consideration and responsible implementation, BioBERT can become a valuable tool for healthcare professionals, enhancing patient care and medical research.

## 7 Conclusion and Future Work

In conclusion, this study offers valuable insights into how Large Language Models can impact the healthcare sector. It highlights their potential to enhance various aspects of healthcare, such as improving patient care and streamlining healthcare processes. However, challenges such as model performance and ethical considerations remain. Future research should focus on addressing these challenges and further harnessing the capabilities of LLMs along the following dimensions:

* Improving model performance.
* Extending NLP to additional downstream tasks.
* Harnessing the capabilities of multimodal LLMs to provide a more comprehensive understanding of patient health.
* Developing cost-effective methods for building and deploying LLMs.
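The fine-tuning, regularization, and monitoring steps outlined earlier can be sketched in miniature. The toy example below is an assumption-laden sketch, not the survey's pipeline: a fixed random projection stands in for BioBERT's frozen encoder, and synthetic labels stand in for a hypothetical binary clinical-note task. It trains a logistic-regression head with L2 regularization using plain NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen BioBERT embeddings: a fixed random projection.
# (In practice these would be contextual embeddings from the pre-trained model.)
d_in, d_emb, n = 20, 8, 200
encoder = rng.normal(size=(d_in, d_emb))   # frozen, never updated
X_raw = rng.normal(size=(n, d_in))         # toy "documents"
X = np.tanh(X_raw @ encoder)               # frozen features

# Synthetic binary labels (hypothetical task, e.g. triage yes/no).
true_w = rng.normal(size=d_emb)
y = (X @ true_w + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Trainable classification head with L2 regularization (weight decay),
# mirroring the "apply regularization to prevent overfitting" step.
w, b = np.zeros(d_emb), 0.0
lr, lam = 0.5, 1e-3
losses = []
for _ in range(200):
    p = sigmoid(X @ w + b)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    losses.append(loss + lam * np.sum(w ** 2))
    grad_w = X.T @ (p - y) / n + 2 * lam * w   # gradient of regularized loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

Tracking `losses` over epochs mirrors the "continually monitor the model's performance" step; in a real deployment one would monitor a held-out clinical validation set rather than the training loss.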
2303.08089
Retrieval of material properties of monolayer transition-metal dichalcogenides from magnetoexciton energy spectra
Reduced exciton mass, polarizability, and dielectric constant of the surrounding medium are essential properties for semiconducting materials, and they have been extracted recently from the magnetoexciton energies. However, the acceptable accuracy of the suggested method requires very high magnetic intensity. Therefore, in the present paper, we propose an alternative method of extracting these material properties from recently available experimental magnetoexciton s-state energies in monolayer transition-metal dichalcogenides (TMDCs). The method is based on the high sensitivity of exciton energies to the material parameters in the Rytova-Keldysh model. It allows us to vary the considered material parameters to get the best fit of the theoretical calculation to the experimental exciton energies for the $1s$, $2s$, and $3s$ states. This procedure gives values of the exciton reduced mass and $2D$ polarizability. Then, the experimental magnetoexciton spectra compared to the theoretical calculation also determine the average dielectric constant. Concrete applications are presented only for monolayers WSe$_2$ and WS$_2$ from the recently available experimental data; however, the presented approach is universal and can be applied to other monolayer TMDCs. The mentioned fitting procedure requires a fast and effective method of solving the Schr\"{o}dinger equation of an exciton in monolayer TMDCs with a magnetic field. Therefore, we also develop such a method in this paper for highly accurate magnetoexciton energies.
Duy-Nhat Ly, Dai-Nam Le, Duy-Anh P. Nguyen, Ngoc-Tram D. Hoang, Ngoc-Hung Phan, Hoang-Minh L. Nguyen, Van-Hoang Le
2023-03-14T17:22:35Z
http://arxiv.org/abs/2303.08089v2
Retrieval of material properties of monolayer transition-metal dichalcogenides from magnetoexciton energy spectra ###### Abstract Reduced exciton mass, polarizability, and dielectric constant of the surrounding medium are essential properties for semiconducting materials, and they have been extracted recently from the magnetoexciton energies. However, the acceptable accuracy of the suggested method requires very high magnetic intensity. Therefore, in the present paper, we propose an alternative method of extracting these material properties from recently available experimental magnetoexciton s-state energies in monolayer transition-metal dichalcogenides (TMDCs). The method is based on the high sensitivity of exciton energies to the material parameters in the Rytova-Keldysh model. It allows us to vary the considered material parameters to get the best fit of the theoretical calculation to the experimental exciton energies for the \(1s\), \(2s\), and \(3s\) states. This procedure gives values of the exciton reduced mass and \(2D\) polarizability. Then, the experimental magnetoexciton spectra compared to the theoretical calculation also determine the average dielectric constant. Concrete applications are presented only for monolayers WSe\({}_{2}\) and WS\({}_{2}\) from the recently available experimental data; however, the presented approach is universal and can be applied to other monolayer TMDCs. The mentioned fitting procedure requires a fast and effective method of solving the Schrodinger equation of an exciton in monolayer TMDCs with a magnetic field. Therefore, we also develop such a method in this paper for highly accurate magnetoexciton energies. 
Exciton, transition-metal dichalcogenides, retrieval of material properties, magnetoexciton energy, exciton reduced mass, exact numerical solutions, FK operator method

## I Introduction

Two-dimensional van der Waals semiconductors such as transition-metal dichalcogenides (TMDCs) open a wide door to technological applications, such as ultra-thin computing devices, thanks to their reduced dimensionality, magnetism, (opto-)spintronics, valleytronics, and magneto-optics properties [1; 2; 3; 4]. In particular, magnetoexcitons in these materials offer great potential for light-controlled magnetic devices because of their thermal stability as well as their high binding energies. Hence, accurate determination of intrinsic optoelectronic quantities of these monolayer TMDCs, such as their exciton reduced mass, two-dimensional (\(2D\)) static polarizability, or the dielectric constant of the surrounding medium, is crucial for the future design of van-der-Waals-heterostructure-based devices. There are several methods to determine the exciton reduced mass of monolayer TMDCs. For example, angle-resolved photoemission spectroscopy (ARPES) can experimentally detect energy-versus-momentum maps and extract effective electron and hole masses [5; 6; 7; 8]. However, such measurements are expensive and technically demanding. On the other hand, theoretical studies suggest more effective and accurate ways to determine the exciton reduced mass. One of the first methods is estimation from the band structure of _ab initio_ calculations, such as density functional theory (DFT) [9; 10; 11]. In recent studies [12; 13; 14], optical spectroscopy of magnetoexcitons in monolayer TMDCs has revealed the exciton reduced mass. However, this method utilizes the diamagnetic shift for extraction; thus, it requires a high magnetic intensity for Landau levels to describe the energy spectra. 
Based on our estimation, the magnetic fields of 65 and 91 Tesla used in these works would need to be even higher to reach an acceptable accuracy, although they have already reached the laboratory limit. In works [13; 14], besides the exciton reduced mass obtained from the experimental diamagnetic shift, other parameters such as the screening length (related to the \(2D\) polarizability) and the dielectric constant of the surrounding medium are determined by comparing the experimental data for magnetoexciton energies to the theoretical calculation. Actually, the idea of comparing experimental data with theoretically calculated exciton energies to get the material properties of monolayer TMDCs was suggested earlier in references [15; 16]. In particular, the study [16] showed that the exciton reduced mass could be extracted from the exciton energies without a magnetic field by a fitting procedure. Therefore, in the present work, we will apply this fitting scheme to the experimental data in [12, 13] as an alternative method of extracting the exciton reduced mass and \(2D\) polarizability of monolayer TMDCs. The data with the magnetic field are then used for determining the dielectric constant. The extracted material properties are then compared with data from other works [12, 13, 14, 17, 18, 19, 20, 21]. The retrieval method mentioned above requires a combination of highly accurate theoretical calculations of energy spectra and precise experimental measurements of optical spectroscopy of excitons to achieve reliable results. While the experimental data provided in [12, 13, 14] are the most accurate measurements available to date, the theoretical energy spectra of the magnetoexciton are nothing but solutions of the Schrodinger equation describing a two-dimensional electron-hole pair interacting via the Rytova-Keldysh potential [22, 23, 24, 25] because of the screening effect arising from the reduced dimensionality [10, 15]. 
In the case of zero-field, these solutions can be obtained by the variational calculations or semiempirical formula [26, 27] with precision enough for analyzing experimental results. However, when a magnetic field or more accurate solutions are needed, we must use a much faster and more precise method. Fortunately, in Ref. [16], we have provided exact numerical solutions for some \(s\)-states of the exciton with and without a uniform perpendicular magnetic field with a precision of up to 20 decimal places by using the so-called Feranchuk-Komarov (FK) operator method [28, 29]. In the present study, we even improve this method more advanced by calculating the matrix elements for the Rytova-Keldysh potential using its new integral form that significantly reduces the computational resources compared with the previous version. Furthermore, examining the sensitivity of magnetoexciton energy on the material parameters allows us to establish an efficient fitting scheme from which we can accurately extract exciton reduced mass, screening length related to \(2D\) static polarizability, and dielectric constant from experimental data of optical peaks associated with exciton \(s\)-states. We also extract the free-particle bandgap from the experimental exciton energy of the \(1\)s state by comparing it with the calculated one. Hence, a tool with universal data can be developed to retrieve these material properties for any monolayer TMDCs with different substrates. A schematic flowchart is given in Fig. 1 to describe our object of study and the method of retrieving the material parameters of monolayer TMDCs. The rest of this paper is as follows. Section II introduces the FK operator method of solving the Schrodinger equation with Rytova-Keldysh potential. 
Section III examines the sensitivity of the exciton energy when varying the exciton reduced mass, screening length, and dielectric constant and then proposes a fitting scheme to retrieve these parameters from experimental data for monolayers WSe\({}_{2}\) and WS\({}_{2}\). In this section, the \(1s\) exciton energy is also used for determining the free-particle bandgap. Finally, Sec. IV includes our conclusions.

## II Exact numerical solutions for a magnetoexciton in monolayer TMDC

_Schrodinger equation_ - For a two-dimensional system of one electron and one hole interacting via the potential \(\hat{V}_{h-e}(r)\) in the magnetic field \(B\mathbf{e}_{z}\) perpendicular to the monolayer plane \((x,y)\), the center-of-mass (c.m.) motion can be separated to get the Hamiltonian for the relative motion of the electron and hole as \[\hat{H}=\frac{\hat{p}^{2}}{2\mu}+\frac{1-\rho}{1+\rho}\frac{eB}{2\mu}\hat{l}_{z}+\frac{e^{2}B^{2}}{8\mu}r^{2}+\hat{V}_{h-e}(r)-\frac{(e\mathbf{B}\times\mathbf{K})\cdot\mathbf{r}}{M},\] where \(\mu=m_{e}^{*}m_{h}^{*}/(m_{e}^{*}+m_{h}^{*})\), \(M=m_{e}^{*}+m_{h}^{*}\), and \(\rho=m_{e}^{*}/m_{h}^{*}\) are the exciton reduced mass, total mass, and ratio of masses, respectively; \(m_{e}^{*}\) and \(m_{h}^{*}\) are the effective masses of the electron and hole; \(e\) is the elementary charge, taken positive. The last term in the above Hamiltonian is the motional Stark potential with the pseudomomentum \(\mathbf{K}\) of the c.m. related to the temperature of the exciton gas [30, 31]. This term can be neglected for experiments at low temperature, as considered in the present study.

Figure 1: Schematic flowchart of extracting the exciton reduced mass, screening length (related to the \(2D\) static polarizability), dielectric constant, and free-particle bandgap of monolayer TMDCs by the fitting scheme for magnetooptical absorption spectra and theoretical solutions of the effective Schrödinger equation of the magnetoexciton.

Therefore, the Schrodinger equation 
for the relative motion can be written in atomic units as \[\left\{-\frac{1}{2}\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{ \partial^{2}}{\partial y^{2}}\right)+\frac{1}{8}\gamma^{2}(x^{2}+y^{2})+\hat{V} _{h-e}(r)\right.\] \[\left.+\frac{1-\rho}{1+\rho}\,\frac{m}{2}\gamma-E\right\}\psi(x,y )=0, \tag{1}\] where \(r=\sqrt{x^{2}+y^{2}}\); energy \(E\) and coordinates \(x,y\) are given in the effective Hartree \(E_{h}^{*}=\mu e^{4}/16\pi^{2}\varepsilon_{0}^{2}\hbar^{2}\) and effective Bohr radius \(a_{0}^{*}=4\pi\varepsilon_{0}\hbar^{2}/\mu e^{2}\), respectively; \(\gamma\) is dimensionless magnetic intensity related to the magnetic field by the equation \(B=\gamma\times\mu E_{h}^{*}/\hbar e\); \(\hbar\) is the reduced Planck constant; \(\varepsilon_{0}\) is the vacuum permittivity. In equation (1), the operator \(\hat{l}_{z}\) is replaced by its eigenvalue (the magnetic quantum number \(m\)) because of the conservation of the angular momentum on the z axis. The electron and hole interaction is described by the Rytova-Keldysh potential, initially established for excitons in thin films [22, 23] but applicable recently for excitons in monolayer TMDCs such as MoS\({}_{2}\), MoSe\({}_{2}\), WS\({}_{2}\), WSe\({}_{2}\)[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. In most studies, this potential is expressed via the Struve and Bessel functions and is thus suitable for numerical calculations only. For analytical calculations of the matrix elements in our approach, which significantly saves computational resources, we rewrite the Rytova-Keldysh potential by the Laplace transformation as \[\hat{V}_{h-e}(r)=-\frac{1}{\kappa}\int\limits_{0}^{+\infty}\frac{dq}{\sqrt{1+ \alpha^{2}q^{2}}}\;\mathrm{e}^{-qr}, \tag{2}\] where the dimensionless parameter \(\alpha=r_{0}/\kappa a_{0}^{*}\) is used instead of the screening length \(r_{0}\). 
Here, \(\kappa\) is the average dielectric constant of the surrounding medium; \(r_{0}\) is related to the \(2D\) static polarizability for monolayer materials by the formula \(r_{0}=2\pi\chi_{2D}\).

_Numerical method of solving the Schrodinger equation_ - The Schrodinger equation (1) can be solved numerically by several methods. In the present work, we develop a numerical method based on the matrix eigenvalue equation solver of the Linear Algebra PACKage (LAPACK) [34] and the Feranchuk-Komarov operator method [28, 29], where all matrix elements are calculated algebraically via the formalism of annihilation and creation operators, using the Levi-Civita transformation for two-dimensional atomic systems [35]. For this purpose, we rewrite the Schrodinger equation (1) in the algebraic form as \[\left(-\frac{1}{8}\hat{T}+\frac{1}{8}\gamma^{2}\hat{R}^{3}+\hat{V}-\widetilde{E}\,\hat{R}\right)|\psi\rangle=0, \tag{3}\] where all operators have the form of annihilation and creation operators as presented in Appendix A, Eqs. (A2) and (A3). Here, we use the notation \(\widetilde{E}=E-\frac{1-\rho}{1+\rho}\,\frac{m}{2}\gamma\). We also establish a basis set of wave vectors \(\left|k,m\right\rangle\), Eq. (A1), labeled by a free parameter \(\omega\), and calculate all matrix elements with respect to the built basis set: \(\mathcal{R}_{jk}=\omega\,\langle j,m|\,\hat{R}\,|k,m\rangle\), \(\mathcal{T}_{jk}=\frac{1}{\omega}\,\langle j,m|\,\hat{T}\,|k,m\rangle\), \((\mathcal{R}^{3})_{jk}=\omega^{3}\langle j,m|\hat{R}^{3}|k,m\rangle\), and \(\mathcal{V}_{jk}=\langle j,m|\,\omega\hat{V}|k,m\rangle\). Analytical expressions for these matrix elements are given in Eqs. (A4), (A5), (A6), and (A8). We seek the wave vector of equation (3) as an expansion over the basis set, \[|\psi^{(s)}\rangle=\sum\limits_{k=|m|}^{s+|m|}C_{k}^{(s)}|k,m\rangle, \tag{4}\] with \(s+1\) unknown coefficients \(C_{k}^{(s)}\;(k=|m|,1+|m|,...,s+|m|)\) to be determined. 
For the considered system, the angular momentum \(l_{z}\) is conserved, so the magnetic quantum number \(m\) is fixed; only one running index \(k\) remains. In wave vector (4), we use only \(s+1\) basis set vectors, so that the number \(s\) can be considered an approximation order of the solutions. In practice, we increment the order \(s\) until the needed precision is reached. Plugging wave vector (4) into equation (3) and acting on the left with \(\langle j,m|\;(j=|m|,1+|m|,2+|m|,...,s+|m|)\), we reduce this equation to \(s+1\) linear equations for the coefficients \(C_{k}^{(s)}\) and corresponding energy \(E^{(s)}\) as \[\sum\limits_{k=|m|}^{s+|m|}\left(-\frac{\omega^{2}}{8}\mathcal{T}_{jk}+\frac{\gamma^{2}}{8\omega^{2}}(\mathcal{R}^{3})_{jk}+\mathcal{V}_{jk}\right.\] \[\left.-\widetilde{E}^{(s)}\;\mathcal{R}_{jk}\right)C_{k}^{(s)}=0, \tag{5}\] where all matrix elements have explicit analytical expressions provided in Appendix A. Linear equations (5) can be rewritten as an \((s+1)\times(s+1)\) matrix eigenvalue equation, where the eigenvalue is \(\widetilde{E}^{(s)}\), while the eigenvector contains the \(s+1\) elements \(C_{k}^{(s)}\). This matrix eigenvalue equation can be solved using the subroutine dsygvx.f of LAPACK.

_Exact numerical solutions_ - We note that equations (5) are not solved for a sole quantum state but for a broad range of \(s+1\) quantum states with the principal quantum number \(n\) from \(1\) to \(s+1\), where the magnetic quantum number \(m\) is fixed. Besides energies \(E_{nm}^{(s)}\), our Fortran codes also give wave functions \(|\psi_{nm}^{(s)}\rangle\) calculated by the formula (4) with the coefficients \(C_{k}^{(s)}\). The wave functions are normalized by the condition \(\sum\limits_{j=|m|}^{s+|m|}C_{j}^{(s)}C_{j}^{(s)}=1\). Generally, if \(E^{(s)}\to E\) as \(s\rightarrow+\infty\), the solving process converges and gives exact numerical solutions. 
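Equation (5) has the structure of a generalized symmetric eigenvalue problem \(\mathcal{A}\,C=\widetilde{E}\,\mathcal{R}\,C\) with \(\mathcal{R}\) positive definite, the problem class solved by LAPACK's `dsygvx`. As a minimal illustration (a sketch with a toy symmetric matrix pair, not the authors' Fortran code and not the actual FK matrix elements), the same computation can be done in Python with SciPy's `eigh`, which wraps the same family of LAPACK drivers:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
s = 5  # toy "approximation order": matrices are (s+1) x (s+1)

# Toy stand-ins for the FK matrices: A plays the role of
# -(w^2/8) T + (gamma^2 / 8 w^2) R^3 + V, and R must be
# symmetric positive definite.
M = rng.normal(size=(s + 1, s + 1))
A = (M + M.T) / 2                       # symmetric
B = rng.normal(size=(s + 1, s + 1))
R = B @ B.T + (s + 1) * np.eye(s + 1)   # symmetric positive definite

# Solve A C = E_tilde R C; eigenvalues come back sorted ascending,
# mirroring the s+1 states n = 1 ... s+1 obtained at once.
E_tilde, C = eigh(A, R)

# Check the generalized eigenvalue relation for the lowest state.
residual = A @ C[:, 0] - E_tilde[0] * (R @ C[:, 0])

# eigh normalizes eigenvectors so that C^T R C = I; the paper instead
# uses the simple normalization sum_j (C_j)^2 = 1.
gram = C.T @ R @ C
```

Increasing `s` until the low-lying eigenvalues stop changing mirrors the convergence test described in the text.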
However, in practice, we use a limited number of basis set functions to get the required precision. The more basis set functions are included in expansion (4), the better the accuracy of the solution. Another way to increase accuracy is by choosing an appropriate value of the free parameter \(\omega\). Work [16] shows that convergence strongly depends on the free parameter, and there is an optimum region of this parameter where the convergence rate is highest. We confirm the same results even for the case \(m\neq 0\) and implement the optimum values of \(\omega\) in the Fortran codes. We have tested the codes with energies converged to 15 decimal places, so that the solutions used in this work (which require only three decimal digits) can be considered numerically exact. Therefore, the precision of the calculated exciton energies is determined only by the accuracy of the material parameters. Tables 1 and 2 present exciton energies in monolayers WSe\({}_{2}\) and WS\({}_{2}\) encapsulated by hBN slabs for the states with the principal quantum number \(n\leq 5\). We provide only the \(s\)-state energies because recent experiments detect only \(s\)-state peaks in the absorption spectra. Energies for other states with \(m\neq 0\) are available upon request. In our calculation, the exciton reduced mass \(\mu=0.190\,m_{e}\), screening length \(r_{0}=4.21\) nm, and dielectric constant \(\kappa=4.34\) are taken from Table 4, retrieved by our method in Sec. III; \(m_{e}\) is the electron mass. We consider magnetic field intensities up to 90 Tesla only because of the current laboratory limit in generating the magnetic field. Indeed, most studies deal with intensities from 30 to 65 Tesla [12, 14, 17, 18, 19, 20, 21], while the highest intensity recently achieved is 91 Tesla [13]. Also, for binding energies, we need to subtract the bandgap (extracted from experimental exciton energies in Table 4) from the calculated exciton energies. 
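The Laplace-transform representation (2) can be checked numerically against the standard Struve-Bessel form of the Rytova-Keldysh potential mentioned earlier, \(V(r)=-\frac{\pi}{2\kappa\alpha}\left[\mathbf{H}_{0}(r/\alpha)-Y_{0}(r/\alpha)\right]\) in the same effective atomic units, with \(\alpha=r_{0}/\kappa a_{0}^{*}\). The sketch below is an independent check, not the authors' code; the parameter values are illustrative only:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import struve, y0

def v_laplace(r, alpha, kappa):
    """Rytova-Keldysh potential from the Laplace-transform form, Eq. (2)."""
    integrand = lambda q: np.exp(-q * r) / np.sqrt(1.0 + (alpha * q) ** 2)
    val, _ = quad(integrand, 0.0, np.inf)
    return -val / kappa

def v_struve_bessel(r, alpha, kappa):
    """Standard Struve-Bessel form of the same potential."""
    x = r / alpha
    return -np.pi / (2.0 * kappa * alpha) * (struve(0, x) - y0(x))

# Illustrative values in effective atomic units (not fitted parameters):
alpha, kappa = 3.0, 4.34
rs = [0.2, 0.5, 1.0, 2.0, 5.0]
diffs = [abs(v_laplace(r, alpha, kappa) - v_struve_bessel(r, alpha, kappa))
         for r in rs]
```

Agreement of the two forms supports using the exponential integrand of Eq. (2) to evaluate the matrix elements \(\mathcal{V}_{jk}\) algebraically, which is the computational saving claimed in the text.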
## III Retrieval of material properties from energy spectra

_Sensitivity of exciton energies to material parameters_ - There are four parameters in the Schrodinger equation (1) of an exciton in a monolayer TMDC that vary for different materials: the exciton reduced mass \(\mu\), screening length \(r_{0}\) (related to the \(2D\) polarizability), average dielectric constant \(\kappa\) of the surrounding medium, and the mass ratio \(\rho\). We consider only the \(s\)-states, so the mass ratio \(\rho\) disappears from the equation. Only three material parameters (\(\mu\), \(r_{0}\), and \(\kappa\)) remain to be retrieved. Therefore, we now investigate the sensitivity of exciton energies to these parameters and show the results in Fig. 2. From our calculations, Figs. 2 (a), (b), and (c) present the energy difference \(\Delta E_{21}=E_{2s}-E_{1s}\) as a function of \(\mu\), \(r_{0}\), and \(\kappa\), respectively, for monolayer TMDCs. The changes are 24.4 meV (18%), -16.1 meV (-12%), and -25.1 meV (-19%), respectively, when varying the exciton reduced mass from 0.16 to 0.25 \(m_{e}\), the screening length from 4.0 to 5.0 nm, and the dielectric constant from 4.0 to 5.0. Analogously, for the energy difference \(\Delta E_{32}=E_{3s}-E_{2s}\) (not shown in the figure), the changes are 6.5 meV (30%), -1.4 meV (-6%), and -6.8 meV (-31%), respectively. On the other hand, the measurement accuracy for exciton energies in the hBN environment is less than 1.0 meV, so the energy changes are large enough for experimental detection. Therefore, we conclude that exciton energies are sensitive to changes in the reduced mass, screening length, and dielectric constant, and we will use this fact for developing our extraction method. 
_Fitting method for exciton reduced mass, screening length, and dielectric constant_ - The work of Stier _et al._ (2018) [12] for exciton energies in monolayer WSe\({}_{2}\) encapsulated by hBN slabs with \(\kappa=4.5\) provides experimental data of 130.0 meV for the energy difference \(\Delta E_{21}\) and 22.0 meV for \(\Delta E_{32}\). This work also performs the theoretical calculation, obtaining 124.0 meV and 21.3 meV, respectively, for the mentioned energy differences. The discrepancies between the experimental data and the theoretical calculation are 4.0 % and 3.2 %, which we attribute to the inaccuracy of the material parameters \(\mu\), \(r_{0}\), and \(\kappa\) used in the calculation. The sensitivity of exciton energies to the material parameters inspires us to find the values of the reduced mass \(\mu\), screening length \(r_{0}\), and dielectric constant \(\kappa\) so that the theoretical results best fit the experimental data.

Table 1: Magnetoexciton energies (meV) for the \(1s\)-\(5s\) states at magnetic fields (in Tesla) up to 90 Tesla in monolayer WSe\({}_{2}\) encapsulated by hBN slabs, computed with \(r_{0}=4.21\) nm, \(\mu=0.190\,m_{e}\), \(\kappa=4.34\). For binding energies, add the bandgap \(E_{g}=1.892\) eV.

Table 2: Magnetoexciton energies (meV) for the \(1s\)-\(5s\) states at magnetic fields (in Tesla) up to 90 Tesla in monolayer WS\({}_{2}\) encapsulated by hBN slabs, computed with \(r_{0}=3.76\) nm, \(\mu=0.175\,m_{e}\), \(\kappa=4.16\). For binding energies, add the bandgap \(E_{g}=2.238\) eV.

Figure 3 shows the relative discrepancy between the experimental data from Ref. [12] and the theoretical energy differences. 
We calculate it by the formula \[\delta=\frac{1}{2}\left(\frac{|\Delta E_{21}^{\rm theo}-\Delta E_{21}^{\rm exp}|}{\Delta E_{21}^{\rm exp}}+\frac{|\Delta E_{32}^{\rm theo}-\Delta E_{32}^{\rm exp}|}{\Delta E_{32}^{\rm exp}}\right) \tag{6}\] varying the exciton reduced mass \(\mu\) and screening length \(r_{0}\) by the steps \(\Delta\mu=0.0025\,m_{e}\) and \(\Delta\,r_{0}=0.025\) nm while fixing the value \(\kappa=4.5\). There is a minimum discrepancy at \(\mu=0.204\,m_{e}\) and \(r_{0}=4.21\) nm, which gives the true values of the exciton reduced mass and screening length (\(2D\) polarizability) of the considered monolayer WSe\({}_{2}\). Mathematically, the minimum in Fig. 3 can be understood because there are two constraints (\(\Delta E_{21}\) and \(\Delta E_{32}\)) for two parameters (\(\mu\) and \(r_{0}\)) to be determined. However, we also provide a more comprehensible explanation demonstrated in Fig. 4. Panel (a) presents the energy difference \(\Delta E_{21}\) dependent on \(\mu\) and \(r_{0}\), which is not single-valued. Each energy difference value corresponds to a set of values \(\mu\) and \(r_{0}\), establishing a curved line in the diagram. Analogously, Panel (b) shows a similar picture - each energy difference value \(\Delta E_{32}\) corresponds to a curved line in the plane (\(\mu\), \(r_{0}\)). As shown in Panel (c), the two lines (\(\Delta E_{21}\) = 130.0 meV and \(\Delta E_{32}\) = 22.0 meV) intersect at one point, defining the material parameters for monolayer WSe\({}_{2}\), \(\mu=0.204\,\)m\({}_{e}\) and \(r_{0}=4.21\) nm, consistent with the results shown in Fig. 3. Work [12] also provides exciton energy spectra dependent on the magnetic intensity. We can use this information to get a more precise value of the dielectric constant \(\kappa\) of the surrounding medium (hBN in this case). First, we change \(\kappa\) around the value 4.5, from 4.0 to 5.0, and for each value, we get the optimum values of \(\mu\) and \(r_{0}\) by the above procedure. 
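For concreteness, the discrepancy measure (6) can be evaluated directly. The sketch below (illustrative only, not the authors' fitting code) plugs in the experimental and theoretical energy differences quoted above for monolayer WSe\(_2\), 130.0 vs. 124.0 meV for \(\Delta E_{21}\) and 22.0 vs. 21.3 meV for \(\Delta E_{32}\):

```python
def discrepancy(dE21_theo, dE21_exp, dE32_theo, dE32_exp):
    """Relative discrepancy delta of Eq. (6): the mean of the two
    relative errors in the energy differences dE21 and dE32."""
    return 0.5 * (abs(dE21_theo - dE21_exp) / dE21_exp
                  + abs(dE32_theo - dE32_exp) / dE32_exp)

# Values quoted for monolayer WSe2 (meV): experiment from Ref. [12]
# vs. the theoretical calculation of that work.
delta = discrepancy(dE21_theo=124.0, dE21_exp=130.0,
                    dE32_theo=21.3, dE32_exp=22.0)
# delta is about 0.039, i.e., roughly a 4% average mismatch; the
# fitting procedure minimizes this quantity over (mu, r0) on a grid
# with steps 0.0025 m_e and 0.025 nm.
```
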
The results presented in Table 3 show that the screening length \(r_{0}\) remains nearly constant at about 4.21 nm. For each pair of optimum values of \(\mu\) and \(\kappa\), we calculate energies for the \(1s\), \(2s\), \(3s\), and \(4s\) states of the exciton at the magnetic intensities for which experimental energies are available in Ref. [12]. By the least-squares method, we get the values \(\mu=0.190\,\)m\({}_{e}\) and \(\kappa=4.34\), for which the theoretical energies best fit the experimental data. Here, we note that the screening effect in monolayer TMDCs is a consequence of the dimensionality reduction, which is why the screening length \(r_{0}\) in Table 3 is almost independent of the dielectric constant \(\kappa\). In contrast, the exciton reduced mass \(\mu\) strongly depends on \(\kappa\). However, this fact needs more careful investigation with analytical exciton energies as functions of the material parameters. Some works provide analytical energies of excitons in monolayer TMDCs [26; 27]; however, they are unsuitable for our analysis. Therefore, we leave it for further study.

Figure 3: Relative discrepancy between the experimental data for monolayer WSe\({}_{2}\)[12] and theoretical energy differences \(\Delta E_{21}\) and \(\Delta E_{32}\), calculated with varied exciton reduced mass \(\mu\), screening length \(r_{0}\), and fixed \(\kappa=4.5\). There is a minimum at \(\mu=0.204\,\)m\({}_{e}\) and \(r_{0}=4.21\) nm.

Figure 2: Sensitivity of the exciton energy difference \(\Delta E_{21}=E_{2s}-E_{1s}\) to the exciton reduced mass (a), screening length (b), and average dielectric constant of the surrounding medium (c).

For illustration, we present in Fig. 5 exciton energy spectra calculated for two sets of \(\mu\), \(r_{0}\), and \(\kappa\), compared with the experimental data (color symbols). It is clear that the theoretical spectrum best fits the experimental data at the optimum values of \(\mu\), \(r_{0}\), and \(\kappa\). 
We note that the bandgap energy \(E_{g}=1.892\) eV for monolayer WSe\({}_{2}\) in Fig. 5 is chosen so that the calculated binding \(1s\) exciton energy equals the experimental one. We also extracted the bandgap for monolayer WS\({}_{2}\), given in Table 4. _Extracted fundamental optoelectronic material parameters for monolayer TMDCs_ - The method suggested above can retrieve the reduced mass, screening length, and dielectric constant of any monolayer TMDC from the measured energy differences \(\Delta E_{21}\) and \(\Delta E_{32}\) combined with the magnetoexciton energy spectra. For this task, we have Fortran codes available upon request. Here, we demonstrate the method for the experimental data extracted from Ref. [12] for monolayer WSe\({}_{2}\): \(\Delta E_{21}\)= 130.0 meV, \(\Delta E_{32}\)= 22.0 meV; and from Ref. [13] for monolayer WS\({}_{2}\): \(\Delta E_{21}\)= 139.2 meV, \(\Delta E_{32}\)= 22.1 meV. The retrieved exciton reduced mass \(\mu\), screening length \(r_{0}\), and dielectric constants \(\kappa\) are given in Table 4 compared with data from other works. The free-particle bandgaps are also obtained by fitting the experimental and calculated \(1s\) energies. For reference, we also calculate the diamagnetic coefficient \(\sigma\) and exciton radii \(r_{1s}\), \(r_{2s}\), and \(r_{3s}\) for \(1s\), \(2s\), and \(3s\) states presented in the Table. We now discuss our results in comparison with other works. First, for the free-particle bandgap, we retrieve it by correlating the theoretical and experimental energies as it was performed in Refs. [12, 13]. However, the theoretical ones are numerically exact in our calculation while approximated by the variational method in these cited references, resulting in the difference between the bandgaps of 1.892 eV (Present work) and 1.890 eV (Ref. [12]) for monolayer WSe\({}_{2}\). Meanwhile, our result and Ref. [13] are the same, \(E_{g}=2.238\) eV. 
Figure 4: Sensitivity of the exciton energy differences (a) \(\Delta E_{21}=E_{2s}-E_{1s}\) and (b) \(\Delta E_{32}=E_{3s}-E_{2s}\) to the exciton reduced mass \(\mu\) and the screening length \(r_{0}\). The energies are calculated for \(\kappa=4.5\). (c) The intersection of the two lines (\(\Delta E_{21}=130.0\) meV and \(\Delta E_{32}=22.0\) meV) determines the values of \(\mu\) and \(r_{0}\).

Figure 5: Magnetoexciton energy spectra calculated with different values of material parameters for monolayer WSe\({}_{2}\): (a) \(\mu=0.204\) m\({}_{e}\), \(r_{0}=4.208\) nm, and \(\kappa=4.5\); (b) \(\mu=0.190\) m\({}_{e}\), \(r_{0}=4.208\) nm, and \(\kappa=4.34\). The results in (b) agree better with the experimental data of Ref. [12], indicated by the color symbols. For the binding energies in the figures, the bandgap \(E_{g}=1.892\) eV is used.

\begin{table} \begin{tabular}{c c c} \hline \hline Dielectric constant & Exciton reduced mass & Screening length \\ \(\kappa\) & \(\mu\) (\(m_{e}\)) & \(r_{0}\) (nm) \\ \hline 5.0 & 0.252 & 4.208 \\ 4.8 & 0.232 & 4.208 \\ 4.6 & 0.213 & 4.208 \\ 4.5 & 0.204 & 4.208 \\ 4.4 & 0.195 & 4.209 \\ 4.35 & 0.191 & 4.209 \\ 4.34 & 0.190 & 4.209 \\ 4.33 & 0.189 & 4.209 \\ 4.2 & 0.178 & 4.209 \\ 4.1 & 0.169 & 4.207 \\ 4.0 & 0.161 & 4.209 \\ \hline \hline \end{tabular} \end{table} Table 3: Optimum values of exciton reduced mass \(\mu\) and screening length \(r_{0}\) extracted with different values of dielectric constant \(\kappa\) for monolayer WSe\({}_{2}\) encapsulated by hBN slabs.

We note that these bandgaps revealed from exciton absorption peaks are smaller than those obtained by the GW calculation (WSe\({}_{2}\): 2.100 eV and WS\({}_{2}\): 2.530 eV) and larger than those calculated by the DFT method (WSe\({}_{2}\): 1.730 eV and WS\({}_{2}\): 2.050 eV), based on the computational 2D materials database [36; 37].
Compared with direct measurements by scanning tunneling spectroscopy, the extracted bandgap for monolayer WS\({}_{2}\) agrees well with the experiments (2.238 vs. 2.140 eV) [38], but the one for WSe\({}_{2}\) is underestimated (1.892 vs. 2.080 eV) [39; 40]. Concerning the exciton reduced mass \(\mu\) for WSe\({}_{2}\), our extracted result is close to that of Ref. [12], i.e., 0.19 versus 0.20 \(m_{e}\). The discrepancy arises from the difference in the fitting schemes used in the two works. Ref. [12] extracts only one parameter, \(\mu\), while estimating the screening length \(r_{0}=4.5\) nm from previous theoretical studies and experimental measurements [10]. Meanwhile, the dielectric constant \(\kappa\) in Ref. [12] is taken from infrared measurements [41]. In contrast, we consider \(\mu\), \(r_{0}\), \(\kappa\), and \(E_{g}\) as material parameters and extract them all from the magnetoexciton energies. From Table 4, our extracted parameters \(r_{0}=4.21\) nm and \(\kappa=4.34\) are close to the previous experimental and theoretical data. In particular, the \(2D\) polarizability calculated from the extracted screening length \(r_{0}=4.21\) nm by the equation \(\chi_{2D}=r_{0}/2\pi\) is 6.7 Å, very close to the DFT calculations (7.18 Å [10] and 6.72 Å [36; 37]). We note that using the high-\(B\) shifts of the \(3s/4s\) excitons in Ref. [12] to retrieve the exciton reduced mass leads to a wide range of values, from 0.16 to 0.23 \(m_{e}\). This inaccuracy arises because the electron-hole interaction is negligible only when the dimensionless magnetic intensity \(\gamma\) is larger than 10 [42; 43]. This condition is equivalent to \(a_{0}^{*}>3\,l_{B}\), noting that \(\gamma=(a_{0}^{*}/l_{B})^{2}\), where \(l_{B}=\sqrt{\hbar/eB}\) is the magnetic length. Meanwhile, for WSe\({}_{2}\) with \(\mu\sim 0.20\,m_{e}\) at \(B=65\) Tesla, one has only \(a_{0}^{*}\sim 0.08\,l_{B}\).
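The two small numerical relations used above can be checked directly. The sketch below uses standard SI constants; the printed values reproduce the \(\chi_{2D}\) numbers quoted in the text and the magnetic length at 65 T.

```python
import math

# Quick checks of two relations used in the discussion: the 2D
# polarizability chi_2D = r0 / (2*pi), and the magnetic length
# l_B = sqrt(hbar/(e*B)) entering gamma = (a_0*/l_B)**2.

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C

def polarizability_angstrom(r0_nm):
    """chi_2D in Angstrom from the screening length r0 in nm (1 nm = 10 A)."""
    return 10.0 * r0_nm / (2.0 * math.pi)

def magnetic_length_nm(b_tesla):
    """Magnetic length l_B in nm for a field B in Tesla."""
    return 1e9 * math.sqrt(HBAR / (E_CHARGE * b_tesla))

print(f"{polarizability_angstrom(4.21):.2f}")  # WSe2: 6.70 (Angstrom)
print(f"{polarizability_angstrom(3.76):.2f}")  # WS2:  5.98 (Angstrom)
print(f"{magnetic_length_nm(65.0):.2f}")       # l_B at 65 T: 3.18 (nm)
```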
Nevertheless, Ref. [12] also modeled the exciton by the Rytova-Keldysh potential and calculated the diamagnetic shifts in the magnetic field range up to 65 Tesla. With the numerically acceptable magnetoexciton energies, it retrieved a good result for \(\mu\) (about \(0.20\,m_{e}\)) despite the current laboratory limit on magnetic field generation. For monolayer WS\({}_{2}\), we compare our results with those of Ref. [13]. For the exciton reduced mass \(\mu\), this reference uses the \(4s/5s\) exciton states, for which material parameters such as \(r_{0}\) and \(\kappa\) are expected to influence the energies only weakly at strong magnetic field. The obtained exciton reduced mass of \(\mu=0.175\,m_{e}\) exactly coincides with ours. Ref. [13] then modeled excitons by the Rytova-Keldysh potential with different values of the screening length \(r_{0}\) and dielectric constant \(\kappa\) at weak magnetic intensity to fit the calculated binding energies to the experimental data, obtaining \(r_{0}=3.4\) nm and \(\kappa=4.35\). Our fitting scheme is different and uses numerically exact exciton energies. As shown in Table 4, our result for the screening length, \(r_{0}=3.76\) nm, is about 10% larger than that of Ref. [13] and closer to the DFT calculations. Indeed, the \(2D\) polarizability related to this screening length is \(\chi_{2D}=5.98\) Å, close to the DFT estimates (6.03 Å [10], 6.393 Å [44], and 5.9 Å [36; 37]).

## IV Conclusion

We have shown the sensitivity of the energy differences among three exciton quantum states (\(1s\), \(2s\), and \(3s\)) in monolayer TMDCs to the material properties. This inspires us to propose a method to retrieve the exciton reduced mass, the screening length (related to the \(2D\) polarizability), and the dielectric constant of the surrounding medium from recently available experimental magnetoexciton energies.
\begin{table} \begin{tabular}{l l l l l l l l l l} Material & \(\mu\) & \(r_{0}\) & \(\kappa\) & \(E_{g}\) & \(\sigma\) & \(r_{1s}\) & \(r_{2s}\) & \(r_{3s}\) & References \\ & (\(m_{e}\)) & (nm) & & (eV) & (\(\mu\)eV/T\({}^{2}\)) & (nm) & (nm) & (nm) & \\ \hline WSe\({}_{2}\) & 0.190 & 4.21 & 4.34 & 1.892 & 0.28 & 1.68 & 7.01 & 16.09 & Present work \\ & 0.20 & 4.5 & 4.5 & 1.890 & 0.31 & 1.7 & 6.6 & 14.3 & [12] \\ & 0.20 & 5.0 & 3.97 & 1.884 & 0.24 & 1.6 & 8.24 & 17.0 & [14] \\ & 0.22 & 4.51 & 4.5 & 1.900 & 0.25 & 1.6 & 6.5 & 14.7 & [21] \\ & 0.22 & 4.5 & 3.3 & — & 0.32 & 1.79 & — & — & [13] \\ \hline WS\({}_{2}\) & 0.175 & 3.76 & 4.16 & 2.238 & 0.34 & 1.69 & 7.13 & 16.49 & Present work \\ & 0.175 & 3.4 & 4.35 & 2.238 & 0.4 & 1.8 & — & — & [13] \\ & 0.15 & 4.0 & 4.5 & — & 2.45 & — & — & [20] \\ & 0.16 & 5.3 & 1.0 & — & 0.32 & 1.53 & — & — & [18] \\ & 0.15 & — & 1.55 & — & 0.90 & 2.5 & — & — & [17] \\ \end{tabular} \end{table} Table 4: Fundamental optoelectronic material parameters (exciton reduced mass \(\mu\), screening length \(r_{0}\), dielectric constant \(\kappa\), and bandgap energy \(E_{g}\)) extracted in the present work, compared with data from other references. Also shown are some exciton properties (the diamagnetic coefficient \(\sigma\) for the \(1s\) state and the exciton radii \(r_{1s}\), \(r_{2s}\), and \(r_{3s}\)) calculated with the extracted material parameters.

Applying the proposed method to monolayers WSe\({}_{2}\) and WS\({}_{2}\), we have obtained results for the material properties that complement the available data well. The method could be extended to other monolayer TMDCs, such as MoS\({}_{2}\), MoSe\({}_{2}\), and MoTe\({}_{2}\), which are the subject of intensive recent investigation. Also, the mass ratio \(\rho=m_{e}^{*}/m_{h}^{*}\) is an important material property that needs to be extracted. Our approach can be applied for this purpose, provided experimental energies of states with \(m\neq 0\) are available, which could be obtained from
nonlinear optical response or from thermoinduced magnetoexciton peaks in linear optical response. For the above-mentioned investigation, we have developed an effective method for solving the Schrödinger equation of a magnetoexciton in a monolayer TMDC. The method gives a fast, convergent procedure for obtaining highly accurate magnetoexciton energies and wave functions suitable for the fitting procedure, which usually requires generating a large amount of data. Besides, all matrix elements of the Hamiltonian are obtained in analytical form, which may be useful for further investigation of analytical magnetoexciton energies as functions of material parameters. Fortran codes for magnetoexciton energy spectra in monolayer TMDCs are available upon request and will be published elsewhere.

###### Acknowledgements.
D.-N.Ly and N.-H.P. are funded by Ho Chi Minh City University of Education Foundation for Science and Technology under grant numbers CS.2019.19.43TD and CS.2019.19.44TD. This work is funded by the Foundation for Science and Technology of the Vietnam Ministry of Education and Training under grant number B2022-SPS-09-VL. This work was carried out on the high-performance cluster at Ho Chi Minh City University of Education, Vietnam.

## Appendix A Analytical matrix elements

To solve the Schrödinger equation (1) more effectively, we first rewrite it in the \(\left(u,v\right)\) space via the Levi-Civita transformation \(x=u^{2}-v^{2},\;y=2uv\), where the interaction potential in the \(\left(u,v\right)\) space is defined as \(\hat{V}(u,v)=\left(u^{2}+v^{2}\right)\hat{V}_{h-e}\). The distance and the angular momentum take the compact forms \(r=u^{2}+v^{2}\) and \(\hat{l}_{z}=-\frac{i}{2}\left(v\frac{\partial}{\partial u}-u\frac{\partial}{\partial v}\right)\). More about the application of the Levi-Civita transformation to two-dimensional atomic systems can be found in Ref. [35].
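A quick numerical check of the Levi-Civita relations quoted above: with \(x=u^{2}-v^{2}\) and \(y=2uv\), the distance indeed satisfies \(\sqrt{x^{2}+y^{2}}=u^{2}+v^{2}\), since \(x^{2}+y^{2}=(u^{2}-v^{2})^{2}+4u^{2}v^{2}=(u^{2}+v^{2})^{2}\).

```python
import random

# Verify that the Levi-Civita map x = u**2 - v**2, y = 2*u*v satisfies
# r = sqrt(x**2 + y**2) = u**2 + v**2 for randomly chosen (u, v).

random.seed(0)
for _ in range(1000):
    u = random.uniform(-5.0, 5.0)
    v = random.uniform(-5.0, 5.0)
    x, y = u * u - v * v, 2.0 * u * v
    r_from_xy = (x * x + y * y) ** 0.5
    assert abs(r_from_xy - (u * u + v * v)) < 1e-9 * (1.0 + u * u + v * v)
print("Levi-Civita identity verified")
```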
One advantage of working in the \(\left(u,v\right)\) space is that we can apply the algebraic formalism of annihilation and creation operators \(\hat{a}(\omega)\), \(\hat{a}^{+}(\omega)\), \(\hat{b}(\omega)\), and \(\hat{b}^{+}(\omega)\), where the calculation technique is based on the commutation relations \(\left[\hat{a},\hat{a}^{+}\right]=1,\;\;\left[\hat{b},\hat{b}^{+}\right]=1\), and the basis vectors can be presented in the form

\[\left|k,m\right\rangle=\frac{1}{\sqrt{(k+m)!(k-m)!}}(\hat{a}^{+})^{k+m}(\hat{b}^{+})^{k-m}|0(\omega)\rangle \tag{10}\]

with the vacuum state \(\left|0(\omega)\right\rangle\) defined by the equations \(\hat{a}\left|0(\omega)\right\rangle=0,\;\;\hat{b}\left|0(\omega)\right\rangle=0\). Here, the running quantum numbers take the values \(m=0,\pm 1,\pm 2,\ldots\) and \(k=|m|,1+|m|,2+|m|,\ldots\). Using the annihilation and creation operators, we can rewrite all the terms in the Schrödinger equation as

\[\hat{T}=\frac{\partial^{2}}{\partial u^{2}}+\frac{\partial^{2}}{\partial v^{2}}=\omega\left(\hat{a}\hat{b}+\hat{a}^{+}\hat{b}^{+}-\hat{a}^{+}\hat{a}-\hat{b}^{+}\hat{b}-1\right),\]
\[\hat{R}=u^{2}+v^{2}=\frac{1}{\omega}\left(\hat{a}\hat{b}+\hat{a}^{+}\hat{b}^{+}+\hat{a}^{+}\hat{a}+\hat{b}^{+}\hat{b}+1\right).\]

In particular, the interaction potential can be rewritten as

\[\hat{V}(u,v)=-\frac{1}{\kappa}\int\limits_{0}^{+\infty}\frac{dq}{\sqrt{1+\alpha^{2}q^{2}}}\mathrm{e}^{-q\hat{R}}\hat{R}. \tag{11}\]

With the algebraic forms (10) and (11), we can calculate all matrix elements using only the commutation relations of the annihilation and creation operators. A detailed calculation method can be found in the monograph [29]. In this Appendix, we provide only the results for the matrix elements.
They are as follows:

\[{\cal R}_{jk}=\omega\left\langle j,m\right|\hat{R}\left|k,m\right\rangle=\sqrt{k^{2}-m^{2}}\,\delta_{j,k-1}+(2k+1)\,\delta_{jk}+\sqrt{(k+1)^{2}-m^{2}}\,\delta_{j,k+1}\,, \tag{12}\]

\[{\cal T}_{jk}=\frac{1}{\omega}\left\langle j,m\right|\hat{T}\left|k,m\right\rangle=\sqrt{k^{2}-m^{2}}\,\delta_{j,k-1}-(2k+1)\,\delta_{jk}+\sqrt{(k+1)^{2}-m^{2}}\,\delta_{j,k+1}\,, \tag{13}\]

\[\begin{split}({\cal R}^{3})_{jk}&=\omega^{3}\left\langle j,m\right|\hat{R}^{3}\left|k,m\right\rangle\\ &=2\,(5k^{2}+5k+3-3m^{2})(2k+1)\,\delta_{jk}\\ &\quad+3\,(5k^{2}+1-m^{2})\sqrt{k^{2}-m^{2}}\,\delta_{j,k-1}\\ &\quad+3\,(2k-1)\sqrt{k^{2}-m^{2}}\sqrt{(k-1)^{2}-m^{2}}\,\delta_{j,k-2}\\ &\quad+\sqrt{k^{2}-m^{2}}\sqrt{(k-1)^{2}-m^{2}}\sqrt{(k-2)^{2}-m^{2}}\,\delta_{j,k-3}\\ &\quad+3\,(5k^{2}+10k+6-m^{2})\sqrt{(k+1)^{2}-m^{2}}\,\delta_{j,k+1}\\ &\quad+3\,(2k+3)\sqrt{(k+1)^{2}-m^{2}}\sqrt{(k+2)^{2}-m^{2}}\,\delta_{j,k+2}\\ &\quad+\sqrt{(k+1)^{2}-m^{2}}\sqrt{(k+2)^{2}-m^{2}}\sqrt{(k+3)^{2}-m^{2}}\,\delta_{j,k+3}.\end{split} \tag{14}\]

Here, \(\delta_{jk}\) is the Kronecker delta. In contrast, calculating the matrix elements of the operator \(\hat{V}\) is not trivial. However, by using the technique of constructing operators in a normal form of annihilation and creation operators, given in Ref. [29] (pages 232-233), we have the formula

\[\mathrm{e}^{-q\left(\hat{a}\hat{b}+\hat{a}^{+}\hat{b}^{+}+\hat{a}^{+}\hat{a}+\hat{b}^{+}\hat{b}+1\right)}=\mathrm{e}^{-\frac{q}{1+q}\,\hat{a}^{+}\hat{b}^{+}}\,\mathrm{e}^{-\ln(1+q)\left(\hat{a}^{+}\hat{a}+\hat{b}^{+}\hat{b}+1\right)}\,\mathrm{e}^{-\frac{q}{1+q}\,\hat{a}\hat{b}}.
\tag{15}\]

With the operator in this normal form, we can apply the algebraic technique to get

\[{\cal V}_{jk}=\langle j,m|\omega\hat{V}|k,m\rangle=(2k+1)\,U_{jk}+\sqrt{k^{2}-m^{2}}\,U_{j,k-1}+\sqrt{(k+1)^{2}-m^{2}}\,U_{j,k+1} \tag{16}\]

with

\[U_{jk}=-\frac{1}{\kappa\,\alpha}\sum_{s=|m|}^{\min(k,j)}\sum_{t=0}^{j+k-2s}(-1)^{j+k+t}\binom{j+k-2s}{t}\sqrt{\binom{j+m}{s+m}}\sqrt{\binom{j-m}{s-m}}\sqrt{\binom{k+m}{s+m}}\sqrt{\binom{k-m}{s-m}}\int\limits_{0}^{+\infty}\frac{dq}{(1+q)^{2s+t+1}\sqrt{q^{2}+1/\omega^{2}\alpha^{2}}}\,, \tag{20}\]

where \(\binom{n}{k}=\frac{n!}{(n-k)!k!}\) is a binomial coefficient. In Eq. (20), the definite integrals

\[J_{p}(x)=\int\limits_{0}^{+\infty}\frac{dq}{(1+q)^{p}\sqrt{q^{2}+x^{2}}}\]

with \(p\geq 1\) and \(x=1/\omega\alpha>0\) are easy to calculate numerically. Besides, for an analytical formulation, we can derive an iterative formula for these integrals,

\[J_{p}=\frac{(2p-3)J_{p-1}-(p-2)J_{p-2}+x}{(x^{2}+1)(p-1)} \tag{21}\]

for \(p\geq 2\), where \(J_{1}(x)\) has the explicit formula

\[J_{1}(x)=\frac{\ln\left(x+\sqrt{x^{2}+1}\right)+\ln\left(1+\sqrt{x^{2}+1}\right)-\ln(x)}{\sqrt{x^{2}+1}}.\]

Note that although \(J_{0}(x)\) is divergent, relation (21) remains valid for \(p=2\) by considering the limit

\[\lim_{p\to 0}\,pJ_{p}(x)=1,\]

so that

\[J_{2}(x)=\frac{J_{1}(x)-1+x}{x^{2}+1}\,.\]
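Two results of this appendix lend themselves to a direct numerical check: the tridiagonal matrices \({\cal R}_{jk}\) and \({\cal T}_{jk}\) of Eqs. (12)-(13), and the explicit \(J_{1}(x)\) together with the iterative formula (21), verified here against a direct quadrature of the defining integral. This is a minimal sketch of the quoted formulas, not the authors' Fortran implementation.

```python
import math

def tridiag(n, m, sign):
    """R (sign=+1) or T (sign=-1) of Eqs. (12)-(13) in the |k,m> basis,
    with k = |m|, |m|+1, ..., |m|+n-1 (the overall omega factor omitted)."""
    ks = [abs(m) + i for i in range(n)]
    M = [[0.0] * n for _ in range(n)]
    for i, k in enumerate(ks):
        M[i][i] = sign * (2 * k + 1)
        if i + 1 < n:
            off = math.sqrt((k + 1) ** 2 - m * m)
            M[i][i + 1] = off
            M[i + 1][i] = off  # symmetric off-diagonal
    return M

def J1(x):
    """Explicit J_1(x)."""
    s = math.sqrt(x * x + 1.0)
    return (math.log(x + s) + math.log(1.0 + s) - math.log(x)) / s

def J_list(pmax, x):
    """J_p for p = 1..pmax via the recurrence; p = 2 uses lim p*J_p = 1."""
    J = {1: J1(x), 2: (J1(x) - 1.0 + x) / (x * x + 1.0)}
    for p in range(3, pmax + 1):
        J[p] = ((2 * p - 3) * J[p - 1] - (p - 2) * J[p - 2] + x) \
               / ((x * x + 1.0) * (p - 1))
    return J

def J_quad(p, x, n=200000):
    """Direct midpoint quadrature of J_p(x) after mapping q = t/(1-t)."""
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        q = t / (1.0 - t)
        total += 1.0 / ((1.0 + q) ** p
                        * math.sqrt(q * q + x * x) * (1.0 - t) ** 2)
    return total / n
```

For example, at \(x=1\) the recurrence values agree with the quadrature to better than \(10^{-3}\) for \(p=1,2,3\).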
2306.15160
Study of Baryon Number Transport Dynamics and Strangeness Conservation Effects Using $\Omega$-hadron Correlations
In nuclear collisions at RHIC energies, an excess of $\Omega$ hyperons over $\bar{\Omega}$ is observed, indicating that $\Omega$ carries a net baryon number despite $s$ and $\bar{s}$ quarks being produced in pairs. The baryon number in $\Omega$ could have been transported from the incident nuclei and/or produced in baryon-pair production of $\Omega$ with other types of anti-hyperons, such as $\bar{\Xi}$. To investigate these two scenarios, we propose to measure correlations between $\Omega$ and $K$, as well as between $\Omega$ and anti-hyperons. We will use two versions, the default and string-melting, of a multiphase transport (AMPT) model to illustrate the method to measure the correlation and to demonstrate the general shape of the correlation. We will present the $\Omega$-hadron correlations from simulated $\mathrm{Au}$+$\mathrm{Au}$ collisions at $\sqrt{s_{NN}} = 7.7$ and $14.6 \ \mathrm{GeV}$, and discuss the dependence on collision energy and on the hadronization scheme in these two AMPT versions. These correlations can be used to explore the mechanism of baryon number transport and the effects of baryon number and strangeness conservation in nuclear collisions.
Weijie Dong, Xiaozhou Yu, Siyuan Ping, Xiatong Wu, Gang Wang, Huan Zhong Huang, Zi-Wei Lin
2023-06-27T02:39:46Z
http://arxiv.org/abs/2306.15160v2
Study of Baryon Number Transport Dynamics and Strangeness Conservation Effects Using \(\Omega\)-hadron Correlations

###### Abstract

In nuclear collisions at RHIC energies, an excess of \(\Omega\) hyperons over \(\overline{\Omega}\) is observed, indicating that \(\Omega\) carries a net baryon number despite \(s\) and \(\bar{s}\) quarks being produced in pairs. The baryon number in \(\Omega\) could have been transported from the incident nuclei and/or acquired and balanced in baryon pair productions associated with other types of anti-hyperons, such as \(\overline{\Xi}\). To investigate these two scenarios, we propose to measure correlations between \(\Omega\) and \(K\), as well as between \(\Omega\) and anti-hyperons. We will use two versions, the default and string-melting, of a multiphase transport (AMPT) model to illustrate the correlation method. We will present the \(\Omega\)-hadron correlations from simulated Au+Au collisions at \(\sqrt{s_{NN}}=7.7\) and 14.6 GeV, and discuss the dependence on collision energy and on the hadronization scheme in these two AMPT versions. These correlations from the AMPT model provide a baseline for experimental exploration of the dynamics of baryon number transport and the effects of baryon number and strangeness conservation in nuclear collisions.

## I Introduction

Strangeness enhancement was proposed as a signature of the quark-gluon plasma (QGP) created in relativistic heavy-ion collisions [1] and has been a subject of intensive theoretical and experimental investigations [2]. Since the incident protons and neutrons are composed of \(u\) and \(d\) quarks, the strange quarks observed in the aftermath of the collision can only originate from \(s\bar{s}\) pair production. Lattice Quantum Chromodynamics (QCD) calculations predict that the quark-hadron phase transition occurs at a temperature of approximately 150 MeV.
Thus, in the QGP phase, where the temperature is higher than the \(s\)-quark mass, strangeness may be abundantly produced via flavor creation (\(qq\to s\bar{s}\), \(gg\to s\bar{s}\)) and gluon splitting (\(g\to s\bar{s}\)), leading to enhanced production of strangeness in the final state [1; 3]. The investigation of multi-strange hyperon production is particularly valuable for studying the equilibration of strangeness in the QGP. The yields of \(\Omega\) hyperons in heavy-ion collisions, for example, have been measured to be significantly higher than those in \(p\)+\(p\) collisions scaled by the number of participants at the CERN Super Proton Synchrotron (SPS) [4], the Relativistic Heavy Ion Collider (RHIC) [5; 6], and the Large Hadron Collider (LHC) [7; 8]. Furthermore, many strange hadrons are believed to have small hadronic rescattering cross-sections, so that they retain information from the hadronization stage and can be used to probe the phase boundary of the quark-hadron transition [9; 10; 11]. The RHIC Beam Energy Scan (BES) program aims to search for a possible critical point in the QCD phase diagram, where strangeness production also plays a major role [12]. At low BES energies, the measured ratios of anti-baryons to baryons at midrapidities are significantly lower than unity for three hyperons (\(\Lambda\), \(\Xi\), and \(\Omega\)). Therefore, these hyperons must carry a net baryon number. The baryon number transport dynamics have been a subject of interest in heavy-ion collision physics since its inception [13; 14; 15]. The baryon number in a proton or neutron may be attributed to valence \(u\) and \(d\) quarks, each carrying 1/3 of the baryon number. However, in high-energy collisions the valence quarks tend to inherit a significant fraction of the incident nucleon momentum, making them ineffective in transporting baryon number from beam rapidities to midrapidities.
Previous theoretical calculations have proposed the existence of topological objects known as gluon junctions, which can effectively convey baryon number over large rapidity gaps in nuclear collisions [16]. Once a gluon junction reaches midrapidities during a nuclear collision, it must emerge as a baryon in the final state, with its flavor determined by the quark flavors present in the surrounding QGP medium. Such exotic dynamics have a particularly discernible impact on net \(\Omega\) hyperons, as \(\Omega\) hyperons consist of three \(s\) quarks that must be pair-produced. The quantum numbers of strangeness and baryon number are strictly conserved in nuclear collisions, leading to correlations among particles in the final state. To gain insights into the production dynamics of \(\Omega\) hyperons, we propose to measure the correlations between \(\Omega\) and other particles, namely \(K^{+}\), \(\bar{\Lambda}\), and \(\bar{\Xi}^{+}\), along with their respective anti-particle pairs. By examining the shape and strength of these correlation functions, we illustrate the extent to which we can quantitatively characterize the role of conservation laws in the \(\Omega\) production dynamics in nuclear collisions. These measurements will also provide a baseline reference for the search for exotic dynamics such as gluon junction transport. In this paper, we use a multiphase transport (AMPT) model [17] to simulate Au+Au collisions at \(\sqrt{s_{NN}}=7.7\) GeV and 14.6 GeV and present the \(\Omega\)-hadron correlations from these simulations. In Sec. II, we briefly describe the AMPT simulations and the analysis methods. The correlation results and discussions are presented in Sec. III, followed by a summary in Sec. IV.
## II AMPT Simulations and Analysis Methods

### AMPT Simulations

To investigate the impact of different hadronization schemes on baryon number transport dynamics, we exploit both the default and string-melting (SM) versions of the AMPT model to simulate Au+Au collisions. The initial phase space is provided by the HIJING model [18; 19; 20; 21], which is a Monte Carlo event generator for parton and particle production in high-energy hadronic and nuclear collisions. In the default version of AMPT, minijets and their parent nucleons form excited strings after the partonic interactions of minijets, and these strings fragment into hadrons via Lund string fragmentation [22]. In the SM version, the strings are converted through the "string melting" mechanism into partons, which subsequently interact with each other during the evolution, and the coalescence mechanism is used to combine partons into hadrons. The parton interactions are described by the Boltzmann equation and solved with the ZPC model [23], which only includes two-body elastic scatterings. In the quark coalescence process, the two or three nearest partons (quarks and anti-quarks) in phase space recombine to form a meson or a baryon. The AMPT version v1.25t4cu/v2.25t7cu, which strictly conserves net electric charge, strangeness, and baryon number, is employed in our study. We have generated more than 50 million minimum-bias events of Au+Au collisions at both 7.7 GeV and 14.6 GeV.

### \(\Omega\) Production and Conservation of Strangeness and Baryon Number

The \(\Omega^{-}\) production is governed by both strangeness conservation (SC) and baryon number conservation. Additionally, the presence of net \(\Omega\) hyperons at midrapidities indicates the involvement of baryon number transport (BNT) dynamics. Figure 1 illustrates two basic scenarios based on the quark coalescence picture for the \(\Omega^{-}\) production. In Scenario 1 (left side of Fig.
1), three \(\bar{s}\) quarks are pair-produced with the three \(s\) quarks in \(\Omega^{-}\), and they may combine with \(u\) or \(d\) quarks from the incident nuclei to form three kaons. In this case, the \(\Omega^{-}\) hyperon does not have an accompanying anti-baryon and carries a baryon number initially residing in the \(u\) and \(d\) quarks from the colliding nuclei. In exotic dynamics involving gluon junctions, the production of the three \(s\)-\(\bar{s}\) pairs may be combined with a gluon junction, which gives rise to \(\Omega^{-}\). Consequently, Scenario 1 would describe BNT via a gluon junction to the \(\Omega\) hyperon, while the valence \(u/d\) quarks form the kaons [15]. In the default version, associated production of \(\Omega\) and kaons could also yield features qualitatively similar to those of the coalescence picture. Therefore, Scenario 1 encompasses contributions from both ordinary physical processes and the gluon junction dynamics. The AMPT model in our simulation does not invoke the gluon junction dynamics, and our results may serve as a baseline for the experimental search for such exotic dynamics. In Scenario 2 (right side of Fig. 1), the three pair-produced \(\bar{s}\) quarks combine with \(u(\bar{u})\) or \(d(\bar{d})\) quarks to form an anti-hyperon and a kaon. For instance, \(\bar{\Xi}^{+}(\bar{\Xi}^{0})\) and \(K^{0}(K^{+})\) could emerge alongside the \(\Omega^{-}\) hyperon. In this scenario, the baryon number in \(\Omega^{-}\) is balanced by another type of anti-hyperon, and no direct BNT from the incident nuclei is present.
To characterize the numbers of kaons and anti-baryons associated with the \(\Omega\) production, we introduce

\[\Delta N_{A}\equiv\langle A\rangle_{\rm w.\Omega^{-}}-\langle A\rangle_{\rm w.o.\Omega^{-}}, \tag{1}\]

where \(\langle A\rangle_{\rm w.\Omega^{-}}\) and \(\langle A\rangle_{\rm w.o.\Omega^{-}}\) denote the average numbers of particle \(A\) in events with one \(\Omega^{-}\) and in events without any \(\Omega^{-}\), respectively. Here, we assume that all other aspects of the two event classes are the same.

Figure 1: Schematic illustration of two possible scenarios for the \(\Omega^{-}\) production based on the quark coalescence picture with strangeness and baryon number conservation. In Scenario 1 (left), \(\Omega^{-}\) carries the baryon number initially residing in the \(u\) and \(d\) quarks from the colliding nuclei. In Scenario 2 (right), \(\Omega^{-}\) does not carry a net baryon number. Associated production of \(\Omega\) and kaons in the default version could have qualitative features similar to Scenario 1 in the coalescence picture.

Table 1 lists the \(\Delta N_{K}\) and \(\Delta N_{\bar{B}}\) values expected in the two scenarios, where \(K\) represents both \(K^{+}\) and \(K^{0}\), and \(\bar{B}\) refers to anti-baryons (\(\bar{\Lambda}\), \(\bar{\Sigma}\), \(\bar{\Xi}\), and so on) associated with the \(\Omega^{-}\) production. In our study, we choose the beam energies of 7.7 and 14.6 GeV in order to limit the fraction of Au+Au events that produce multiple \(\Omega\) (\(\bar{\Omega}\)) hyperons. Compared with Scenario 2, Scenario 1 exhibits a stronger \(\Omega^{-}\)-\(K\) correlation and a weaker \(\Omega^{-}\)-\(\bar{B}\) correlation.

### Correlation Using the Event Mixing Technique

To divide out the combinatorial background, we first analyze the correlations between \(\Omega^{-}\) and strange hadrons with the traditional normalization using mixed events.
The correlation function is

\[C(k^{*})=\mathcal{N}\frac{A(k^{*})}{B(k^{*})}, \tag{2}\]

where \(A(k^{*})\) is the same-event distribution, \(B(k^{*})\) is the mixed-event distribution, and \(k^{*}\equiv\frac{1}{2}|k_{1}-k_{2}|\) is the reduced momentum in the pair rest frame. \(\mathcal{N}\) is the normalization factor determined by matching the same-event and mixed-event correlations in the uncorrelated phase space, for example, by requiring \(C(k^{*}>1\) GeV\(/c)=1\). The event mixing normalization technique allows for the investigation of the correlation length between two types of particles in momentum space by analyzing their distributions. The underlying assumption is that, in the absence of any correlation, the distribution in mixed events should be equivalent to that in the same events, thereby representing the combinatorial background. By dividing out this mixed-event background, one can obtain the correlation between the two particles of interest. It is crucial to choose a proper \(k^{*}\) range for normalization in order to achieve meaningful results. Typically, the \(k^{*}\) range with high counting density is selected for normalization, as it provides a reliable estimate of the background. This method has also been applied to examine the correlation functions for \(\Omega^{-}\)-\(K^{+}\), \(\Omega^{-}\)-\(\bar{\Lambda}^{0}\), and \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) in terms of relative rapidity and relative transverse momentum. However, it is important to note that, due to the normalization procedure, this correlation function cannot fully capture the possibility of multiple kaons being correlated with the \(\Omega\) production, nor can it fully account for the sensitivity of the correlation to the production dynamics.
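The mixed-event construction of Eq. (2) can be sketched as follows. Here \(k^{*}\) is computed exactly from the pair invariant mass \(M\) via the standard two-body relation \(k^{*}=\sqrt{[M^{2}-(m_{1}+m_{2})^{2}][M^{2}-(m_{1}-m_{2})^{2}]}/(2M)\); the pair lists, binning, and normalization range are toy inputs, not the AMPT analysis itself.

```python
import math

# Sketch of the mixed-event correlation C(k*) = N * A(k*)/B(k*).
# Four-momenta are (E, px, py, pz) in GeV.

def kstar(p1, m1, p2, m2):
    """Momentum of either particle in the pair rest frame."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    M2 = E * E - px * px - py * py - pz * pz
    lam = (M2 - (m1 + m2) ** 2) * (M2 - (m1 - m2) ** 2)
    return math.sqrt(max(lam, 0.0)) / (2.0 * math.sqrt(M2))

def correlation(same_pairs, mixed_pairs, masses, bins, norm_range):
    """C(k*): same-event over mixed-event histogram, normalized so that
    the two match in the (assumed uncorrelated) norm_range of k* (GeV/c)."""
    def hist(pairs):
        h = [0] * len(bins)
        for p1, p2 in pairs:
            k = kstar(p1, masses[0], p2, masses[1])
            for i, (lo, hi) in enumerate(bins):
                if lo <= k < hi:
                    h[i] += 1
        return h
    A, B = hist(same_pairs), hist(mixed_pairs)
    in_norm = [i for i, (lo, hi) in enumerate(bins)
               if lo >= norm_range[0] and hi <= norm_range[1]]
    norm = sum(B[i] for i in in_norm) / max(sum(A[i] for i in in_norm), 1)
    return [norm * a / b if b else 0.0 for a, b in zip(A, B)]
```

For back-to-back pairs in the lab frame, the pair rest frame coincides with the lab frame and `kstar` returns the common momentum magnitude, which provides a simple consistency check.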
### Correlations Using Combinatorial Background Subtraction

In order to make the correlation measurement sensitive to the number of associated hadrons, we introduce the combinatorial background subtracted (CBS) correlations, obtained by taking the difference between the \(\Omega\)-hadron correlation and the \(\bar{\Omega}\)-hadron correlation, each normalized by the corresponding number of \(\Omega\) or \(\bar{\Omega}\) hyperons. For example, the \(\Omega^{-}\)-\(K^{+}\) correlation is defined as

\[C^{\rm CBS}_{\Omega^{-}K^{+}}(k^{*})=\frac{dN_{\Omega^{-}K^{+}}/dk^{*}}{N_{\Omega^{-}}}-\frac{dN_{\bar{\Omega}^{+}K^{+}}/dk^{*}}{N_{\bar{\Omega}^{+}}}. \tag{3}\]

This background subtraction approach is intended to extract the main component of the correlation due to SC in the \(\Omega^{-}\)-\(K^{+}\) pairs. The opposite-sign pair (\(\Omega^{-}\)-\(K^{+}\) or \(\bar{\Omega}^{+}\)-\(K^{-}\)) distribution in the first term contains the signal, while the same-sign pair distribution in the second term (\(\bar{\Omega}^{+}\)-\(K^{+}\) or \(\Omega^{-}\)-\(K^{-}\)) models the uncorrelated background. This subtraction scheme is sensitive to the difference in the number of kaons between events with \(\Omega^{-}\) and events with \(\bar{\Omega}^{+}\), as well as to the phase space distribution of the extra kaons. There may be variations of kinematic phase spaces for these pairs in nuclear collisions, and such effects can be mitigated with this normalization scheme and with fine collision centrality bins. Similar approaches have been applied to study the correlations between \(\Omega\) and other hadrons.

## III Correlation Results and Discussions

### Strangeness Conservation and Strange Hadron Yields

We first list in Table 2 the AMPT simulations of the difference in the number of \(s\bar{s}\) pairs between events with one \(\Omega^{-}\) and events without any \(\Omega^{-}\) or \(\bar{\Omega}^{+}\) in Au+Au collisions at \(\sqrt{s_{NN}}=7.7\) GeV and 14.6 GeV.
For events with one \(\Omega\), at least three \(s\bar{s}\) pairs are produced because of SC. The AMPT results are slightly greater than three, and the excess indicates that the underlying strangeness production may affect the probability of the \(\Omega\) formation. Compared with the SM version of AMPT, the default version yields a slightly larger number of \(s\bar{s}\) pairs, as well as a smaller \(\Omega\) formation probability. This correlation is presumably due to the difference in the formation dynamics and/or the \(s\bar{s}\) phase space. The AMPT simulations suggest that at the lower collision energies, slightly more strangeness in the underlying event is needed to form \(\Omega\) hyperons than at higher energies. Table 3 shows the AMPT calculations of \(\Delta N_{K}\) and \(\Delta N_{\bar{B}}\) as defined in Eq. (1). When compared with the expectations in Table 1 corresponding to the two \(\Omega\) production scenarios, these numbers indicate that the \(\Omega\) production is likely to receive contributions from both scenarios. The SM version seems to favor Scenario 1, with \(\Delta N_{K}\) values close to three and lower \(\Delta N_{\bar{B}}\) values. Strangeness and baryon number can be represented by various particle types in the final states of nuclear collisions. For example, \(s\) quarks can exist in \(K^{+}\) and \(K^{0}\). The actual distributions of strangeness and baryon number among the final-state particles could be sensitive to nuclear dynamics and may also depend on beam energy. Figures 2 and 3 show the AMPT results of the difference in strange hadron yields between \(\Omega^{-}\) events and non-\(\Omega^{-}\) events of Au+Au collisions at 7.7 GeV and 14.6 GeV, respectively. At 7.7 GeV and 14.6 GeV, the correlations between \(\Omega^{-}\) and kaons are stronger in the SM version, but the correlations between \(\Omega^{-}\) and anti-hyperons are stronger in the default version.
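The counting behind \(\Delta N_{K}\) and \(\Delta N_{\bar{B}}\) (Eq. (1)) amounts to comparing average multiplicities in two event classes; a minimal sketch with hypothetical toy events, not actual AMPT output:

```python
# Minimal sketch of the Delta_N_A observable of Eq. (1): the mean number
# of particle A in events with exactly one Omega-, minus the mean in
# events with no Omega- or anti-Omega+.  The events below are toy
# records (dicts of multiplicities).

def delta_n(events, species):
    with_omega = [e for e in events if e.get("Omega-", 0) == 1]
    without = [e for e in events
               if e.get("Omega-", 0) == 0 and e.get("Omegabar+", 0) == 0]
    mean = lambda evts: sum(e.get(species, 0) for e in evts) / len(evts)
    return mean(with_omega) - mean(without)

# Toy illustration of Scenario 1: the Omega- event carries three extra kaons
# but no extra anti-hyperon.
toy_events = [
    {"Omega-": 1, "K": 10, "antihyperon": 1},
    {"K": 7, "antihyperon": 1},
    {"K": 7, "antihyperon": 1},
]
```

On these toy events, `delta_n(toy_events, "K")` gives 3 and `delta_n(toy_events, "antihyperon")` gives 0, matching the Scenario 1 expectation of Table 1.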
For \(\Omega^{-}\) events, the number of kaons is much larger than the number of anti-hyperons, suggesting that the correlations between \(\Omega^{-}\) and kaons are much stronger than those between \(\Omega^{-}\) and anti-hyperons. These numbers seem to support that Scenario 1 contributes more significantly to \(\Omega\) yields in both versions. We next examine the correlation functions in the pair rest frame using the event mixing normalization. Figure 4 presents the correlation functions for (a) \(\Omega^{-}\)-\(K^{+}\), (b) \(\Omega^{-}\)-\(\bar{\Lambda}\), and (c) \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) pairs for 0-5% centrality Au + Au collisions at 7.7 GeV. The normalization between the same event and the mixed event distributions is determined by the \(k^{*}\) region of 0.6-1.5 GeV/\(c\). There is a distinct correlation between \(\Omega^{-}\) and \(K^{+}\), indicating that SC must play a major role in the \(\Omega^{-}\) and \(K^{+}\) yields, with the typical correlation length on the order of less than \(k^{*}\) of 0.5 GeV/\(c\). The \(\Omega^{-}\) may also be correlated with the anti-hyperons \(\bar{\Lambda}\) and \(\bar{\Xi}^{+}\), but the current simulated results do not have sufficient statistics to be definitive. Figure 5 shows (a) \(\Omega^{-}\)-\(K^{+}\), (b) \(\Omega^{-}\)-\(\bar{\Lambda}\), and (c) \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) correlations for 0-5% centrality Au + Au collisions at 14.6 GeV. The \(\Omega^{-}\)-\(K^{+}\) correlation at 14.6 GeV seems to be less prominent than that at 7.7 GeV. This could be explained by our normalization scheme, which involves the dilution of the \(\Omega^{-}\)-\(K^{+}\) correlation by uncorrelated kaons; such kaons are expected to be more abundant at 14.6 GeV. According to Table 1, the correlation between \(\Omega\) and anti-hyperons may also be susceptible to the two \(\Omega\) production scenarios. The middle and right panels of Figs.
4 and 5 show the \(\Omega^{-}\)-\(\bar{\Lambda}^{0}\) and \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) correlation functions using the event-mixing technique at 7.7 GeV and 14.6 GeV, respectively. At both energies, the \(\Omega^{-}\)-\(\bar{\Lambda}^{0}\) results lack enough statistics for a definitive conclusion, whereas some level of the \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) correlation may exist. However, there is no significant difference between \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) and \(\bar{\Omega}^{+}\)-\(\Xi^{-}\), similar to the \(\Omega\)-\(K\) results. For both beam energies, there is no significant difference in the observed correlations between the two AMPT hadronization schemes. It seems that the correlations using the event-mixing technique are only sensitive to the kinematic region where SC links the \(s\bar{s}\) pairs. The magnitude differences shown in Table 3 are, however, not quantitatively reflected in the measured correlations because of the normalization scheme. We will therefore explore the CBS correlations within the same event framework and background subtraction scheme. Our goal is to use the \(\bar{\Omega}^{+}\)-\(K^{-}\) correlation (Scenario 2 only) as a baseline for the \(\Omega^{-}\)-\(K^{+}\) correlation. For each of these CBS correlations, we again select events with one \(\Omega^{-}\) or \(\bar{\Omega}^{+}\) from the 0-5% most central collisions. As described in Eq. (3), the combinatorial background of the \(\Omega^{-}\)-\(K^{+}\) correlation is modeled by the \(\bar{\Omega}^{+}\)-\(K^{+}\) correlation based on events with one \(\bar{\Omega}^{+}\). Because \(\bar{\Omega}^{+}\) and \(K^{+}\) both contain \(\bar{s}\) quarks, and the centrality bin is narrow, the \(\bar{\Omega}^{+}\)-\(K^{+}\) correlation is a good candidate for the combinatorial background in the \(\Omega^{-}\)-\(K^{+}\) correlation.
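As an aside, the same-event/mixed-event normalization used for Figs. 4 and 5 can be sketched as follows (illustrative names, not from the AMPT analysis code); the same-event histogram is divided by the mixed-event one, with the ratio scaled so that it averages to unity in the quoted 0.6-1.5 GeV/\(c\) window.

```python
import numpy as np

def mixed_event_correlation(kstar_same, kstar_mixed, edges, norm=(0.6, 1.5)):
    """C(k*) = N_same(k*)/N_mixed(k*), scaled to unity in the `norm` window."""
    kstar_same = np.asarray(kstar_same, dtype=float)
    kstar_mixed = np.asarray(kstar_mixed, dtype=float)
    same = np.histogram(kstar_same, bins=edges)[0].astype(float)
    mixed = np.histogram(kstar_mixed, bins=edges)[0].astype(float)
    lo, hi = norm
    n_same = np.count_nonzero((kstar_same >= lo) & (kstar_same < hi))
    n_mixed = np.count_nonzero((kstar_mixed >= lo) & (kstar_mixed < hi))
    scale = n_mixed / n_same  # forces <C> ~ 1 inside the normalization window
    with np.errstate(divide="ignore", invalid="ignore"):
        return scale * same / mixed  # empty mixed-event bins yield NaN
```

Bins with no mixed-event entries are returned as NaN rather than zero, so they can be excluded from any subsequent fit.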
Similarly, we can also take the difference between \(\bar{\Omega}^{+}\)-\(K^{-}\) and \(\Omega^{-}\)-\(K^{-}\) to calculate the \(\bar{\Omega}^{+}\)-\(K^{-}\) correlation, which represents contributions only from SC. Therefore, any difference between the \(\Omega^{-}\)-\(K^{+}\) and \(\bar{\Omega}^{+}\)-\(K^{-}\) correlations will be sensitive to the BNT dynamics, which is absent in Scenario 2.

Figure 4: Correlation functions between \(\Omega^{-}\) and strange hadrons using the event mixing normalization in the 0–5% centrality range of Au + Au collisions at 7.7 GeV (\(|\eta|<1\)).

Figure 5: Correlation functions between \(\Omega^{-}\) and strange hadrons using the event mixing normalization in the 0–5% centrality range of Au + Au collisions at 14.6 GeV (\(|\eta|<1\)).

Figures 6 and 7 show the CBS correlations for (left) \(\Omega^{-}\)-\(K^{+}\) and (right) \(\bar{\Omega}^{+}\)-\(K^{-}\) at 7.7 GeV and 14.6 GeV, respectively. The correlations are shown as functions of \(k^{*}\), rapidity difference (\(\Delta y\)), and transverse momentum difference (\(\Delta p_{T}\)) in the three rows, respectively. At 7.7 GeV, the two AMPT versions exhibit clear differences. Compared with the SM version, the default version shows a stronger \(\bar{\Omega}^{+}\)-\(K^{-}\) correlation, presumably indicating a stronger and more localized SC. For the \(\Omega^{-}\)-\(K^{+}\) correlation, which is sensitive to both SC and BNT dynamics, the total magnitudes (integrals) are similar between the two AMPT versions, but the correlation shapes differ as functions of both \(k^{*}\) and \(\Delta p_{T}\). In the SM version, the coalescence formation mechanism and the BNT dynamics seem to yield a narrower correlation function between \(\Omega^{-}\) and \(K^{+}\). At 14.6 GeV (Fig. 7), the difference between the two hadronization schemes is relatively small.
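For reference, the three kinematic variables used here can be computed from the pair four-momenta. In the usual femtoscopic convention, \(k^{*}\) is the momentum of either particle in the pair rest frame, which for an on-shell pair has a closed form in terms of the pair invariant mass; the sketch below (illustrative names, not the AMPT analysis code) also evaluates \(\Delta y\) and \(\Delta p_{T}\).

```python
import math

def pair_kinematics(p1, p2, m1, m2):
    """p = (E, px, py, pz); returns (kstar, dy, dpt).

    kstar = sqrt((M^2 - (m1+m2)^2)(M^2 - (m1-m2)^2)) / (2M),
    with M the pair invariant mass: the momentum of either
    particle in the pair rest frame.
    """
    E, px, py, pz = (a + b for a, b in zip(p1, p2))
    m2_pair = E * E - px * px - py * py - pz * pz  # M^2 of the pair
    M = math.sqrt(m2_pair)
    kstar = math.sqrt((m2_pair - (m1 + m2) ** 2)
                      * (m2_pair - (m1 - m2) ** 2)) / (2.0 * M)
    y = lambda p: 0.5 * math.log((p[0] + p[3]) / (p[0] - p[3]))  # rapidity
    pt = lambda p: math.hypot(p[1], p[2])                        # transverse momentum
    return kstar, abs(y(p1) - y(p2)), abs(pt(p1) - pt(p2))
```

For two equal-mass particles emitted back to back (already in their rest frame), \(k^{*}\) reduces to the momentum of either particle, which provides a quick consistency check.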
The shape difference in the correlation as a function of \(k^{*}\) and \(\Delta p_{T}\) is still visible, albeit much less prominent than at 7.7 GeV. The CBS correlations at 7.7 GeV and 14.6 GeV suggest that the event-level \(\Omega^{-}\)-\(K^{+}\) correlation is stronger than the \(\bar{\Omega}^{+}\)-\(K^{-}\) one. Since \(\bar{\Omega}^{+}\) is only produced in Scenario 2, whereas \(\Omega^{-}\) can be produced in both scenarios, the stronger \(\Omega^{-}\)-\(K^{+}\) correlation indicates the presence of Scenario 1 for the \(\Omega^{-}\) production at both beam energies. Particularly at 7.7 GeV, Scenario 1 seems to contribute more to the \(\Omega^{-}\) production in the SM version, as the difference in correlation amplitudes between \(\Omega^{-}\)-\(K^{+}\) and \(\bar{\Omega}^{+}\)-\(K^{-}\) becomes noticeably larger.

Figure 8: CBS correlations between \(\Omega^{-}(\bar{\Omega}^{+})\) and \(\bar{\Lambda}(\Lambda)\) in 0–5% most central Au + Au collisions at 7.7 GeV (\(|\eta|<1\)). The left columns show the results of \(40\times C^{\rm CBS}_{\Omega^{-}\bar{\Lambda}}\), and the right columns, those of \(C^{\rm CBS}_{\bar{\Omega}^{+}\Lambda}\).

Figures 8 and 9 show the CBS correlations between \(\Omega\) and \(\bar{\Lambda}\), and their anti-particle pairs, as a function of three kinematic variables in 0-5% most central Au + Au collisions at 7.7 GeV and 14.6 GeV, respectively. Besides strangeness and baryon number conservation that govern the correlations between \(\Omega^{-}\) and \(\bar{\Lambda}\), the \(\bar{\Omega}^{+}\)-\(\Lambda\) correlations are also sensitive to BNT dynamics. At 7.7 GeV, the default version of AMPT displays stronger correlations than the SM version. For both AMPT versions, the \(\Omega^{-}\)-\(\bar{\Lambda}\) correlations are much weaker than the \(\bar{\Omega}^{+}\)-\(\Lambda\) ones, confirming that \(\bar{\Omega}^{+}\) is only produced via Scenario 2.
The default version also indicates a larger discrepancy between \(\bar{\Omega}^{+}\)-\(\Lambda\) and \(\Omega^{-}\)-\(\bar{\Lambda}\), which suggests a more prominent contribution from Scenario 2, complementary to the information on the difference between the \(\Omega^{-}\)-\(K^{+}\) and \(\bar{\Omega}^{+}\)-\(K^{-}\) results. At 14.6 GeV, the \(\bar{\Omega}^{+}\)-\(\Lambda\) correlations (the right column in Fig. 9) are also higher than \(\Omega^{-}\)-\(\bar{\Lambda}\) (the left column in Fig. 9) by about an order of magnitude. However, there seems to be no significant difference between the two AMPT versions at this energy. Figures 10 and 11 show the CBS correlations between \(\Omega\) and \(\bar{\Xi}\) as a function of three kinematic variables in 0-5% most central Au + Au collisions at 7.7 GeV and 14.6 GeV, respectively. At both beam energies, the stronger \(\bar{\Omega}^{+}\)-\(\Xi^{-}\) correlations relative to \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) also imply that Scenario 2 provides a larger contribution to the \(\bar{\Omega}^{+}\) production than to \(\Omega^{-}\). At 7.7 GeV, the difference in the \(\Omega^{-}\)-\(\bar{\Xi}^{+}\) correlations between the two AMPT versions suggests that the default version generates a larger contribution from Scenario 2 to the \(\Omega^{-}\) production. At 14.6 GeV, the \(\bar{\Omega}^{+}\)-\(\Xi^{-}\) correlations are still stronger than \(\Omega^{-}\)-\(\bar{\Xi}^{+}\), but no significant difference appears between the two versions.

## IV Summary

The \(\Omega\) production in nuclear collisions at the RHIC BES involves dynamics of baryon number transport, strangeness conservation, and baryon number conservation. To investigate these effects, we have used the AMPT model with both the default and the string melting versions to simulate central Au+Au collisions at 7.7 and 14.6 GeV, and showed that \(\Omega^{-}\)-\(K^{+}\) and \(\Omega^{-}\)-anti-hyperon correlations are sensitive to the dynamics.
In particular, we have considered two \(\Omega\) production scenarios, one with the three \(\bar{s}\) quarks in kaons (Scenario 1), and the other with \(\bar{s}\) quarks in an anti-hyperon and kaons (Scenario 2). Both scenarios are constrained by strangeness and baryon number conservation, and only the former is sensitive to baryon number transport. The AMPT simulations show that in Au+Au collisions, both scenarios contribute to the \(\Omega\) production, and Scenario 1 becomes more important going from 14.6 to 7.7 GeV. The shape of the correlations can also be sensitive to the hadronization schemes in the default and the string melting versions of the AMPT model. Experimental measurements of these correlations and comparisons with our AMPT simulation results could greatly advance our understanding of baryon transport dynamics and of the effects of strangeness and baryon number conservation on the \(\Omega\) production, and possibly enable future experimental searches for exotic baryon transport mechanisms such as the gluon junction.
2306.16469
Recipes for computing radiation from a Kerr black hole using Generalized Sasaki-Nakamura formalism: I. Homogeneous solutions
Central to black hole perturbation theory calculations is the Teukolsky equation that governs the propagation and the generation of radiation emitted by Kerr black holes. However, it is plagued by a long-ranged potential associated with the perturbation equation, and hence a direct numerical integration of the equation is challenging. Sasaki and Nakamura devised a formulation that transforms the equation into a new equation that is free from this issue for the case of out-going gravitational radiation. The formulation was later generalized by Hughes to work for any type of radiation. In this work, we revamp the Generalized Sasaki-Nakamura (GSN) formalism and explicitly show the transformations that convert solutions between the Teukolsky and the GSN formalism for both in-going and out-going radiation of scalar, electromagnetic and gravitational type. We derive all necessary ingredients for the GSN formalism to be used in numerical computations. In particular, we describe a new numerical implementation of the formalism, GeneralizedSasakiNakamura.jl, that computes homogeneous solutions to the perturbation equations in both the Teukolsky and the GSN formalisms. The code works well at low frequencies and is even better at high frequencies by leveraging the fact that black holes are highly permeable to waves at high frequencies. This work lays the foundation for an efficient scheme to compute gravitational radiation from Kerr black holes and an alternative way to compute quasi-normal modes of Kerr black holes.
Rico K. L. Lo
2023-06-28T18:00:39Z
http://arxiv.org/abs/2306.16469v1
Recipes for computing radiation from a Kerr black hole using Generalized Sasaki-Nakamura formalism: I. Homogeneous solutions

###### Abstract

Central to black hole perturbation theory calculations is the Teukolsky equation that governs the propagation and the generation of radiation emitted by Kerr black holes. However, it is plagued by a long-ranged potential associated with the perturbation equation, and hence a direct numerical integration of the equation is challenging. Sasaki and Nakamura devised a formulation that transforms the equation into a new equation that is free from this issue for the case of out-going gravitational radiation. The formulation was later generalized by Hughes to work for any type of radiation. In this work, we revamp the Generalized Sasaki-Nakamura (GSN) formalism and explicitly show the transformations that convert solutions between the Teukolsky and the GSN formalism for both in-going and out-going radiation of scalar, electromagnetic and gravitational type. We derive all necessary ingredients for the GSN formalism to be used in numerical computations. In particular, we describe a new numerical implementation of the formalism, GeneralizedSasakiNakamura.jl, that computes homogeneous solutions to the perturbation equations in both the Teukolsky and the GSN formalisms. The code works well at low frequencies and is even better at high frequencies by leveraging the fact that black holes are highly permeable to waves at high frequencies. This work lays the foundation for an efficient scheme to compute gravitational radiation from Kerr black holes and an alternative way to compute quasi-normal modes of Kerr black holes.
## I Introduction

The first detection of a binary black hole merger by the two detectors of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 2015 [1] marked the beginning of a new era in physics where scientists can directly observe gravitational radiation emitted from collisions of compact objects such as black holes (BHs), allowing the strong field regime of gravity to be probed. Subsequent observing runs of the Advanced LIGO [2], Advanced Virgo [3], and KAGRA [4; 5; 6] detectors have unveiled about a hundred more such gravitational waves (GWs) coming from the collisions of compact objects [7; 8; 9; 10]. With planned upgrades to the current detectors [11] and construction of new detectors [12; 13], some targeting different frequency ranges such as the Laser Interferometer Space Antenna (LISA) [14] and the Deci-hertz Interferometer Gravitational wave Observatory (DECIGO) [15], we will be observing GWs coming from various kinds of sources on a regular basis. In order to identify GW signals from noisy data and characterize properties of their sources, it is imperative to have a theoretical understanding of what those waveforms look like so that we can compare them with observations. Gravitational waveforms can be computed using a number of approaches, such as numerically solving the full non-linear Einstein field equation, or solving a linearized field equation as an approximation. BH perturbation theory is one such approximation scheme where the dynamical spacetime is decomposed into a stationary background spacetime and a small radiative perturbation on top of it. The metric of the background spacetime is known exactly, and we only need to solve, usually numerically, for the metric perturbation. See for example Refs. [16; 17; 18; 19] for a comprehensive review on BH perturbation theory.
At the core of BH perturbation theory is the Teukolsky formalism [20; 21; 22; 23] where a rotating (and uncharged) BH of mass \(M\) and angular momentum per unit mass \(a\) is used as the background spacetime. The metric for such a spacetime is known as the Kerr metric [24], and in the Boyer-Lindquist coordinates \((t,r,\theta,\phi)\) the exact line element \(ds\) is given by [25; 26] \[ds^{2}=-\left(1-\frac{2Mr}{\Sigma}\right)dt^{2}-\frac{4Mar\sin^{ 2}\theta}{\Sigma}dtd\phi+\frac{\Sigma}{\Delta}dr^{2}\\ +\Sigma d\theta^{2}+\sin^{2}\theta\left(r^{2}+a^{2}+\frac{2Ma^{2} r\sin^{2}\theta}{\Sigma}\right)d\phi^{2}, \tag{1}\] where \(\Sigma\equiv r^{2}+a^{2}\cos^{2}\theta\) and \(\Delta\equiv r^{2}-2Mr+a^{2}=(r-r_{+})(r-r_{-})\) with \(r_{+}=M+\sqrt{M^{2}-a^{2}}\) as the outer event horizon and \(r_{-}=M-\sqrt{M^{2}-a^{2}}\) as the inner Cauchy horizon. In the Teukolsky formalism, instead of solving directly the perturbed radiative field (e.g. the metric for gravitational radiation, and the electromagnetic field tensor for electromagnetic radiation), we solve for its (gauge-invariant) _scalar_ projections onto a tetrad. For instance, the (Weyl) scalar \(\psi_{0}\) and \(\psi_{4}\) contain information about the in-going and the out-going gravitational radiation respectively [20]. 
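The background-metric quantities entering Eq. (1) are simple to evaluate numerically. The helper functions below (illustrative names, geometric units \(G=c=1\)) compute \(\Delta\), \(\Sigma\) and the horizon radii \(r_{\pm}\); the factorization \(\Delta=(r-r_{+})(r-r_{-})\) serves as a quick self-check.

```python
import math

def horizons(a, M=1.0):
    """Outer/inner horizon radii r_pm = M +- sqrt(M^2 - a^2), assuming |a| <= M."""
    root = math.sqrt(M * M - a * a)
    return M + root, M - root

def delta(r, a, M=1.0):
    """Delta = r^2 - 2 M r + a^2."""
    return r * r - 2.0 * M * r + a * a

def sigma(r, theta, a):
    """Sigma = r^2 + a^2 cos^2(theta)."""
    return r * r + (a * math.cos(theta)) ** 2
```

In the Schwarzschild limit \(a\to 0\) this reduces to \(r_{+}=2M\) and \(r_{-}=0\), as expected.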
Teukolsky showed that these scalar quantities all follow the same form of the master equation (aptly named the Teukolsky equation), and it is given by [20] \[\left[\frac{\left(r^{2}+a^{2}\right)^{2}}{\Delta}-a^{2}\sin^{2} \theta\right]\frac{\partial^{2}\psi}{\partial t^{2}}+\frac{4Mar}{\Delta}\frac{ \partial^{2}\psi}{\partial t\partial\phi}\\ +\left[\frac{a^{2}}{\Delta}-\frac{1}{\sin^{2}\theta}\right]\frac{ \partial^{2}\psi}{\partial\phi^{2}}-\Delta^{-s}\frac{\partial}{\partial r} \left(\Delta^{s+1}\frac{\partial\psi}{\partial r}\right)\\ -\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin \theta\frac{\partial\psi}{\partial\theta}\right)-2s\left[\frac{a\left(r-M \right)}{\Delta}+\frac{i\cos\theta}{\sin^{2}\theta}\right]\frac{\partial\psi }{\partial\phi}\\ -2s\left[\frac{M\left(r^{2}-a^{2}\right)}{\Delta}-r-ia\cos \theta\right]\frac{\partial\psi}{\partial t}\\ +\left(s^{2}\cot^{2}\theta-s\right)\psi=4\pi\Sigma T, \tag{2}\] where \(T\) is a source term for the Teukolsky equation, and \(\psi\) can correspond to different scalar projections with different spin weights \(s\). In particular, \(s=0\) for scalar radiation, \(s=\pm 1\) for in-going and out-going electromagnetic radiation respectively, and \(s=\pm 2\) for in-going and out-going gravitational radiation respectively. For example, \(\psi_{0}\) satisfies Eq. (2) by setting \(\psi\equiv\psi_{0}\) and \(s=2\), whereas \(\psi_{4}\) satisfies the equation by setting \(\psi\equiv(r-ia\cos\theta)^{4}\psi_{4}\) and \(s=-2\). Despite its fearsome look, Eq. (2) is actually separable by writing \(\psi(t,r,\theta,\phi)=R(r)S(\theta,\phi)e^{-i\omega t}\). The separation of variables gives one ordinary differential equation (ODE) for the angular part in \(\theta\) (since the \(\phi\) dependence must be \(\psi\sim e^{im\phi}\) with \(m\) being an integer due to the azimuthal symmetry of a Kerr BH), and another ODE for the radial part in \(r\). 
We discuss the angular part of the Teukolsky equation and the recipes for solving the equation numerically in more depth in App. A. Limiting ourselves to consider the source-free (\(T=0\)) case for now (we consider the \(T\neq 0\) case in a subsequent paper; see Sec. IV.1), the ODE for the radial part is given by [20] \[\Delta^{-s}\frac{d}{dr}\left(\Delta^{s+1}\frac{dR}{dr}\right)-V_{\rm T}(r)R=0, \tag{3}\] with \[V_{\rm T}(r)=\lambda-4is\omega r-\frac{K^{2}-2is(r-M)K}{\Delta}, \tag{4}\] where \(K\equiv(r^{2}+a^{2})\omega-ma\), and \(\lambda\) is a separation constant related to the angular Teukolsky equation (see App. A, and in particular Eq. (10)). The general solution of \(\psi(t,r,\theta,\phi)\) can then be written as \[\psi(t,r,\theta,\phi)=\sum_{\ell m\omega}R_{\ell m\omega}(r)\,{}_{s}S_{\ell m\omega}(\theta,\phi)e^{-i\omega t}, \tag{5}\] where \(\ell\) labels an eigenfunction of the angular Teukolsky equation (c.f. App. A). While the radial Teukolsky equation in Eq. (3) looks benign, it is challenging to solve it numerically in that form because the potential associated to the ODE is long-ranged. To see this, we can re-cast Eq. (3) into the Schrodinger equation form that is schematically given by \[\frac{d^{2}Y}{dr_{*}^{2}}+\left(\omega^{2}-V_{Y}\right)Y=0, \tag{6}\] with \(r_{*}\) being the tortoise coordinate for Kerr BHs defined by \[\frac{dr_{*}}{dr}=\frac{r^{2}+a^{2}}{\Delta}, \tag{7}\] where \(Y\) is some function transformed from the Teukolsky function \(R\), and \(V_{Y}\) is the potential associated to the ODE [21]. For the radial Teukolsky equation, the potential \(V_{Y}\) is long-ranged2 in the sense that \(V_{Y}\sim-2is\omega/r\) as \(r\rightarrow\infty\), as opposed to a short-ranged potential that falls off as \(1/r^{n}\) with \(n\geq 2\) (for an illustration, see Fig. 3). The long-ranged-ness of the potential \(V_{Y}\) implies that the two wave-like "left-going" and "right-going" solutions of Eq.
(3) will have different power-law dependences of \(r\) in their wave amplitudes as \(r\rightarrow\infty\)[21; 27]. A direct numerical integration of Eq. (3) will suffer from the problem where the solution with a higher power of \(r\) in its asymptotic amplitude will overwhelm the other solution and eventually take over the entire numerical solution due to finite precision in computation when \(r\) becomes large [21; 27]. In fact, the same problem arises when \(r\to r_{+}\) (equivalently when \(\Delta\to 0\)) where the left- and the right-going waves have again different power-law dependences of \(\Delta\) in their wave amplitudes and the solution with a smaller power of \(\Delta\) in its asymptotic amplitude will overwhelm the other one numerically as \(\Delta\to 0\)[21; 27].3 Therefore, a direct numerical integration, at least with the Boyer-Lindquist coordinates, is not suitable for solving the radial Teukolsky equation accurately. Footnote 2: A prime example of a long-ranged potential is the Coulomb potential in electrostatics. Fortunately, there are other techniques that can get around this issue and allow us to solve for \(R(r)\) accurately. One such technique is the Mano-Suzuki-Takasugi (MST) method [28], originally as a low frequency expansion and later extended by Fujita and Tagoshi [29; 30] as a numerical method for solving the homogeneous radial Teukolsky equation at arbitrary frequency. The Sasaki-Nakamura (SN) formalism [31; 32; 33], which is the main topic of this paper (and subsequent papers), also enables accurate and efficient numerical computations of homogeneous solutions to the radial Teukolsky equation. In short, Sasaki and Nakamura devised a class of transformations, originally only for \(s=-2\), that convert the radial Teukolsky equation with the long-ranged potential \(V_{\rm T}\) into another ODE with a short-ranged potential. One can then solve the numerically better-behaved ODE instead. 
The transformations were later generalized by Hughes [27] to work for arbitrary integer spin-weight \(s\). Compared to the MST method, the Generalized Sasaki-Nakamura (GSN) formalism is conceptually simpler and thus easier to implement. Practically speaking, the MST method expresses a homogeneous solution to the radial Teukolsky equation \(R(r)\) in terms of special functions, which makes it ideal for analytical work. However, for numerical work there are no closed-form expressions for these special functions, and oftentimes the evaluations of these special functions involve solving some ODEs numerically! Thus, efficiency-wise the GSN formalism is not inferior, at the very least, to the MST method even at low frequencies. On the other hand, while the extension of the MST method by Fujita and Tagoshi [29; 30] allows the method to in principle compute homogeneous solutions at arbitrary frequency, practically the authors of Refs. [29; 30] reported that it was numerically challenging to find solutions when wave frequencies become somewhat large. The GSN formalism, as we will show later, becomes _even more efficient_ in those cases at high frequencies. Another appealing capability of the SN formalism has to do with computing solutions to the inhomogeneous radial Teukolsky equation. The solutions encode the physical information about the radiation emitted by a perturbed BH, say for example the GW emitted when a test particle plunges towards a BH. Based on the SN transformation (for the source-free case), the SN formalism has a prescription to convert a Teukolsky source term that could be divergent, near infinity or the horizon (or both), into a well-behaved source term.2 Footnote 2: For more discussions on solving the inhomogeneous radial Teukolsky equation using the SN formalism, see Sec.
IV.1. In this paper, we revamp the GSN formalism for the source-free case to take full advantage of the formalism for computing radiation from a Kerr BH. We explicitly show the GSN transformations for physically relevant radiation fields (\(s=0,\pm 1,\pm 2\)) that transform the radial Teukolsky equation with a long-ranged potential into a new ODE, referred to as the GSN equation, which has a short-ranged potential instead. To aid numerical computations using the GSN formalism, we derive expressions for the higher-order corrections to the asymptotic solutions of the GSN equation, improving the accuracy of numerical solutions. We also derive expressions for the frequency-dependent conversion factors that convert asymptotic amplitudes of GSN solutions to those of their corresponding Teukolsky solutions, which are needed in wave scattering problems and computations of inhomogeneous solutions. Furthermore, we describe an open-source implementation of the aforementioned GSN formalism that is written in julia, a modern programming language designed with numerical analysis and scientific computing in mind. The numerical implementation leverages the reformulation of the GSN equation, which is a second-order linear ODE, into a form of first-order non-linear ODE known as a Riccati equation to gain additional performance. Our new code is validated by comparing results with an established code, Teukolsky, that implements the MST method. The paper is structured as follows: In Sec. II, we first review the GSN formalism for the source-free case. We then derive the asymptotic behaviors and the appropriate boundary conditions for solving the GSN equation. In Sec. III, we describe our numerical implementation of the GSN formalism and compare it with the MST method. Finally, in Sec.
IV we summarize our results and briefly discuss two applications of the GSN formalism developed in this paper, namely laying the foundation for an efficient procedure to compute gravitational radiation from BHs near _both_ infinity and the horizon, and as an alternative method for determining quasi-normal modes (QNMs). For busy readers, in App. E we give "ready-to-use" expressions for the GSN transformations, the asymptotic solutions to the corresponding GSN equation, and the conversion factors to convert between the Teukolsky and the GSN formalism. Throughout this paper, we use geometric units \(c=G=M=1\), and a prime to denote differentiation with respect to \(r\).

## II Generalized Sasaki-Nakamura formalism

In this section, we first review, following Ref. [27] closely, the core idea behind the Generalized Sasaki-Nakamura (GSN) formalism, i.e. performing a transformation, which is different for each spin weight \(s\), from the Teukolsky function \(R(r)\) into a new function \(X(r_{*})\). This new function \(X(r_{*})\) is referred to as the GSN function, expressed in the tortoise coordinate \(r_{*}\) (for Kerr BHs) instead of the Boyer-Lindquist \(r\)-coordinate. A defining feature of the \(r_{*}\)-coordinate is that it maps the horizon to \(r_{*}\rightarrow-\infty\) and infinity to \(r_{*}\rightarrow\infty\). The GSN transformations were chosen such that the new ODE that \(X(r_{*})\) satisfies, which is referred to as the GSN equation, is more suitable for numerical computations than the original radial Teukolsky equation in Eq. (3).
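As mentioned in the outline above, the numerical implementation later recasts the GSN equation, a linear second-order ODE of the schematic form \(d^{2}X/dr_{*}^{2}-\mathcal{F}\,dX/dr_{*}-\mathcal{U}X=0\), into a first-order non-linear Riccati equation. A minimal derivation (our sketch; the implementation details in the code may differ): substituting the logarithmic derivative \(u\equiv\frac{1}{X}\frac{dX}{dr_{*}}\) and using the ODE to eliminate the second derivative gives

```latex
% Riccati form of  X'' - F X' - U X = 0  (primes denote d/dr_*),
% with u := X'/X:
%   u' = X''/X - (X'/X)^2 = (F X' + U X)/X - u^2
\frac{du}{dr_*} = \mathcal{U} + \mathcal{F}\,u - u^{2},
\qquad u \equiv \frac{1}{X}\frac{dX}{dr_*}
```

Solving for \(u\) rather than \(X\) avoids tracking the exponentially varying overall amplitude of \(X\), which can be recovered afterwards as \(X=\exp\int u\,dr_{*}\); this is presumably the source of the performance gain mentioned above.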
We then study the leading asymptotic behaviors, approaching the horizon \(r\to r_{+}\,(r_{*}\rightarrow-\infty)\) and approaching infinity \(r\rightarrow\infty\,(r_{*}\rightarrow\infty)\), of both the GSN equation and the GSN transformations to establish the boundary conditions to be imposed, as well as the conversion factors for converting the _complex_ amplitude of a GSN function to that of the corresponding Teukolsky function at the two boundaries. To aid numerical computations when using numerically-_finite_ inner and outer boundaries (in place of negative and positive infinity respectively in the \(r_{*}\) coordinate), we also derive the higher-order corrections to the asymptotic boundary conditions.

### Generalized Sasaki-Nakamura transformation

The GSN transformation can be broken down into two parts. The first part transforms the Teukolsky function \(R(r)\) and its derivative \(R^{\prime}(r)\) into a new set of functions \((\chi(r),\chi^{\prime}(r))\) as an intermediate step. In general, we write such a transformation as \[\chi(r)=\tilde{\alpha}(r)R(r)+\tilde{\beta}(r)R^{\prime}(r), \tag{8}\] where \(\tilde{\alpha}(r)\) and \(\tilde{\beta}(r)\) are weighting functions that generate the transformation. This kind of transformation is also known as a Generalized Darboux transformation [37], but it differs from a "conventional" Darboux transformation in that the weighting function \(\tilde{\beta}(r)\) for a conventional Darboux transformation is a constant instead of a function of \(r\). For later convenience, we rescale \(\tilde{\beta}\) by \(\Delta^{s+1}\) and write \(\alpha(r)=\tilde{\alpha}(r)\) and \(\beta(r)=\tilde{\beta}(r)\Delta^{-(s+1)}\). Differentiating Eq.
(8) with respect to \(r\) and packaging them into a matrix equation, we have [27] \[\begin{pmatrix}\chi\\ \chi^{\prime}\end{pmatrix}=\begin{pmatrix}\alpha&\beta\Delta^{s+1}\\ \alpha^{\prime}+\beta V_{\rm T}\Delta^{s}&\alpha+\beta^{\prime}\Delta^{s+1} \end{pmatrix}\begin{pmatrix}R\\ R^{\prime}\end{pmatrix}, \tag{9}\] where we have used Eq. (3) to write \(R^{\prime\prime}\) in terms of \(R,R^{\prime}\) as \[R^{\prime\prime}(r)=\frac{V_{\rm T}}{\Delta}R(r)-\frac{2(s+1)(r-1)}{\Delta}R^ {\prime}(r). \tag{10}\] The inverse transformation going from \((\chi(r),\chi^{\prime}(r))\) to \((R(r),R^{\prime}(r))\) is obtained by inverting Eq. (9) and is given by [27] \[\begin{pmatrix}R\\ R^{\prime}\end{pmatrix}=\frac{1}{\eta}\begin{pmatrix}\alpha+\beta^{\prime} \Delta^{s+1}&-\beta\Delta^{s+1}\\ -(\alpha^{\prime}+\beta V_{\rm T}\Delta^{s})&\alpha\end{pmatrix}\begin{pmatrix} \chi\\ \chi^{\prime}\end{pmatrix}, \tag{11}\] where \(\eta(r)\) is the determinant of the above matrix, which is given by [27] \[\eta=\alpha\left(\alpha+\beta^{\prime}\Delta^{s+1}\right)-\beta\Delta^{s+1} \left(\alpha^{\prime}+\beta V_{\rm T}\Delta^{s}\right). \tag{12}\] In the second step of the GSN transformation, we further rescale \(\chi(r)\) to \(X(r_{*})\) (the motivation of doing so can be found in Ref. [27]) by \[X(r_{*}(r))=\chi(r)\sqrt{(r^{2}+a^{2})\Delta^{s}}, \tag{13}\] where an analytical expression of \(r_{*}(r)\) can be obtained by integrating Eq. (7) (with a particular choice of the integration constant) such that the transformation from \(r\) to \(r_{*}\) is given by \[r_{*}(r)=r+\frac{2r_{+}}{r_{+}-r_{-}}\ln\left(\frac{r-r_{+}}{2}\right)-\frac{ 2r_{-}}{r_{+}-r_{-}}\ln\left(\frac{r-r_{-}}{2}\right). \tag{14}\] It should be noted that there is no simple analytical expression for the inverse transformation \(r=r(r_{*})\) and one has to invert \(r_{*}\) numerically, typically using root-finding algorithms (for example see App. B). 
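Eq. (14) and its numerical inversion can be sketched as follows (units \(M=1\), assuming \(|a|<1\); the bisection bracket and tolerance below are illustrative choices on our part, not a prescription from App. B).

```python
import math

def rstar(r, a):
    """Tortoise coordinate r_*(r) of Eq. (14), in units M = 1, for r > r_+."""
    root = math.sqrt(1.0 - a * a)
    rp, rm = 1.0 + root, 1.0 - root      # outer/inner horizons r_+, r_-
    return (r + 2.0 * rp / (rp - rm) * math.log((r - rp) / 2.0)
              - 2.0 * rm / (rp - rm) * math.log((r - rm) / 2.0))

def r_from_rstar(rs, a, tol=1e-12):
    """Invert r_*(r) by bisection; r_* is monotonically increasing for r > r_+."""
    rp = 1.0 + math.sqrt(1.0 - a * a)
    lo, hi = rp + 1e-12, 10.0
    while rstar(hi, a) < rs:             # expand the bracket outward if needed
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rstar(mid, a) < rs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because \(dr_{*}/dr=(r^{2}+a^{2})/\Delta>0\) outside the horizon, the map is monotonic and bisection is guaranteed to converge; a Newton iteration using Eq. (7) for the derivative would converge faster but needs a safeguard near \(r_{+}\).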
In short, the GSN transformation amounts to acting a linear differential operator \({}_{s}\Lambda\) on the Teukolsky radial function \(R(r)\) that transforms it into the GSN function \(X(r_{*})\).5 Schematically this means Footnote 5: This is a generalization of the \(\Lambda\) operator introduced in Ref. [19] for \(s=-2\) to any integer \(s\). \[X(r_{*}(r))={}_{s}\Lambda\left[R(r)\right]. \tag{15}\] Using Eq. (9) and Eq. (13) we see that the \({}_{s}\Lambda\) operator is given by \[{}_{s}\Lambda\left[R(r)\right]=\sqrt{\left(r^{2}+a^{2}\right) \Delta^{s}}\left[\left(\alpha+\beta\Delta^{s+1}\frac{d}{dr}\right)R(r)\right] \tag{16}\] While the inverse GSN transformation amounts to acting the inverse operator \({}_{s}\Lambda^{-1}\) on the GSN function that gives back the Teukolsky function. Again, schematically this can be written as \[R(r(r_{*}))={}_{s}\Lambda^{-1}\left[X(r_{*})\right]. \tag{17}\] Using Eq. (11) and Eq. (13) we see that \({}_{s}\Lambda^{-1}\) is given by \[{}_{s}\Lambda^{-1}\left[X(r_{*})\right]=\] \[\frac{1}{\eta}\left\{\left[\left(\alpha+\beta^{\prime}\Delta^{s+ 1}\right)-\beta\Delta^{s+1}\frac{d}{dr}\right]\frac{X(r_{*})}{\sqrt{\left(r^{2 }+a^{2}\right)\Delta^{s}}}\right\}. \tag{18}\] Equipped with the transformation, one can show that by substituting \(R(r),R^{\prime}(r)\) given by Eq. (11) into Eq. (3), the intermediate function \(\chi(r)\) satisfies the following ODE, which is given by [27] \[\Delta^{-s}\left(\Delta^{s+1}\chi^{\prime}\right)^{\prime}-\Delta F_{1}\chi^{ \prime}-U_{1}\chi=0, \tag{19}\] with \[F_{1}(r) =\frac{\eta^{\prime}}{\eta}, \tag{20}\] \[U_{1}(r) =V_{\rm T}+\frac{1}{\beta\Delta^{s}}\left[\left(2\alpha+\beta^{ \prime}\Delta^{s+1}\right)^{\prime}-F_{1}\left(\alpha+\beta^{\prime}\Delta^{s+ 1}\right)\right]. \tag{21}\] Further rewriting Eq. (19) in terms of \(X\) and its first and second derivatives with respect to \(r_{*}\) using Eq. 
(13) and (7), one can show that \(X(r_{*})\) satisfies the GSN equation, which is given by [27] \[\frac{d^{2}X}{dr_{*}^{2}}-\mathcal{F}(r)\frac{dX}{dr_{*}}-\mathcal{U}(r)X=0, \tag{22}\] with the GSN potentials \(\mathcal{F}(r)\) and \(\mathcal{U}(r)\) given by [27] \[\mathcal{F}(r)= \frac{\Delta F_{1}}{r^{2}+a^{2}}, \tag{23}\] \[\mathcal{U}(r)= \frac{\Delta U_{1}}{\left(r^{2}+a^{2}\right)^{2}}+G^{2}+\frac{ \Delta G^{\prime}}{r^{2}+a^{2}}-\frac{\Delta GF_{1}}{r^{2}+a^{2}}, \tag{24}\] where \[G=\frac{r\Delta}{\left(r^{2}+a^{2}\right)^{2}}+\frac{s(r-1)}{r^{2}+a^{2}}.\] While the GSN equation given by Eq. (22) looks significantly more complicated than the original radial Teukolsky equation given by Eq. (3), Eq. (22) actually represents _a collection of ODEs_ equivalent to Eq. (3) that we can engineer so that the resulting ODE has a short-ranged potential and thus can be solved more easily and efficiently with numerical algorithms. Up to this point, the weighting functions \(\alpha(r)\) and \(\beta(r)\) are arbitrary, apart from being continuous and differentiable (so that Eq. (9) and Eq. (11) make sense). However, in order to generate useful transformations, these functions have to satisfy certain criteria. For example, they can be constrained by requiring that when \(a\to 0\), the function \(X(r_{*})\) satisfies the Regge-Wheeler equation [31, 32, 27, 33]. Transformations for fields with different spin-weight \(s\) that satisfy such a constraint were first given in Ref. 
[27] and can be written in the form of \[\chi=\begin{cases}\left(\sqrt{\left(r^{2}+a^{2}\right)\Delta} \right)^{|s|}g_{0}(r)J_{-}\left[g_{1}(r)J_{-}\left[g_{2}(r)\ldots J_{-}\left[ g_{|s|}(r)\left(\frac{1}{\sqrt{r^{2}+a^{2}}}\right)^{|s|}R\right]\right]\right],&s<0\\ g_{0}(r)R,&s=0\;,\\ \left(\sqrt{\frac{r^{2}+a^{2}}{\Delta}}\right)^{s}g_{0}(r)J_{+}\left[g_{1}(r) J_{+}\left[g_{2}(r)\ldots J_{+}\left[g_{s}(r)\left(\frac{\Delta}{\sqrt{r^{2}+a^{2}}} \right)^{s}R\right]\right]\right],&s>0\end{cases} \tag{25}\] where \(J_{\pm}\) are two linear differential operators defined by \[J_{\pm}=\frac{d}{dr}\pm i\frac{K}{\Delta}. \tag{26}\] Inspecting Eq. (25), we see that for a spin-\(|s|\) field, the operator \(J_{\pm}\) will act on \(R(r)\)\(|s|\)-many times, leading to an expression relating \(\chi(r)\)_linearly_ to \(R(r),R^{\prime}(r),\ldots,R^{(|s|)}(r)\). Higher-order derivatives \(R^{(n)}(r)\) can be evaluated in terms of \(R(r),R^{\prime}(r)\) by using Eq. (10) successively for \(n\geq 2\). Therefore, by comparing Eq. (9) and Eq. (25), one can extract the appropriate \(\alpha(r)\) and \(\beta(r)\) for different \(s\) modulo some functions \(g_{i}(r)\) that remain unspecified. These functions \(g_{i}(r)\) should reduce to non-vanishing constants when \(a\to 0\) such that Eq. (22) is exactly the Regge-Wheeler equation for Schwarzschild BHs. In practice it was found that choosing \(g_{i}(r)\) as simple rational functions of \(r\) leads to desirable short-ranged GSN potentials. With some particular choices of \(g_{i}(r)\), which we explicitly show in App. E for fields with spin-weight \(s=0,\pm 1,\pm 2\), the expressions for \(\alpha(r)\) and \(\beta(r)\) can be quite concise, and we can write \(\eta(r)\) in a compact form as \[\eta(r)=c_{0}+c_{1}/r+c_{2}/r^{2}+c_{3}/r^{3}+c_{4}/r^{4}. 
\tag{27}\] It should be noted that if one chooses instead \(g_{i}(r)=1\), while the associated GSN potentials are still short-ranged, the corresponding expression for \(\eta(r)\)_cannot_ be written in the form of Eq. (27) and the weighting functions \(\alpha(r)\) and \(\beta(r)\) become lengthy (except for \(s=0\)).

### Asymptotic behaviors and boundary conditions of the Generalized Sasaki-Nakamura equation

#### ii.2.1 Teukolsky equation

Before studying the asymptotic behaviors of the GSN equation, it is instructive to first revisit the asymptotic behaviors of the radial Teukolsky equation, so that we can compare the behaviors of the two equations and understand why it is preferable to use the GSN equation instead of the Teukolsky equation when performing numerical computations. It can be shown that (for example see Refs. [21, 19]) when \(r\rightarrow\infty\left(r_{*}\rightarrow\infty\right)\) the radial Teukolsky equation admits two (linearly-independent) asymptotic solutions that go like \(R\sim r^{-1}e^{-i\omega r_{*}}\) or \(R\sim r^{-(2s+1)}e^{i\omega r_{*}}\). Similarly, when \(r\to r_{+}\left(r_{*}\rightarrow-\infty\right)\) the equation admits two (linearly-independent) asymptotic solutions \(R\sim\Delta^{-s}e^{-ipr_{*}}\) or \(R\sim e^{ipr_{*}}\), where we define a new wave frequency \[p\equiv\omega-m\Omega_{\text{H}}, \tag{28}\] with \(\Omega_{\text{H}}\equiv a/(2r_{+})\) being the angular velocity of the horizon (therefore, intuitively speaking, \(p\) is the "effective" wave frequency near the horizon). Using these asymptotic solutions at the two boundaries, we can construct pairs of linearly independent solutions.
A pair that is commonly used in literature (and is physically motivated) is \(\left\{R^{\text{in}},R^{\text{up}}\right\}\) with \(R^{\text{in}}\) satisfying a purely-ingoing boundary condition at the horizon and \(R^{\text{up}}\) satisfying a purely out-going boundary condition at infinity.6 Mathematically, Footnote 6: In some literature, for example Ref. [27], \(R^{\rm in}\) is also denoted by \(R^{\rm H}\) and \(R^{\rm up}\) also being denoted by \(R^{\infty}\). \[R^{\rm in}(r) =\begin{cases}B_{\rm T}^{\rm trans}\Delta^{-s}e^{-ipr_{*}},&r\to r _{+}\\ B_{\rm T}^{\rm inc}\frac{e^{-i\omega r_{*}}}{r}+B_{\rm T}^{\rm ref}\frac{e^{i \omega r_{*}}}{r^{2s+1}},&r\to\infty\end{cases}, \tag{29}\] \[R^{\rm up}(r) =\begin{cases}C_{\rm T}^{\rm ref}\Delta^{-s}e^{-ipr_{*}}+C_{\rm T }^{\rm inc}e^{ipr_{*}},&r\to r_{+}\\ C_{\rm T}^{\rm trans}\frac{e^{i\omega r_{*}}}{r^{2s+1}},&r\to\infty\end{cases}. \tag{30}\] Here we follow mostly Ref. [17] in naming the coefficients/amplitudes in front of each of the asymptotic solutions (except renaming \(C^{\rm up}\) in Ref. [17] to \(C^{\rm inc}\) for a more symmetric form and adding a subscript \({\rm T}\) for Teukolsky formalism). These amplitudes carry physical interpretations. Conceptually for the \(R^{\rm in}\) (\(R^{\rm up}\)) solution, imagine sending a "left-going" wave from infinity towards the horizon (a "right-going" wave from the horizon towards infinity)7 with an amplitude \(B_{\rm T}^{\rm inc}\) (\(C_{\rm T}^{\rm inc}\)). As the wave propagates through the potential barrier (see Fig. 1), part of the _incident_ wave is _trans_mitted through the barrier and continues to travel with an amplitude \(B_{\rm T}^{\rm trans}\) (\(C_{\rm T}^{\rm trans}\)), while part of the incident wave is _reflected_ by the barrier and travels in the opposite direction with an amplitude \(B_{\rm T}^{\rm ref}\) (\(C_{\rm T}^{\rm ref}\)). 
This setup is reminiscent of a potential well problem in quantum mechanics.8 Footnote 7: As we have assumed a harmonic time dependence of \(\exp\left(-i\omega t\right)\), radial functions of the form \(\exp\left(i\omega r_{*}\right)\) are said to be traveling to the right since the waves would depend on the combination \(t-r_{*}\). Similarly, radial functions of the form \(\exp\left(-i\omega r_{*}\right)\) are said to be traveling to the left since the waves would depend on the combination \(t+r_{*}\). Footnote 8: However, unlike a potential well problem in quantum mechanics, the square of the reflection amplitude and the square of the transmission amplitude (each normalized by the incidence amplitude) do not have to add up to unity. This is known as super-radiance, where energy is being extracted from the black hole. In numerical computations, however, instead of starting with an incident wave, it is easier to start with a transmitted wave, and then integrate outward (inward) for \(R^{\rm in}\) (\(R^{\rm up}\)) to extract the corresponding incidence and reflection amplitude at infinity (at the horizon). Inspecting Eq. (29) and (30), we can see why it is challenging to accurately read off those amplitudes if one solves the Teukolsky equation numerically using Eq. (3) directly, as the amplitudes of the incident and the reflected wave are of different orders of magnitude. For the \(R^{\rm in}\) solution as \(r\to\infty\), the ratio of the amplitude of the right-going wave to that of the left-going wave is \(\sim 1/r^{2s}\) (which becomes infinitely-large for \(s<0\) and infinitely-small for \(s>0\)). For the \(R^{\rm up}\) solution as \(r\to r_{+}\), that ratio is \(\sim\Delta^{s}\) (which again becomes infinitely-large for \(s<0\) and infinitely-small for \(s>0\) as \(\Delta\to 0\) when \(r\to r_{+}\)). This implies that when solving Eq.
(3) numerically with a finite precision, the numerical solution will be completely dominated by the right-going wave, making it impossible to extract the amplitude of the left-going wave. To see that \(R^{\rm in}\) and \(R^{\rm up}\) are indeed linearly independent, we can calculate the scaled Wronskian \(\mathcal{W}_{R}\) of the two solutions, which is given by \[\mathcal{W}_{R}=\Delta^{s+1}\left(R^{\rm in}R^{\rm up\prime}-R^{\rm up}R^{\rm in\prime}\right). \tag{31}\] Substituting the asymptotic forms of the two solutions \(R^{\rm in,up}\) in Eq. (29) and (30) respectively when \(r\to\infty\) gives the relation \[\mathcal{W}_{R}=2i\omega C_{\rm T}^{\rm trans}B_{\rm T}^{\rm inc}, \tag{32}\] which is a non-zero constant9 (when \(\omega\neq 0\)), and thus the two solutions are indeed linearly independent.

Figure 1: Physical interpretations of the amplitudes in front of each of the asymptotic solutions, for the _IN_ solution (upper panel) and for the _UP_ solution (lower panel).

If instead we substitute the asymptotic forms of \(R^{\rm in,up}\) when \(r\to r_{+}\) into Eq. (31), we obtain another relation for \(\mathcal{W}_{R}\), which is \[\mathcal{W}_{R}=\left[2ip(r_{+}^{2}+a^{2})+2s(r_{+}-1)\right]B_{\rm T}^{\rm trans}C_{\rm T}^{\rm inc}. \tag{33}\] By equating Eq. (32) and Eq. (33), we get an _identity_ relating \((B_{\rm T}^{\rm inc}/B_{\rm T}^{\rm trans})\) with \((C_{\rm T}^{\rm inc}/C_{\rm T}^{\rm trans})\). From a numerical standpoint, we can use this identity as a sanity check of numerical solutions. More explicitly, the identity is given by \[\frac{B_{\rm T}^{\rm inc}}{B_{\rm T}^{\rm trans}}=\frac{p(r_{+}^{2}+a^{2})-is(r_{+}-1)}{\omega}\frac{C_{\rm T}^{\rm inc}}{C_{\rm T}^{\rm trans}}.
\tag{34}\] It also means that we technically only need to read off \(\left\{B_{\rm T}^{\rm ref},B_{\rm T}^{\rm inc},C_{\rm T}^{\rm ref}\right\}\) or \(\left\{B_{\rm T}^{\rm ref},C_{\rm T}^{\rm ref},C_{\rm T}^{\rm inc}\right\}\) from numerical solutions, since the rest of the amplitudes are either fixed by the normalization convention (which will be covered shortly below), or by the constant scaled Wronskian, which can be computed at an arbitrary location within the domain of the numerical solutions.

#### ii.2.2 Generalized Sasaki-Nakamura equation

Now we turn to the GSN equation. Suppose the GSN transformation is of the form of Eq. (25) and satisfies Eq. (27); the GSN potentials \(\mathcal{F}(r)\) and \(\mathcal{U}(r)\) then have the following asymptotic behaviors (see Fig. 2 for a visualization) \[\mathcal{F}(r)\sim\begin{cases}0+\mathcal{O}(r-r_{+})&r\to r_{+}\\ \frac{-c_{1}/c_{0}}{r^{2}}+\mathcal{O}(r^{-3})&r\to\infty\end{cases}, \tag{35}\] \[\mathcal{U}(r)\sim\begin{cases}-p^{2}+\mathcal{O}(r-r_{+})&r\to r_{+}\\ -\omega^{2}+\mathcal{O}(r^{-2})&r\to\infty\end{cases}. \tag{36}\] To see more clearly that the GSN potentials are indeed short-ranged, we re-cast the GSN equation into the same form as Eq. (6) by writing \(Y\equiv X/\sqrt{\eta}\). Fig. 3 shows the magnitude of the potential \(V_{Y}(r)\) associated with the Teukolsky equation (blue) and the GSN equation (orange) respectively. Specifically, we are showing the potentials of the \(s=-2,\ell=2,m=2\) mode with \(a=0.7\) and \(\omega=1\) as examples. We can see that the potential for the Teukolsky equation decays only as \(1/r\) when \(r\to\infty\) (and hence is long-ranged), while the potential for the GSN equation decays as \(1/r^{2}\) when \(r\to\infty\) (and hence is short-ranged). The asymptotic behaviors of the GSN potentials imply that as \(r\to r_{+}\), the GSN equation behaves like a simple wave equation \(d^{2}X/dr_{*}^{2}+p^{2}X=0\), admitting simple plane-wave solutions \(e^{\pm ipr_{*}}\).
Similarly, when \(r\to\infty\), the GSN equation behaves like \(d^{2}X/dr_{*}^{2}+\omega^{2}X=0\), again admitting plane-wave solutions \(e^{\pm i\omega r_{*}}\). Therefore, we can similarly construct the pair of linearly-independent solutions \(\left\{X^{\rm in},X^{\rm up}\right\}\) that satisfy the purely-ingoing boundary condition at the horizon and the purely-outgoing boundary condition at infinity respectively using these asymptotic solutions. Mathematically, \[X^{\rm in}(r_{*})=\begin{cases}B_{\rm SN}^{\rm trans}e^{-ipr_{*}}&r_{*}\to-\infty\\ B_{\rm SN}^{\rm inc}e^{-i\omega r_{*}}+B_{\rm SN}^{\rm ref}e^{i\omega r_{*}}&r_{*}\to\infty\end{cases}, \tag{37}\] \[X^{\rm up}(r_{*})=\begin{cases}C_{\rm SN}^{\rm ref}e^{-ipr_{*}}+C_{\rm SN}^{\rm inc}e^{ipr_{*}}&r_{*}\to-\infty\\ C_{\rm SN}^{\rm trans}e^{i\omega r_{*}}&r_{*}\to\infty\end{cases}. \tag{38}\] Here the amplitudes in front of each of the asymptotic solutions have the same physical interpretations as in Eq. (29) and (30) (c.f. Fig. 1). Again by inspecting Eq. (37) and (38), we can see that it is easy to accurately read off those amplitudes, as the ratio of the asymptotic amplitude of the incident wave to that of the reflected wave at both boundaries is \(\sim\mathcal{O}(1)\), instead of being infinitely-large or infinitely-small as in the Teukolsky formalism.

Figure 2: Asymptotic behaviors of the GSN potentials \(\mathcal{F}(r)\) and \(\mathcal{U}(r)\). Both potentials quickly approach their corresponding asymptotic values. In particular, \(\mathcal{U}(r)\) approaches \(-p^{2}\) near the horizon and \(-\omega^{2}\) near infinity respectively.

Figure 3: Potential \(V_{Y}(r)\) associated with the Teukolsky equation and the GSN equation. As \(r\to\infty\), the potential for the Teukolsky equation decays as \(1/r\) and thus is long-ranged, while the potential for the GSN equation decays as a steeper \(1/r^{2}\) and hence is short-ranged.
Similar to the case of Teukolsky functions, we can also define a scaled Wronskian \(\mathcal{W}_{X}\) for the GSN functions, namely \[\mathcal{W}_{X}=\frac{1}{\eta}\left[X^{\rm in}(dX^{\rm up}/dr_{*})-(dX^{\rm in}/dr_{*})X^{\rm up}\right], \tag{39}\] which is also a constant. Substituting the asymptotic forms of \(X^{\rm in,up}\) in Eq. (37) and (38) respectively as \(r_{*}\to\infty\), and using the fact that \(\eta(r)\to c_{0}\) as \(r\to\infty\), it can be shown that \[\mathcal{W}_{X}=\frac{2i\omega C_{\rm SN}^{\rm trans}B_{\rm SN}^{\rm inc}}{c_{0}}. \tag{40}\] Equivalently, we can also use the asymptotic forms of \(X^{\rm in,up}\) as \(r_{*}\to-\infty\), and the fact that \(\eta(r\to r_{+})\sim\mathcal{O}(1)\), to show that \[\mathcal{W}_{X}=\frac{2ipB_{\rm SN}^{\rm trans}C_{\rm SN}^{\rm inc}}{\eta(r_{+})}. \tag{41}\] We can again equate Eq. (40) and Eq. (41) to get an _identity_ relating \(B_{\rm SN}^{\rm inc}/B_{\rm SN}^{\rm trans}\) with \(C_{\rm SN}^{\rm inc}/C_{\rm SN}^{\rm trans}\) to check the sanity of numerical solutions. Explicitly, the identity is given by \[\frac{B_{\rm SN}^{\rm inc}}{B_{\rm SN}^{\rm trans}}=\frac{pc_{0}}{\omega\eta(r_{+})}\frac{C_{\rm SN}^{\rm inc}}{C_{\rm SN}^{\rm trans}}. \tag{42}\] An interesting and useful relation between the scaled Wronskian for GSN functions \(\mathcal{W}_{X}\) and that for Teukolsky functions \(\mathcal{W}_{R}\) (with the same \(s,\ell,m,a,\omega\)) is that, despite having different definitions (see Eq. (31) for \(\mathcal{W}_{R}\) and Eq. (39) for \(\mathcal{W}_{X}\)), they are actually _identical_, i.e. \[\mathcal{W}_{X}=\mathcal{W}_{R}, \tag{43}\] where we give a derivation in App. C. This means that GSN transformations (not limited only to our particular choices of \(g_{i}\)) are scaled-Wronskian-preserving. This also means that one can compute the QNM spectra of Kerr BHs using either the Teukolsky formalism or the GSN formalism (see Sec. IV.2).
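The role of the Wronskian as a numerical sanity check can be demonstrated with a toy model. The Python sketch below is illustrative only: the potential `U` is an arbitrary smooth placeholder, and we take the special case \(\mathcal{F}=0\) with constant \(\eta\), where the scaled Wronskian reduces (up to a constant) to the ordinary one. It integrates two independent solutions of \(X^{\prime\prime}=U(x)X\) with classical RK4 and checks that \(X_{1}X_{2}^{\prime}-X_{1}^{\prime}X_{2}\) keeps its initial value along the grid:

```python
def rk4_solve(U, x0, x1, state, n):
    # Classical RK4 for the first-order system (X, X')' = (X', U(x) X).
    h = (x1 - x0) / n
    def f(x, s):
        return (s[1], U(x) * s[0])
    x = x0
    for _ in range(n):
        k1 = f(x, state)
        k2 = f(x + h / 2, (state[0] + h / 2 * k1[0], state[1] + h / 2 * k1[1]))
        k3 = f(x + h / 2, (state[0] + h / 2 * k2[0], state[1] + h / 2 * k2[1]))
        k4 = f(x + h, (state[0] + h * k3[0], state[1] + h * k3[1]))
        state = (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
                 state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        x += h
    return state

def wronskian(s1, s2):
    # W = X1 X2' - X1' X2; exactly conserved by X'' = U(x) X, since
    # W' = X1 X2'' - X1'' X2 = U (X1 X2 - X1 X2) = 0.
    return s1[0] * s2[1] - s1[1] * s2[0]
```

Any drift of the computed Wronskian away from its initial value directly measures the accumulated integration error, which is what makes it a convenient consistency check.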
Since one can freely rescale a homogeneous solution by a constant factor, we use this freedom to set \(B_{\rm SN}^{\rm trans}=C_{\rm SN}^{\rm trans}=1\), i.e. we _normalize_ our solutions to the GSN equation to have a unit SN transmission amplitude. However, the common normalization convention in the literature is to normalize \(R^{\rm in}(r)\) and \(R^{\rm up}(r)\) to each have a unit transmission amplitude, i.e. \(B_{\rm T}^{\rm trans}=C_{\rm T}^{\rm trans}=1\). In fact, one can relate incidence/reflection/transmission amplitudes in the GSN formalism to those in the Teukolsky formalism and vice versa by frequency-dependent conversion factors. To see why this is the case and to obtain the conversion factors, note that when going from a Teukolsky function to the corresponding GSN function, we have the \({}_{s}\Lambda\) operator that satisfies \[{}_{s}\Lambda\left[f(r)e^{\pm ikr_{*}}\right]\propto e^{\pm ikr_{*}}, \tag{44}\] and vice versa with the inverse operator \({}_{s}\Lambda^{-1}\) that satisfies \[{}_{s}\Lambda^{-1}\left[f(r)e^{\pm ikr_{*}}\right]\propto e^{\pm ikr_{*}}, \tag{45}\] for any differentiable function \(f(r)\) and any non-zero constant \(k\), since both \({}_{s}\Lambda\) and \({}_{s}\Lambda^{-1}\) are linear differential operators. This means that we can simply match the asymptotic solution in one formalism with the corresponding asymptotic solution with the _same_ exponential dependence in the other formalism, transformed by either \({}_{s}\Lambda\) or \({}_{s}\Lambda^{-1}\), at the appropriate boundary.
For example, to get the conversion factor \(C_{\rm T}^{\rm trans}/C_{\rm SN}^{\rm trans}\), we match the asymptotic solution as \(r\to\infty\) in the Teukolsky and the GSN formalisms as \[C_{\rm SN}^{\rm trans}\left[1+\mathcal{O}\left(\frac{1}{r}\right)\right]e^{i\omega r_{*}}=\\ C_{\rm T}^{\rm trans}{}_{s}\Lambda\left\{\frac{1}{r^{2s+1}}\left[1+\mathcal{O}\left(\frac{1}{r}\right)\right]e^{i\omega r_{*}}\right\}, \tag{46}\] where the expression on the RHS, to the leading order, should be \(\sim\mathcal{O}\left(1\right)e^{i\omega r_{*}}\). We can then obtain the desired conversion factor by taking the limit as \[\frac{C_{\rm SN}^{\rm trans}}{C_{\rm T}^{\rm trans}}=\lim_{r\to\infty}{}_{s}\Lambda\left\{\frac{1}{r^{2s+1}}\left[1+\mathcal{O}\left(\frac{1}{r}\right)\right]e^{i\omega r_{*}}\right\}e^{-i\omega r_{*}}, \tag{47}\] and we know from Eq. (44) that the expression on the RHS does not depend on \(e^{\pm i\omega r_{*}}\), so that the limit is determinate. Equivalently, we can also match the asymptotic solution as \(r\to\infty\) in the two formalisms as \[C_{\rm T}^{\rm trans}\frac{1}{r^{2s+1}}\left[1+\mathcal{O}\left(\frac{1}{r}\right)\right]e^{i\omega r_{*}}=\\ C_{\rm SN}^{\rm trans}{}_{s}\Lambda^{-1}\left\{\left[1+\mathcal{O}\left(\frac{1}{r}\right)\right]e^{i\omega r_{*}}\right\}, \tag{48}\] where the expression on the RHS, to the leading order, should be \(\sim\mathcal{O}\left(1\right)r^{-(2s+1)}e^{i\omega r_{*}}\). Similarly we can obtain \[\frac{C_{\rm T}^{\rm trans}}{C_{\rm SN}^{\rm trans}}=\lim_{r\to\infty}{}_{s}\Lambda^{-1}\left\{\left[1+\mathcal{O}\left(\frac{1}{r}\right)\right]e^{i\omega r_{*}}\right\}r^{2s+1}e^{-i\omega r_{*}}, \tag{49}\] and again we know from Eq. (45) that the RHS does not depend on \(e^{\pm i\omega r_{*}}\), so that the limit is determinate. We find that sometimes it is more convenient to compute the limit in the form of Eq. (47) than to use the limit in the form of Eq.
(49) in order to find the same conversion factor, and in some cases the reverse is true, even though formally both expressions should give the same answer. In fact, using the identity between the scaled Wronskian of the GSN functions \(\mathcal{W}_{X}\) and that of the Teukolsky functions \(\mathcal{W}_{R}\), we can simplify expressions for these conversion factors by equating expressions of \(\mathcal{W}_{X}\) in terms of the incidence and transmission amplitudes in the GSN formalism with expressions of \(\mathcal{W}_{R}\) in terms of those amplitudes in the Teukolsky formalism. In particular, we get identities relating these conversion factors as \[\frac{C_{\mathrm{T}}^{\mathrm{trans}}}{C_{\mathrm{SN}}^{\mathrm{trans}}}\frac{B_{\mathrm{T}}^{\mathrm{inc}}}{B_{\mathrm{SN}}^{\mathrm{inc}}}=\frac{1}{c_{0}}, \tag{50}\] \[\frac{B_{\mathrm{T}}^{\mathrm{trans}}}{B_{\mathrm{SN}}^{\mathrm{trans}}}\frac{C_{\mathrm{T}}^{\mathrm{inc}}}{C_{\mathrm{SN}}^{\mathrm{inc}}}=\frac{2ip}{\eta(r_{+})\left[2ip(r_{+}^{2}+a^{2})+2s(r_{+}-1)\right]}. \tag{51}\] These identities imply that we only need to derive either \(\frac{C_{\mathrm{T}}^{\mathrm{trans}}}{C_{\mathrm{SN}}^{\mathrm{trans}}}\) or \(\frac{B_{\mathrm{T}}^{\mathrm{inc}}}{B_{\mathrm{SN}}^{\mathrm{inc}}}\), and either \(\frac{B_{\mathrm{T}}^{\mathrm{trans}}}{B_{\mathrm{SN}}^{\mathrm{trans}}}\) or \(\frac{C_{\mathrm{T}}^{\mathrm{inc}}}{C_{\mathrm{SN}}^{\mathrm{inc}}}\).

#### ii.2.3 Higher-order corrections to asymptotic behaviors

In Eq. (37) and (38), we use the asymptotic solutions of the GSN equation only to their leading order (i.e. \(\mathcal{O}(r^{0})\)). However, in order to obtain accurate numerical solutions solved on a numerically-finite interval (e.g. \(\left[r_{*}^{\mathrm{in}},r_{*}^{\mathrm{out}}\right]\)), it is more efficient to include higher-order corrections to the asymptotic solutions than to simply push \(r_{*}^{\mathrm{in}}\) to a very negative value and \(r_{*}^{\mathrm{out}}\) to a very large value.
To find such higher-order corrections, we use an ansatz of the form \[X(r_{*})\sim\begin{cases}f_{\pm}^{\infty}(r)e^{\pm i\omega r_{*}},&r_{*}\to\infty\\ g_{\pm}^{\mathrm{H}}(r)e^{\pm ipr_{*}},&r_{*}\to-\infty\end{cases}, \tag{52}\] where the plus (minus) sign corresponds to the out/right-going (in/left-going) mode, and the superscript \(\infty\) (H) corresponds to the outer (inner) boundary at infinity (the horizon). Substituting Eq. (52) back into the GSN equation in Eq. (22), we get _four_ second-order ODEs, one for each of the functions \(f_{\pm}^{\infty}(r)\) and \(g_{\pm}^{\mathrm{H}}(r)\) (c.f. Eq. (101)). We look for their _formal series expansions_ of the form \[f_{\pm}^{\infty}(r)=\sum_{j=0}^{\infty}\frac{\mathcal{C}_{\pm,j}^{\infty}}{(\omega r)^{j}}, \tag{53}\] \[g_{\pm}^{\mathrm{H}}(r)=\sum_{j=0}^{\infty}\mathcal{C}_{\pm,j}^{\mathrm{H}}\left[\omega(r-r_{+})\right]^{j}, \tag{54}\] where \(\mathcal{C}_{\pm,j}^{\infty/\mathrm{H}}\) are the expansion coefficients. In App. D, we show how one can compute these coefficients using recurrence relations. Such recurrence relations for some of the spin weights (\(s=0\) and \(s=-2\)) can also be found in the literature (e.g. Refs. [38; 39; 40]).10 In App. E, we show explicitly the expressions of the expansion coefficients \(\mathcal{C}_{\pm,j}^{\infty}\) for \(j=0,1,2,3\). Footnote 10: Unfortunately the expansion coefficients given in Ref. [27] are incorrect except for the case with \(s=0\), because the author made an incorrect assumption that the GSN potentials are purely real, which is not true in general. With the explicit GSN transformation and hence the GSN potentials and the GSN equation as discussed in Sec. II.1, as well as the asymptotic solutions to the GSN equation and the conversion factors for converting asymptotic amplitudes between the Teukolsky and the GSN formalism as discussed in Sec. II.2, we now have all the necessary ingredients to use the GSN formalism to perform numerical computations.
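Returning to the series in Eq. (53): once the expansion coefficients are known, its truncation is just a polynomial in \(u=1/(\omega r)\) and is conveniently evaluated with Horner's rule. The short Python sketch below is illustrative only; the coefficients used in its test are placeholders, not the actual \(\mathcal{C}_{\pm,j}^{\infty}\) from App. E:

```python
def asymptotic_factor(coeffs, omega, r):
    # Evaluate f(r) = sum_j coeffs[j] / (omega*r)^j, a truncation of Eq. (53),
    # by Horner's rule in the variable u = 1/(omega*r).
    u = 1.0 / (omega * r)
    acc = 0j
    for c in reversed(coeffs):
        acc = acc * u + c
    return acc
```

The analogous truncation of Eq. (54) near the horizon is a polynomial in \(\omega(r-r_{+})\) and can be evaluated the same way.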
In the next section, we describe the recipes to use those ingredients to get homogeneous solutions to both the Teukolsky and the GSN equation.

## III Numerical implementation

In principle, a frequency-domain Teukolsky/GSN equation solver can be implemented in any programming language with the help of the ingredients in Sec. II and App. E. Here we describe an open-source implementation of the GSN formalism that is written in julia[35], namely GeneralizedSasakiNakamura.jl.11 Instead of fixing a particular choice of a numerical integrator for solving Eq. (22), the code can be used in conjunction with other julia packages, such as DifferentialEquations.jl[41], which implements a suite of ODE solvers. The GSN potentials \(\mathcal{F}(r),\mathcal{U}(r)\) for \(s=0,\pm 1,\pm 2\) are implemented as pure functions in julia, and can be evaluated to arbitrary precision. This also allows us to use automatic differentiation (AD) to compute corrections to the asymptotic boundary conditions at arbitrary order (see App. D).12 Footnote 11: [https://github.com/ricokalkolo/GeneralizedSasakiNakamura.jl](https://github.com/ricokalkolo/GeneralizedSasakiNakamura.jl) Footnote 12: In particular, we use two variants of AD. The first type is referred to as the forward-mode AD as implemented in ForwardDiff.jl[42]. However, the computational cost of using the forward-mode AD to compute higher-order derivatives scales exponentially with the order. Therefore, for computing corrections to the asymptotic boundary conditions we switch to the second type, which is based on Taylor expansion as implemented in TaylorSeries.jl[43], where the cost only scales linearly with the order of the derivatives.

### Numerical solutions to the Generalized Sasaki-Nakamura equation

#### iii.1.1 Rewriting Generalized Sasaki-Nakamura functions as complex phase functions

Instead of solving directly for the GSN function \(X(r_{*})\), we follow Ref.
[44] and introduce a complex phase function \(\Phi(r_{*})\) such that \[X(r_{*})\equiv\exp\left[i\Phi\left(r_{*}\right)\right]. \tag{55}\] Substituting Eq. (55) into Eq. (22), we obtain a _first-order non-linear_ differential equation13 as Footnote 13: Unlike what was claimed in App. 3 of Ref. [44], we find that the ODE for _both_ the real and the imaginary part of \(\Phi\) can be integrated immediately to first-order (non-linear) differential equations in \((d\Phi_{\text{Re}}/dr_{*},d\Phi_{\text{Im}}/dr_{*})\), which is expected since solutions to a homogeneous ODE are determined only up to a multiplicative factor. Combining the differential equations for \(d\Phi_{\text{Re}}/dr_{*}\) and \(d\Phi_{\text{Im}}/dr_{*}\) such that \(d\Phi/dr_{*}=(d\Phi_{\text{Re}}/dr_{*}+id\Phi_{\text{Im}}/dr_{*})\) will give Eq. (56). \[\frac{d}{dr_{*}}\left(\frac{d\Phi}{dr_{*}}\right)=-i\,\mathcal{U}+\mathcal{F}\left(\frac{d\Phi}{dr_{*}}\right)-i\left(\frac{d\Phi}{dr_{*}}\right)^{2}. \tag{56}\] Such a differential equation is also known as a Riccati equation. Furthermore, the conversion between \((X,dX/dr_{*})\) and \((\Phi,d\Phi/dr_{*})\) is given by \[\Phi=-i\log\left(X\right), \tag{57}\] \[\frac{d\Phi}{dr_{*}}=-i\frac{dX/dr_{*}}{X}. \tag{58}\] While at first glance it may seem unwise to turn a linear problem into a non-linear problem, solving Eq. (56) numerically presents no additional challenge compared to solving Eq. (22) directly. In fact, there are advantages in writing the GSN function in the form of Eq. (55), especially when \(|\omega|\) is large. Recall that asymptotically (both near infinity and near the horizon) GSN functions behave like plane waves, i.e. \(X\) oscillates like \(\exp\left(\pm ikr_{*}\right)\) where \(|k|\) is the oscillation frequency (assuming \(k\) is real, and recall that \(|k|\to|\omega|\) when \(r_{*}\to\infty\) and \(|k|\to|p|\) when \(r_{*}\to-\infty\)).
Therefore, in order to properly resolve the oscillations, the step size \(\delta r_{*}\) for the numerical integrator needs to be much less than the wavelength, i.e. \(\delta r_{*}\ll 1/|k|\). This can get quite small for large \(|k|\), which results in taking a longer time to integrate Eq. (22) for a fixed accuracy. Fortunately this is not the case when solving for the complex phase function \(\Phi(r_{*})\), since it varies much more slowly (spatially) than the GSN function \(X(r_{*})\). Intuitively this is because the complex exponential in Eq. (55) accounts for most of the oscillatory behaviors. This is especially true if we consider the asymptotic plane-wave-like solutions of the GSN equation, where the real part of the phase function \(\Phi_{\text{Re}}(r_{*})\sim kr_{*}\) is linear in \(r_{*}\), and the imaginary part of the phase function \(\Phi_{\text{Im}}(r_{*})\) is constant in \(r_{*}\). However, this might not be the case when we consider general solutions to the GSN equation where the left-going and the right-going modes are superimposed, for example the \(X^{\text{in,up}}\) pair as shown in Eq. (37) and Eq. (38). That being said, the variation of the complex phase function due to the beating or interference between the left-/right-going modes depends on their relative amplitude (which is in general a complex number and hence introduces a phase shift). In particular, physically Kerr BHs are _much more_ permeable to waves at high frequencies (see Fig. 6). This means that at those high frequencies, the relative amplitudes of the left-/right-going modes are going to be extreme and hence the beating will be suppressed.
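To make the phase-function reformulation concrete, the following Python sketch (illustrative only, not the julia implementation described in this section) integrates the Riccati equation, Eq. (56), together with \(d\Phi/dr_{*}=v\) using classical RK4. For the toy case \(\mathcal{F}=0\) and constant \(\mathcal{U}=-\omega^{2}\) (the asymptotic regime), the phase function comes out linear, \(\Phi\approx\omega r_{*}\), so that \(X=e^{i\Phi}\) reproduces the plane wave \(e^{i\omega r_{*}}\), in line with \(\Phi_{\text{Re}}\sim kr_{*}\) and \(\Phi_{\text{Im}}\) being constant:

```python
import cmath

def integrate_phase(U, F, v0, x0, x1, n):
    # Classical RK4 for the coupled system
    #   dPhi/dx = v,
    #   dv/dx   = -i U(x) + F(x) v - i v^2   (the Riccati form, Eq. (56)),
    # starting from v(x0) = v0 and Phi(x0) = 0.
    h = (x1 - x0) / n
    def fv(x, v):
        return -1j * U(x) + F(x) * v - 1j * v * v
    phi, v, x = 0j, v0, x0
    for _ in range(n):
        k1v = fv(x, v);                       k1p = v
        k2v = fv(x + h / 2, v + h / 2 * k1v); k2p = v + h / 2 * k1v
        k3v = fv(x + h / 2, v + h / 2 * k2v); k3p = v + h / 2 * k2v
        k4v = fv(x + h, v + h * k3v);         k4p = v + h * k3v
        v   += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        phi += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
        x += h
    return phi, v
```

For the actual GSN problem, `U` and `F` would be the potentials \(\mathcal{U}\) and \(\mathcal{F}\) expressed as functions of \(r_{*}\), and the initial value \(v_{0}\) would follow from the boundary conditions discussed below.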
#### iii.1.2 Solving \(X^{\text{in,up}}\) as initial value problems

Recall that there is a pair of linearly-independent solutions to the GSN equation that is of particular interest, namely \(\left\{X^{\text{in}},X^{\text{up}}\right\}\), where \(X^{\text{in}}\) satisfies the boundary condition that it is purely in-going at the horizon as given by Eq. (37), and \(X^{\text{up}}\) satisfies the boundary condition that it is purely out-going at infinity as given by Eq. (38), respectively. Despite the use of the term "boundary condition", what we are really enforcing is _the asymptotic form of a solution at one of the two boundaries_, \(X^{\text{in}}\) at the horizon and \(X^{\text{up}}\) at infinity respectively. This can be formulated as an initial value problem. Explicitly for \(\hat{X}^{\text{in}}\), where a hat denotes a numerical solution hereafter, we integrate Eq. (56) outwards from the (finite) inner boundary \(r_{*}^{\text{in}}\) to the (finite) outer boundary \(r_{*}^{\text{out}}\) with \[\hat{X}(r_{*}^{\text{in}})=g_{-}^{\text{H}}\left(r(r_{*}^{\text{in}})\right)e^{-ipr_{*}^{\text{in}}}, \tag{59}\] \[\frac{d\hat{X}(r_{*}^{\text{in}})}{dr_{*}}=-ip\hat{X}(r_{*}^{\text{in}})+\left.\frac{dr}{dr_{*}}\frac{dg_{-}^{\text{H}}(r)}{dr}\right|_{r=r(r_{*}^{\text{in}})}e^{-ipr_{*}^{\text{in}}}, \tag{60}\] as the initial values at \(r_{*}=r_{*}^{\text{in}}\), after converting them to \(\hat{\Phi}^{\text{in}}\) and \(d\hat{\Phi}^{\text{in}}/dr_{*}\) using Eq. (57) and Eq. (58) respectively. Similarly for \(\hat{X}^{\text{up}}\), we integrate Eq.
(56) inwards from the outer boundary \(r_{*}^{\text{out}}\) to the inner boundary \(r_{*}^{\text{in}}\) with \[\hat{X}(r_{*}^{\text{out}})=f_{+}^{\infty}\left(r(r_{*}^{\text{out}})\right)e^{i\omega r_{*}^{\text{out}}}, \tag{61}\] \[\frac{d\hat{X}(r_{*}^{\text{out}})}{dr_{*}}=i\omega\hat{X}(r_{*}^{\text{out}})+\left.\frac{dr}{dr_{*}}\frac{df_{+}^{\infty}(r)}{dr}\right|_{r=r(r_{*}^{\text{out}})}e^{i\omega r_{*}^{\text{out}}}, \tag{62}\] as the initial values at \(r_{*}=r_{*}^{\text{out}}\), after converting them to \(\hat{\Phi}^{\text{up}}\) and \(d\hat{\Phi}^{\text{up}}/dr_{*}\) using again Eq. (57) and Eq. (58) respectively. Note that for both \(\hat{X}^{\text{in}}\) and \(\hat{X}^{\text{up}}\), we have chosen the normalization convention of a unit transmission amplitude, i.e. \(B_{\text{SN}}^{\text{trans}}=C_{\text{SN}}^{\text{trans}}=1\). After solving Eq. (56) numerically for a complex phase function \(\hat{\Phi}(r_{*})\) and its derivative \(d\hat{\Phi}/dr_{*}\) on a grid of \(r_{*}\in[r_{*}^{\text{in}},r_{*}^{\text{out}}]\), we first convert them back to \(\hat{X}\) and \(d\hat{X}/dr_{*}\) using Eq. (55) and Eq. (58) respectively.

#### iii.1.3 Transforming Generalized Sasaki-Nakamura functions to Teukolsky functions

In principle, if we want to transform a GSN function \(\hat{X}\) back to a Teukolsky function, we simply need to apply the inverse operator \({}_{s}\Lambda^{-1}\) to the numerical GSN function. Since we have the numerical solutions to both \(\hat{X}\) and \(d\hat{X}/dr_{*}\), the inverse operator can actually be written as a matrix multiplication acting on the column vector \(\left(\hat{X},d\hat{X}/dr_{*}\right)^{T}\). First, consider the conversion from \(\left(\hat{X},d\hat{X}/dr_{*}\right)^{T}\) to \(\left(\hat{X},\hat{X}^{\prime}\right)^{T}\). This can be done by left-multiplying the column vector with the matrix \[M_{1}=\begin{pmatrix}1&0\\ 0&\dfrac{r^{2}+a^{2}}{\Delta}\end{pmatrix}.
\tag{63}\] Next, consider the transformation from \(\left(\hat{X},\hat{X}^{\prime}\right)^{T}\) to \(\left(\hat{\chi},\hat{\chi}^{\prime}\right)^{T}\) using Eq. (13). Again this can be done by left-multiplying the column vector \(\left(\hat{X},\hat{X}^{\prime}\right)^{T}\) by the matrix \[M_{2}=\begin{pmatrix}\dfrac{1}{\sqrt{\left(r^{2}+a^{2}\right)\Delta^{s}}}&0 \\ \left(\dfrac{1}{\sqrt{\left(r^{2}+a^{2}\right)\Delta^{s}}}\right)^{\prime}& \dfrac{1}{\sqrt{\left(r^{2}+a^{2}\right)\Delta^{s}}}\end{pmatrix}. \tag{64}\] At last, the transformation from \(\left(\hat{\chi},\hat{\chi}^{\prime}\right)^{T}\) to \(\left(R,R^{\prime}\right)^{T}\) is given by the matrix equation as shown in Eq. (11), where we now explicitly define the matrix as \[M_{3}=\dfrac{1}{\eta}\begin{pmatrix}\alpha+\beta^{\prime}\Delta^{s+1}&-\beta \Delta^{s+1}\\ -(\alpha^{\prime}+\beta V_{\text{T}}\Delta^{s})&\alpha\end{pmatrix}. \tag{65}\] The overall transformation from \(\hat{X}\) and \(d\hat{X}/dr_{*}\) to \(R\) and \(R^{\prime}\) is thus given by the matrix equation \[\begin{pmatrix}\hat{R}\\ \hat{R}^{\prime}\end{pmatrix}=M_{3}M_{2}M_{1}\begin{pmatrix}\hat{X}\\ \dfrac{d\hat{X}}{dr_{*}}\end{pmatrix}. \tag{66}\] We explicitly simplified the overall transformation matrix \(M_{3}M_{2}M_{1}\) in order to facilitate cancellations between terms. This allows us to accurately convert numerical GSN functions to Teukolsky functions close to the horizon (\(\Delta\to 0\)), where some of the terms, such as \(\alpha(r)\), diverge. ### Extracting incidence and reflection amplitudes from numerical solutions Apart from evaluating a GSN or a Teukolsky function numerically on a grid of \(r\)- or \(r_{*}\)-coordinates, it is also useful to be able to determine the incidence and the reflection amplitude at a particular frequency \(\omega\) (see Sec.
II.2 for a theoretical discussion) from a numerical solution accurately. This is essential for constructing inhomogeneous solutions using the Green's function method (e.g. calculating gravitational waveforms observed at infinity) and for scattering problems (e.g. calculating the greybody factor of a BH as a function of the wave frequency \(\omega\)). Since we only have numerical solutions on a finite grid of \(r_{*}\in[r_{*}^{\text{in}},r_{*}^{\text{out}}]\), in order to determine the reflection amplitude \(\hat{B}_{\text{SN}}^{\text{ref}}\) and the incidence amplitude \(\hat{B}_{\text{SN}}^{\text{inc}}\) of a \(\hat{X}^{\text{in}}\) solution in the GSN formalism we solve the system of linear equations at the outer boundary \(r_{*}^{\text{out}}\) \[\begin{pmatrix}f_{+}^{\infty}(r)e^{i\omega r_{*}}&f_{-}^{\infty}(r)e^{-i \omega r_{*}}\\ (df_{+}^{\infty}/dr_{*}+i\omega f_{+}^{\infty})e^{i\omega r_{*}}&(df_{-}^{ \infty}/dr_{*}-i\omega f_{-}^{\infty})e^{-i\omega r_{*}}\end{pmatrix}\Bigg{|}_ {r_{*}^{\text{out}}}\begin{pmatrix}\hat{B}_{\text{SN}}^{\text{ref}}\\ \hat{B}_{\text{SN}}^{\text{inc}}\end{pmatrix}=\begin{pmatrix}\hat{X}^{\text{in}}\\ d\hat{X}^{\text{in}}/dr_{*}\end{pmatrix}\Bigg{|}_{r_{*}^{\text{out}}}, \tag{67}\] where we impose continuity of the numerical solution \((\hat{X}^{\text{in}},d\hat{X}^{\text{in}}/dr_{*})\) with the analytical asymptotic solution near infinity at \(r_{*}=r_{*}^{\text{out}}\).
Similarly, we use the same scheme to determine the reflection amplitude \(\hat{C}_{\text{SN}}^{\text{ref}}\) and the incidence amplitude \(\hat{C}_{\text{SN}}^{\text{inc}}\) of a \(\hat{X}^{\text{up}}\) solution in the GSN formalism at the inner boundary \(r_{*}^{\text{in}}\) by solving \[\begin{pmatrix}g_{+}^{\text{H}}(r)e^{ipr_{*}}&g_{-}^{\text{H}}(r)e^{-ipr_{*}} \\ (dg_{+}^{\text{H}}/dr_{*}+ipg_{+}^{\text{H}})e^{ipr_{*}}&(dg_{-}^{\text{H}}/dr _{*}-ipg_{-}^{\text{H}})e^{-ipr_{*}}\end{pmatrix}\Bigg{|}_{r_{*}^{\text{in}}}\begin{pmatrix}\hat{C}_{\text{SN}}^{\text{inc}}\\ \hat{C}_{\text{SN}}^{\text{ref}}\end{pmatrix}=\begin{pmatrix}\hat{X}^{\text{up}}\\ d\hat{X}^{\text{up}}/dr_{*}\end{pmatrix}\Bigg{|}_{r_{*}^{\text{in}}}, \tag{68}\] where again we impose continuity of the numerical solution \((\hat{X}^{\text{up}},d\hat{X}^{\text{up}}/dr_{*})\) to the asymptotic solution near the horizon at \(r_{*}=r_{*}^{\text{in}}\).14 Footnote 14: This matching procedure at the two numerical boundaries actually allows us to obtain "semi-analytical" GSN functions (and by extension Teukolsky functions) that are _accurate everywhere_, even outside the grid \([r_{*}^{\text{in}},r_{*}^{\text{out}}]\). Using \(X^{\text{in}}\) as an example, for \(r_{*}<r_{*}^{\text{in}}\) the analytical ansatz \(g_{-}^{\text{H}}(r(r_{*}))e^{-ipr_{*}}\) can be used. This is because the numerical solution \(\hat{X}^{\text{in}}\) was constructed by using that ansatz to compute the appropriate initial conditions. For \(r_{*}>r_{*}^{\text{out}}\), the linear combination of the analytical ansatzes \(\hat{B}_{\text{SN}}^{\text{ref}}f_{+}^{\infty}(r(r_{*}))e^{i\omega r_{*}}+ \hat{B}_{\text{SN}}^{\text{inc}}f_{-}^{\infty}(r(r_{*}))e^{-i\omega r_{*}}\) can be used, where the reflection and the incidence coefficients were constructed to ensure continuity with the numerical solution.
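Determining the amplitudes in Eq. (67) and Eq. (68) is, in either case, a \(2\times 2\) complex linear solve. The following is a minimal illustration of the idea in Python (our own sketch, not the GeneralizedSasakiNakamura.jl implementation), using the zeroth-order ansatzes, i.e. \(f_{\pm}^{\infty}\to 1\) with vanishing derivatives, and recovering prescribed amplitudes from a mock solution:

```python
import numpy as np

def extract_amplitudes(X, dX, omega, r_out):
    """Solve a 2x2 system of the Eq.-(67) type for the two asymptotic
    amplitudes, using zeroth-order ansatzes f_+ = f_- = 1 (df/dr_* = 0)."""
    A = np.array([
        [np.exp(1j*omega*r_out),             np.exp(-1j*omega*r_out)],
        [1j*omega*np.exp(1j*omega*r_out),   -1j*omega*np.exp(-1j*omega*r_out)],
    ])
    return np.linalg.solve(A, np.array([X, dX]))

# Mock "numerical solution" built from known amplitudes B_ref, B_inc.
omega, r_out = 0.5, 1000.0
B_ref, B_inc = 0.3 - 0.1j, 1.2 + 0.4j
X  = B_ref*np.exp(1j*omega*r_out) + B_inc*np.exp(-1j*omega*r_out)
dX = 1j*omega*(B_ref*np.exp(1j*omega*r_out) - B_inc*np.exp(-1j*omega*r_out))
b = extract_amplitudes(X, dX, omega, r_out)   # recovers (B_ref, B_inc)
```

With the higher-order corrections included, only the matrix entries change; the solve itself is identical.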
Indeed, the inclusion of the higher-order corrections \(f_{\pm}^{\infty}\) at the outer boundary and \(g_{\pm}^{\text{H}}\) at the inner boundary respectively allows us to get very good agreement with the MST method on the incidence and the reflection amplitudes over a range of frequencies, which we will show in the next sub-section. ### Numerical results Here we showcase some numerical results obtained using our GeneralizedSasakiNakamura.jl implementation. Unless otherwise specified, we use the ODE solver Vern9[45] as implemented in DifferentialEquations.jl[41], and we include corrections to the asymptotic solutions at infinity up to the third order (i.e. truncating the sum in Eq. (53) at \(j=3\)) and that at the horizon only to the zeroth order (i.e. taking only the leading term \(j=0\) in the sum in Eq. (54)). We set the numerical inner boundary at \(r_{*}^{\rm in}=-50M\)15 and the outer boundary at \(r_{*}^{\rm out}=1000M\). We use double-precision floating-point numbers throughout, and both the "absolute tolerance" abstol (roughly the error around the zero point) and the "relative tolerance" reltol (roughly the local error) passed to the numerical ODE solver are set to \(10^{-12}\). Footnote 15: More concretely, this corresponds to \(\left(r^{\rm in}-r_{+}\right)/M\approx 8\times 10^{-10}\) when \(a/M=0.7\). This difference is a monotonically increasing function of \(|a|/M\) (for a similar discussion but for \(r_{*}/M=0\), see Fig. 12). #### III.3.1 Numerical solutions Fig. 4 shows the IN solution in the GSN formalism of the \(s=-2,\ell=2,m=2\) mode for a BH with \(a/M=0.7\) and two different values of \(\omega\), in terms of the GSN function \(X\) and the complex frequency function \(d\Phi/dr_{*}\). Recall that for an IN solution, it is purely in-going at the horizon.
We see from the figure that for both \(M\omega=0.5\) (upper panel) and \(M\omega=1\) (lower panel), near the horizon, \(d\hat{\Phi}/dr_{*}\) is flat and approaches the imposed asymptotic value \(-p^{2}\), while \(\hat{X}\) is oscillating with the frequency \(p\). On the other hand when \(r_{*}\to\infty\), the IN solution is an admixture of the left- and the right-going modes where their relative amplitude, \(B_{\rm SN}^{\rm ref}/B_{\rm SN}^{\rm inc}\), is \(\omega\)-dependent. We see from Fig. 4a that both \(\hat{X}\) and \(d\hat{\Phi}/dr_{*}\) exhibit oscillatory behaviors, and that the oscillation frequency for \(d\hat{\Phi}/dr_{*}\) from beating is twice that for \(\hat{X}\). From Fig. 4b, in contrast, we see that \(\hat{X}\) is oscillatory but \(d\hat{\Phi}/dr_{*}\) is flat, as the ratio of the left- and the right-going modes is extreme and hence beating is heavily suppressed. This can be seen more easily in Fig. 5, which shows the first derivative of the numerical IN solutions \(d\hat{\Phi}/dr_{*}\), i.e. \(d^{2}\hat{\Phi}/dr_{*}^{2}\), as an indicator of how much they change locally as functions of \(r_{*}\), for both the \(M\omega=0.5\) and the \(M\omega=1\) case. We compute the numerical derivatives using AD on the interpolant of the numerical solutions of \(d\hat{\Phi}/dr_{*}\) to avoid issues with using a finite difference method. We see from the upper panel (Fig. 5a) that for \(M\omega=0.5\) the oscillation in \(d\hat{\Phi}/dr_{*}\) is significant, while for \(M\omega=1\) we can see from the lower panel (Fig. 5b) that the oscillation is much more minute. Note that the two panels have very different scales for their \(y\)-axes. Physically this boils down to the fact that the potential barriers of a Kerr BH for different types of radiation are all very permeable to waves at high frequencies. Fig. 6 shows the reflectivity of the potential barriers (for \(s=0,\pm 1,\pm 2\) with \(a/M=0.7\)) as defined by \(B_{\rm SN}^{\rm ref}/B_{\rm SN}^{\rm inc}\).
This ratio compares the wave amplitude \(B_{\rm SN}^{\rm ref}\) that is reflected off the potential barrier when a wave with an asymptotic amplitude \(B_{\rm SN}^{\rm inc}\) is approaching the barrier from infinity. We see from Fig. 6 that the reflectivities approach zero as the wave frequency becomes large (while we only show the \(a/M=0.7\) case, the same is true for other values of \(a/M\) as well). A low reflectivity means that the ratio of the left- and the right-going mode is going to be extreme. Explicitly for the case in Fig. 6, the right-going mode has an amplitude \(|B_{\rm SN}^{\rm ref}|\) that is much smaller than the left-going mode \(|B_{\rm SN}^{\rm inc}|\) when \(M\omega\gtrsim 1\). The lack of beating in Fig. 4b is a manifestation of this fact.

Figure 4: GSN IN solution of the \(s=-2,\ell=2,m=2\) mode of a BH with \(a/M=0.7\) and two different values of \(\omega\) (upper panel: \(M\omega=0.5\); lower panel: \(M\omega=1\)), in terms of the GSN function \(X\) and the complex frequency function \(d\Phi/dr_{*}\).

Fig. 7 is similar to Fig. 4 but shows the UP solution instead. Recall that for an UP solution, it is purely out-going at infinity. Again, we see from the figure that for both \(M\omega=0.5\) and \(M\omega=1\) (upper and lower panel respectively), \(d\hat{\Phi}/dr_{*}\) is flat and approaches the imposed asymptotic value \(\omega^{2}\) as \(r_{*}\rightarrow\infty\), while \(\hat{X}\) is oscillating with the frequency \(\omega\). Similar to the IN solutions shown in Fig. 4, since an UP solution is an admixture of the left- and the right-going modes near the horizon, depending on their relative amplitude \(C_{\rm SN}^{\rm ref}/C_{\rm SN}^{\rm inc}\), both \(\hat{X}\) and \(d\hat{\Phi}/dr_{*}\) can be oscillatory near the horizon as shown in Fig. 7a. When the frequency \(\omega\) is sufficiently high, the beating in \(d\hat{\Phi}/dr_{*}\) is suppressed while \(\hat{X}\) remains oscillatory as shown in Fig. 7b.
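The beating behavior and its suppression can be reproduced with a minimal toy model (our own illustration in Python, not the paper's code): for \(X=B^{\text{ref}}e^{i\omega r_{*}}+B^{\text{inc}}e^{-i\omega r_{*}}\), the log-derivative \(X^{\prime}/X\), a stand-in for the complex frequency function, oscillates at \(2\omega\) when the two amplitudes are comparable, and flattens when their ratio is extreme:

```python
import numpy as np

# Toy illustration of beating: for X = B_ref e^{i w r} + B_inc e^{-i w r},
# the log-derivative u = X'/X oscillates with frequency 2*omega when
# |B_ref| ~ |B_inc|, and is nearly constant when one mode dominates.
omega = 0.5
r = np.linspace(0.0, 200.0, 20001)

def log_derivative(B_ref, B_inc):
    X  = B_ref*np.exp(1j*omega*r) + B_inc*np.exp(-1j*omega*r)
    dX = 1j*omega*(B_ref*np.exp(1j*omega*r) - B_inc*np.exp(-1j*omega*r))
    return dX / X

u_mixed   = log_derivative(1.0, 0.8)    # comparable amplitudes -> strong beating
u_extreme = log_derivative(1.0, 1e-3)   # extreme ratio -> beating suppressed

# Peak-to-peak variation of Im(u) quantifies the beating amplitude.
beat_mixed   = np.ptp(u_mixed.imag)
beat_extreme = np.ptp(u_extreme.imag)
```

The contrast between `beat_mixed` and `beat_extreme` is the toy analogue of the difference between the two panels of Fig. 4.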
#### III.3.2 Numerical accuracy As numerical solutions are only approximations to the true solutions, it is necessary to verify their accuracies. First, we need to show that the initial conditions \(\hat{X}\) and \(d\hat{X}/dr_{*}\) that we use are sufficiently accurate such that when solving for \(\hat{X}^{\rm in,up}\) the corresponding asymptotic boundary forms are satisfied. Next, we need to show that the numerical solutions actually satisfy the GSN equation inside the integration domain. In both cases, we can evaluate the residual \(\varepsilon\), which is defined as \[\varepsilon=\left|\frac{d^{2}\hat{X}}{dr_{*}^{2}}-\mathcal{F}(r)\frac{d\hat{X} }{dr_{*}}-\mathcal{U}(r)\hat{X}\right|, \tag{69}\] where a smaller value (ideally zero) means a better agreement of a numerical solution \(\hat{X}\) with the GSN equation. Fig. 8 shows the residual \(\varepsilon\) of the ansatzes, \(f_{\pm}^{\infty}\) near infinity (upper panel) and \(g_{\pm}^{\rm H}\) near the horizon (lower panel), as functions of \(r_{*}\). For both panels, solid lines correspond to the out-going ansatzes and dashed lines correspond to the in-going ansatzes truncated to different orders \(N=0,1,2,3\), i.e. keeping the first \(N+1\) terms in Eq. (53) and Eq. (54) respectively. Recall that for all the numerical results we have shown previously, we set the numerical outer boundary \(r_{*}^{\rm out}=1000M\) and truncate \(f_{\pm}^{\infty}\) at \(N=3\) (i.e. including the first four terms). From Fig. 8a we see that this corresponds to \(\varepsilon\approx 10^{-13}\). As expected, for a fixed \(r_{*}\gg 1\), the residual \(\varepsilon\) decreases as one keeps more terms (i.e. higher \(N\)) in the summation in Eq. (53). Alternatively, for a fixed \(N\), the residual \(\varepsilon\) goes down as one places the numerical outer boundary \(r_{*}^{\rm out}\) further away from the BH.
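The trend of the residual falling as more terms of the asymptotic series are kept can be mimicked with a toy long-range potential (our own construction in Python, not Eq. (53) itself): for \(U(r)=-\omega^{2}+\ell(\ell+1)/r^{2}\) with \(\ell=2\), the out-going series \(e^{i\omega r}\sum_{j}a_{j}r^{-j}\) terminates after three terms, so each extra term drops the residual of the Eq.-(69) type (with \(\mathcal{F}=0\)) by another power of \(1/r\):

```python
import numpy as np

# Toy analogue of Fig. 8: residual eps = |X'' - U X| of a truncated out-going
# ansatz X = e^{i w r} sum_k a_k r^{-k}, for U(r) = -w^2 + l(l+1)/r^2.
# For l = 2 the exact coefficients are a = [1, 3i/w, -3/w^2] (from the
# spherical Hankel function r*h_2^{(1)}(w r)), so the N = 2 truncation is exact.
def residual(r, omega, ell, coeffs):
    e  = np.exp(1j * omega * r)
    X  = e * sum(a * r**(-k) for k, a in enumerate(coeffs))
    # Exact second derivative of each e^{i w r} r^{-k} term:
    X2 = e * sum(a * (-omega**2 * r**(-k)
                      - 2j*omega*k * r**(-k-1)
                      + k*(k+1) * r**(-k-2)) for k, a in enumerate(coeffs))
    U = -omega**2 + ell*(ell + 1) / r**2
    return abs(X2 - U * X)

omega, ell, r = 1.0, 2, 50.0
a_exact = [1.0, 3j/omega, -3.0/omega**2]
eps = [residual(r, omega, ell, a_exact[:n+1]) for n in range(3)]  # N = 0, 1, 2
```

Here `eps` decreases with the truncation order `N`, and the final truncation is exact up to floating-point rounding, mirroring the plateaus at machine precision in Fig. 8.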
As for the numerical inner boundary \(r_{*}^{\rm in}\), recall that we set \(r_{*}^{\rm in}=-50M\) and truncate \(g_{\pm}^{\rm H}\) such that only the leading term is kept (i.e. \(N=0\)). From Fig. 8b we see that this corresponds to \(\varepsilon\approx 10^{-10}\). Similar to \(f_{\pm}^{\infty}\), the residual decreases with a higher \(N\) in the summation of Eq. (54) for a fixed \(r_{*}\) until the precision of a double-precision floating-point number (around \(10^{-15}\)) is reached and \(\varepsilon\) plateaus. Again, for a fixed \(N\), as one sets the inner boundary closer to the horizon, the residual drops until around \(10^{-15}\).

Figure 5: First derivative of the numerical solutions to the complex frequency function in Fig. 4 (i.e. \(d/dr_{*}(d\hat{\Phi}/dr_{*})\)), computed using AD, as indicators of how much the numerical solutions are changed locally as functions of \(r_{*}\) (upper panel: \(M\omega=0.5\); lower panel: \(M\omega=1\)).

Figure 6: Reflectivity \(B_{\rm SN}^{\rm ref}/B_{\rm SN}^{\rm inc}\) of a Kerr BH potential barrier in the GSN formalism. We see that for all the spin weights \(s\) considered in this paper, the corresponding potential barriers are very permeable to high-frequency (\(M\omega\gtrsim 1\)) waves, meaning that the potentials will not reflect the incident waves and instead allow them to pass right through. In this figure, the BH angular momentum was set to \(a/M=0.7\) but the same is true for other values of \(a/M\) as well.

Fig. 9 shows the residual \(\varepsilon\) for the numerical GSN UP solutions in Fig. 7 (with \(s=-2,\ell=2,m=2\), and \(a/M=0.7\)), for both \(M\omega=0.5\) and \(M\omega=1\). We see that the residuals are indeed very small, and stay roughly at \(\varepsilon\approx 10^{-12}\), which is the absolute and relative tolerance given to the ODE solver. As for the numerical GSN IN solutions, the residuals are similar to those for the UP solutions. The scaled Wronskian \(\mathcal{W}_{X}\) (c.f. Eq.
(39)) can be used as a sanity check. Using again the numerical solutions in Fig. 7 for the UP solution and Fig. 4 for the IN solution with \(M\omega=0.5\) and \(M\omega=1\), we evaluate the magnitude of the complex scaled Wronskian \(|\mathcal{W}_{X}|\), which should be constant, at four different values of \(r_{*}/M=-50,0,50,1000\) respectively. The scaled Wronskian can also be computed using the asymptotic amplitudes at infinity (c.f. Eq. (40)) and at the horizon (c.f. Eq. (41)) respectively. The values are tabulated in Tab. 1. We see that the scaled Wronskians computed from the numerical solutions for the two values of \(M\omega\) are indeed constant, at least up to the eleventh digit, across the integration domain \(r_{*}\in[-50M,1000M]\). This means that our method for solving GSN functions is numerically stable. The agreement of the scaled Wronskian evaluated at different locations in the integration domain and that evaluated using the asymptotic amplitudes at both boundaries also implies that our procedure of extracting incidence and reflection amplitudes from numerical solutions works.

Figure 8: Residual \(\varepsilon\) of the ansatzes (c.f. Eq. (52)) \(f_{\pm}^{\infty}\) (upper panel) and \(g_{\pm}^{\mathrm{H}}\) (lower panel) that we use in evaluating the initial conditions when solving for \(X^{\mathrm{in,up}}\) and extracting the incidence and reflection amplitudes from the numerical solutions. In particular, we set \(s=-2,\ell=2,m=2,a/M=0.7\) for the purpose of demonstration. For both plots, solid lines correspond to the _out-going_ ansatzes and dashed lines correspond to the _in-going_ ansatzes truncated to different orders \(N=0,1,2,3\) respectively.

Figure 7: GSN UP solution of the \(s=-2,\ell=2,m=2\) mode of a BH with \(a/M=0.7\) and two different values of \(\omega\) (upper panel: \(M\omega=0.5\); lower panel: \(M\omega=1\)), in terms of the GSN function \(X\) and the complex frequency function \(d\Phi/dr_{*}\).
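The constancy underlying this sanity check holds for any equation of the form \(X^{\prime\prime}=\mathcal{U}X\): the Wronskian \(X_{1}X_{2}^{\prime}-X_{2}X_{1}^{\prime}\) of two solutions is then position-independent (the GSN equation's first-derivative term is why a _scaled_ Wronskian is needed there). A minimal numerical illustration (our own sketch with an arbitrary toy potential, not the GSN potential):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy analogue of the Wronskian sanity check: for two solutions of
# X'' = U(r) X (no first-derivative term), W = X1 X2' - X2 X1' is constant.
omega = 1.0
def rhs(r, y):                       # y = (X, dX/dr)
    U = -omega**2 + 6.0 / r**2       # toy long-range potential (l = 2)
    return [y[1], U * y[0]]

span = (1.0, 200.0)
opts = dict(method="DOP853", rtol=1e-12, atol=1e-12, dense_output=True)
s1 = solve_ivp(rhs, span, [1.0, 0.0], **opts)
s2 = solve_ivp(rhs, span, [0.0, 1.0], **opts)

def wronskian(r):
    (x1, dx1), (x2, dx2) = s1.sol(r), s2.sol(r)
    return x1 * dx2 - x2 * dx1

W = [wronskian(r) for r in (1.0, 10.0, 100.0, 200.0)]   # constant (= 1 here)
```

Evaluating `W` at several positions and checking that it stays constant to within the solver tolerance is the same consistency test as reading down a column of Tab. 1.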
#### III.3.3 Comparisons with the Mano-Suzuki-Takasugi method As mentioned in Sec. I, there are other ways of computing homogeneous solutions to the radial Teukolsky equation, one of which is the MST method. Using the MST method, asymptotic amplitudes of Teukolsky functions (i.e. incidence and reflection amplitudes normalized by transmission amplitudes) can be determined accurately, together with the homogeneous solutions themselves. Here we compare our numerical solutions and asymptotic amplitudes using the GSN formalism with those using the MST method. In particular, we use the implementation in the Teukolsky[46] Mathematica package from the Black Hole Perturbation Toolkit [36]. We compute the scaled Wronskian \(\mathcal{W}_{R}\) of the numerical solutions for the \(s=-2,\ell=2,m=2,a/M=0.7\) mode for both \(M\omega=0.5\) and \(M\omega=1\) (the same setup as in Tab. 1), using the MST method. Similar to the case for GSN functions, we can compute \(\mathcal{W}_{R}\) either from the numerical solutions \(R^{\rm in,up}\) using Eq. (31), or from the asymptotic amplitudes using Eq. (32) or Eq. (33), and they should agree. In addition, the values for \(\mathcal{W}_{R}\) should be the same as \(\mathcal{W}_{X}\).16 The results are tabulated in Tab. 2. We see that the numbers shown in Tab. 1, which were computed using the GSN formalism, agree with the numbers in Tab. 2 at least up to the eleventh digit, testifying to the numerical accuracy and correctness of the solutions and the asymptotic amplitudes computed using GeneralizedSasakiNakamura.jl. It should also be remarked that the implementation of the MST method in the Teukolsky package seems to struggle either very close (e.g. \(r_{*}=-50M\)) or very far away (e.g.
\(r_{*}=1000M\)) from the BH, and in general the MST method struggles more as \(M\omega\) becomes larger17 while the GSN formalism becomes more efficient instead.18 Footnote 16: Note that the Teukolsky package uses a normalization convention that \(B_{\rm T}^{\rm trans}=C_{\rm T}^{\rm trans}=1\), which is different from our GeneralizedSasakiNakamura.jl implementation. To account for the difference in the normalization convention, a factor of \(\left(C_{\rm T}^{\rm trans}/C_{\rm SN}^{\rm trans}\right)\left(B_{\rm T}^{\rm trans}/B_{\rm SN}^{\rm trans}\right)\) is multiplied to \(\mathcal{W}_{R}\) computed from the Teukolsky code. Footnote 17: We performed the same set of calculations in Sec. III.3 using another MST-based Fortran code described in Ref. [47] that uses machine-precision numbers. The same conclusion is reached. Footnote 18: More concretely, the authors of Ref. [30] gave explicit examples (\(s=-2,\ell=2,a/M=0,M\omega>5\)) where they found their MST code struggled to compute, while the GSN formalism, for example using our code, can handle these cases with ease. ## IV Conclusion and Future Work In this paper, we have revamped the Generalized Sasaki-Nakamura (GSN) formalism for computing homogeneous solutions to both the GSN equation and the radial Teukolsky equation for scalar, electromagnetic and gravitational perturbations. Specifically, we have provided explicit expressions for the transformations between the Teukolsky formalism and the GSN formalism.
\begin{table} \begin{tabular}{r r r} \(r_{*}/M\) & \(M\omega=0.5\) & \(M\omega=1\) \\ \(-\infty\) & \(0.06686918718(132409)\) & \(0.09801150092(211632)\) \\ \(-50\) & \(0.06686918718(132406)\) & \(0.09801150092(220787)\) \\ 0 & \(0.06686918718(135844)\) & \(0.09801150092(220655)\) \\ 50 & \(0.06686918718(1732757)\) & \(0.09801150092(220637)\) \\ 1000 & \(0.06686918718(173902)\) & \(0.09801150092(220587)\) \\ \(\infty\) & \(0.06686918718(244163)\) & \(0.09801150092(220785)\) \\ \end{tabular} \end{table} Table 1: Magnitude of the (complex) scaled Wronskian \(|\mathcal{W}_{X}|\) of two frequencies, \(M\omega=0.5\) and \(M\omega=1\), evaluated at four different positions, \(r_{*}/M=-50,0,50,1000\) respectively, and evaluated using the asymptotic amplitudes, using the numerical solutions shown in Fig. 4 and Fig. 7. Recall that both the absolute tolerance and the relative tolerance passed to the numerical ODE solver are set to \(10^{-12}\). \begin{table} \begin{tabular}{r r r} \(r_{*}/M\) & \(M\omega=0.5\) & \(M\omega=1\) \\ \(-\infty\) & \(0.06686918718(132406)\) & \(0.09801150092(211632)\) \\ \(-50\) & \(0.06686918718(132406)\) & \(0.09801150092(220787)\) \\ 0 & \(0.06686918718(135844)\) & \(0.09801150092(220655)\) \\ 50 & \(0.06686918718(1732757)\) & \(0.09801150092(220637)\) \\ 1000 & \(0.06686918718(173902)\) & \(0.09801150092(220587)\) \\ \(\infty\) & \(0.06686918718(244163)\) & \(0.09801150092(220785)\) \\ \end{tabular} \end{table} Table 2: Magnitude of the (complex) scaled Wronskian \(|\mathcal{W}_{R}|\) of two frequencies, \(M\omega=0.5\) and \(M\omega=1\), evaluated at four different positions, \(r_{*}/M=-50,0,50,1000\) respectively, and evaluated using the asymptotic amplitudes, with the MST method implemented in the Teukolsky code. Note that in the computations we use the arbitrary-precision arithmetic in Mathematica (specifically 64-digit accurate). Digits beyond the eleventh digit are shown in brackets and truncated to the seventeenth digit to match Tab. 1. The computations at \(r_{*}=-50M\) for both cases were aborted after running for an hour. Figure 9: Residual \(\varepsilon\) of the numerical GSN UP solutions shown in Fig.
7 for \(M\omega=0.5\) and \(M\omega=1\) respectively. Recall that both the absolute tolerance and the relative tolerance passed to the numerical ODE solver are set to \(10^{-12}\). We have also derived expressions for higher-order corrections to asymptotic solutions of the GSN equation, as well as frequency-dependent conversion factors between asymptotic solutions in the Teukolsky and the GSN formalism. Both are essential for using the GSN formalism to perform numerical work. We have also described an open-source implementation of the now-complete GSN formalism for solving homogeneous solutions, where the implementation re-formulated the GSN equation further into a Riccati equation so as to gain extra efficiency at high frequencies. In the following we discuss two potential applications of the GSN formalism in BH perturbation theory, namely as an efficient procedure for computing gravitational radiation from BHs, and as an alternative method for QNM determination. ### An efficient procedure for computing gravitational radiation from Kerr black holes As we have demonstrated in Sec. III.3, the GSN formalism is capable of producing accurate and stable numerical solutions to the homogeneous GSN equation, which can then be converted to numerical Teukolsky functions, across a wide range of \(r_{*}/M\), whereas the MST method tends to struggle when \(r_{*}/M\ll 1\) and \(r_{*}/M\gg 1\), as shown in Sec. III.3.3. While we have only shown the numerical results for \(M\omega=0.5\) and \(M\omega=1\) explicitly, it is reasonable to expect the formalism to also work for other frequencies, if not even better at high frequencies, where we gain extra efficiency by further transforming a GSN function \(X(r_{*})\) into a complex frequency function \(d\Phi/dr_{*}\), while the MST method requires a much higher working precision for computation. This can occur, for example, when computing a higher harmonic of an extreme mass-ratio inspiral (EMRI) waveform.
For a generic orbit, the harmonic has a frequency \(\omega\) given by [48] \[\omega=m\Omega_{\phi}+k\Omega_{\theta}+n\Omega_{r}, \tag{70}\] where \(\Omega_{\phi},\Omega_{\theta},\Omega_{r}\) are the fundamental orbital frequencies for the \(\phi\)-, \(\theta\)- and \(r\)-motion respectively. Indeed, we see from Sec. III.3.1 that in some regions of the parameter space, it is more efficient to solve for the complex frequency function \(d\Phi/dr_{*}\) than to solve for the GSN function \(X\) itself. There are, however, cases where the reverse is true instead, especially at a lower wave frequency when the BH potential barrier is less transmissive, since it is numerically more efficient (requiring fewer nodes) to track a less oscillatory function than a more oscillatory function (c.f. Fig. 4). This means that a better numerical scheme solving for \(X^{\rm in,up}\) (and by extension \(R^{\rm in,up}\)) can be formulated by first solving the first-order non-linear ODE for \(d\Phi/dr_{*}\), and then "intelligently" switching to solving the second-order linear ODE for \(X\) when it is more efficient, for example, when \(d/dr_{*}(d\hat{\Phi}/dr_{*})\) is above some pre-defined threshold. This hybrid approach is similar in spirit to some of the state-of-the-art solvers for oscillatory second order linear ODEs [49].19 Footnote 19: As mentioned in both Ref. [44] and Ref. [49], pseudo-spectral methods can be adopted instead of finite-difference methods (like the Vern9 algorithm that this paper uses) to achieve exponential convergence. We leave this as a future improvement to this work. While the GSN formalism is a great alternative to the MST method for computing homogeneous solutions (i.e. \(T=0\)) to the radial Teukolsky equation, the real strength of the GSN formalism is the ability to also compute inhomogeneous solutions (i.e. \(T\neq 0\)).
Given an extended Teukolsky source term, such as that of a test particle plunging from infinity, the convolution integral with the Teukolsky functions can be divergent when using the Green's function method to compute the inhomogeneous solution and regularization of the integral is needed [50; 51]. In Ref. [33], Sasaki and Nakamura worked out a formalism, built upon their SN transformation, to compute the inhomogeneous solution for \(s=-2\) where the new source term, constructed from the Teukolsky source term, is short-ranged such that the convolution integral with the SN functions is convergent when using the Green's function method. In a forthcoming paper, we show that their construction can also be extended to work for \(s=2\), and the corresponding GSN transformation, in a similar fashion, serves as the foundation of the method. This will be important for studying near-horizon physics [52; 53; 54], such as computing gravitational radiation from a point particle plunging towards a BH as observed near the horizon, where the polarization contents are encoded in \(\psi_{0}\) (with \(s=2\)) instead of \(\psi_{4}\) (with \(s=-2\)). In particular, the Teukolsky-Starobinsky identities [55; 23] are not valid in this case (since the source term does not vanish near the horizon) and we cannot use them to convert the asymptotic amplitude for \(\psi_{4}\) to that for \(\psi_{0}\).20 Footnote 20: Note that it is still possible to compute the asymptotic amplitude for \(\psi_{0}\) using the Green's function method constructed from the Teukolsky functions, but regularization is needed as the convolution integral is again divergent [56]. ### An alternative method for quasi-normal mode determination The re-formulation of a Schrödinger-like equation into a Riccati equation introduced in Sec. III.1.1 is not new and had actually been used previously, for instance, in the seminal work by Chandrasekhar and Detweiler on QNMs of Schwarzschild BHs [57]. It was used (c.f. Eq.
(5) of Ref. [57]) to alleviate the numerical instability associated with directly integrating the Zerilli equation, and equivalently also the Regge-Wheeler equation to which the GSN equation reduces in the non-spinning limit. Therefore, it is reasonable to expect the re-formulation to be useful for determining QNM frequencies and their associated radial solutions. Recall that a QNM solution is both purely in-going at the horizon and purely out-going at infinity. In terms of the asymptotic amplitudes of the corresponding Teukolsky function (c.f. Eq. (29) and Eq. (30)) at a particular frequency \(\omega_{\rm QNM}\), we have \[\begin{split}& B_{\rm T}^{\rm inc}(\omega_{\rm QNM})=C_{\rm T}^{ \rm inc}(\omega_{\rm QNM})=0\\ &\Rightarrow\mathcal{W}_{R}(\omega_{\rm QNM})=0,\end{split} \tag{71}\] where the second line uses Eq. (32). This means that searching for QNM frequencies is the same as searching for zeros of \(\mathcal{W}_{R}\), the scaled Wronskian for Teukolsky functions. Also recall that in App. C, we proved that the scaled Wronskian for Teukolsky functions \(\mathcal{W}_{R}\) and that for the corresponding GSN functions \(\mathcal{W}_{X}\) are the same, implying that the QNM spectra for Teukolsky functions coincide with the QNM spectra for GSN functions.21 Thus, we can use the GSN equation, which has a short-ranged potential, instead of the Teukolsky equation for determining the QNM frequencies and the corresponding excitation factors (after applying the conversion factors shown in App. E). Footnote 21: The two equations, the radial Teukolsky equation and the GSN equation, are therefore said to be iso-spectral. Indeed, Glampedakis and Andersson proposed methods to calculate QNM frequencies and excitation factors given a short-ranged potential [58], alternative to Leaver's method [59].
They demonstrated their methods by computing a few of the QNM frequencies for scalar perturbations (\(s=0\)) and gravitational perturbations (\(|s|=2\)), as well as the QNM excitation factors for scalar perturbations of Kerr BHs. Together with the GSN transformations and the asymptotic solutions from this paper, it is straightforward to compute the QNM frequencies and their excitation factors for scalar, electromagnetic, and gravitational perturbations using the GSN formalism.22 We leave this for future work. Footnote 22: The excitation factors for gravitational perturbations of Kerr BHs have been calculated using a different method [60; 61; 62], by explicitly computing the gravitational waveform from an infalling test particle and then extracting the amplitudes for each of the excited QNMs. ###### Acknowledgements. The author would like to thank Yanbei Chen, Manu Srivastava, Shuo Xin, Emanuele Berti, Scott Hughes, Aaron Johnson, Jonathan Thompson and Alan Weinstein for the valuable discussions and insights when preparing this work. The author would like to especially thank Manu Srivastava for a read of an early draft of this manuscript, and Shuo Xin for performing the scaled Wronskian calculations using the Fortran code in Ref. [47]. R. K. L. L. acknowledges support from the National Science Foundation Awards No. PHY-1912594 and No. PHY-2207758. ## Appendix A Angular Teukolsky equation After performing the separation of variables to the Teukolsky equation in Eq. (2) using an ansatz of the form \(\psi(t,r,\theta,\phi)=R(r)S(\theta,\phi)e^{-i\omega t}\), the equation is separated into two parts: the angular part and the radial part. In this appendix, we focus only on solving the angular part (aptly named the angular Teukolsky equation) numerically, and the radial part is treated in the main text. 
Let us define \({}_{s}S_{\ell m\omega}(\theta,\phi)\equiv{}_{s}S_{\ell m}(x\equiv\cos\theta;c \equiv a\omega)e^{im\phi}\), where the integer \(m\) labels the (trivial) eigenfunctions that satisfy the azimuthal symmetry. The angular Teukolsky equation then reads \[\begin{split}&\frac{d}{dx}\left[(1-x^{2})\frac{d}{dx}{}_{s}S_{ \ell m}(x;c)\right]+\\ &\left[(cx)^{2}-2csx+s+{}_{s}\mathcal{A}_{\ell m}(c)-\frac{(m+ sx)^{2}}{1-x^{2}}\right]{}_{s}S_{\ell m}(x;c)=0,\end{split} \tag{72}\] where \({}_{s}\mathcal{A}_{\ell m}\) is the angular separation constant and it is related to \(\lambda\) (c.f. Eq. (4)) by \[\lambda={}_{s}\mathcal{A}_{\ell m}+c^{2}-2mc. \tag{73}\] The angular Teukolsky equation is solved under the boundary conditions that the solutions at \(x=\pm 1\) (or equivalently at \(\theta=0,\pi\)) are finite, and the solutions are also known as the spin-weighted spheroidal harmonics, denoted by \({}_{s}S_{\ell m\omega}(\theta,\phi)\). There are multiple methods for solving the angular Teukolsky equation numerically, such as Leaver's continued fraction method [59]. A spectral decomposition method for solving the angular Teukolsky equation can be formulated [63; 64] by writing a spin-weighted spheroidal harmonic \({}_{s}S_{\ell m\omega}(\theta,\phi)\) as a sum of spin-weighted _spherical_ harmonics \({}_{s}Y_{\ell m}(\theta,\phi)\). The details for such a formulation can be found in, for example, Ref. [63] and Ref. [64]. We briefly summarize the method here, mostly following and using the notations in Ref. [64], for the sake of completeness.
### Spectral decomposition method

A spin-weighted spheroidal harmonic \({}_{s}S_{\ell m}(x;c)\) is expanded using spin-weighted spherical harmonics \({}_{s}Y_{\ell m}(\theta)\), or equivalently \({}_{s}S_{\ell m}(x;0)\), as [64]

\[\begin{split}{}_{s}S_{\ell m}(x;c)&=\sum_{\ell^{\prime}=\ell_{\rm min}}^{\infty}{}_{s}C_{\ell^{\prime}\ell m}(c)\;{}_{s}S_{\ell^{\prime}m}(x;0)\\ &=\left(\vec{C}_{\ell}\right)^{T}\vec{S}_{\ell},\end{split} \tag{74}\]

where \(\ell_{\rm min}=\max(|m|,|s|)\) and \({}_{s}C_{\ell^{\prime}\ell m}(c)\) is the expansion coefficient of the \(\ell\)-th spheroidal harmonic with the \(\ell^{\prime}\)-th spherical harmonic (of the same values of \(s\) and \(m\), which we drop in the subscripts hereafter), as a function of \(c\equiv a\omega\). Equivalently, we can define two column vectors \(\vec{C}_{\ell}\) and \(\vec{S}_{\ell}\), where the rows are labelled by the index \(\ell^{\prime}\). For example, the first rows of the vectors (of index \(\ell^{\prime}=\ell_{\rm min}\)) are \({}_{s}C_{\ell_{\rm min}\ell m}\) and \({}_{s}S_{\ell_{\rm min}m}(x;0)\) respectively. The index for the rows goes up to \(\ell^{\prime}=\ell_{\rm max}\rightarrow\infty\), and the vectors have a size of \(\ell_{\rm max}-\ell_{\rm min}+1\). Then the spin-weighted spheroidal harmonic \({}_{s}S_{\ell m}(x;c)\) is the dot product of the two vectors. Substituting Eq. (74) into Eq. (72), we get an eigenvalue equation [64]

\[\mathbb{M}\vec{C}_{\ell}=\mathcal{A}_{\ell}\vec{C}_{\ell}, \tag{19}\]

where \(\mathbb{M}\) is a \((\ell_{\rm max}-\ell_{\rm min}+1)\times(\ell_{\rm max}-\ell_{\rm min}+1)\) matrix, and recall that \(\mathcal{A}_{\ell}\equiv{}_{s}\mathcal{A}_{\ell m}(c\equiv a\omega)\) is the angular separation constant (after writing back all the subscripts).
The matrix elements \(\mathbb{M}_{\ell\ell^{\prime}}\) are given by [64] \[\mathbb{M}_{\ell\ell^{\prime}}=\begin{cases}-c^{2}\mathbb{A}_{\ell^{\prime}m} &\text{if}\ \ \ell^{\prime}=\ell-2,\\ -c^{2}\mathbb{D}_{\ell^{\prime}m}+2cs\mathbb{F}_{\ell^{\prime}m}&\text{if}\ \ \ell^{\prime}=\ell-1,\\ \mathcal{A}_{\ell^{\prime}}(0)-c^{2}\mathbb{B}_{\ell^{\prime}m}+2cs\mathbb{H }_{\ell^{\prime}m}&\text{if}\ \ \ell^{\prime}=\ell,\\ -c^{2}\mathbb{E}_{\ell^{\prime}m}+2cs\mathbb{G}_{\ell^{\prime}m}&\text{if}\ \ \ell^{ \prime}=\ell+1,\\ -c^{2}\mathbb{C}_{\ell^{\prime}m}&\text{if}\ \ \ell^{\prime}=\ell+2,\\ 0&\text{otherwise}.\end{cases}, \tag{20}\] where \[\mathbb{A}_{\ell m} = \mathbb{F}_{\ell m}\mathbb{F}_{(\ell+1)m}, \tag{21a}\] \[\mathbb{B}_{\ell m} = \mathbb{F}_{\ell m}\mathbb{G}_{(\ell+1)m}+\mathbb{G}_{\ell m} \mathbb{F}_{(\ell-1)m}+\mathbb{H}_{\ell m}^{2},\] (21b) \[\mathbb{C}_{\ell m} = \mathbb{G}_{\ell m}\mathbb{G}_{(\ell-1)m},\] (21c) \[\mathbb{D}_{\ell m} = \mathbb{F}_{\ell m}\mathbb{H}_{(\ell+1)m}+\mathbb{F}_{\ell m} \mathbb{H}_{\ell m},\] (21d) \[\mathbb{E}_{\ell m} = \mathbb{G}_{\ell m}\mathbb{H}_{(\ell-1)m}+\mathbb{G}_{\ell m} \mathbb{H}_{\ell m},\] (21e) \[\mathbb{F}_{\ell m} = \sqrt{\frac{(\ell+1)^{2}-m^{2}}{(2\ell+3)(2\ell+1)}\,\frac{(\ell +1)^{2}-s^{2}}{(\ell+1)^{2}}},\] (21f) \[\mathbb{G}_{\ell m} = \begin{cases}\sqrt{\frac{\ell^{2}-m^{2}}{4\ell^{2}-1}\frac{\ell^{ 2}-s^{2}}{\ell^{2}}}&\text{if}\ \ \ell\neq 0\\ 0&\text{if}\ \ \ell=0\end{cases},\] (21g) \[\mathbb{H}_{\ell m} = \begin{cases}-\frac{ms}{\ell(\ell+1)}&\text{if}\ \ \ell\neq 0\ \text{ and}\ \ s\neq 0\\ 0&\text{if}\ \ \ell=0\ \ \text{or}\ \ s=0\end{cases},\] (21h) \[\mathcal{A}_{\ell}(0) = \ell(\ell+1)-s(s+1). \tag{21i}\] Solving the angular Teukolsky equation now amounts to solving the eigenvalue problem in Eq. (19) for the eigenvalue \(\mathcal{A}_{\ell}\) and the eigenvector \(\vec{C}_{\ell}\). 
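For illustration, the truncated eigenvalue problem above can be sketched in a few lines of NumPy (the reference implementation accompanying this paper is in Julia; the sketch below, with our own function names and a default truncation size, is only a minimal illustration of the method):

```python
import numpy as np

# Coupling coefficients from Eq. (21); guards return 0 in the degenerate cases.
def F(l, m, s):
    return np.sqrt(((l + 1)**2 - m**2) / ((2*l + 3) * (2*l + 1))
                   * ((l + 1)**2 - s**2) / (l + 1)**2)

def G(l, m, s):
    return 0.0 if l == 0 else np.sqrt((l**2 - m**2) / (4*l**2 - 1) * (l**2 - s**2) / l**2)

def H(l, m, s):
    return 0.0 if (l == 0 or s == 0) else -m * s / (l * (l + 1))

def A(l, m, s): return F(l, m, s) * F(l + 1, m, s)
def B(l, m, s):
    out = F(l, m, s) * G(l + 1, m, s) + H(l, m, s)**2
    if G(l, m, s) != 0.0:  # G vanishes at l = l_min, so F(l-1) is never needed there
        out += G(l, m, s) * F(l - 1, m, s)
    return out
def C(l, m, s): return 0.0 if G(l, m, s) == 0.0 else G(l, m, s) * G(l - 1, m, s)
def D(l, m, s): return F(l, m, s) * (H(l + 1, m, s) + H(l, m, s))
def E(l, m, s): return 0.0 if G(l, m, s) == 0.0 else G(l, m, s) * (H(l - 1, m, s) + H(l, m, s))

def angular_sep_consts(s, m, c, nl=11):
    """Eigenvalues sA_{lm}(c) of the truncated matrix in Eq. (20), sorted by real part."""
    lmin = max(abs(m), abs(s))
    ls = range(lmin, lmin + nl)
    M = np.zeros((nl, nl), dtype=complex)
    for i, l in enumerate(ls):
        for j, lp in enumerate(ls):
            if lp == l - 2:
                M[i, j] = -c**2 * A(lp, m, s)
            elif lp == l - 1:
                M[i, j] = -c**2 * D(lp, m, s) + 2*c*s*F(lp, m, s)
            elif lp == l:
                M[i, j] = (lp*(lp + 1) - s*(s + 1)
                           - c**2 * B(lp, m, s) + 2*c*s*H(lp, m, s))
            elif lp == l + 1:
                M[i, j] = -c**2 * E(lp, m, s) + 2*c*s*G(lp, m, s)
            elif lp == l + 2:
                M[i, j] = -c**2 * C(lp, m, s)
    evs = np.linalg.eigvals(M)
    return evs[np.argsort(evs.real)]
```

At \(c=0\) the matrix is diagonal and the eigenvalues reduce to \(\mathcal{A}_{\ell}(0)=\ell(\ell+1)-s(s+1)\), which provides a basic sanity check of such an implementation.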
The spin-weighted spheroidal harmonic can then be constructed using the eigenvector \(\vec{C}_{\ell}\) and the corresponding spin-weighted spherical harmonics with Eq. (74). In practice, we cannot solve a matrix eigenvalue problem of infinite size and we truncate the column vector \(\vec{C}_{\ell}\) to have a finite value of \(\ell_{\rm max}\). The accuracy of the numerical eigenvalue and eigenvector solution depends on the size of the truncated matrix.

SpinWeightedSpheroidalHarmonics.jl23 is our open-source implementation of the abovementioned spectral decomposition method for solving spin-weighted spheroidal harmonics in Julia. The code solves the truncated24 version of Eq. (19) to obtain the angular separation constant \({}_{s}\mathcal{A}_{\ell m}\) and the eigenvector \({}_{s}\vec{C}_{\ell m}\). Apart from the angular separation constant, the code can also compute the separation constant \(\lambda\) (c.f. Eq. (4)), and evaluate numerical values of spin-weighted spheroidal harmonics and their derivatives.25 In particular, the code adopts the normalization convention for \({}_{s}S_{\ell m\omega}(\theta,\phi)\) that

Footnote 23: [https://github.com/ricokaloklo/SpinWeightedSpheroidalHarmonics.jl](https://github.com/ricokaloklo/SpinWeightedSpheroidalHarmonics.jl)

Footnote 24: By default the truncated matrix \(\mathbb{M}\) is \(10\times 10\), but the size is adjustable by setting a different \(\ell_{\rm max}\) if a higher accuracy or a faster run time is needed.

Footnote 25: It should be noted that our code is also capable of handling complex \(\omega\), which is necessary for carrying out quasi-normal mode related computations.

\[\int_{0}^{\pi}\left[{}_{s}S_{\ell m}(\theta;c)\right]^{2}\sin(\theta)\,d\theta=\frac{1}{2\pi}.
\tag{22}\]

To evaluate numerical values of the spin-weighted spheroidal harmonics \({}_{s}S_{\ell m\omega}(\theta,\phi)\) and their derivatives, it is necessary to also be able to numerically (and possibly efficiently) evaluate the spin-weighted _spherical_ harmonics \({}_{s}Y_{\ell m}(\theta,\phi)\).

### Evaluation of \({}_{s}S_{\ell m\omega}(\theta,\phi)\)

Recall from Eq. (74) that the spin-weighted spheroidal harmonic \({}_{s}S_{\ell m\omega}(\theta,\phi)\) is expanded in terms of the spin-weighted spherical harmonics, i.e.

\[{}_{s}S_{\ell m}(\theta,\phi;a\omega)=\sum_{\ell^{\prime}=\ell_{\rm min}}^{\infty}{}_{s}C_{\ell^{\prime}\ell m}(a\omega)\ {}_{s}Y_{\ell^{\prime}m}(\theta,\phi),\]

and the spectral decomposition method solves for the expansion coefficients \({}_{s}C_{\ell^{\prime}\ell m}(a\omega)\), which is only part of the ingredients. It is possible to evaluate \({}_{s}Y_{\ell m}(\theta,\phi)\) exactly, and the expression is given by [65]

\[\begin{split}{}_{s}Y_{\ell m}(\theta,\phi)=&(-1)^{m}e^{im\phi}\sqrt{\frac{(\ell+m)!(\ell-m)!(2\ell+1)}{4\pi(\ell+s)!(\ell-s)!}}\\ &\times\sum_{r=0}^{\ell-s}\left[{\ell-s\choose r}{\ell+s\choose r+s-m}(-1)^{\ell-r-s}\right.\\ &\qquad\left.\times\cos^{2r+s-m}\left(\frac{\theta}{2}\right)\sin^{2\ell-2r-s+m}\left(\frac{\theta}{2}\right)\right].\end{split}\]

In principle, obtaining the value of a spin-weighted spherical harmonic \({}_{s}Y_{\ell m}(\theta,\phi)\) is as simple as evaluating the sum as shown in Eq. (10). Oftentimes, however, when we solve the eigenvalue problem in Eq.
(19), the index \(\ell\) can be big enough that a direct evaluation of the factorials in the pre-factor of Eq. (10) overflows. Instead, we evaluate the ratio of factorials in the pre-factor as

\[\sqrt{\frac{(\ell+m)!(\ell-m)!}{(\ell+s)!(\ell-s)!}}=\begin{cases}\sqrt{\frac{\left(\ell-m\right)\left(\ell-m-1\right)\ldots\left(\ell-m-(s-m)+1\right)}{\left(\ell+m+(s-m)\right)\left(\ell+m+(s-m)-1\right)\ldots\left(\ell+m+1\right)}}&\text{if }\ s>m,\\ \sqrt{\frac{\left(\ell+s+(m-s)\right)\left(\ell+s+(m-s)-1\right)\ldots\left(\ell+s+1\right)}{\left(\ell-s\right)\left(\ell-s-1\right)\ldots\left(\ell-s-(m-s)+1\right)}}&\text{if }\ s<m,\\ 1&\text{if }\ |s|=|m|,\end{cases} \tag{12}\]

and now evaluations of \({}_{s}S_{\ell m\omega}(\theta,\phi)\) using Eq. (10) are free from overflow.

### Evaluation of \(\partial^{n}_{\theta,\phi}\)\({}_{s}S_{\ell m\omega}(\theta,\phi)\)

In order to evaluate partial derivatives of spin-weighted spheroidal harmonics, \(\partial^{n}_{\theta,\phi}\)\({}_{s}S_{\ell m\omega}(\theta,\phi)\), which are needed for evaluating source terms \(T\) of the Teukolsky equation (c.f. Eq. (2)), we can use the fact that the expansion coefficients \({}_{s}C_{\ell^{\prime}\ell m}(c\equiv a\omega)\) in Eq. (74) are independent of \(\theta\) and \(\phi\). This means that the partial derivatives \(\partial^{n}_{\theta,\phi}\)\({}_{s}S_{\ell m\omega}(\theta,\phi)\) are given by the sum of the partial derivatives of \({}_{s}Y_{\ell m}(\theta,\phi)\) with the same set of expansion coefficients, i.e.

\[\left(\frac{\partial}{\partial\left\{\theta,\phi\right\}}\right)^{n}\,{}_{s}S_{\ell m\omega}(\theta,\phi)=\sum_{\ell^{\prime}=\ell_{\rm min}}^{\infty}\left[{}_{s}C_{\ell^{\prime}\ell m}(a\omega)\,\left(\frac{\partial}{\partial\left\{\theta,\phi\right\}}\right)^{n}{}_{s}Y_{\ell^{\prime}m}(\theta,\phi)\right]. \tag{13}\]

In principle, we can evaluate the partial derivatives using automatic differentiation (AD). However, the evaluation can be more performant by noticing that the exact evaluation of the partial derivative with respect to \(\phi\) is trivial because of the \(e^{im\phi}\) dependence.
Each partial differentiation with respect to \(\phi\) gives a factor of \(im\). As for the partial derivative of a spin-weighted spherical harmonic with respect to \(\theta\), the computation scheme is less trivial. Note that each term in Eq. (10) is of the form \(c_{r}\cos^{\alpha_{r}}(\theta/2)\sin^{\beta_{r}}(\theta/2)\), where \(r\) is the summation index and \(c_{r}\) is the pre-factor, with \(\alpha_{r}\) and \(\beta_{r}\) being the exponents of the \(\cos(\theta/2)\) and \(\sin(\theta/2)\) factors respectively. Each partial differentiation with respect to \(\theta\) splits the term into two terms, one with \((c_{r}/2)\beta_{r}\cos^{\alpha_{r}+1}(\theta/2)\sin^{\beta_{r}-1}(\theta/2)\), and one with \((-c_{r}/2)\alpha_{r}\cos^{\alpha_{r}-1}(\theta/2)\sin^{\beta_{r}+1}(\theta/2)\). We can keep track of the coefficients and the exponents for the cosine and the sine factor with the help of a binary tree. We represent each term in the summation with index \(r\) in Eq. (10) as the root node of a tree (for an illustration, see Fig. 10) with an entry of three numbers \((c_{r},\alpha_{r},\beta_{r})\). Each partial differentiation with respect to \(\theta\) corresponds to adding two child nodes with the entries \((c_{r}\beta_{r}/2,\alpha_{r}+1,\beta_{r}-1)\) and \((-c_{r}\alpha_{r}/2,\alpha_{r}-1,\beta_{r}+1)\) respectively. Therefore, the \(n\)-th order partial derivative with respect to \(\theta\) can be evaluated exactly by traversing all the nodes of depth \(n\) and then summing over their contributions.

## Appendix B Fast inversion from the tortoise coordinate \(r_{*}\) to the Boyer-Lindquist coordinate \(r\)

The tortoise coordinate \(r_{*}\) (for Kerr BHs) is defined by

\[\frac{dr_{*}}{dr}=\frac{r^{2}+a^{2}}{\Delta}=\frac{r^{2}+a^{2}}{\left(r-r_{+}\right)\left(r-r_{-}\right)}. \tag{14}\]

Integrating Eq. (14) generates different tortoise coordinates which differ from each other only by an integration constant.
Here, and in most of the literature, we choose the integration constant such that

\[r_{*}(r)=r+\frac{2r_{+}}{r_{+}-r_{-}}\ln\left(\frac{r-r_{+}}{2}\right)-\frac{2r_{-}}{r_{+}-r_{-}}\ln\left(\frac{r-r_{-}}{2}\right). \tag{15}\]

However, there is no simple analytical expression that gives \(r=r(r_{*})\), and one instead has to numerically invert Eq. (15). Such an inversion scheme that is both fast and accurate is needed for our numerical implementation of the GSN formalism because we numerically solve the GSN equation in the \(r_{*}\)-coordinate instead of the Boyer-Lindquist \(r\)-coordinate, and yet the GSN potentials, which will be evaluated at many different values of \(r_{*}\) during the numerical integration, are written in terms of \(r\). This coordinate inversion is equivalent to a root-finding problem. Given a value of the tortoise coordinate \(r_{*}^{0}\), we solve for \(h^{0}\equiv\left(r^{0}-r_{+}\right)>0\) that satisfies

\[r_{*}^{0}-r_{*}(r_{+}+h^{0})=0, \tag{10}\]

in order to find the corresponding Boyer-Lindquist coordinate \(r^{0}\equiv r_{+}+h^{0}\) that is _outside_ the horizon.26

Footnote 26: A similar construction (i.e. enforcing \(h^{0}<0\)) can be used to find the Boyer-Lindquist coordinate \(r\in(r_{-},r_{+})\) that gives the same \(r_{*}^{0}\).

Fig. 11 shows a plot of \(r\) as a function of \(r_{*}\) for \(a/M=0.7\). As the value of \(r_{*}\) becomes larger, the simple approximation \(r(r_{*})\approx r_{*}\) works better. In fact, the slope \(dr/dr_{*}\to 1\) as \(r_{*}\gg 0\). Therefore, derivative-based methods such as the Newton-Raphson method and secant methods [34] are efficient in performing the coordinate inversion (since we can evaluate the derivatives exactly and cheaply). However, these methods are going to be inefficient for negative values of \(r_{*}\) near the horizon since the slope tends to zero. In our numerical implementation, we use a _hybrid_ of root-finding algorithms.
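Such a hybrid can be sketched as follows (a minimal Python illustration, not the paper's Julia implementation; units of \(M=1\) and \(|a|<1\) are assumed, the function names are ours, and Newton-Raphson is used for \(r_{*}>0\) with bisection otherwise):

```python
import math

def rstar_of_r(r, a):
    """Closed-form r_*(r) for r > r_+, Eq. (15), in units of M = 1."""
    rp, rm = 1 + math.sqrt(1 - a*a), 1 - math.sqrt(1 - a*a)
    return (r + 2*rp/(rp - rm)*math.log((r - rp)/2)
              - 2*rm/(rp - rm)*math.log((r - rm)/2))

def drstar_dr(r, a):
    """dr_*/dr = (r^2 + a^2)/Delta, Eq. (14)."""
    rp, rm = 1 + math.sqrt(1 - a*a), 1 - math.sqrt(1 - a*a)
    return (r*r + a*a)/((r - rp)*(r - rm))

def r_of_rstar(rs, a, tol=1e-12):
    """Invert r_*(r) for the root h = r - r_+ > 0 outside the horizon."""
    rp = 1 + math.sqrt(1 - a*a)
    if rs > 0:
        # Newton-Raphson, efficient here since dr/dr_* -> 1 for r_* >> 0
        h = rs
        for _ in range(100):
            step = (rstar_of_r(rp + h, a) - rs) / drstar_dr(rp + h, a)
            h = h - step if step < h else h/2  # keep the iterate outside the horizon
            if abs(step) < tol:
                break
        return rp + h
    # Bisection for r_* <= 0; h = 1.4 upper-bounds the root for any |a| < 1,
    # and the tiny lower bound assumes r_* is not extremely negative
    lo, hi = 1e-12, 1.4
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if rstar_of_r(rp + mid, a) < rs:
            lo = mid
        else:
            hi = mid
    return rp + 0.5*(lo + hi)
```

A round trip \(r_{*}\to r\to r_{*}\) then recovers the input value of \(r_{*}\) to within the requested tolerance, for both positive and negative \(r_{*}\).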
For \(r_{*}^{0}>0\), we use the Newton-Raphson method [34] with an initial guess of \(h=r_{*}^{0}\), and switch to using the bisection method [34] for \(r_{*}^{0}\leq 0\). To use the bisection method, an interval of \(h\) that contains the root of Eq. (10) is given to the algorithm as an initial guess. Since \(r=r_{+}\) maps to \(r_{*}\rightarrow-\infty\), a natural choice for the lower bound of the bracketing interval would be \(h=0\). For the upper bracketing bound, from Fig. 12 we see that the value of \(h\) that corresponds to \(r_{*}=0\) is a monotonically-increasing function of the spin magnitude \(|a|\). Therefore, we can simply choose the upper bound value to be (equal to or greater than) the limiting value of \(h\) that corresponds to \(r_{*}=0\) when \(|a|\to 1\). Explicitly, the numerical implementation in GeneralizedSasakiNakamura.jl uses the bracketing interval \(0<h<1.4\).

Figure 10: Binary tree representation of a term and its partial derivatives with respect to \(\theta\) in the summation of Eq. (10). In each node, the three numbers correspond to the pre-factor, the exponent of the \(\cos(\theta/2)\) factor and that of the \(\sin(\theta/2)\) factor respectively. A partial differentiation with respect to \(\theta\) creates two leaf nodes with the pre-factor and the exponents computed according to the rules of partial differentiation. The \(n\)-th partial derivative with respect to \(\theta\) of the term in the root node can be evaluated by simply summing over all the nodes of depth \(n\).

Figure 11: The Boyer-Lindquist \(r\)-coordinate as a function of the tortoise \(r_{*}\) coordinate for \(a/M=0.7\). As the value of \(r_{*}\) becomes larger (upper inset), the approximation \(r(r_{*})\approx r_{*}\) (dashed) gets increasingly better as \(dr/dr_{*}\to 1\). Meanwhile as the value of \(r_{*}\) becomes more negative (lower inset), \(r(r_{*})\) approaches \(r=r_{+}\) as constructed and \(dr/dr_{*}\to 0\).

## Appendix C Deriving the identity between the scaled Wronskians for Teukolsky functions and Generalized Sasaki-Nakamura functions

Recall that the scaled Wronskian \(\mathcal{W}_{R}\) for the Teukolsky functions \(R^{\rm in,up}\) is defined by

\[\mathcal{W}_{R}=\Delta^{s+1}\left(R^{\rm in}R^{\rm up\prime}-R^{\rm up}R^{\rm in\prime}\right), \tag{31}\]

whereas the scaled Wronskian \(\mathcal{W}_{X}\) for the GSN functions \(X^{\text{in,up}}\) is defined by

\[\mathcal{W}_{X}=\frac{1}{\eta}\left[X^{\text{in}}(dX^{\text{up}}/dr_{*})-(dX^{\text{in}}/dr_{*})X^{\text{up}}\right]. \tag{39}\]

They are called _scaled_ Wronskians because they are not the same as "ordinary" Wronskians. For a generic second-order linear ODE

\[\frac{d^{2}y(x)}{dx^{2}}+p(x)\frac{dy(x)}{dx}+q(x)y(x)=0, \tag{30}\]

suppose it admits two linearly-independent solutions \(y_{1}(x)\) and \(y_{2}(x)\), then the Wronskian \(W(x)\) is defined by

\[W(x)=y_{1}\frac{dy_{2}}{dx}-y_{2}\frac{dy_{1}}{dx}, \tag{31}\]

which is a function of \(x\) in general. It can be shown that \(W(x)\) satisfies the ODE [66]

\[\frac{dW}{dx}+p(x)W=0. \tag{32}\]

Let us define the scaled Wronskian \(\mathcal{W}\) such that

\[\mathcal{W}\equiv\exp\left(\int^{x}p(x^{\prime})\;dx^{\prime}\right)W(x), \tag{33}\]

and we see that \(d\mathcal{W}/dx=0\), i.e. \(\mathcal{W}\) is a constant. It is not immediately obvious that \(\mathcal{W}_{X}\), evaluated using Eq. (39), is the same as \(\mathcal{W}_{R}\), evaluated using Eq. (31). From Eq. (39) and using Eq. (7), we have

\[\mathcal{W}_{X}=\frac{\Delta}{(r^{2}+a^{2})\eta}\left(X^{\text{in}}X^{\text{up}\prime}-X^{\text{up}}X^{\text{in}\prime}\right).
\tag{34}\]

Recall that the GSN function \(X\) is transformed from a Teukolsky function \(R\) using the \({}_{s}\Lambda\) operator, such that

\[\begin{split} X(r)&={}_{s}\Lambda\left[R(r)\right]\\ &=\sqrt{\left(r^{2}+a^{2}\right)\Delta^{s}}\left[\left(\alpha+\beta\Delta^{s+1}\frac{d}{dr}\right)R(r)\right].\end{split} \tag{35}\]

One can show that

\[\begin{split}& X^{\text{in}}X^{\text{up}\prime}-X^{\text{up}}X^{\text{in}\prime}\\ =&\left(r^{2}+a^{2}\right)\Delta^{s}\left\{\eta-(s+1)\alpha\beta\Delta^{s}\left[2\left(r-1\right)-\Delta^{\prime}\right]\right\}\\ &\times\left(R^{\text{in}}R^{\text{up}\prime}-R^{\text{up}}R^{\text{in}\prime}\right)\\ =&\frac{\left(r^{2}+a^{2}\right)\eta}{\Delta}\Delta^{s+1}\left(R^{\text{in}}R^{\text{up}\prime}-R^{\text{up}}R^{\text{in}\prime}\right)\\ =&\frac{\left(r^{2}+a^{2}\right)\eta}{\Delta}\mathcal{W}_{R},\end{split} \tag{36}\]

using Eq. (10) and the fact that \(\Delta^{\prime}=2(r-1)\). From here, we see that indeed

\[\mathcal{W}_{X}=\mathcal{W}_{R}. \tag{43}\]

## Appendix D Recurrence relations for the higher order corrections to the asymptotic boundary conditions of the Generalized Sasaki-Nakamura equation

In addition to the asymptotic boundary conditions to the leading order as shown in Eq. (37) and (38), it is useful to also compute these boundary conditions to higher orders. To start off, we assume the following ansatz for the GSN function

\[X(r_{*})\sim\begin{cases}f_{\pm}^{\infty}(r)e^{\pm i\omega r_{*}},&r_{*}\rightarrow\infty\\ g_{\pm}^{\text{H}}(r)e^{\pm ipr_{*}},&r_{*}\rightarrow-\infty\end{cases}. \tag{52}\]

By substituting the ansatz in Eq. (52) into the GSN equation in Eq.
(22), it can be shown that as \(r\rightarrow\infty\), the functions \(f_{\pm}^{\infty}\) satisfy the following second-order ODE

\[f_{\pm}^{\infty\,\prime\prime}+P_{\pm}^{\infty}(r)f_{\pm}^{\infty\,\prime}+Q_{\pm}^{\infty}(r)f_{\pm}^{\infty}=0, \tag{37}\]

where we define the functions

\[P_{\pm}^{\infty}(r)=\left(\frac{r^{2}+a^{2}}{\Delta}\right)\left[\left(\frac{\Delta}{r^{2}+a^{2}}\right)^{\prime}\pm 2i\omega-\mathcal{F}\right], \tag{38}\]
\[Q_{\pm}^{\infty}(r)=\left(\frac{r^{2}+a^{2}}{\Delta}\right)^{2}\left(-\omega^{2}\mp i\omega\mathcal{F}-\mathcal{U}\right). \tag{39}\]

As \(r\to r_{+}\), the functions \(g_{\pm}^{\text{H}}\) satisfy the following second-order ODE

\[g_{\pm}^{\text{H}\prime\prime}+P_{\pm}^{\text{H}}(r)g_{\pm}^{\text{H}\prime}+Q_{\pm}^{\text{H}}(r)g_{\pm}^{\text{H}}=0, \tag{40}\]

where we define the functions

\[P^{\rm H}_{\pm}(r)=\left(\frac{r^{2}+a^{2}}{\Delta}\right)\left[\left(\frac{\Delta}{r^{2}+a^{2}}\right)^{\prime}\pm 2ip-\mathcal{F}\right], \tag{101}\]
\[Q^{\rm H}_{\pm}(r)=\left(\frac{r^{2}+a^{2}}{\Delta}\right)^{2}\left(-p^{2}\mp ip\mathcal{F}-\mathcal{U}\right). \tag{102}\]

We look for formal series expansions of the solutions \(f^{\infty}_{\pm}\) at infinity and \(g^{\rm H}_{\pm}\) at the horizon respectively. We then truncate these expansions at an arbitrary order and use them to set the boundary conditions when solving the GSN equation on a numerically-finite interval.

Figure 12: The difference between \(r_{*}=0\) and the horizon in the Boyer-Lindquist \(r\)-coordinate, \(r(r_{*}=0)-r_{+}\), as a function of the spin \(a\) of the BH. We see that the difference is monotonically increasing with \(|a|\); it is the smallest when \(a=0\), and the largest (\(\approx 1.3\)) when \(|a|\to 1\). We can use this to construct an interval of \(r\) that must contain \(r=r(r_{*})\) for \(r_{*}\leq 0\) when using the bisection method.

### Formal series expansion about infinity

Inspecting Eq.
(37) with \(P^{\infty}_{\pm}(r)\) and \(Q^{\infty}_{\pm}(r)\) defined in Eq. (38) and (39) respectively, and performing the standard change of variable \(z\equiv 1/r\), we see that infinity (i.e. \(z=0\)) is an irregular singularity of rank 1. We expand \(P^{\infty}_{\pm}(r)\) and \(Q^{\infty}_{\pm}(r)\) as \(r\to\infty\) with

\[P^{\infty}_{\pm}(r)=\sum_{j=0}^{\infty}\frac{P^{\infty}_{\pm,j}}{r^{j}}, \tag{103}\]
\[Q^{\infty}_{\pm}(r)=\sum_{j=0}^{\infty}\frac{Q^{\infty}_{\pm,j}}{r^{j}}. \tag{104}\]

In particular, we find that \(Q^{\infty}_{\pm,0}\) and \(Q^{\infty}_{\pm,1}\) are zero. Using these facts, the functions \(f^{\infty}_{\pm}\) have the following formal series expansions near infinity as [67]

\[f^{\infty}_{\pm}(r)=e^{\nu_{\pm}r}r^{\kappa_{\pm}}\sum_{j=0}^{\infty}\frac{a_{\pm,j}}{r^{j}}, \tag{105}\]

(note that we suppress the \(\infty\) superscript on the RHS since the context is clear) where \(\kappa_{\pm}\) is given by

\[\kappa_{\pm}=-\frac{P_{\pm,1}\nu_{\pm}+Q_{\pm,1}}{P_{\pm,0}+2\nu_{\pm}}, \tag{106}\]

and \(\nu_{\pm}\) is a solution to the characteristic equation

\[\nu_{\pm}^{2}-P_{\pm,0}\nu_{\pm}=0. \tag{107}\]

There are two solutions to the characteristic equation: \(\nu_{\pm}=0\) or \(\nu_{\pm}=P_{\pm,0}\). We pick \(\nu_{+}=\nu_{-}=0\) as this gives the desired form for the series expansions, and as a result we have both \(\kappa_{+}=\kappa_{-}=0\) (recall that \(Q_{\pm,1}=0\)). The expansion coefficients \(a_{\pm,j}\) can be evaluated using the recurrence relation [67]

\[P_{0}ja_{j}=j(j-1)a_{j-1}+\sum_{k=1}^{j}\left[Q_{k+1}-(j-k)\,P_{k}\right]a_{j-k}, \tag{108}\]

where we further suppress the \(\pm\) subscript (both the out-going and the in-going mode have the same form above for the recurrence relations), and we set \(a_{0}=1\). As an example, the coefficient \(a_{1}\) is given by \(a_{1}=Q_{2}/P_{0}\). Comparing Eq. (53) with Eq. (105), we have

\[\mathcal{C}^{\infty}_{\pm,j}=\omega^{j}a^{\infty}_{\pm,j}.
\tag{109}\]

### Formal series expansion about the horizon

Inspecting Eq. (40) with \(P^{\rm H}_{\pm}(r)\) and \(Q^{\rm H}_{\pm}(r)\) defined in Eq. (101) and Eq. (102) respectively, we see that \(r=r_{+}\) is a regular singularity. In particular, \(P^{\rm H}_{\pm}(r)\left(r-r_{+}\right)\) and \(Q^{\rm H}_{\pm}(r)\left(r-r_{+}\right)^{2}\) are analytic at \(r=r_{+}\) since

\[P^{\rm H}_{\pm}(r)\left(r-r_{+}\right)=\left(\frac{r^{2}+a^{2}}{r-r_{-}}\right)\left[\left(\frac{\Delta}{r^{2}+a^{2}}\right)^{\prime}\pm 2ip-\mathcal{F}\right],\]
\[Q^{\rm H}_{\pm}(r)\left(r-r_{+}\right)^{2}=\left(\frac{r^{2}+a^{2}}{r-r_{-}}\right)^{2}\left(-p^{2}\mp ip\mathcal{F}-\mathcal{U}\right).\]

A formal series expansion near the horizon can be obtained using the Frobenius method. We expand \(P^{\rm H}_{\pm}(r)\) and \(Q^{\rm H}_{\pm}(r)\) near \(r=r_{+}\) as

\[P^{\rm H}_{\pm}(r)=\sum_{j=0}^{\infty}P^{\rm H}_{\pm,j}(r-r_{+})^{j-1}, \tag{110}\]
\[Q^{\rm H}_{\pm}(r)=\sum_{j=0}^{\infty}Q^{\rm H}_{\pm,j}(r-r_{+})^{j-2}. \tag{111}\]

The functions \(g^{\rm H}_{\pm}(r)\) again have the formal series expansions near the horizon as [67]

\[g^{\rm H}_{\pm}(r)=(r-r_{+})^{\nu_{\pm}}\sum_{j=0}^{\infty}a_{\pm,j}(r-r_{+})^{j}, \tag{112}\]

(note that we again suppress the H superscript on the RHS since the context is clear) where \(\nu_{\pm}\) is a root of the indicial polynomial \(I(\nu_{\pm})\), which is given by [67]

\[I(\nu_{\pm})=\nu_{\pm}(\nu_{\pm}-1)+P_{\pm,0}\nu_{\pm}+Q_{\pm,0}. \tag{113}\]

Note that we have \(Q_{\pm,0}=0\), therefore the indicial equation \(I(\nu_{\pm})=0\) has two solutions: \(\nu_{\pm}=0\) or \(\nu_{\pm}=(1-P_{\pm,0})\). Again we pick \(\nu_{+}=\nu_{-}=0\) as this gives the desired expansions.
The expansion coefficients \(a_{\pm,j}\) can be evaluated again using a recurrence relation as [67]

\[I(j)a_{j}=-\sum_{k=0}^{j-1}\left(kP_{j-k}+Q_{j-k}\right)a_{k}, \tag{114}\]

where we again further suppress the \(\pm\) subscript (both the out-going and the in-going mode have the same form above for the recurrence relations), and we set \(a_{0}=1\). For example, explicitly \(a_{1}=-Q_{1}/P_{0}\). Comparing Eq. (54) with Eq. (112), we have

\[\mathcal{C}^{\rm H}_{\pm,j}=\omega^{-j}a^{\rm H}_{\pm,j}. \tag{115}\]

## Appendix E Explicit Generalized Sasaki-Nakamura transformations for physically relevant radiation fields

Here in this appendix we explicitly show our choices of \(g_{i}(r)\) for radiation fields with spin weight \(s=0,\pm 1,\pm 2\) that we use to construct the GSN transformation. For each transformation, we give _explicit expressions_ for the weighting functions \(\alpha(r),\beta(r)\), the determinant of the transformation matrix \(\eta(r)\), the asymptotic solutions to the GSN equation at infinity and at the horizon for both the in-going and the out-going mode, and the conversion factors for transforming the asymptotic amplitudes between the Teukolsky function \(R\) and the SN function \(X\). Together with Sec. II and this appendix, one should have all the necessary ingredients to use the GSN formalism to numerically solve the _homogeneous_ radial Teukolsky equation for physically relevant radiation fields (\(s=0\) for scalar radiation, \(s=\pm 1\) for electromagnetic radiation, and \(s=\pm 2\) for gravitational radiation). Despite being long-winded, we opt to show the expressions explicitly for the sake of completeness. Accompanying this paper are Mathematica notebooks deriving and storing all the expressions shown here, and they can be found on Zenodo.27 While the GSN formalism was proposed to facilitate numerical computations, all the expressions in this appendix and Sec. II are exact.
In particular, we _do not assume_ that \(\omega\) is real when deriving the expressions shown here, and they can be used in QNM calculations with the GSN formalism (such as in Ref. [68], which uses the parametrized BH quasi-normal ringdown formalism [69; 70; 71] to compute semi-analytical corrections to the QNM frequencies of a non-rotating BH, and in Sec. IV.2). We also do not use the identities shown in Eq. (50) and Eq. (51) to simplify the expressions for the conversion factors below.

Footnote 27: [https://doi.org/10.5281/zenodo.8080242](https://doi.org/10.5281/zenodo.8080242)

### Scalar radiation \(s=0\)

By choosing \(g_{0}(r)=1\), we have the weighting functions

\[\alpha(r)=1, \tag{51a}\]
\[\beta(r)=0. \tag{51b}\]

The determinant of the transformation matrix \(\eta(r)\) can be written as

\[\eta=c_{0}+c_{1}/r+c_{2}/r^{2}+c_{3}/r^{3}+c_{4}/r^{4}\]

with the coefficients

\[c_{0}=1, \tag{52a}\]
\[c_{1,2,3,4}=0. \tag{52b}\]

The asymptotic out-going mode of \(X\) when \(r_{*}\to\infty\) is given by

\[X(r_{*}\to\infty)\propto f_{+}^{\infty}(r)e^{i\omega r_{*}}=e^{i\omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{+,j}^{\infty}}{r^{j}}\right)\]

with the first three expansion coefficients

\[\mathcal{C}_{+,1}^{\infty}=\frac{1}{2}i\left(\lambda+2am\omega\right), \tag{53a}\]
\[\mathcal{C}_{+,2}^{\infty}=\frac{1}{8}\left\{-\lambda^{2}+\lambda\left(2-4am\omega\right)+4\omega\left[i-a^{2}m^{2}\omega+a\left(m+2im\omega\right)\right]\right\},\]
\[\mathcal{C}_{+,3}^{\infty}=-\frac{1}{48}i\left\{\lambda^{3}+\lambda^{2}(-8+6am\omega)+4\lambda\left[3-\left(9i+8am\right)\omega+a\left(2a-6im+3am^{2}\right)\omega^{2}\right]+8\omega\left[3i+a^{2}\left(-1+m^{2}(-3-6i\omega)\right)\omega+a^{3}m\left(2+m^{2}\right)\omega^{2}+am\left(3-3i\omega-8\omega^{2}\right)\right]\right\}.\]

The asymptotic in-going mode of \(X\) when \(r_{*}\to\infty\) is given by

\[X(r_{*}\to\infty)\propto
f_{-}^{\infty}(r)e^{-i\omega r_{*}}=e^{-i \omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{-,j}^{\infty}}{r^{j }}\right)\] with the first three expansion coefficients \[\mathcal{C}_{-,1}^{\infty} = -\frac{1}{2}i\left(\lambda+2am\omega\right), \tag{54a}\] \[\mathcal{C}_{-,2}^{\infty} = \frac{1}{8}\left\{-\lambda^{2}+\lambda\left(2-4am\omega\right)\right.\] \[\left.-4\omega\left[i+am\left(-1+2i\omega\right)+a^{2}m^{2} \omega\right]\right\},\] \[\mathcal{C}_{-,3}^{\infty} = \frac{1}{48}i\left\{\lambda^{3}+\lambda^{2}\left(-8+6am\omega\right)\right.\] \[\left.+4\lambda\left[3+\left(9i-8am\right)\omega+a\left(2a+6im+ 3am^{2}\right)\omega^{2}\right]\right.\] \[\left.+8\omega\left[-3i+a^{2}\left(-1+m^{2}(-3+6i\omega)\right) \omega\right.\right.\] \[\left.\left.+a^{3}m\left(2+m^{2}\right)\omega^{2}+am\left(3+3i \omega-8\omega^{2}\right)\right]\right\}.\] These expressions (except for \(\mathcal{C}_{+,j}^{\infty}\)) match with those found in Ref. [27]. Note that \(\mathcal{C}_{+,j}^{\infty}=\left(\mathcal{C}_{-,j}^{\infty}\right)^{*}\) as claimed in Ref. [27] is true only for real \(\omega\) since the GSN potentials \(\mathcal{F},\mathcal{U}\) are real-valued in this case. The conversion factors between the GSN and the Teukolsky formalism are found to be \[\frac{B_{\mathrm{T}}^{\mathrm{ref}}}{B_{\mathrm{SN}}^{\mathrm{ ref}}}=\frac{C_{\mathrm{T}}^{\mathrm{trans}}}{C_{\mathrm{SN}}^{\mathrm{trans}}} = 1, \tag{55a}\] \[\frac{B_{\mathrm{T}}^{\mathrm{inc}}}{B_{\mathrm{SN}}^{\mathrm{ inc}}} = 1,\] (55b) \[\frac{C_{\mathrm{T}}^{\mathrm{inc}}}{C_{\mathrm{SN}}^{\mathrm{ inc}}} = \frac{1}{\sqrt{2r_{+}}},\] (55c) \[\frac{B_{\mathrm{T}}^{\mathrm{trans}}}{B_{\mathrm{SN}}^{\mathrm{ trans}}}=\frac{C_{\mathrm{T}}^{\mathrm{ref}}}{C_{\mathrm{SN}}^{\mathrm{ref}}} = \frac{1}{\sqrt{2r_{+}}}. \tag{55d}\] Note that these conversion factors are frequency-independent. 
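As a quick numerical illustration of this conjugation property (a standalone Python check of the \(s=0\), \(j=1,2\) coefficients above with real parameter values; not part of the paper's code, and the parameter values are arbitrary):

```python
# C^inf_{+,j} and C^inf_{-,j} for s = 0, j = 1, 2, transcribed from Eq. (53) and (54)
def C_plus(lam, a, m, w):
    C1 = 0.5j * (lam + 2*a*m*w)
    C2 = 0.125 * (-lam**2 + lam*(2 - 4*a*m*w)
                  + 4*w*(1j - a**2 * m**2 * w + a*(m + 2j*m*w)))
    return C1, C2

def C_minus(lam, a, m, w):
    C1 = -0.5j * (lam + 2*a*m*w)
    C2 = 0.125 * (-lam**2 + lam*(2 - 4*a*m*w)
                  - 4*w*(1j + a*m*(-1 + 2j*w) + a**2 * m**2 * w))
    return C1, C2

# For real omega (and real lam, a, m), C_{+,j} should equal conj(C_{-,j})
lam, a, m, w = 4.2, 0.7, 2, 0.37
plus, minus = C_plus(lam, a, m, w), C_minus(lam, a, m, w)
```

Evaluating the same expressions with a complex \(\omega\) instead breaks the symmetry, consistent with the caveat in the text.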
### Electromagnetic radiation

#### \(s=+1\)

By choosing \(g_{0}(r)=\dfrac{r^{2}+a^{2}}{r^{2}}\) and \(g_{1}(r)=1\), we have the weighting functions

\[\alpha(r)=\dfrac{1}{r^{2}\sqrt{\Delta}}\left[-ia^{3}m-iamr^{2}+ia^{4}\omega+r^{3}\left(1+ir\omega\right)+a^{2}\left(-2+r+2ir^{2}\omega\right)\right], \tag{100a}\]
\[\beta(r)=\dfrac{\left(r^{2}+a^{2}\right)}{r^{2}\Delta^{3/2}}. \tag{100b}\]

The determinant of the transformation matrix \(\eta(r)\) can be written as

\[\eta=c_{0}+c_{1}/r+c_{2}/r^{2}+c_{3}/r^{3}+c_{4}/r^{4}\]

with the coefficients

\[c_{0}=-\left(2+\lambda\right), \tag{101a}\]
\[c_{1}=2iam, \tag{101b}\]
\[c_{2}=-a^{2}\left(3+2\lambda\right), \tag{101c}\]
\[c_{3}=-2a^{2}\left(1-iam\right), \tag{101d}\]
\[c_{4}=-a^{4}\left(1+\lambda\right). \tag{101e}\]

The asymptotic out-going mode of \(X\) when \(r_{*}\rightarrow\infty\) is given by

\[X(r_{*}\rightarrow\infty)\propto f_{+}^{\infty}(r)e^{i\omega r_{*}}=e^{i\omega r_{*}}\left(1+\sum_{j=1}^{\infty}\dfrac{\mathcal{C}_{+,j}^{\infty}}{r^{j}}\right)\]

with the first three expansion coefficients

\[\mathcal{C}_{+,1}^{\infty}=\dfrac{1}{2}i\left(2+\lambda+2am\omega\right), \tag{102a}\]
\[\mathcal{C}_{+,2}^{\infty}=\dfrac{1}{8}\left[-\lambda^{2}-2\lambda\left(1+2am\omega\right)-4a\omega\left(m-2a\omega-2im\omega+am^{2}\omega\right)\right],\]
\[\mathcal{C}_{+,3}^{\infty}=-\dfrac{1}{48}i\left\{\lambda^{3}+\lambda^{2}\left(-2+6am\omega\right)+4\lambda\left[-2-2\left(3i+am\right)\omega+a\left(-4a-6im+3am^{2}\right)\omega^{2}\right]+8\omega\left[-6i+a^{3}m\left(-4+m^{2}\right)\omega^{2}+3a^{2}\omega\left(-1-2im^{2}\omega\right)-am\left(3+6i\omega+8\omega^{2}\right)\right]\right\}.\]

The asymptotic in-going mode of \(X\) when \(r_{*}\rightarrow\infty\) is given by

\[X(r_{*}\rightarrow\infty)\propto f_{-}^{\infty}(r)e^{-i\omega
r_{*}}\left(1+\sum_{j=1}^{\infty}\dfrac{\mathcal{C}_{-,j}^{\infty}}{r^{j}}\right)\]

with the first three expansion coefficients

\[\mathcal{C}_{-,1}^{\infty}=\dfrac{1}{2c_{0}}i\left[4+\lambda^{2}+8am\omega+2\lambda\left(2+am\omega\right)\right], \tag{102a}\]
\[\mathcal{C}_{-,2}^{\infty}=\dfrac{1}{8c_{0}}\left\{\lambda^{3}+4\lambda^{2}(1+am\omega)+8a\omega\left[m\left(2+2i\omega\right)-a\omega+3am^{2}\omega\right]+4\lambda\left[1+am\left(5+2i\omega\right)\omega+a^{2}\left(-2+m^{2}\right)\omega^{2}\right]\right\},\]
\[\mathcal{C}_{-,3}^{\infty}=-\dfrac{1}{48c_{0}}i\left\{\lambda^{4}+6am\lambda^{3}\omega+4\lambda^{2}\left[-3+\left(6i+4am\right)\omega+a\left(-4a+6im+3am^{2}\right)\omega^{2}\right]+8\lambda\left[-2+\left(12i-5am\right)\omega+a\left(12im+a(-4+9m^{2})\right)\omega^{2}+am\left(-8+6iam+a^{2}(-4+m^{2})\right)\omega^{3}\right]+16\omega\left[6i+a^{3}m\left(-1+4m^{2}\right)\omega^{2}+3ia^{2}\omega\left(i-2\omega+4m^{2}\omega\right)+am\left(-3+12i\omega-8\omega^{2}\right)\right]\right\}.\]

The conversion factors between the GSN and the Teukolsky formalism are found to be

\[\frac{B_{\mathrm{T}}^{\mathrm{ref}}}{B_{\mathrm{SN}}^{\mathrm{ref}}}=\frac{C_{\mathrm{T}}^{\mathrm{trans}}}{C_{\mathrm{SN}}^{\mathrm{trans}}}=\frac{1}{2i\omega}, \tag{103a}\]
\[\frac{B_{\mathrm{T}}^{\mathrm{inc}}}{B_{\mathrm{SN}}^{\mathrm{inc}}}=\frac{2i\omega}{c_{0}}, \tag{103b}\]
\[\frac{C_{\mathrm{T}}^{\mathrm{inc}}}{C_{\mathrm{SN}}^{\mathrm{inc}}}=\frac{r_{+}^{3/2}}{4\sqrt{2}}\left[r_{+}(1+4i\omega-iam)-a^{2}(1+2i\omega)\right]^{-1}, \tag{103c}\]
\[\frac{B_{\mathrm{T}}^{\mathrm{trans}}}{B_{\mathrm{SN}}^{\mathrm{trans}}}=\frac{C_{\mathrm{T}}^{\mathrm{ref}}}{C_{\mathrm{SN}}^{\mathrm{ref}}}=\sqrt{2r_{+}}\frac{2r_{+}\omega-am}{2am+2i(2+\lambda)}.
\tag{103d}\]

#### e.2.2 \(s=-1\)

By choosing \(g_{0}(r)=\dfrac{r^{2}+a^{2}}{r^{2}}\) and \(g_{1}(r)=1\), we have the weighting functions \[\alpha(r) = -\dfrac{\sqrt{\Delta}}{r^{2}}\left[r+i\dfrac{\left(r^{2}+a^{2}\right)K}{\Delta}\right], \tag{104a}\] \[\beta(r) = \dfrac{\sqrt{\Delta}\left(r^{2}+a^{2}\right)}{r^{2}}. \tag{104b}\] The determinant of the transformation matrix \(\eta(r)\) can be written as \[\eta=c_{0}+c_{1}/r+c_{2}/r^{2}+c_{3}/r^{3}+c_{4}/r^{4}\] with the coefficients \[c_{0} = -\lambda, \tag{111a}\] \[c_{1} = -2iam,\] (111b) \[c_{2} = a^{2}\left(1-2\lambda\right),\] (111c) \[c_{3} = -2a^{2}\left(1+iam\right),\] (111d) \[c_{4} = a^{4}\left(1-\lambda\right). \tag{111e}\] The asymptotic out-going mode of \(X\) when \(r_{*}\rightarrow\infty\) is given by \[X(r_{*}\rightarrow\infty)\propto f_{+}^{\infty}(r)e^{i\omega r_{*}}=e^{i\omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{+,j}^{\infty}}{r^{j}}\right)\] with the first three expansion coefficients \[\mathcal{C}_{+,1}^{\infty} = -\frac{1}{2c_{0}}i\left(\lambda^{2}+4am\omega+2am\lambda\omega\right), \tag{112a}\] \[\mathcal{C}_{+,2}^{\infty} = \frac{1}{8c_{0}}\left[\lambda^{3}-\lambda^{2}\left(2-4am\omega\right)-\right.\] \[\left.8a\omega\left(m-a\omega-2am^{2}\omega\right)\right.\] \[\left.+4a\omega\lambda\left(m-2a\omega-2im\omega+am^{2}\omega\right)\right],\] \[\mathcal{C}_{+,3}^{\infty} = \frac{1}{48c_{0}}i\left\{\lambda^{4}+\lambda^{3}(-8+6am\omega)\right.\] \[\left.+4\lambda^{2}\left[3-\left(6i+5am\right)\omega\right.\right.\] \[\left.\left.+a\left(-4a-6im+3am^{2}\right)\omega^{2}\right]\right.\] \[\left.+48a\omega\left[a(-1+2i\omega)\omega+a^{2}m^{3}\omega^{2}\right.\] \[\left.-2iam^{2}\omega(-i+\omega)+m\left(1-2i\omega+a^{2}\omega^{2}\right)\right]\right.\] \[\left.+8a\lambda\omega\left[4a\omega+3am^{2}(1-2i\omega)\omega+a^{2}m^{3}\omega^{2}\right.\right.\] \[\left.\left.-4m\left(1+(2+a^{2})\omega^{2}\right)\right]\right\}.\] The asymptotic in-going mode of \(X\) when
\(r_{*}\rightarrow\infty\) is given by \[X(r_{*}\rightarrow\infty)\propto f_{-}^{\infty}(r)e^{-i\omega r_{*}}=e^{-i \omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{-,j}^{\infty}}{r^{ j}}\right)\] with the first three expansion coefficients \[\mathcal{C}_{-,1}^{\infty} = -\frac{1}{2}i\left(\lambda+2am\omega\right), \tag{113a}\] \[\mathcal{C}_{-,2}^{\infty} = \frac{1}{8}\left[-\lambda^{2}+\lambda\left(2-4am\omega\right)\right.\] \[\left.+4a\omega\left(m+2a\omega-2im\omega-am^{2}\omega\right) \right],\] \[\mathcal{C}_{-,3}^{\infty} = \frac{1}{48}i\left\{\lambda^{3}+\lambda^{2}(-8+6am\omega)\right.\] \[\left.+4\lambda\left[3+\left(6i-8am\omega\right)+a\left(-4a+6im+ 3am^{2}\right)\omega^{2}\right]\right.\] \[\left.+8a\omega\left[a\omega+3am^{2}\left(-1+2i\omega\right) \omega+a^{2}m^{3}\omega^{2}\right.\right.\] \[\left.\left.+m\left(2-4\left(2+a^{2}\right)\omega^{2}\right) \right]\right\}.\] These expressions (except for \(\mathcal{C}_{+,j}^{\infty}\)) match with those found in Ref. [27]. Note that \(\mathcal{C}_{+,j}^{\infty}=\left(\mathcal{C}_{-,j}^{\infty}\right)^{*}\) as claimed in Ref. [27] is _not true_ even for real \(\omega\) since the GSN potentials \(\mathcal{F},\mathcal{U}\) are in general complex-valued. 
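The failure of the conjugate relation claimed here can be checked numerically at leading order. The following sketch is not part of the paper; the parameter values for \(a\), \(m\), \(\omega\), \(\lambda\) are arbitrary illustrative choices. It evaluates \(\mathcal{C}_{+,1}^{\infty}\) from Eq. (112a), with \(c_{0}=-\lambda\) from Eq. (111a), and \(\mathcal{C}_{-,1}^{\infty}\) from Eq. (113a), and confirms that for real parameters the difference \(\mathcal{C}_{+,1}^{\infty}-(\mathcal{C}_{-,1}^{\infty})^{*}=2iam\omega/\lambda\) is nonzero.

```python
# Leading-order numerical check (illustrative, not from the paper) that
# C^inf_{+,1} != conj(C^inf_{-,1}) for s = -1, using Eqs. (112a) and (113a)
# with c0 = -lambda from Eq. (111a). Parameter values are arbitrary choices.
a, m, w, lam = 0.5, 2, 0.3, 4.0  # spin a, azimuthal index m, frequency omega, eigenvalue lambda

c0 = -lam
C_plus_1 = -1j / (2 * c0) * (lam**2 + 4 * a * m * w + 2 * a * m * lam * w)
C_minus_1 = -1j / 2 * (lam + 2 * a * m * w)

# The difference reduces algebraically to 2i*a*m*w/lam, which is nonzero
# whenever a, m, w are all nonzero: the conjugate relation fails even for
# real omega, consistent with the remark in the text.
print(C_plus_1 - C_minus_1.conjugate())  # -> 0.15j for these values
```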
The conversion factors between the GSN and the Teukolsky formalism are found to be \[\frac{B_{\mathrm{T}}^{\mathrm{ref}}}{B_{\mathrm{SN}}^{\mathrm{ref}}}=\frac{C_{\mathrm{T}}^{\mathrm{trans}}}{C_{\mathrm{SN}}^{\mathrm{trans}}} = -\frac{2i\omega}{c_{0}}, \tag{114a}\] \[\frac{B_{\mathrm{T}}^{\mathrm{inc}}}{B_{\mathrm{SN}}^{\mathrm{inc}}} = -\frac{1}{2i\omega},\] (114b) \[\frac{C_{\mathrm{T}}^{\mathrm{inc}}}{C_{\mathrm{SN}}^{\mathrm{inc}}} = -\frac{\sqrt{r_{+}}\left[\left(am-4\omega\right)r_{+}+2a^{2}\omega\right]}{\sqrt{2}\left(am-i\lambda\right)},\] (114c) \[\frac{B_{\mathrm{T}}^{\mathrm{trans}}}{B_{\mathrm{SN}}^{\mathrm{trans}}}=\frac{C_{\mathrm{T}}^{\mathrm{ref}}}{C_{\mathrm{SN}}^{\mathrm{ref}}} = \frac{r_{+}^{3/2}}{4\sqrt{2}}\] \[\times\left[\left(1+iam-4i\omega\right)r_{+}-a^{2}\left(1-2i\omega\right)\right]^{-1}.\]

### Gravitational radiation

#### e.3.1 \(s=+2\)

By choosing \(g_{0}(r)=\frac{r^{2}}{r^{2}+a^{2}}\), \(g_{1}(r)=1\), and \(g_{2}(r)=\frac{r^{2}+a^{2}}{r^{2}}\), we have the weighting functions \[\alpha(r) = \frac{1}{r^{2}\Delta}\left\{4a^{3}mr\left(i+r\omega\right)+2amr^{2}\left(i-3ir+2r^{2}\omega\right)-2a^{4}\left(-3+2ir\omega+r^{2}\omega^{2}\right)\right.\] \[\left.+r^{3}\left[-2\lambda+r\left(2+\lambda+10i\omega\right)-2r^{3}\omega^{2}\right]-a^{2}r\left(8+2m^{2}r-r\lambda+2ir\omega+4ir^{2}\omega+4r^{3}\omega^{2}\right)\right\},\] \[\beta(r) = \frac{1}{r\Delta^{3}}\left[-2iamr+a^{2}\left(-4+2ir\omega\right)+2r\left(3-r+ir^{2}\omega\right)\right]. \tag{114b}\] The determinant of the transformation matrix \(\eta(r)\) can be written as \[\eta=c_{0}+c_{1}/r+c_{2}/r^{2}+c_{3}/r^{3}+c_{4}/r^{4}\] with the coefficients \[c_{0} = 24+12i\omega+\lambda(10+\lambda)-12a\omega\left(a\omega-m\right), \tag{117a}\] \[c_{1} = -32iam-8iam\lambda+8ia^{2}\omega(1+\lambda),\] (117b) \[c_{2} = 12a^{2}-24iam-24a^{2}m^{2}+24ia^{2}\omega+48a^{3}m\omega-24a^{4}\omega^{2},\] (117c) \[c_{3} = -24ia^{3}\left(a\omega-m\right)-24a^{2},\] (117d) \[c_{4} = 12a^{4}.
\tag{117e}\] The asymptotic out-going mode of \(X\) when \(r_{*}\rightarrow\infty\) is given by \[X(r_{*}\rightarrow\infty)\propto f_{+}^{\infty}(r)e^{i\omega r_{*}}=e^{i\omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{+,j}^{\infty}}{r^{j}}\right)\] with the first three expansion coefficients \[\mathcal{C}_{+,1}^{\infty} = \frac{1}{2}i\left(6+\lambda+2am\omega\right), \tag{118a}\] \[\mathcal{C}_{+,2}^{\infty} = -\frac{1}{8}\left\{\lambda^{2}+2\lambda\left(5+2am\omega\right)+4\left[6+\left(3i+5am\right)\omega+am\left(-2i+am\right)\omega^{2}\right]\right\},\] (118b) \[\mathcal{C}_{+,3}^{\infty} = -\frac{1}{48}i\left\{\lambda^{3}+2\lambda^{2}\left(5+3am\omega\right)+4\lambda\left[6+\left(3i+10am\right)\omega+a\left(2a-6im+3am^{2}\right)\omega^{2}\right]\right.\] (118c) \[\left.+8a\omega\left[a\omega+6am^{2}\left(1-i\omega\right)\omega+a^{2}m^{3}\omega^{2}+m\left(2-9i\omega+2(-4+a^{2})\omega^{2}\right)\right]\right\}.\] The asymptotic in-going mode of \(X\) when \(r_{*}\rightarrow\infty\) is given by \[X(r_{*}\rightarrow\infty)\propto f_{-}^{\infty}(r)e^{-i\omega r_{*}}=e^{-i\omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{-,j}^{\infty}}{r^{j}}\right)\] with the first three expansion coefficients \[\mathcal{C}_{-,1}^{\infty} = \frac{1}{2c_{0}}i\left\{-\lambda^{3}-2\lambda^{2}(8+am\omega)+4\lambda\left[-21-3\left(i+4am\right)\omega+7a^{2}\omega^{2}\right]\right.\] \[\left.+8\left[-18-\left(9i+23am\right)\omega+a\left(11a-3im-3am^{2}\right)\omega^{2}+3a^{3}m\omega^{3}\right]\right\},\] \[\mathcal{C}_{-,2}^{\infty} = -\frac{1}{8c_{0}}\left\{\lambda^{4}+4\lambda^{3}\left(5+am\omega\right)+4\lambda^{2}\left[37+2am\left(13+i\omega\right)\omega+a^{2}\left(-11+m^{2}\right)\omega^{2}\right]\right.\] \[\left.-8\lambda\left[-60+8am\left(-11-2i\omega\right)\omega+a^{2}\left(39-19m^{2}\right)\omega^{2}+14a^{3}m\omega^{3}\right]\right.\] \[\left.-16\left[a^{2}\left(34+m^{2}(-49-9i\omega)+3i\omega\right)
\omega^{2}+a^{3}m\left(43-3m^{2}+6i\omega\right)\omega^{3}+3a^{4}\left(-4+m^{ 2}\right)\omega^{4}\right.\right.\] \[\left.\left.-9\left(4+\omega^{2}\right)+2am\omega\left(-44-15i \omega+3\omega^{2}\right)\right]\right\},\] \[\mathcal{C}_{-,3}^{\infty} = -\frac{1}{48c_{0}}i\left\{-\lambda^{5}-2\lambda^{4}(10+3am\omega) -4\lambda^{3}\left[37+2am\left(20+3i\omega\right)\omega+a^{2}\left(-13+3m^{2 }\right)\omega^{2}\right]\right.\] \[\left.-8\lambda^{2}\left[60+2a^{2}\left(-29+3m^{2}(9+i\omega) \right)\omega^{2}+a^{3}m\left(-31+m^{2}\right)\omega^{3}+am\omega\left(157+4 8i\omega-8\omega^{2}\right)\right]\right.\] \[\left.+16\lambda\left[a^{2}\left(91+m^{2}(-210-81i\omega)+9i \omega\right)\omega^{2}+2a^{3}m\left(73-13m^{2}+21i\omega\right)\omega^{3}+3a ^{4}\left(-10+7m^{2}\right)\omega^{4}\right.\right.\] \[\left.\left.-9\left(4+\omega^{2}\right)+2am\omega\left(-116-63i \omega+29\omega^{2}\right)\right]+96a\omega\left[-a^{3}m^{4}\omega^{3}+a \omega\left(18+9i\omega-11a^{2}\omega^{2}\right)\right.\right.\] \[\left.\left.+a^{2}m^{3}\omega^{2}\left(-28-7i\omega+a^{2}\omega^{2 }\right)+am^{2}\omega\left(-70-55i\omega+2(7+15a^{2})\omega^{2}+6ia^{2}\omega^ {3}\right)\right.\right.\] \[\left.+m\left(-36-36i\omega+(25+47a^{2})\omega^{2}+i(8+23a^{2}) \omega^{3}-2a^{2}(4+5a^{2})\omega^{4}\right)\right]\right\}.\] The conversion factors \[\frac{B_{\rm T}^{\rm ref}}{B_{\rm SN}^{\rm ext}}=\frac{C_{\rm T}^{ \rm trans}}{C_{\rm SN}^{\rm trans}} = -\frac{1}{4\omega^{2}}, \tag{101a}\] \[\frac{B_{\rm T}^{\rm inc}}{B_{\rm SN}^{\rm inc}} = -\frac{4\omega^{2}}{c_{0}},\] (101b) \[\frac{C_{\rm T}^{\rm inc}}{C_{\rm SN}^{\rm inc}} = -\frac{r_{+}^{3/2}}{4\sqrt{2}}\left\{\left[2\left(-1-6i\omega+8 \omega^{2}\right)+a^{2}\left(2+m^{2}+9i\omega-8\omega^{2}\right)+am\left(3i-8 \omega\right)\right]r_{+}^{2}\right.\] (101c) \[\left.+a^{3}\left(-3i+4\omega\right)\left(mr_{+}-a\omega\right) \right\}^{-1},\] \[\frac{B_{\rm T}^{\rm trans}}{B_{\rm SN}^{\rm trans}}=\frac{C_{\rm T }^{\rm ref}}{C_{\rm SN}^{\rm 
ref}} = 2\sqrt{2}r_{+}^{3/2}\] (101d) \[\times\left\{\left[4\omega\left(i-4\omega\right)-am\left(i-8\omega\right)-a^{2}\left(m^{2}+2i\omega-4\omega^{2}\right)\right]r_{+}^{2}+a^{2}\left(i-4\omega\right)\left(am-2\omega\right)r_{+}\right\}\] \[\times\left\{2r_{+}^{3}\left(24+10\lambda+\lambda^{2}+12i\omega\right)-r_{+}^{2}\left[8iam\left(11+2\lambda+6i\omega\right)\right.\right.\] \[\left.\left.+a^{2}\left(24+24m^{2}+10\lambda+\lambda^{2}-28i\omega-16i\lambda\omega+48\omega^{2}\right)\right]\right.\] \[\left.+8ia^{3}r_{+}\left[m\left(7+\lambda-6i\omega\right)-a\omega\left(4+\lambda\right)\right]+12a^{5}\omega\left(a\omega-3m\right)\right\}^{-1}.\]

#### e.3.2 \(s=-2\)

By choosing \(g_{0}(r)=\frac{r^{2}}{r^{2}+a^{2}}\), \(g_{1}(r)=1\), and \(g_{2}(r)=\frac{r^{2}+a^{2}}{r^{2}}\)28, we have the weighting functions Footnote 28: Note that \(g_{0},g_{1},g_{2}\) here are not the same as the \(f,g,h\) in Ref. [33]. In fact, we see that \(g=g_{1}=1\) and \(h=g_{2}=\frac{r^{2}+a^{2}}{r^{2}}\) but \(f=g_{0}g_{1}g_{2}=1\). \[\alpha(r) = \frac{1}{r^{2}\Delta}\left\{4a^{3}mr\left(-i+r\omega\right)+2amr^{2}\left(3i-ir+2r^{2}\omega\right)+a^{4}\left(6+4ir\omega-2r^{2}\omega^{2}\right)\right.\] \[\left.+a^{2}r\left[-24+r\left(12-2m^{2}+\lambda-6i\omega\right)+12ir^{2}\omega-4r^{3}\omega^{2}\right]\right.\] \[\left.+r^{2}\left[24-2r\left(12+\lambda\right)+r^{2}\left(6+\lambda-18i\omega\right)+8ir^{3}\omega-2r^{4}\omega^{2}\right]\right\},\] \[\beta(r) = \frac{2\Delta}{r}\left[iamr+a^{2}\left(-2-ir\omega\right)+r\left(3-r-ir^{2}\omega\right)\right].
\tag{101b}\] The determinant of the transformation matrix \(\eta(r)\) can be written as \[\eta=c_{0}+c_{1}/r+c_{2}/r^{2}+c_{3}/r^{3}+c_{4}/r^{4}\] with the coefficients \[c_{0} = -12i\omega+\lambda(2+\lambda)-12a\omega\left(a\omega-m\right), \tag{102a}\] \[c_{1} = 8iam\lambda+8ia^{2}\omega(3-\lambda),\] (102b) \[c_{2} = -24ia\left(a\omega-m\right)+12a^{2}\left[1-2\left(a\omega-m \right)^{2}\right],\] (102c) \[c_{3} = 24ia^{3}\left(a\omega-m\right)-24a^{2},\] (102d) \[c_{4} = 12a^{4}. \tag{102e}\] The asymptotic out-going mode of \(X\) when \(r_{*}\rightarrow\infty\) \[X(r_{*}\rightarrow\infty)\propto f_{+}^{\infty}(r)e^{i\omega r_{*}}=e^{i \omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}_{+,j}^{\infty}}{r^{j}}\right)\] with the first three expansion coefficients \[\mathcal{C}^{\infty}_{+,1} = -\frac{1}{2c_{0}}i\left\{-\lambda^{3}-2\lambda^{2}(2+am\omega)+4 \lambda\left[-1+\left(3i-8am\right)\omega+7a^{2}\omega^{2}\right]\right.\] \[\left.+24\omega\left[i-a^{2}\left(1+m^{2}\right)\omega+a^{3}m \omega^{2}+iam(i+\omega)\right]\right\},\] \[\mathcal{C}^{\infty}_{+,2} = -\frac{1}{8c_{0}}\left\{\lambda^{4}+4\lambda^{3}(1+am\omega)+4 \lambda^{2}\left[1+2am\left(7-i\omega\right)\omega+a^{2}\left(-11+m^{2}\right) \omega^{2}\right]\right.\] \[\left.-8a\lambda\omega\left[-5a\omega-15am^{2}\omega+2m\left(-4+4 i\omega+7a^{2}\omega^{2}\right)\right]\right.\] \[\left.-48\omega^{2}\left[-3-a^{3}m\left(-5+m^{2}+2i\omega\right) \omega+a^{4}\left(-4+m^{2}\right)\omega^{2}+2am\left(i+\omega\right)+ia^{2} \left(-\omega+5im^{2}+3m^{2}\omega\right)\right]\right\},\] \[\mathcal{C}^{\infty}_{+,3} = \frac{1}{48c_{0}}i\left\{-\lambda^{5}-6am\lambda^{4}\omega-4 \lambda^{3}\left[-3+2am\left(8-3i\omega\right)\omega+a^{2}\left(-13+3m^{2} \right)\omega^{2}\right]\right.\] \[\left.-8\lambda^{2}\left[-2+2a^{2}\left(10+3m^{2}(6-i\omega) \right)\omega^{2}+a^{3}m\left(-31+m^{2}\right)\omega^{3}-am\omega\left(11+12i \omega+8\omega^{2}\right)\right]\right.\] 
\[\left.+16\lambda\omega\left[-9\omega+3a^{2}\left(5+m^{2}(-10+19i \omega)-3i\omega\right)\omega-2a^{3}m\left(-11+11m^{2}+21i\omega\right)\omega ^{2}+3a^{4}\left(-10+7m^{2}\right)\omega^{3}\right.\right.\] \[\left.\left.+2am\left(6+3i\omega+13\omega^{2}\right)\right]+96 \omega^{2}\left[6+am(-3-8i\omega)\omega+a^{4}\left(9-m^{4}+2m^{2}(8-3i\omega )\right)\omega^{2}+a^{5}m\left(-10+m^{2}\right)\omega^{3}\right.\right.\] \[\left.\left.+a^{3}m\omega\left(-9+m^{2}(-12+7i\omega)+5i\omega-8 \omega^{2}\right)+a^{2}\left(-3i\omega+m^{2}\left(6+9i\omega+14\omega^{2} \right)\right)\right]\right\}.\] The asymptotic in-going mode of \(X\) when \(r_{*}\rightarrow\infty\) \[X(r_{*}\rightarrow\infty)\propto f_{-}^{\infty}(r)e^{-i\omega r_{*}}=e^{-i \omega r_{*}}\left(1+\sum_{j=1}^{\infty}\frac{\mathcal{C}^{\infty}_{-,j}}{r^{j}}\right)\] with the first three expansion coefficients \[\mathcal{C}^{\infty}_{-,1} = -\frac{1}{2}i\left(2+\lambda+2am\omega\right),\] (E24a) \[\mathcal{C}^{\infty}_{-,2} = \frac{1}{8}\left\{-\lambda^{2}-2\lambda(1+2am\omega)-4\omega\left[ -3i+a^{2}m^{2}\omega+a\left(m+2im\omega\right)\right]\right\},\] (E24b) \[\mathcal{C}^{\infty}_{-,3} = \frac{1}{48}i\left\{\lambda^{3}+\lambda^{2}\left(-2+6am\omega \right)+4\lambda\left[-2-\left(3i+2am\right)\omega+a\left(2a+6im+3am^{2} \right)\omega^{2}\right]\right.\] \[\left.+8\omega\left[6i+a^{3}m\left(2+m^{2}\right)\omega^{2}+3a^{2 }\omega\left(-1+2im^{2}\omega\right)-am\left(6+3i\omega+8\omega^{2}\right) \right]\right\}.\] The conversion factors \[\frac{B_{\mathrm{T}}^{\mathrm{ref}}}{B_{\mathrm{SN}}^{\mathrm{ref}}} = \frac{C_{\mathrm{T}}^{\mathrm{trans}}}{C_{\mathrm{SN}}^{\mathrm{trans}}} = -\frac{4\omega^{2}}{c_{0}},\] (E25a) \[\frac{B_{\mathrm{T}}^{\mathrm{inc}}}{B_{\mathrm{SN}}^{\mathrm{ inc}}} = -\frac{1}{4\omega^{2}},\] (E25b) \[\frac{C_{\mathrm{T}}^{\mathrm{inc}}}{C_{\mathrm{SN}}^{\mathrm{ inc}}} = -\frac{4p\sqrt{2r_{+}}}{\eta\left(r_{+}\right)}\left[2pr_{+}+i\left(r_{+}-1 \right)\right],\] (E25c) 
\[\frac{B_{\mathrm{T}}^{\mathrm{trans}}}{B_{\mathrm{SN}}^{\mathrm{ trans}}} = \frac{C_{\mathrm{T}}^{\mathrm{ref}}}{C_{\mathrm{SN}}^{\mathrm{ref}}} = \frac{1}{\sqrt{2r_{+}}}\left[\left(8-24i\omega-16\omega^{2}\right)r_{+ }^{2}\right.\] \[\left.+\left(12iam-16+16am\omega+24i\omega\right)r_{+}+\left(-4a^ {2}m^{2}-12iam+8\right)\right]^{-1}.\] These expressions match those found in literature, for example Refs. [63; 40; 64]. Note again that \(\mathcal{C}^{\infty}_{+,j}\neq\left(\mathcal{C}^{\infty}_{-,j}\right)^{*}\) even for real \(\omega\) since the GSN potentials \(\mathcal{F},\mathcal{U}\) are in general complex-valued.29 Footnote 29: This was corrected in the erratum [73] for Ref. [63]. In both Refs. [72; 73], expressions for \(\mathcal{C}^{\infty}_{+,j}\) written in a form much more concise than that in Eq. (E23) were shown by relating them with the complex conjugate of \(\mathcal{C}^{\infty}_{-,j}\). Those expressions are valid only for real \(\omega\). We opt to not make such an assumption when deriving the expressions and hence not many simplifications can be made.
2304.00472
Querying Large Language Models with SQL
In many use-cases, information is stored in text but not available in structured data. However, extracting data from natural language text to precisely fit a schema, and thus enable querying, is a challenging task. With the rise of pre-trained Large Language Models (LLMs), there is now an effective solution to store and use information extracted from massive corpora of text documents. Thus, we envision the use of SQL queries to cover a broad range of data that is not captured by traditional databases by tapping the information in LLMs. To ground this vision, we present Galois, a prototype based on a traditional database architecture, but with new physical operators for querying the underlying LLM. The main idea is to execute some operators of the query plan with prompts that retrieve data from the LLM. For a large class of SQL queries, querying LLMs returns well-structured relations, with encouraging qualitative results. Preliminary experimental results make pre-trained LLMs a promising addition to the field of database systems, introducing a new direction for hybrid query processing. However, we pinpoint several research challenges that must be addressed to build a DBMS that exploits LLMs. While some of these challenges necessitate integrating concepts from the NLP literature, others offer novel research avenues for the DB community.
Mohammed Saeed, Nicola De Cao, Paolo Papotti
2023-04-02T06:58:14Z
http://arxiv.org/abs/2304.00472v3
# Querying Large Language Models with SQL [Vision]

###### Abstract.

In many use-cases, information is stored in text but not available in structured data. However, extracting data from natural language text to precisely fit a schema, and thus enable querying, is a challenging task. With the rise of pre-trained Large Language Models (LLMs), there is now an effective solution to store and use information extracted from massive corpora of text documents. Thus, we envision the use of SQL queries to cover a broad range of data that is not captured by traditional databases by tapping the information in LLMs. To ground this vision, we present Galois, a prototype based on a traditional database architecture, but with new physical operators for querying the underlying LLM. The main idea is to execute some operators of the query plan with prompts that retrieve data from the LLM. For a large class of SQL queries, querying LLMs returns well-structured relations, with encouraging qualitative results. Preliminary experimental results make pre-trained LLMs a promising addition to the field of database systems, introducing a new direction for hybrid query processing. However, we pinpoint several research challenges that must be addressed to build a DBMS that exploits LLMs. While some of these challenges necessitate integrating concepts from the NLP literature, others offer novel research avenues for the DB community.

Β© 2023 Mohammed Saeed, Nicola De Cao, and Paolo Papotti. Querying Large Language Models with SQL [Vision]. In _Proceedings of VLDB Conference (Conference'23)_. VLDB, Vancouver, BC, CAN, 7 pages. [https://doi.org/10.1145/mnmnnn.mnnnnn](https://doi.org/10.1145/mnmnnn.mnnnnn)

## 1. Introduction

_Declarative querying_ is recognized as the main feature behind the popularity of database systems. Users specify _what_ they want to obtain, leaving the _how_ to the system.
However, SQL can be executed only on structured datasets with a well-defined schema, leaving out of immediate reach information expressed as unstructured text. Several technologies have been deployed to extract structured data from unstructured text and to model such data in relations or triples (Papotti et al., 2019; Zhao et al., 2020). While these methods have been studied for more than 20 years, creating well-formed structured data from text is still time-consuming and error-prone. Existing tools require engineers to prepare extraction pipelines, which are typically static and can only extract fixed sets of attributes/tables. Indeed, the precise extraction of typed data in a coherent tuple format is still an unsolved task (Bauer et al., 2019). While declarative querying of text is a big challenge, there has recently been incredible progress in _question answering_ (QA) over text (Zhou et al., 2020). In this setting, a question in natural language (NL) is answered by gathering information from large corpora of text documents. Transformers have enabled the creation of Large Language Models (LLMs), neural networks that are used in a wide variety of NL processing tasks. LLMs, such as those in the GPT family (Papotti et al., 2019; Zhao et al., 2020; Zhao et al., 2020), have been trained on large data, such as the entire Web textual content, and can answer complex questions in a closed-book fashion (Zhou et al., 2020) (example (2) in Figure 1). Question answering is reaching new state-of-the-art performance with the release of new LLMs, but it is still not possible to query such models in a SQL-like declarative fashion. While it has been shown that such models store high quality factual information (Zhou et al., 2020; Zhao et al., 2020), they are not trained to answer complex SQL queries and may fall short with such input. In this work, we envision querying pre-trained LLMs with SQL scripts.
As depicted in example (1) in Figure 1, the pre-trained LLM can act as the database that contains the information to answer the query. To ground our vision, we built Galois1, a prototype that tackles this problem while aiming at preserving three main characteristics of SQL when executed over this new source of data: (i) queries are written in SQL over a user-defined relational schema, even though the target LLM does not come with a schema; (ii) queries are _portable_: we enable any application to query with standard SQL existing pre-trained LLMs that expose textual prompts; (iii) answers are _correct_ and _complete_ w.r.t. the information stored in the LLM. Indeed, we focus on the correct execution of the queries and do not argue that LLMs always contain complete and correct information.

Figure 1. We assume a SQL query as input. Galois executes the SQL query, and obtains relations, by retrieving data from a pre-trained LLM (1). The corresponding question answering task requires the translation of the SQL query into natural language and the parsing of the output into a relation (2).

In contrast to the generation of images, where small errors are unnoticeable by users in most cases, when querying data any error can be critical for the target application. While LLMs still make factual mistakes, we believe this work shows that it is already possible to collect tuples from them with promising results. With the ongoing efforts in LLMs, with new training architectures and increasing amounts of text used as input, there is evidence that their factuality and coverage are improving over time (Krizhevsky et al., 2014; Krizhevsky et al., 2015). Our vision enables the execution of SQL queries to obtain relations from the information stored in LLMs, without having to manually rewrite the query as an NL question and then parse the results into a structured format. This is a promising solution for several applications.
In any domain-specific setting, such as medical or enterprise, data can be scattered across different modalities such as email, text, and PDF files. Their representation in an LLM enables information extraction queries, such as the retrieval of the proteins to treat a variant of a virus. Galois offers the ability to query data beyond what is already modeled in a structured schema. The data from the LLM can be used as a source in metadata inference (Bauer et al., 2015), data integration (Krizhevsky et al., 2014), augmentation (Saeed et al., 2015), imputation (Saeed et al., 2015), and cleaning (Krizhevsky et al., 2015). Looking forward, we envision the use of LLMs within a polystore system sitting on top of heterogeneous storage engines (Bauer et al., 2015; Krizhevsky et al., 2015). We introduce an LLM interface compatible with this vision and show a path to combine traditional DBMSs and LLMs in novel hybrid query execution plans (Krizhevsky et al., 2015). This paper describes how to query pre-trained LLMs and presents preliminary empirical evidence of its potential. The core idea is that the query plan is a natural decomposition of the (possibly complex) process to obtain the result, in analogy with the recent approaches in NLP showing that breaking a complex task into a chain of thoughts is key to get the best results (Krizhevsky et al., 2015; Krizhevsky et al., 2015). To bridge the gap between a logical query plan and its execution on an LLM, we introduce new physical operators for the plan of the SQL query. Each of these new operators implements a textual prompt that is executed over the pre-trained LLM to gather the data. Our contributions may be summarized in the following points: * We introduce the problem of querying with SQL existing pre-trained LLMs with user-provided schemas. We present a prototype, Galois, that executes SPJA queries under assumptions that enable a large class of applications2.
Footnote 2: Code available at [https://gitlab.enreecom.fr/seedml/galois](https://gitlab.enreecom.fr/seedml/galois) * The logical query plan breaks down the complex task into simpler steps that can be handled effectively by the LLM. Physical operators in the query plan are implemented as textual prompts for LLMs. For this task, we introduce prompting methods that preserve traditional pipelining. * We report results on 46 queries executed on top of four popular LLMs and show how their results compare w.r.t. the same queries executed on traditional DBMS. We also show that Galois's results are better than those obtained by manually rewriting the query (and parsing the results) in NL for question answering over the same LLM. * We outline next steps and future research directions. The remainder of this paper is organized as follows. Section 2 covers recent progress in natural language processing (NLP) and compares Galois to prior work. Section 3 discusses the challenges in querying LLMs. Section 4 describes the architecture of the first prototype. Section 5 reports preliminary experimental results from datasets in the Spider corpus. Section 6 discusses future research. ## 2. Background Our vision is inspired by recent advances in the domain of natural language processing (NLP). Progress in this field has been driven by two major concepts: the Transformer neural network architecture and the application of transfer learning in training (Krizhevsky et al., 2015). The Transformer has now become the standard architecture in NLP. One of its benefits is its suitability for parallelization w.r.t. previous approaches, which has enabled the creation of massive pre-trained LLMs (Bauer et al., 2015). These models are pre-trained on tasks, such as predicting the next word in a sentence, for which large amounts of data are easily accessible. Although pre-training is costly, the models can then be adapted to a number of new tasks. 
Traditionally, "fine-tuning" with annotated examples for a target task has been the main way of customizing pre-trained LLMs. However, the latest generation of pre-trained models has opened up new possibilities. Models of sufficient size complete new tasks without any additional training, simply by being given NL descriptions of the task ("instruction tuning"). Precision is improved by incorporating a limited number of examples (e.g., five to ten) that pair the input for the task with its solution ("few-shot learning"). An example of a prompt for GPT-3 is a question in natural language ("what is the capital of USA?") or a request for the capital cities in EU ("The EU state capitals are:"). Our effort is different from the problem of semantic parsing, i.e., the task of translating NL questions into a formal representation such as SQL (Krizhevsky et al., 2015; Krizhevsky et al., 2015). Our goal is also different from querying an existing relational database to answer an NL question (Krizhevsky et al., 2015). Instead, we tackle the problem of querying the LLM with SQL queries, with the traditional semantics and with the output expressed in the relational model, as if the query were executed on a DBMS. While some of these facts can be retrieved with QA, (i) the SQL query must be rewritten as an equivalent question in NL, (ii) the textual result must be parsed into a relation, (iii) our experiments show that current LLMs in some cases fail in answering complex queries expressed as NL. QA systems are optimized for answering questions with text, while SQL queries return results in the form of tuples, possibly with complex operations to combine intermediate values, such as aggregates, where LLMs fall short (Krizhevsky et al., 2015).
To overcome some of these limits, it has recently been shown that a series of intermediate reasoning steps ("chain of thought" and question decomposition (Krizhevsky et al., 2015)) improves LLMs' ability in complex tasks (Krizhevsky et al., 2015). Our work is also different from the recent proposal for Neural DBs (Saeed et al., 2015), where textual facts are encoded with a transformer and queries are posed with short NL sentences. We do not assume facts as input and we focus on traditional SQL scripts executed on LLMs.

## 3. Design Considerations

Our goal is to execute SQL queries over the data stored in LLMs. When we look at these models from a DB perspective, they (i) have extensive coverage of facts from massive textual sources; (ii) have perfect availability, as they are immediately available to all; (iii) directly query a very compressed version of the data, as facts are stored effectively in the parameters of the model: the Common-Crawl+ text corpus takes 45TB, while GPT-3 takes only 350GB. However, LLMs have their shortcomings, as we discuss next, including poor data manipulation skills, e.g., they fail with numerical comparisons. Conversely, traditional query operators are great at processing data with rich operators, such as joins and aggregates, but only within the data available in the given relation. The combination of LLMs and traditional DBMSs shows the potential for a hybrid system that can jointly query existing relational tables and facts in the LLM. However, it is crucial to consider the limitations and challenges in querying LLMs. We now delve into three key issues that have impacted the design of Galois. **1. Tuples and Keys.** As far as we know, LLMs do not have a concept of tuple, but they model existing relationships between entities ("Rome is located in Italy") or between entities and their properties ("Rome has 3M residents").
However, a query asking for city names may assume that a name identifies a city, which is not the case in reality, e.g., there is a Rome city in Georgia, USA. In some cases, key attributes exist in the real world. For example, LLMs contain keys such as IATA codes for airports, e.g., 'JFK'. However, in general, we do not have a universal global key for several entities, such as cities, and the default semantics for the LLM is to pick the most popular interpretation, with popularity defined by occurrences of terms in the original pre-training data. In general, this problem can be solved with keys defined with multiple attributes, i.e., the _context_ in NLP terminology. For example, a composite key defined over (name, state, country) makes it possible to distinguish the Rome city in Italy from the one in Georgia. In our initial prototype, we assume that every relation involved in the query has a key and that the key can be expressed with one attribute, e.g., its name. This constraint can be relaxed by handling composite keys. **2. Schema Ambiguity.** A major challenge in language is ambiguity. Similarly to the issue with entities, several words, including attribute labels, can have multiple meanings. These alternatives are represented differently in the parameters of LLMs. In our setting, a given attribute label in the query can be mapped to multiple "real world" attributes in the LLM, e.g., _size_ for a city can refer to population or urban area (Shen et al., 2017). In this initial effort, we assume that meaningful labels for attributes and relations are used in the queries. This allows the system to obtain prompts of good quality automatically. We discuss in Section 6 the impact of prompt quality on the query results. **3. Factual Knowledge in LLMs.** LLMs do _not know what they know_. This is an intrinsic challenge in the transformer architecture and the decoder, specifically. The decoder returns the next token in a stream.
Such a token may be based on either reliable acquired knowledge, or it may be a guess. For this reason, a query result obtained from LLMs is not 100% reliable and cannot be immediately verified, as LLMs do not expose their sources with the results. However, with Galois, we experimentally demonstrate that it is possible to extract factual information from LLMs to answer SQL queries. Moreover, new models keep increasing the factuality of their answers3. In this work, we do not tackle the general problem of separating the _knowledge about language and reasoning_ from _factual knowledge_, which is an ongoing NLP research topic, as we discuss in Section 6.

Footnote 3: For example, "GPT-4 scores 40% higher than our latest GPT-3.5 on our factuality evaluations" – [https://openai.com/research/gpt-4](https://openai.com/research/gpt-4) – published on March 15\({}^{th}\) 2023

## 4. Overview

The high-level architecture of Galois is presented in Figure 1. We assume that the schema (but no instances) is provided together with the query. The system processes SQL queries over data stored in a pre-trained LLM. This design enables developers to implement their applications in a conventional manner, as the complexities of using a LLM are encapsulated within Galois.

**Operators.** The core intuition of our approach is to use LLMs to implement a set of specialized physical operators in a traditional query plan, as demonstrated in Figure 2. As tuples are not directly available, we implement the access to the base relations (leaf nodes) with the retrieval of the key attribute values. We then retrieve the other attributes as we go across the plan. For example, if the selection operator is defined on attribute \(A\) different from the key, the corresponding implementation is a prompt that filters every key attribute based on the selection condition, e.g., "Has city _c.name_ more than 1M population?", where _c.name_ iterates over the set of key values.
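The scan-then-filter pattern just described can be sketched in a few lines. In the sketch below, `ask_llm` is a hypothetical callable (prompt in, answer string out) and the fake model is an assumption for illustration only — it is not Galois's actual implementation or a real LLM client.

```python
# Sketch of Galois-style scan and selection operators. `ask_llm` is a
# hypothetical callable (prompt -> answer string); a deterministic fake
# model is injected below so the example runs without any LLM.

def scan_keys(ask_llm, relation, key, max_rounds=10):
    """Retrieve key values of a base relation, asking for more until no new ones."""
    seen = []
    prompt = f"List {key} values of {relation}."
    for _ in range(max_rounds):
        answer = ask_llm(prompt)
        new = [v.strip() for v in answer.split(",")
               if v.strip() and v.strip() not in seen]
        if not new:
            break  # termination: the model stopped producing new results
        seen.extend(new)
        prompt = "Return more results."
    return seen

def select(ask_llm, relation, keys, condition):
    """Filter key values with one yes/no prompt per key."""
    return [k for k in keys
            if ask_llm(f"Has {relation} {k} {condition}? Answer yes or no.")
            .strip().lower().startswith("yes")]

class FakeLLM:
    """Deterministic stand-in for a real model, for demonstration only."""
    def __init__(self):
        self._scan = iter(["Rome, Paris", "Pisa", ""])
        self._population_over_1m = {"Rome": "yes", "Paris": "yes", "Pisa": "no"}
    def __call__(self, prompt):
        if prompt.startswith("Has"):
            name = prompt.split()[2]  # key value in the templated prompt
            return self._population_over_1m.get(name, "no")
        return next(self._scan)

llm = FakeLLM()
keys = scan_keys(llm, "city", "name")
big_cities = select(llm, "city", keys, "more than 1M population")
```

Note how the scan operator implements the "Return more results" loop described later in the workflow, while the selection issues one prompt per key value, exactly the per-tuple pattern of the operator above.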
If a join or a projection involves an attribute that has not been collected for the tuple, it is retrieved with a special node injected right before the operation. For example, if a join involves an attribute "currentMayor", the corresponding attribute values are retrieved with a prompt that collects it for every key, such as "Get the current mayor of _c'.name_". Once the tuples are completed, regular operators, implemented in Python in our prototype, are executed on those, e.g., joins and aggregates. On one hand, the query plan acts as a chain of thought decomposition of the original task, i.e., the plan spells out intermediate steps. On the other hand, the operators that manipulate data compensate for the limitations of LLMs, e.g., in computing average values or comparing quantities (Les and Kules, 2017). Together, these two features make the LLM able to execute complex queries.

Figure 2. Logical plan for query q' with notes about its execution. Base relations are accessed by retrieving sets of tuples (_C_, _P_) with one key attribute (_name_) from the LLM. Other LLM operators consume and produce tuple sets, retrieving for every tuple the required attributes, if not in the tuple yet, such as _population_ in the selection condition for _Cities_. The last two operators do not involve the LLM.

**Prompts.** Figure 2 shows how prompts, suitable for execution on LLMs, implement logical operators in Galois. In the figure, each operator is annotated with a simplified prompt, obtained automatically by combining a set of operator-specific prompt templates with the labels/selection conditions in the given SQL query. In the example query \(q^{\prime}\), politicians (Politicians p) are filtered according to their age (p.age < 40); this corresponds to retrieving a set of tuples (P) with one key attribute (name), which is then followed by a prompt that checks in the LLM the age of each politician.
For example, we instantiate the template "Has _relationName keyName attributeName operator value?_", with 'politician', 'B. Obama', 'age', 'less than', '40', respectively.

**Workflow.** The main operations in Galois's query processing are:

1. Obtain a logical query plan for a query \(q\) and the source schema. We assume the label of the key attribute is given.
2. Access the LLM to retrieve the tuples composed of the key attribute and to gather more attributes in case of selections, joins and projections. Each operation is done with a prompt template filled up with the labels and conditions at hand.
3. Convert the string of answers from the LLM to a set of values in the attribute.
4. Use traditional algorithms for any operator involving attributes that have already been retrieved.

In this minimalist yet general design, there are two critical steps that enable the practical utilization of Galois. First, as relations can be large, we iterate with a prompt until we stop getting new results. For example, we first ask for city names, we collect the answer in a set, and keep asking for more names with another prompt ("Return more results"). The termination condition could be replaced by a user-specified threshold, either on the number of result tuples or on the cost to retrieve them. A second practical issue is the cleaning of the data gathered from the LLM. In particular, numerical data can be retrieved in different formats. We normalize every string expressing a numerical value (say, 1k) into its expression as a number (1000). The enforcing of type and domain constraints is crucial to limit the incorrect output due to hallucinations of the model. The present iteration of Galois is crafted to show queries that effectively retrieve relations from LLMs. This initial implementation serves as the experimental platform reported in the next section.

## 5. Experiments

All experiments are executed on popular LLMs for a set of SQL queries for which we have the ground truth according to a database.

**Dataset.** Spider is a Text2SQL dataset with 200 databases, each with a set of SQL queries (Zhu et al., 2017). For each query, it provides the paraphrases of the SQL query as a NL question. We focus on a subset of 46 queries for which it is reasonable to obtain answers from a LLM. Indeed, some queries are specific to the given Spider dataset, e.g., "How many heads of the departments are older than 56?". We use datasets about world geography and airports, e.g., "What are the names of the countries that became independent after 1950?". If there are multiple paraphrases for a question, we pick the first one.

**Setup.** We test four LLMs. _Flan-T5-large_ (**Flan**): fine-tuned T5 on a collection of datasets described via instructions (783M parameters). _TK-instruct-large_ (**TK**): T5 with instructions and few-shot with positive and negative examples (783M parameters). _InstructGPT-3_ (**GPT-3**): fine-tuned GPT-3 using instructions from humans (Zhu et al., 2017) (175B parameters). _GPT-3.5-turbo_ (**ChatGPT**): the latest model available in the OpenAI API (175B parameters). We construct prompts appropriately for each model; we report the one for InstructGPT-3 in Figure 3. For a given LLM \(M\) and a SQL query \(q\) with its Spider relation \(D\) and the corresponding NL question \(t\), we obtain several results: (a) relation \(R_{M}\) from Galois executing \(q\) over \(M\), (b) relation \(R_{D}\) by executing \(q\) over \(D\), (c) text \(T_{M}\) by asking \(t\) over \(M\). Only (b) uses the relations from Spider; (a) and (c) get the data from the LLM.

**Evaluation.** We analyze the results across two dimensions.

_1) Cardinality._ First, we measure to which extent Galois returns correct results in terms of number of tuples.
As NL questions always return text paragraphs, we cannot include their results in this analysis: the output is not structured data. For Galois, all relations have the expected schema; this is obtained by construction from the execution of the query plan, i.e., every \(R_{M}\) has the same schema as every \(R_{D}\). However, in terms of number of tuples there are differences, as reported in Table 1. We compute the ratio of the sizes as \(f=\frac{2|R_{D}|}{|R_{D}|+|R_{M}|}\), where 1 is the best result, and report the difference as a percentage (averaged over all queries with non-empty results) with the formula \(1-f\). The results show that smaller models do worse and miss lots of result rows, up to 47.4% w.r.t. the size of results from the SQL execution \(R_{D}\). For GPT models, almost all queries return a number of tuples close to \(R_{D}\). Most of the differences are explainable with errors in the results of the prompts across the query pipeline, as we discuss in more detail next.

\begin{table} \begin{tabular}{l c c c c} \hline & **Flan** & **TK** & **GPT-3** & **ChatGPT** \\ \hline Difference as \% of \(R_{D}\) size & -47.4 & -43.7 & +1.0 & +19.5 \\ \hline \end{tabular} \end{table} Table 1. Cardinality of Galois's results w.r.t. the ground truth relations \(|R_{D}|\) for the 46 Spider queries. Closer to 0 is better.

Figure 3. Instructions at the beginning of GPT-3's prompt.

_2) Content._ Second, we measure the quality of the cell values in the results computed by all baselines. In this test, we compare the content of each cell value after manually mapping tuples between \(R_{D}\) on one side (ground truth) and (\(R_{M}\), \(T_{M}\)) on the other side. As \(T_{M}\) is NL text, we manually postprocess it (by splitting comma-separated values and removing repeated values and punctuation) to extract the values as records. We consider a numerical value in (\(R_{M}\), \(T_{M}\)) as correct if the relative error w.r.t. \(R_{D}\) is less than 5%. As this analysis requires manually verifying every result, we conduct it only for one LLM.

\begin{table} \begin{tabular}{l c c c c} \hline & **All** & **Selections only** & **Aggregates** & **Joins** \\ \hline \(R_{M}\) (SQL Queries) & 0.50 & 0.80 & 0.29 & 0 \\ \(T_{M}\) (NL Questions) & 0.44 & 0.71 & 0.20 & 0.8 \\ \hline \end{tabular} \end{table} Table 2. Cell values matches (%) w.r.t. the ground truth results \(R_{D}\) for the 46 Spider queries. Averaged results for ChatGPT.

Results in Table 2 show that Galois executes the queries on ChatGPT with a better average accuracy in the results compared to the same queries expressed as questions in NL. We believe this is a very promising result, as one can think that the results coming from the NL QA task are the upper bound for what the LLM knows. For the easiest subclass of queries, selection-only, the query approach returns correct values in 80% of the cases. Joins are the most problematic, as we observe failures in the join step due to different formats of the same text, e.g., an attempt to join the country code "IT" with "ITA" for entity Italy. As we do not control the infrastructure used by OpenAI, we do not report API execution times. On average, **GPT-3** takes around 20 seconds to execute a query (about 110 batched prompts per query). The distributions for these metrics are skewed, as they depend heavily on the result sizes.

## 6. Research Directions

Galois aims at creating a system that can push the boundaries of declarative query execution over LLMs, while achieving comparable accuracy and performance to queries executed on a traditional DBMS. While the current prototype does not yet meet these goals, we discuss the main steps in this vision, including open research questions and associated challenges.

**Query optimization.** As in a traditional DBMS, optimization can be organized according to the logical and physical plans.
For the _logical_ plan, we need optimization heuristics to obtain equivalent logical plans that reduce the number of prompt executions (which can be large) over the LLM. In the example in Figure 2, pushing down the selection over city population to the data access call (leaf) requires combining the prompts, e.g., "get names of cities with \(>1\)M population". This simple change removes the prompt executions for filtering the list of all cities. However, the optimization decision is not trivial, as combining too many prompts leads to complex questions that have lower accuracy than simple ones. For the _physical_ plan, interesting problems arise around the textual prompts. Research questions include how to generate them automatically given only the attribute labels, especially when those are ambiguous or cryptic. The rule of thumb is that the more precise the prompt, the better the accuracy of its results. One direction is to make use of data samples, when available. Giving examples of the desired output would guide the LLM to the right format and encoding, which is a big challenge in our current implementation. Another approach is to optimize the prompt for the retrieval task, with some fine tuning or by exploiting pre-defined embeddings for the desired attribute types (Sutskever et al., 2017; Wang et al., 2018). Finally, a key research direction is how to enable hybrid systems that can jointly query traditional storage and LLMs, e.g., when to trigger the more expensive LLM prompts, or how to combine intermediate results.

**Knowledge of the Unknown.** To overcome the problem of the results mixing real facts and invented tokens (hallucinations), one direction is to verify generated query answers by another model, possibly also built on LLMs. In most cases, verification is easier than generation, e.g., it is easier to verify a proof than to generate it.
Our enforcing of simple domain constraints shows benefit, but there is a need to adapt more general data cleaning techniques (Kang et al., 2019). Another direction is retrieval augmented language models, where modules are designed to separate the "language understanding and reasoning" part from the "factual knowledge" part (Dong et al., 2019). Our prompting is a basic approach to surface facts, but more principled solutions are needed to obtain reliable results (Kang et al., 2019).

**Provenance.** Retrieval augmented models are also a promising direction to address the fact that LLMs do not cite the sources, or provenance (Sutskever et al., 2017), of their output. This is an issue because it is not possible to judge correctness without the origin of the information. There are ongoing efforts on linking generated utterances, or values in our case, to sources (Bang et al., 2019). This can also be done through the generation process or in a post-processing step (Sutskever et al., 2017).

**Schema-less querying.** We currently assume the SQL schema as given by the user. An interesting extension is to allow users to query without providing a schema. This removes friction from the user, but raises new challenges. Consider the following two queries.

Q1: SELECT c.cityName, cm.birthDate FROM city c, cityMayor cm WHERE c.mayor = cm.name

Q2: SELECT cityName, mayorBirthDate FROM city

Both of them collect the names of cities with the birth date of the mayor. As the LLMs have no schema, both queries should give the same output when executed, i.e., two SQL queries that are both correct translations of the same NL question should give equivalent results. How to guarantee this natural property (for DBs) is a challenge that requires combining the new challenges of the LLM setting with results on SQL query equivalence (Kang et al., 2019).
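The equivalence requirement can be made concrete with a toy example: both query shapes, evaluated over the same underlying facts, must return the same rows. In the sketch below the dicts stand in for facts an LLM would surface; the mayor names and dates are placeholders, not real data.

```python
# Toy check of the Q1/Q2 requirement: an explicit join (Q1 shape) and a
# single wide relation (Q2 shape) over the same facts must return the
# same rows. The dicts stand in for facts an LLM would surface; the
# mayor names and birth dates below are placeholders.

city_mayor = {"Rome": "Mayor A", "Paris": "Mayor B"}
mayor_birth = {"Mayor A": "1970-01-01", "Mayor B": "1975-02-02"}

def q1():
    # SELECT c.cityName, cm.birthDate FROM city c, cityMayor cm WHERE c.mayor = cm.name
    return sorted((city, mayor_birth[mayor]) for city, mayor in city_mayor.items())

def q2():
    # SELECT cityName, mayorBirthDate FROM city
    return sorted((city, mayor_birth[city_mayor[city]]) for city in city_mayor)

same_answer = q1() == q2()
```

With a schema, the two plans are trivially equivalent as above; without one, the system must guarantee that both prompt decompositions surface the same facts, which is exactly the open challenge.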
**Updates and Cost.** We envision that querying LLMs will be less common than querying traditional DBMSs; LLMs are a source for some use cases, but not a replacement. In the hybrid case, optimization will aim at minimizing the number of prompt calls. However, training and using LLMs is expensive and energy consuming. Given the cost of training, it is not clear how to deal with the continuous creation of new information (Kang et al., 2019). One short term solution is to update LLMs without retraining (Bang et al., 2019; Wang et al., 2018). Looking at the long term, cost issues will be alleviated by the fact that training and inference are getting cheaper with better technology4.

Footnote 4: For example, "Through a series of system-wide optimizations, we've achieved 90% cost reduction for ChatGPT since December" – [https://openai.com/blog/introducing-chatgpt-and-whisper-apis](https://openai.com/blog/introducing-chatgpt-and-whisper-apis) – published on March 1\({}^{st}\) 2023

**Coverage and Bias.** LLMs focus on the common and probable cases by design. We found that, for some queries, the missing results are due to their lower popularity, compared to those surfaced by the LLM. This is likely to get better as researchers focus more on this challenge (Kang et al., 2019; Wang et al., 2018). However, LLMs do well with huge amounts of data, which is available only for a few languages. While the problem is mitigated with machine translation, i.e., by translating from English to a target language, in terms of factual knowledge there is no clear solution. If facts about an African country are missing from the English corpus, it is then impossible for the LLMs to capture them. While they can miss facts, LLMs do encode biases and stereotypes that are present in observed human language. As humans are biased and use stereotyping, we must be careful when applying these models in real-world applications [3].
2305.07034
Quran Recitation Recognition using End-to-End Deep Learning
The Quran is the holy scripture of Islam, and its recitation is an important aspect of the religion. Recognizing the recitation of the Holy Quran automatically is a challenging task due to its unique rules that are not applied in normal speaking speeches. A lot of research has been done in this domain, but previous works have detected recitation errors as a classification task or used traditional automatic speech recognition (ASR). In this paper, we proposed a novel end-to-end deep learning model for recognizing the recitation of the Holy Quran. The proposed model is a CNN-Bidirectional GRU encoder that uses CTC as an objective function, and a character-based decoder which is a beam search decoder. Moreover, all previous works were done on small private datasets consisting of short verses and a few chapters of the Holy Quran. As a result of using private datasets, no comparisons were done. To overcome this issue, we used a public dataset that has recently been published (Ar-DAD) and contains about 37 chapters that were recited by 30 reciters, with different recitation speeds and different types of pronunciation rules. The proposed model performance was evaluated using the most common evaluation metrics in speech recognition, word error rate (WER), and character error rate (CER). The results were 8.34% WER and 2.42% CER. We hope this research will be a baseline for comparisons with future research on this public new dataset (Ar-DAD).
Ahmad Al Harere, Khloud Al Jallad
2023-05-10T18:40:01Z
http://arxiv.org/abs/2305.07034v1
Keywords: Deep Learning, End-to-End, Speech Recognition, Quran recitation, Natural Language Processing

## 1 Introduction

Speech communication is an important way of social interaction to convey our thoughts, ideas, and emotions to others. Moreover, speech is also a crucial tool for learning and education, as it is the primary way in which information is exchanged between teachers and students.
Processing speech using computers and artificial intelligence is a complex task that has been a hot research topic in recent years. One of the main challenges is automatically transcribing spoken words into text. The performance of such systems has greatly improved in recent years due to advancements in deep learning. The Arabic language is a rich language with a long history and cultural significance. It is spoken by over 400 million people worldwide, and it is the official language of many countries. Arabic speech recognition is a challenging task because of a lack of resources and many variations in pronunciation and dialects. Despite these difficulties, researchers have made significant progress in developing Arabic speech recognition systems, which can be used in tasks such as automatic speech transcription and translation, and also in speech-enabled applications such as voice assistants and chatbots. The Holy Quran, which was revealed in the Arabic language, holds a central place in the hearts and minds of Muslims. The Quran is considered the holy book of Islam and the words of Allah. In addition, it is a guide for all aspects of life, providing moral and spiritual teachings and a source of inspiration and guidance. Recognizing Holy Quran recitation is a particularly challenging task because of the specific requirements of the task, such as recognizing different recitation styles and checking the correct pronunciation of Tajweed rules, a set of pronunciation rules that must be applied to recite the Quran in the same way that the Prophet Muhammad did. Also, Quran recitation includes many unique sounds and intonations that are not used in other forms of spoken Arabic. Much research has been done on Quran recitation processing over time to make Quran recitation easier and more accessible to a wider audience.
One of these hot research topics is Quran recitation recognition systems for tasks such as recognizing reciters and detecting errors in recitation based on Tajweed rules. Although recognizing the recitation of the Holy Quran has been a hot research topic in recent years, most research papers are limited to detecting the mispronunciation of words or of some Tajweed rules on small private datasets. Some researchers proposed detecting the mispronunciation of recited verses directly from speech features, while others proposed converting the recitation into text using traditional ASR. As all previous works were conducted on small private datasets of a few chapters, they do not cover a large number of examples of Tajweed rules and different speeds of recitation. As a result, previous works are still not effective enough to be applied in real-life applications, because detecting only the mispronunciation of the Tajweed rules is less important than detecting the mispronunciation of words. Moreover, the use of traditional ASR in recitation recognition suffers from many problems, as ASR models require specific forms of datasets which are not yet available for Quran recitations. We will discuss those problems in detail later in the related works section. This paper aims to fill these gaps through the use of end-to-end deep learning methodologies that overcome the problems of using traditional ASR. Experiments were done on the Ar-DAD dataset, which is a large public dataset that covers most Tajweed rules and the different speeds of recitation, thanks to the participation of about 30 reciters. The main contributions of this work are:

* Using the end-to-end methodology instead of traditional ASR, for automatic phoneme alignment, so we do not need alignment tools anymore. To the best of our knowledge, the task of Quran recitation recognition has not yet been tackled using an end-to-end deep learning approach, and this work fills this gap.
* We have evaluated our model on a big public dataset, so it can be a baseline model for later comparisons.
* By comparing the predicted text with the real text, our solution can determine the type of error (deletion, substitution, addition) at the level of words, characters, and some Tajweed rules, with the exact position of the error.

In terms of limitations, first, all the samples in the Ar-DAD dataset are from male reciters, which makes the model less robust for recognizing recitations by women and children. Second, the Ar-DAD dataset contains samples from only one recitation form, the 'Hafs from Aasim' recitation [1][2]. However, recitation of the Holy Quran has ten recitation forms (Qira'at) approved by scholars [3]. The differences between these ten forms are mainly in the pronunciation of certain words, prolongation, and intonation [1]. Each form of recitation has its own unique features that distinguish it from the others. This may lead the model to incorrectly recognize samples with different forms of recitation.

## 2 Related Works

The Arabic language has several forms: two formal forms and many slang forms. As for the formal forms, Arabic has the Classical Arabic language (CA), which is the language of the Holy Quran, and Modern Standard Arabic (MSA), which is used in news, books, etc. As for the slang forms, Arabic has many dialects that differ from one country to another. Since this paper is about the recognition of Classical Arabic speech, the literature review will focus only on papers done for the recognition of Holy Quran recitation. However, as no research has been done using end-to-end approaches on the Holy Quran, we also discuss some works that used end-to-end deep learning methodologies on Modern Standard Arabic. There are several challenges in Quran recitation recognition. First, several letters in the Arabic language have confusing pronunciations, as they share the same place of articulation and some characteristics [4], [5].
This confusion significantly impacts speech recognition models, as it leads to errors in the recognition of Arabic speech and decreases the overall accuracy of the model. For example, if a model is not able to distinguish between the letters "ص" (sad) and "س" (seen), it may incorrectly recognize a word pronounced with "ص" as the same word pronounced with "س", which can lead to misinterpretation of the text and an increase in both Word and Character Error Rates. Second, the complex and nuanced rules of Tajweed make it difficult for an AI model to accurately recognize recitation, because some of these rules change the way letters are pronounced when applied. For example, when the Turning rule is applied, the pronunciation of "ن" (noon) becomes "م" (meem). Third, having different forms of Quran recitation makes ASR a more complex task because of different ways of pronouncing some of the Tajweed rules, and the length or manner of pronunciation may vary depending on the context and the reciter. For example, Separated Lengthening, which is the prolongation of a letter that comes at the end of a word, can be pronounced for 2, 4, or 5 counts in length. Another example is Concealment, which is the rule of hiding the pronunciation of certain letters; it can be pronounced in different degrees of hiddenness. Moreover, one of the main difficulties in the Quran recitation detection task is that there are three different speeds for reciting the Quran: Hadr, Tahqeeq, and Tadweer [6]. Each speed has its own unique advantages, and each of them is used to help listeners understand the Quran better and to get the most out of the recitation. Hadr is typically considered the fastest speed of recitation, where the emphasis is on fluency and the ability to recite large portions of the Quran quickly and smoothly.
This speed is particularly useful for those who are already familiar with the Quran, have a deep understanding of it, and are proficient in the rules of Tajweed. Tadweer is the moderate speed of recitation, where the emphasis is on proper pronunciation and intonation, while still maintaining a relatively moderate pace. This speed is particularly useful for those who have a basic knowledge of Tajweed and are trying to improve their recitation and pronunciation. Tahqeeq is the slowest speed of recitation, where each letter is pronounced clearly and deliberately, allowing the listener to fully understand the meaning of each verse. This speed is particularly useful for those who are learning to recite the Quran for the first time or for those who are not yet familiar with the rules of Tajweed.

In this section, we discuss two basic types of research: research on the Quran, and research that used end-to-end deep learning on Modern Standard Arabic (MSA), as there are no end-to-end experiments on the Quran. As for research on the Quran, there are two basic types of methodologies:

* Research based on detecting mispronunciation from speech directly, either for Tajweed rule mispronunciation or character mispronunciation. Table 1 shows a comparison of these studies.
* Research based on traditional ASR that converts speech to text, then detects mispronunciation by comparing the resulting text with the Quran text.

### Researches based on detecting mispronunciation from speech directly

Researches based on detecting mispronunciation from speech directly were done using traditional methodologies, such as Hidden Markov Model (HMM) and Gaussian Mixture Model (GMM), or machine learning models such as Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP), as follows: Hassan et al. [7] developed a solution to recognize Qalqalah Kubra1 [4] pronunciation using a Multilayer Perceptron as a classifier and MFCC for feature extraction.
The dataset used contains 50 samples, each with correct and incorrect pronunciation, and the achieved results ranged from 95% to 100%.

Footnote 1: Qalqalah: the vibration of sound at the end of the pronunciation of a letter.

Al-Ayyoub et al. [8] used machine learning to build a model for the automatic recognition of Quran recitation rules (Tajweed). This model was able to determine the recitation correctness of the following eight rules of intonation: EdgamMeem2, EkhfaaMeem3, Tafkheem Lam4, Tarqeeq Lam5, Edgam Noon6 (Noon), Edgam Noon (Meem), Edgam Noon (Waw) and Edgam Noon (Ya'). The authors used a dataset that consists of 3,071 audio files, each containing a recording of exactly one of the eight rules under consideration (in either the correct or the incorrect usage of the rule). For feature extraction, many techniques were used, such as Linear Predictive Coding (LPC), Mel-Frequency Cepstral Coefficients (MFCC), Multi-Signal Wavelet Packet Decomposition (WPD), and Convolutional Restricted Boltzmann Machines (CRBM). As for classification, several classifiers were used, such as k-Nearest Neighbors (KNN), Support Vector Machines (SVM), Artificial Neural Networks (ANN), and Random Forest (RF), with an accuracy of 96% using SVM.

Footnote 2: EdgamMeem: If the start of the word begins with a meem and is followed with a meem sakinah, then merge the words through the meem and apply ghunnah.

Footnote 3: EkhfaaMeem: If a "ب" (ba) follows a meem sakinah, then apply a ghunnah while hiding the meem sakinah before continuing to the "ب".

Footnote 4: Tafkheem Lam: If there is a Fatha or a Dhamma before the word of Allah or Allahum, then the laam in Allah will be heavy.

Alagrami et al.
in [9] proposed a solution that makes use of threshold scoring and a support vector machine (SVM) to automatically recognize four different Tajweed rules (Edgham Meem, Ekhfaa Meem, Tafkheem Lam, Tarqeeq Lam) with 99% accuracy, where filter banks were adopted for feature extraction. The dataset used contained about 657 records of Arabic natives and non-natives; each rule has 160 records, and each of them is either the correct or the wrong pronunciation of this rule. A Tajweed classification model was developed by Ahmad et al. in [10]. This solution focused on a set of Tajweed rules called "the Noon Sakinah rules" and in particular the rule of "Idgham" with and without "Ghunnah". Mel-Frequency Cepstral Coefficients and a neural network were used for the feature extraction and the classification process, where the Gradient Descent with Momentum, Resilient Backpropagation, and Levenberg-Marquardt optimization algorithms were used to train the neural network. The Levenberg-Marquardt algorithm achieved the highest test accuracy (77.7%), followed by Gradient Descent with Momentum (76.7%) and Resilient Backpropagation (73.3%). The dataset used was 300 audio files of recitations by two famous reciters, each being a recitation of one of those Tajweed rules. Nahar et al. in [11] took a different path, as they proposed a recognition model to recognize the "Qira'ah" from Holy Quran recitation with an accuracy of 96%: according to the narration "hadith" No. 5041, taken from [12], the Holy Quran has seven main reading modes, known as "Qira'at," which are acknowledged as the most popular methods of reciting the Holy Quran, plus three complementary readings of the seven. This model used Mel-Frequency Cepstrum Coefficients (MFCC) features and a Support Vector Machine (SVM), where the authors built a dataset of 10 categories, each one representing a type of Holy Quran recitation or "Qira'ah", with a total of 258 wave files.
For detecting letter and word errors from speech directly, the studies carried out in this field used classifiers trained on datasets containing samples of mispronunciation and correct pronunciation of specific verses, or they stored the characteristics of correct recitations of certain verses in a database, compared users' recitations with the stored recitations, and then calculated the similarity using a threshold. One of the most significant shortcomings of this methodology is that it can only categorize the recitations of the verses that were presented in the datasets. One of the earliest works in this field is that of Tabbal et al. [13], where an automated verse delimiter and an error detection model were developed for the recitation of the Holy Quran. An HMM classifier and Mel-Frequency Cepstral Coefficient (MFCC) features were used on a private dataset of one hour of recitations of Surah Al-Ikhlas only. The best accuracy obtained by this solution was 85% for females and 90% for males. Putra et al. [14] developed software for Quranic verse recitation learning. The proposed solution used Mel-Frequency Cepstral Coefficients (MFCC) for feature extraction and a GMM model as a classifier. In order to test the reliability and accuracy of correction, a dataset was collected from ten speakers, each reading some verses both incorrectly and correctly. The achieved correction accuracy was 90% for hija'iyah letters (Arabic alphabet letters) pronunciation, 70% for recitation law, where the law might be idgham, ikhfa' or idhar, and 60% for the combination of pronunciation and recitation law. In [15], Rahman et al. proposed an automated checking system to teach children the correct recitation of the Holy Quran. Mel-Frequency Cepstral Coefficients (MFCC) were used for feature extraction, and a Hidden Markov Model (HMM) was used for classification and recognition.
Using the HMM algorithm, the model can identify and highlight any discrepancy or inconsistency in children's recitation by comparing it with the correct teacher's recitation stored in a database; only one chapter of the Quran was supported, Surah Al-Fatiha. Muhammad et al. [16] proposed the E-Hafiz system to facilitate learning recitation of the Holy Quran, where Mel-Frequency Cepstral Coefficients (MFCC), Vector Quantization (VQ), and calculation of distances between vectors were used to extract the features, reduce the number of feature vectors, and compare the result with a threshold value. A dataset of 10 expert recitations of the first 5 surahs of the Holy Quran was used, and recognition accuracies of 92%, 90%, and 86% were achieved for men, children, and women, respectively. Rajagede and Hastuti [17] proposed a model to help users check their Al-Quran memorization using a Siamese Long Short-Term Memory (LSTM) network. The Siamese LSTM network was used to check the similarity between two samples, so it verifies the recitation by matching the input with existing data for a recited verse, without performing a speech-to-text extraction process. Two Siamese LSTM architectures were compared: the Siamese-Classifier, which employed binary classification, and the Manhattan LSTM, which produced a single numerical value to indicate similarity. In addition, the performance of the models was compared across Mel-Frequency Cepstral Coefficients (MFCC), Mel-Frequency Spectral Coefficients (MFSC), and delta features, where the best result, an F1-score of 77.35%, was obtained using MFCC with delta features and the Manhattan LSTM. Four reciters who recited 48 verses from the last 10 surahs of the Quran provided the data used to train the model.
### Research based on traditional ASR

Very few studies suggested converting the recitation speech into text using Automatic Speech Recognition techniques and then detecting mispronunciation by comparing the resulting text with the Quran text. This methodology is better than models that detect mispronunciation from speech directly, as it helps to detect various errors in the user's recitation by comparing the predicted text with the original text of the verses. Moreover, no wrong-pronunciation samples are needed. To detect mistakes made during the recitation of the Holy Quran, Tabbaa and Soudan [18] created a computer-aided recitation training solution that combined Automatic Speech Recognition (ASR) with a classifier-based approach to increase the detection rate. This solution detects errors in two phases: the HMM-based ASR recognizes the recitation, and then the classifiers are applied, where two classifiers were used: one to distinguish between the emphasized and non-emphasized pronunciations of the Arabic letter "R", and the other to separate closely related and frequently mixed-up letter pronunciations. The HMM recognizer was trained using CMU Sphinx, and the classifiers were built using WEKA (Waikato Environment for Knowledge Analysis), where numerous machine learning algorithms were tested. Up to 7 hours of recitations were recorded from phone calls to a TV program in which a recitation scholar reads a page from the Quran before listening to the students' recitations and correcting any mistakes. According to the results, the system has a word-level accuracy of 91.2%, having been tested on 60 minutes of continuous recitation. Al-Bakeri et al. in [19] introduced an ASR integrated with a self-learning environment that depends on MVC architectures to correct recitation automatically.
The speech recognition model was built using the open-source CMU Sphinx tools, which also contain the Hidden Markov Model (HMM) code that was chosen for feature extraction, feature training, and pattern recognition. Language models were also used in building the system and were built using the CMU-CSLMT tools [20]. The corpus contains the recitations of two short chapters, Surah Al-Ikhlas and Surah Al-Rahman, which were recorded by 10 famous Quran reciters. To assess the ASR performance, the word error rate (WER) was used, where the ASR output was compared with the correct words of the verse considering insertions, deletions, and substitutions; the reported correctness ranged between 47.47% and 75.2%. A speech recognizer for the Holy Quran was introduced by Tantawi et al. in [21]. This solution is able to recognize the recitation of some verses in addition to some Tajweed rules that were taken into account during the development process; it was trained using 32 recordings of Chapter 20 of the Holy Quran according to the narration of "Hafs on the authority of Asim" (one of the ten reading forms of the Quran). The pronunciation dictionary for the Holy Quran recitations was built using an automated tool proposed by [22], where the transcription was passed to it to build the dictionary. As for the language model, the SRI Language Modeling (SRILM) toolkit [23] was used. With the KALDI toolkit, numerous experimental configurations with various dataset sizes and Tajweed rules were tried. The best experimental setup used MFCC features and Time Delay Neural Networks (TDNN), where Word Error Rates (WER) and Sentence Error Rates (SER) ranged from 0.27% to 6.31% and 0.4% to 17.39%, respectively. However, this methodology is not effective in recognizing Quran recitation as it is based on traditional ASR, because there was a problem with the alignment process needed to train the acoustic model.
The traditional ASR consists mainly of three models:

* Pronunciation dictionary, which converts words from the original language into a series of phonemes that express the pronunciation of these words.
* Acoustic model, which connects phonemes with the features extracted from the corresponding sound.
* Language model, which is responsible for determining the most likely sequence of words based on the context and grammar of the language.

Figure 1 shows how these components work together.

Figure 1: Traditional ASR workflow [24]

Training datasets must contain the correct alignment between the acoustic frames and phonemes in order to train the acoustic models. This is one of the biggest problems in this field, as no dataset of this format is available for Quran recitations, unlike MSA, which has several datasets of this format, such as the KAPD dataset [25] and the Nawar Halabi dataset [26]. As a result, researchers who proposed this method used automated tools that perform this alignment process, but the results were not good enough to recognize the recitation efficiently. For this reason, we proposed using the end-to-end methodology instead of traditional ASR. End-to-end models can do the alignment process automatically without any need for additional tools, and convert acoustic features to text transcription directly without the need for all the other components required in traditional ASR, which makes them more efficient and suitable for Quran recitation recognition. Figure 2 and Figure 3 show a comparison between the conventional ASR pipeline and the end-to-end pipeline.

Figure 2: Traditional ASR pipeline [27]

Figure 3: End-to-End ASR pipeline [27]

To the best of our knowledge, the task of Quran recitation recognition has not yet been tackled using an end-to-end deep learning approach, and this work aims to fill this gap.
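As a toy illustration of how the three components combine, the sketch below scores candidate words with a stand-in acoustic model and a unigram language model; every word, phoneme entry, and score is invented purely for demonstration and does not come from any real system.

```python
# Toy sketch of the three traditional-ASR components; all entries
# (words, phonemes, scores) are invented for illustration only.

# 1) Pronunciation dictionary: word -> phoneme sequence.
lexicon = {
    "kitab": ["k", "i", "t", "a", "b"],
    "nur": ["n", "u", "r"],
}

# 2) Acoustic-model stand-in: score how well a phoneme sequence matches
# the audio frames (here just a length penalty, for demonstration).
def acoustic_score(frames, phonemes):
    return -1.0 * len(phonemes)

# 3) Language-model stand-in: unigram log-probabilities over words.
language_model = {"kitab": -0.5, "nur": -1.2}

def recognize(frames):
    # Combine the components: pick the word maximizing
    # acoustic score + language-model score.
    return max(language_model,
               key=lambda w: acoustic_score(frames, lexicon[w]) + language_model[w])

print(recognize([0.1, 0.2, 0.3]))  # "nur" under these toy scores
```

A real system searches over word sequences rather than single words, but the division of labor between the three models is the same.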
| Work | Dataset | Feature Extraction | Model | Recognition Level | Accuracy |
| --- | --- | --- | --- | --- | --- |
| [7] | 100 samples of Qalqalah pronunciation | Mel-Frequency Cepstral Coefficients (MFCC) | Multi-Layer Perceptron (MLP) neural network | One Tajweed rule (Qalqalah) | 95% - 100% |
| [8] | 3,071 audio files, each containing a recording of exactly one of the eight rules | CRBM, MFCC, WPD, HMM-SPL | Support Vector Machine (SVM) | Eight Tajweed rules (EdgamMeem, EkhfaaMeem, Tafkheem Lam, Tarqeeq Lam, Edgam Noon (Noon), Edgam Noon (Meem), Edgam Noon (Waw) and Edgam Noon (Ya')) | 96.4% |
| [9] | 657 records of Arabic natives and non-natives | Filter banks | Support Vector Machine (SVM) | Four different Tajweed rules (Edgham Meem, Ekhfaa Meem, Tafkheem Lam, Tarqeeq Lam) | 99% |
| [10] | 300 audio files of recitations by two famous reciters | Mel-Frequency Cepstral Coefficients (MFCC) | Neural network | "Idgham" rules with and without "Ghunnah" | 73.3% - 77.7% |
| [11] | 258 wave files | Mel-Frequency Cepstral Coefficients (MFCC) | Support Vector Machine (SVM) | Classification of recitation into one of ten recitation types ("Qira'at") | 96% |
| [14] | Voice recorded from an expert | Mel-Frequency Cepstral Coefficients (MFCC) | Gaussian Mixture Model (GMM) | Letter level and some Tajweed rules | Letters: 90%; Tajweed rules: 70%; Combination: 60% |
| [16] | 10 expert recitations of the first 5 surahs of the Holy Quran | Mel-Frequency Cepstral Coefficients (MFCC) | Threshold based on Euclidean distance | Word level for verses in the dataset | 86% - 92% |

Table 1: A comparison of some recitation recognition works

As our proposed model is based on an end-to-end model and there is no such model for Quran recitation processing, we discuss end-to-end work on Modern Standard Arabic, since some researchers have applied end-to-end deep learning to MSA. A comparison of these works is shown in Table 2. Hussein et al. [29] proposed an end-to-end transformer-based Arabic Automatic Speech Recognition (ASR) model with a multitask objective function of Connectionist Temporal Classification (CTC)/Attention, where long short-term memory (LSTM) and transformer-based language models (TLM) were the two kinds of language models utilized in this work. The proposed model was compared to previous approaches on the Modern Standard Arabic (MSA) recognition task using Multi-Genre Broadcast 2 (MGB2) [30] data and on the Dialectal Arabic recognition task using MGB3 [31] and MGB5 [32] data. While the conventional word error rate (WER) was used to evaluate the model results for the first task, the multi-reference word error rate (MR-WER) and averaged WER (AV-WER), adopted from the MGB3 [31] and MGB5 [32] challenges, were used to evaluate the model results for the second task. WERs of 12.5%, 27.5%, and 33.8% were achieved for the MGB2 [30], MGB3 [31], and MGB5 [32] challenges, respectively. Ahmed et al. [33] introduced an end-to-end model based on a Bidirectional Recurrent Neural Network with a CTC objective function and a 15-gram language model as an Arabic speech-to-text transcription system. Also, a character-based decoder without a lexicon was used. This model was evaluated using a 1,200-hour corpus of the Aljazeera multi-genre broadcast programs (MGB2) [30], where the WER was 12.03% for non-overlapping speech on the development set. Belinkov et al. [34] analyzed the internal learned representations in an end-to-end ASR model for two languages (English and Arabic).
Three datasets were used: LibriSpeech [35] and TED-LIUM [36] for English, and the MGB-2 corpus [30], which has 1,200 hours from the Al Jazeera Arabic TV channel, for Arabic. Alsayadi et al. [37] proposed end-to-end deep learning approaches to build a diacritized Arabic ASR. Two types of speech recognition approaches were used: the conventional ASR approach and the end-to-end ASR approach, which consists of two models. The first model was built using joint CTC-attention based on the ESPnet toolkit [38] with an RNN-based language model, and the second model was built based on CNN-LSTM with the attention method using the Espresso toolkit [39] and an external LM containing about 1.8 M words and 245 k unique words. Training and testing of these models were done on the Standard Arabic Single Speaker Corpus (SASSC), which contains 7 hours of Modern Standard Arabic speech. WERs of 33.72%, 31.10%, and 28.48% were achieved for the conventional ASR, the first end-to-end model, and the second end-to-end model, respectively.

## 3 Dataset

The dataset used in this work is the Ar-DAD dataset [40], a large dataset of Arabic audio clips containing 15,810 clips of 30 popular reciters reading 37 chapters from the Holy Quran, in addition to 397 audio clips of 12 imitators of the top reciters and two plain text files that contain the same chapters' textual content read by the reciters, with and without vocalization (vowelization). The audio samples, which are 10 seconds long on average and have a sampling rate of 44.1 kHz, 16-bit depth, and stereo channels, are shared in WAV format. The dataset was split as 80% for training, 10% for testing, and 10% for validation, where 12,648 clips, 1,581 clips, and 1,581 clips were selected randomly and used for training, testing, and validation, respectively. We noticed that the dataset contains all the speeds of recitation mentioned before.
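The 80/10/10 split described above can be reproduced with a simple shuffle-and-slice; the fixed seed and the use of Python's `random` module are our own choices for the sketch, not details taken from the paper.

```python
import random

def split_dataset(items, train=0.8, test=0.1, seed=42):
    # Shuffle the items with a fixed seed, then slice into
    # train / test / validation subsets (the remainder is validation).
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train, n_test = int(n * train), int(n * test)
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])

train, test, val = split_dataset(range(15810))
print(len(train), len(test), len(val))  # 12648 1581 1581
```

With 15,810 clips this yields exactly the 12,648 / 1,581 / 1,581 counts reported above.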
In addition, the majority of the first-verse sample transcripts of each chapter contain the Basmala sentence ("Ψ¨Ψ³Ω… Ψ§Ω„Ω„Ω‡ Ψ§Ω„Ψ±Ψ­Ω…Ω† Ψ§Ω„Ψ±Ψ­ΩŠΩ…"), while the corresponding audio clips do not contain the pronunciation of this sentence, so we removed this sentence from all transcripts because it would cause a problem in training the model, since the number of these samples is about 1,100 out of 15,810 samples.

## 4 Methodology

The proposed solution consists of two main components: a CNN-Bidirectional GRU encoder and a character-based decoder. The encoder maps the input vector of features to a latent representation. The decoder takes the latent representation and generates one prediction at a time. CTC is the objective function used to train the encoder. The next subsections discuss them in detail.

### Encoder

The encoder is a CNN-BiGRU. The reason behind using a CNN as the first layer is that ASR performance can be improved by applying convolutions in the frequency and time domains to spectral input features [41, 42, 43]. In addition, using bidirectional RNNs in speech recognition provides better context utilization, both forward and backward, to accurately predict words [44].

| Work | Dataset | Model | Language Model | WER |
| --- | --- | --- | --- | --- |
| [29] | MGB2, MGB3, and MGB5 | Transformer-based with a multitask objective function of CTC/Attention | LSTM and transformer-based | MGB2: 12.5%; MGB3: 27.5%; MGB5: 33.8% |
| [33] | MGB2 | Bidirectional Recurrent Neural Network with CTC objective function | 15-gram | 12.03% |
| [37] | SASSC | Conventional ASR / joint CTC-attention / CNN-LSTM with attention | 3-gram / RNN-based / external LM | 33.72% / 31.10% / 28.48% |

Table 2: Comparison between works related to end-to-end approaches for Arabic ASR

The input of the encoder is the normalized spectrogram of audio clips.
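A normalized spectrogram of this kind can be sketched in NumPy as follows. The Hann window and the zero-mean/unit-variance normalization are assumptions for the sketch; the frame length and hop of 800/400 follow the best-performing setting reported in the experiments section.

```python
import numpy as np

def normalized_spectrogram(signal, frame_length=800, hop=400):
    # Slice the signal into overlapping windowed frames, take the FFT
    # magnitude of each frame, then normalize the whole spectrogram.
    window = np.hanning(frame_length)
    n_frames = 1 + (len(signal) - frame_length) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_length] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))       # shape: (time, frequency)
    return (mag - mag.mean()) / (mag.std() + 1e-8)  # zero mean, unit variance

# One second of a 440 Hz tone at 44.1 kHz (the Ar-DAD sample rate).
x = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
S = normalized_spectrogram(x)
print(S.shape)  # (109, 401): 109 time frames, 401 frequency bins
```

Each row of `S` is the feature vector for one time slice, which is exactly the (time, frequency) grid the 2D convolution layers consume.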
Each audio clip is a time series of length \(T\) with a vector of audio features for each time slice. The input vectors \(V_{1},V_{2},\ldots,V_{T}\) are prepared by the 2D convolution layers (time and frequency domains), and then the CNN output is fed as input to the bidirectional GRUs. The output probabilities of the encoder are maximized using the CTC loss function.

#### 4.1.1 Convolutional Neural Network (CNN)

Convolutional Neural Networks (CNNs) are a type of deep learning architecture widely used for image classification and recognition tasks [45]. CNNs are designed to automatically learn the features of the input data and make predictions based on those features. The input data is typically processed in a series of convolutional layers, activation functions, and pooling layers. In a convolutional layer, a set of kernels, also known as filters, slides over the input data, computing dot products between the input data and the weights of the kernels. The dot products are then used to produce feature maps, which are fed into the activation functions to introduce non-linearity to the model. The size of the kernels, the stride (the step size at which they move over the input data), and the padding (the addition of zeros around the input data to control the size of the output feature maps) are all hyperparameters that can be optimized for the specific task and input data. 2D Convolutional Neural Networks (2D-CNNs) are a specific type of CNN that operates on 2D input data, such as an image. In a 2D-CNN, the convolutional layers perform 2D convolutions, using 2D kernels to scan the input data and extract local features. The hierarchical representations built by the multiple convolutional and pooling layers allow 2D-CNNs to learn increasingly complex and discriminative features, making them well-suited for tasks such as object recognition and segmentation [46].
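The effect of kernel size, stride, and padding on the output size can be checked with the standard formula. In the sketch below, the no-padding assumption and the example input size are ours; the kernel and stride values are the ones used later for our first convolution layer.

```python
def conv_out(size, kernel, stride, padding=0):
    # Output length along one axis of a convolution:
    # out = floor((in + 2*padding - kernel) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

# A (time=109, freq=401) spectrogram through a (11, 41) kernel, stride (2, 2):
t = conv_out(109, 11, 2)   # time axis
f = conv_out(401, 41, 2)   # frequency axis
print(t, f)  # 50 181

# With zero-padding of 5 along time, the time axis shrinks less:
print(conv_out(109, 11, 2, padding=5))  # 55
```

This is why strides of 2 along frequency quickly reduce the 401 frequency bins before the recurrent layers take over.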
In addition, 2D-CNNs can be a powerful tool for speech recognition tasks, as they allow for the automatic extraction of relevant features from the spectrogram of an audio signal. By combining 2D-CNNs with other deep learning architectures, such as RNNs, end-to-end speech recognition systems can be created that handle variable-length inputs and model the complex relationships between speech sounds [47].

#### 4.1.2 Bidirectional Recurrent Neural Networks (Bi-RNNs)

A Bi-RNN is a combination of two RNNs that process the sequence in two directions, one forward and one backward, to benefit from the information of both the past and the future and to compute the likelihood of the output character \(c\) at a given time input \(X_{t}\), depending on the previous hidden state \(h_{t-1}\), the current input \(X_{t}\), and the next hidden state \(h_{t+1}\) [44]. As shown in Figure 4, there is an additional hidden layer for each Bi-RNN layer to accommodate the backward training process, where the forward and backward hidden states are updated at a given time \(t\) as follows:

Figure 4: Bidirectional Recurrent Neural Network [48]

\[A_{t}(Forward)=f\left(X_{t}*W_{XA}^{forward}+A_{t-1}(Forward)*W_{AA}^{forward}+b_{A}^{forward}\right) \tag{1}\]

\[A_{t}(Backward)=f\left(X_{t}*W_{XA}^{backward}+A_{t+1}(Backward)*W_{AA}^{backward}+b_{A}^{backward}\right) \tag{2}\]

where \(b\) is the bias, \(W\) is the weight matrix, and \(f\) is the activation function. The hidden state is then:

\[h_{t}=A_{t}(Forward)+A_{t}(Backward) \tag{3}\]

A Bi-RNN's network block can be a vanilla RNN, which suffers from the vanishing gradient [49] and exploding gradient [50] problems, a Gated Recurrent Unit (GRU), or a Long Short-Term Memory (LSTM). GRU and LSTM architectures are the most used for long RNNs, because they handle the vanishing and exploding gradient problems and are capable of learning long-term dependencies [51, 52].
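The bidirectional pass of equations (1)-(3) can be sketched in NumPy with a GRU cell as the network block. This is a toy forward pass with random weights and biases omitted; the GRU parameterization is one common convention, not the paper's exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W):
    # One GRU step: update gate z, reset gate r, candidate state h_tilde.
    z = sigmoid(W["z"] @ np.concatenate([h, x]))
    r = sigmoid(W["r"] @ np.concatenate([h, x]))
    h_tilde = np.tanh(W["h"] @ np.concatenate([r * h, x]))
    return (1 - z) * h + z * h_tilde

def bigru(X, W_fwd, W_bwd):
    # Bidirectional pass (equations (1)-(3)): one GRU runs forward in
    # time, another backward, and the two hidden states are summed.
    T, d_h = X.shape[0], W_fwd["z"].shape[0]
    H_fwd, H_bwd = np.zeros((T, d_h)), np.zeros((T, d_h))
    h = np.zeros(d_h)
    for t in range(T):                 # forward direction
        h = gru_step(X[t], h, W_fwd)
        H_fwd[t] = h
    h = np.zeros(d_h)
    for t in reversed(range(T)):       # backward direction
        h = gru_step(X[t], h, W_bwd)
        H_bwd[t] = h
    return H_fwd + H_bwd               # h_t, equation (3)

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 4, 3
def init():
    return {k: rng.normal(size=(d_h, d_h + d_in)) for k in ("z", "r", "h")}
H = bigru(rng.normal(size=(T, d_in)), init(), init())
print(H.shape)  # (5, 3)
```

Every `H[t]` depends on the whole input sequence, which is the context-utilization advantage discussed above.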
In this work, we used the GRU architecture because it has fewer parameters than LSTM and is faster to train (a GRU has two gates, the reset and update gates, whereas an LSTM has three gates: input, output, and forget gates). Also, it has been found that the performance of GRU and LSTM is comparable for some tasks involving speech signal modeling and natural language processing [53, 54]. Figure 5 shows the architecture of a GRU cell and the equations used to calculate the gate values, where \(z\) and \(r\) represent the update and reset gates respectively, \(\sigma\) is the sigmoid function, \(X_{t}\) is the current input, \(W\) is the matrix of weights, \(\hat{h}\) is the current memory content, and \(h\) is the final memory at the current time step. The Bi-RNN layers are followed by a fully connected layer and an output layer which uses the softmax function to calculate the probability distribution over characters as follows:

\[p(c=k|x)=\frac{\exp(w_{k}^{L}\cdot h_{t}^{L-1})}{\sum_{j}\exp(w_{j}^{L}\cdot h_{t}^{L-1})} \tag{4}\]

\(L\) represents the output layer, so \(h^{L-1}\) is the hidden representation of the previous layer.

#### 4.1.3 Connectionist Temporal Classification (CTC)

Connectionist Temporal Classification (CTC) [56] is the loss function used to train the model. CTC is an output and scoring function that addresses sequence problems when the alignment between the input and the output is not known, so it is applied in applications like speech and handwriting recognition.
Figure 5: GRU architecture and equations [55]

For each time step \(t\) and a single input sequence \(X\) of length \(T\), the encoder gives a distribution over the vocabulary, \(p_{t}(c|X)\); CTC then computes the probability of a single alignment sequence \(C\) of length \(T\) as follows:

\[P(C|X)=\prod_{t=1}^{T}p(c_{t}|X) \tag{5}\]

The same word can be represented by several different alignment sequences, so the probability of an output sequence \(S\) is found by summing over the probabilities of these sequences:

\[P(S|X)=\sum_{C\in\text{alignments}(S)}P(C|X) \tag{6}\]

Finally, the CTC loss, which is the negative log probability of all valid sequences, is calculated using a dynamic programming algorithm, which speeds up the calculation; its derivative, used by the backpropagation-through-time algorithm to update the encoder's parameters, is computed in the same way. The CTC loss function is:

\[L_{CTC}=-\log P(S|X) \tag{7}\]

### Decoder

In this paper, a character-level decoder was used because this type of decoder has several advantages over word-level decoders. One of the main advantages is that character-level models are more robust to out-of-vocabulary (OOV) words and variations in pronunciation [57, 58, 59]. Since the model is trained to predict individual characters, it is able to handle words that it has not seen before by predicting the individual characters that make up the word. This is particularly useful in speech recognition, where there may be many rare or unknown words. Character-level decoders also tend to be more computationally efficient than word-level decoders [60]. Since the model is only predicting individual characters, it does not need to search through a large vocabulary to find the most likely word. This can make decoding faster and more efficient. In general, decoders are used to find the proper output for a given input by solving the following equation:

\[S^{*}=\operatorname*{argmax}_{S}\ p(S|X) \tag{8}\]

Greedy algorithms are used to solve this problem by taking the most likely output at each time step.
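The probability in equations (5)-(7) can be computed with the standard CTC forward (dynamic-programming) recursion, and the greedy decode just described collapses per-frame argmaxes. A small NumPy sketch follows; the two-frame example distributions are invented for illustration.

```python
import numpy as np

def ctc_prob(probs, target, blank=0):
    # P(S|X): sum over all valid alignments of `target` (with blanks
    # interleaved) under per-frame distributions `probs` of shape (T, vocab).
    ext = [blank]
    for c in target:
        ext += [c, blank]
    T, L = probs.shape[0], len(ext)
    alpha = np.zeros((T, L))
    alpha[0, 0] = probs[0, ext[0]]
    if L > 1:
        alpha[0, 1] = probs[0, ext[1]]
    for t in range(1, T):
        for s in range(L):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # A skip is allowed unless the label is blank or repeats
            # the label two positions back.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    tail = alpha[T - 1, L - 2] if L > 1 else 0.0
    return alpha[T - 1, L - 1] + tail

def greedy_decode(probs, blank=0):
    # Best symbol per frame, collapse repeats, drop blanks.
    out, prev = [], blank
    for s in probs.argmax(axis=1):
        if s != prev and s != blank:
            out.append(int(s))
        prev = s
    return out

# Two frames, vocab = {0: blank, 1: 'a'}, target "a".
probs = np.array([[0.4, 0.6],
                  [0.5, 0.5]])
# Valid alignments: "aa", "a-", "-a" -> 0.6*0.5 + 0.6*0.5 + 0.4*0.5 = 0.8
p = ctc_prob(probs, [1])
loss = -np.log(p)                          # CTC loss, equation (7)
print(round(p, 6), greedy_decode(probs))   # 0.8 [1]
```

Production implementations run this recursion in log space to avoid underflow on long sequences.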
However, these algorithms have a big problem: they overlook the possibility that a single output sentence could have a variety of alignment forms [57]. For that reason, we used the CTC beam search decoder, which sums the probabilities of the alignments of each sentence to produce the best result. The CTC loss function is used to train the model, and the beam search decoder is used to generate the final output sequence. The beam search decoder works by maintaining a fixed number of top-scoring sequences (the "beam") at each decoding step, rather than considering all possible next steps. This reduces the search space and allows for faster decoding while still maintaining good accuracy. Additionally, the CTC loss function allows the decoder to be robust to variations in the timing of the input, making it well-suited for speech recognition tasks. Overall, the CTC beam search decoder is an efficient and effective method for decoding sequences in speech recognition and other sequence-to-sequence tasks [61].

## 5 Experiment Setup

Several speech recognition models that have shown highly accurate results have been proposed in recent years. One such model is Deep Speech 2 [62]. Deep Speech 2, developed by Baidu Research, uses a combination of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to transcribe speech to text, as shown in Figure 6. The model starts with a convolutional layer that extracts features from the spectrogram, followed by several more convolutional layers that extract higher-level features. The output of these layers is then passed through a stack of bidirectional Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) layers to process the sequential information in the speech; then a linear layer maps the output to the final text transcript. The training uses a Connectionist Temporal Classification (CTC) loss function that allows the model to align the output labels to the input speech, regardless of the length mismatch between the two.
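A simplified prefix beam search of the kind described above can be sketched as follows. It works in the probability domain and without a language model; production decoders work in log space and typically add language-model scores to each prefix.

```python
import numpy as np

def ctc_beam_search(probs, beam_width=3, blank=0):
    # Each beam entry maps a prefix to (p_blank, p_non_blank): the total
    # probability of alignments ending in a blank vs. in a symbol.
    beams = {(): (1.0, 0.0)}
    for frame in probs:
        new = {}
        def add(prefix, pb, pnb):
            b, nb = new.get(prefix, (0.0, 0.0))
            new[prefix] = (b + pb, nb + pnb)
        for prefix, (pb, pnb) in beams.items():
            for s, p in enumerate(frame):
                if s == blank:
                    add(prefix, (pb + pnb) * p, 0.0)
                elif prefix and s == prefix[-1]:
                    add(prefix, 0.0, pnb * p)            # repeat collapses
                    add(prefix + (s,), 0.0, pb * p)      # a blank separated it
                else:
                    add(prefix + (s,), 0.0, (pb + pnb) * p)
        # Keep only the `beam_width` most probable prefixes.
        beams = dict(sorted(new.items(), key=lambda kv: -sum(kv[1]))[:beam_width])
    best = max(beams.items(), key=lambda kv: sum(kv[1]))
    return list(best[0]), sum(best[1])

# Three frames over vocab {0: blank, 1: 'a'} (values invented):
probs = np.array([[0.4, 0.6], [0.5, 0.5], [0.8, 0.2]])
seq, p = ctc_beam_search(probs)
print(seq, round(p, 6))  # [1] 0.78
```

Because the summed probability of "a" (0.78) beats the empty output (0.16) and "aa" (0.06), the beam search recovers the right sequence even though frame 2's argmax is blank.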
Figure 6: Architecture of Deep Speech 2 [62]

The architecture used in this work is shown in Figure 7. Our model consists of two 2D convolution layers, with a kernel size of (11, 41) and a stride size of (2, 2) for the first layer, a kernel size of (11, 21) and a stride size of (1, 2) for the second layer, and 32 filters each. The CNN layers are followed by 5 bidirectional GRU layers with 512 units per layer and dropout layers with a 0.5 rate. ReLU activation layers and batch normalization layers were also used. Batch normalization helps stabilize the training of the model and reduces the chances of overfitting by normalizing the activations of each layer to have zero mean and unit variance [63]. Rectified Linear Unit (ReLU) layers are a type of activation function that introduces non-linearity to the model, helping it learn more complex representations of the input data [64]. Then finally comes a dense layer containing 1024 neurons, which allows the model to learn interactions between different features, followed by a classification layer consisting of 46 neurons, to classify the current input into the blank symbol used in the CTC algorithm or one of the characters of the alphabet used in this work, which is illustrated in Table 3. The Adam optimizer with a 1e-4 learning rate and the CTC loss function were used to train the model, which was implemented using the TensorFlow library in Python and trained on the Google Colab platform7, which provides access to an NVIDIA Tesla T4 GPU with 16 GB of memory. Footnote 7: **Colaboratory**: [https://colab.research.google.com/](https://colab.research.google.com/)

## 6 Result & Discussion

As the Ar-DAD dataset includes recitations by a large number of readers at different speeds, we performed many experiments to choose the best spectrogram extraction parameters that can be suitable for all speeds.
Choosing appropriate parameters, the frame length and hop size in particular, is important to balance the trade-off between time and frequency resolution when the Fourier transform is applied. In practice, the choice of these parameters depends on the application and the characteristics of the signal being analyzed, and it is an iterative process. The detailed experiments are shown in Table 4. We used the most common and widely used metrics for measuring the performance of speech recognition models: the Character Error Rate (CER) and the Word Error Rate (WER). A WER of 8.34% and a CER of 2.42% were the best results we achieved. The formula for Word Error Rate (WER) is:

\[WER\ =\ (S\ +\ D\ +\ I)\ /\ N \tag{9}\]

The formula for Character Error Rate (CER) is:

\[CER\ =\ (S\ +\ D\ +\ I)\ /\ M \tag{10}\]

where S, D, and I are the numbers of substitutions, deletions, and insertions required to change the recognized transcript into the reference transcript, N is the total number of words, and M is the total number of characters in the reference transcript.

Figure 7: Our Proposed Solution Schema

As for comparing our results with the results of previous work, unfortunately, all the datasets used before are private, and this made it difficult for us to compare with others. Therefore, using a public dataset in this research will solve this problem in the future, so that researchers in this field can compare their work with ours. However, as we have shown before, the vast majority of previous works consider the problem as a classification task to detect the mispronunciation of Tajweed rules and some verses using samples containing wrong and correct pronunciations. As for the few works that proposed solutions based on traditional ASR, as mentioned in the literature review, all of them used data containing a few chapters of the Quran at the same speed and a few verses by a small number of readers who recited those verses.
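Equations (9) and (10) can be implemented with a standard Levenshtein (edit) distance; the example strings below are invented for illustration.

```python
def edit_distance(ref, hyp):
    # Minimum number of substitutions, deletions, and insertions
    # needed to turn the hypothesis into the reference (S + D + I).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[-1][-1]

def wer(ref, hyp):
    # Equation (9): word-level errors over reference word count N.
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    # Equation (10): character-level errors over reference length M.
    return edit_distance(ref, hyp) / len(ref)

print(round(wer("one two three", "one three"), 4))  # 0.3333
print(cer("abcd", "abed"))  # 0.25
```

Operating on word lists versus character strings is the only difference between the two metrics.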
Thus, their results cannot be compared with ours, as the dataset we used includes about 37 chapters of the Holy Quran recited by as many as 30 reciters at different speeds and with different applications of Tajweed rules. Table 5 shows a comparison between the actual text and the text predicted by the proposed model for some verses from the Ar-DAD dataset.

\begin{table} \begin{tabular}{c c c c c} \hline **Experiment** & **FFT** & **Hop Size** & \multicolumn{2}{c}{**Test set**} \\ \cline{4-5} **Number** & & & **WER** & **CER** \\ \hline 1 & 512 & 256 & 9.02\% & 2.64\% \\ 2 & & 384 & 8.7\% & 2.45\% \\ 3 & 800 & 400 & **8.34\%** & 2.77\% \\ 4 & & 600 & 8.51\% & **2.42\%** \\ 5 & 1024 & 512 & 10.7\% & 3.3\% \\ 6 & & 768 & 11.5\% & 3.2\% \\ \hline \end{tabular} \end{table} Table 4: Experiments conducted to select feature extraction parameters

\begin{table} \begin{tabular}{c c} \hline **Actual** & **Predicted** \\ \hline [MISSING_PAGE_POST] \\ \hline \end{tabular} \end{table} Table 5: Comparison between the actual and predicted text for some verses from the Ar-DAD dataset

## 7 Conclusion

In conclusion, this research presents a novel end-to-end deep learning model for recognizing Holy Quran recitation. Moreover, our proposed model provides the ability to give users feedback about error type and location, so that users can have a better experience correcting their errors in their learning journey. The proposed solution consists of two main components: a CNN-bidirectional-GRU encoder trained with the CTC loss function and a character-based decoder. Using this end-to-end model allows us to dispense with alignment tools, reducing the required effort and improving performance. Our proposed model has been evaluated on a recently published public dataset (Ar-DAD), which contains about 37 chapters recited by 30 reciters. The Ar-DAD dataset has limitations: it only includes male reciters, making it less reliable for recognizing recitations by women and children.
Additionally, it only contains samples from one recitation form, while there are ten approved forms, which may cause the model to incorrectly recognize recitations in other forms. The model's performance was evaluated using word error rate (WER) and character error rate (CER) as metrics, with the best results obtained being 8.34% WER and 2.42% CER. These results demonstrate the effectiveness of the proposed model in recognizing the recitation of the Holy Quran and show that it outperforms previous related work. We hope that this paper will provide a baseline for fair comparisons in this task, as it is based on a publicly available dataset that can be used by all researchers.

## 8 Declarations

### Authors' contributions

AAH performed the literature review, conducted the experiments, and wrote the manuscript. KAJ took on a supervisory role and contributed to the conception and analysis of the work. All authors read and approved the final manuscript.

### Funding

The authors declare that they have no funding.

### Data Availability

The dataset used in this work is available at: [https://data.mendeley.com/datasets/3kndp5vs6b/3](https://data.mendeley.com/datasets/3kndp5vs6b/3)

### Ethics approval and consent to participate

The authors give ethics approval and consent to participate.

### Consent for publication

The authors consent to publication.

### Conflicts of Interest

The authors declare that they have no competing interests.
2306.09549
QH9: A Quantum Hamiltonian Prediction Benchmark for QM9 Molecules
Supervised machine learning approaches have been increasingly used in accelerating electronic structure prediction as surrogates of first-principle computational methods, such as density functional theory (DFT). While numerous quantum chemistry datasets focus on chemical properties and atomic forces, the ability to achieve accurate and efficient prediction of the Hamiltonian matrix is highly desired, as it is the most important and fundamental physical quantity that determines the quantum states of physical systems and chemical properties. In this work, we generate a new Quantum Hamiltonian dataset, named as QH9, to provide precise Hamiltonian matrices for 999 or 2998 molecular dynamics trajectories and 130,831 stable molecular geometries, based on the QM9 dataset. By designing benchmark tasks with various molecules, we show that current machine learning models have the capacity to predict Hamiltonian matrices for arbitrary molecules. Both the QH9 dataset and the baseline models are provided to the community through an open-source benchmark, which can be highly valuable for developing machine learning methods and accelerating molecular and materials design for scientific and technological applications. Our benchmark is publicly available at https://github.com/divelab/AIRS/tree/main/OpenDFT/QHBench.
Haiyang Yu, Meng Liu, Youzhi Luo, Alex Strasser, Xiaofeng Qian, Xiaoning Qian, Shuiwang Ji
2023-06-15T23:39:07Z
http://arxiv.org/abs/2306.09549v4
# QH9: A Quantum Hamiltonian Prediction Benchmark for QM9 Molecules ###### Abstract Supervised machine learning approaches have been increasingly used in accelerating electronic structure prediction as surrogates of first-principle computational methods, such as density functional theory (DFT). While numerous quantum chemistry datasets focus on chemical properties and atomic forces, the ability to achieve accurate and efficient prediction of the Hamiltonian matrix is highly desired, as it is the most important and fundamental physical quantity that determines the quantum states of physical systems and chemical properties. In this work, we generate a new Quantum Hamiltonian dataset, named as QH9, to provide precise Hamiltonian matrices for 2,399 molecular dynamics trajectories and 130,831 stable molecular geometries, based on the QM9 dataset. By designing benchmark tasks with various molecules, we show that current machine learning models have the capacity to predict Hamiltonian matrices for arbitrary molecules. Both the QH9 dataset and the baseline models are provided to the community through an open-source benchmark, which can be highly valuable for developing machine learning methods and accelerating molecular and materials design for scientific and technological applications. Our benchmark is publicly available at [https://github.com/divelab/AIRS/tree/main/OpenDFT/QHBench](https://github.com/divelab/AIRS/tree/main/OpenDFT/QHBench). ## 1 Introduction Machine learning methods have shown great potential in accelerating computations in quantum chemistry tasks. For example, a variety of invariant geometric deep learning methods have been developed to encode pairwise distances and bond angles in molecular and materials systems (Schutt et al., 2018; Gasteiger et al., 2020, 2021; Liu et al., 2022; Wang et al., 2022) to accelerate the prediction of their chemical properties as data-driven surrogate approximations. 
To enhance the prediction of vectorial properties, such as force fields, equivariant deep learning methods have been developed to capture permutation, translation, and rotation equivariance for equivariant property prediction (Satorras et al., 2021; Schutt et al., 2021; Tholke and Fabritiis, 2022; Thomas et al., 2018; Batzner et al., 2022; Fuchs et al., 2020; Liao and Smidt, 2023; Anderson et al., 2019; Brandstetter et al., 2022). To support and facilitate the development of machine learning methods for quantum chemistry property prediction, many datasets have been generated to benchmark the respective tasks on molecular property prediction (Blum and Reymond, 2009; Ruddigkeit et al., 2012; Ramakrishnan et al., 2014; Wang et al., 2009; Nakata and Shimazaki, 2017), catalyst prediction (Chanussot et al., 2021; Tran et al., 2023), and force field prediction (Chmiela et al., 2017, 2023). In addition to these quantum chemistry prediction tasks, the quantum Hamiltonian is another significant and fundamental physical property that determines the quantum states and various materials properties (Marzari and Vanderbilt, 1997; Souza et al., 2001; Qian et al., 2010; Marzari et al., 2012; Bai et al., 2022). The quantum Hamiltonian can be calculated using Density Functional Theory (DFT) (Hohenberg and Kohn, 1964; Kohn and Sham, 1965) with a time complexity of \(O(n^{3}T)\), where \(n\) represents the number of electrons and \(T\) denotes the number of optimization steps required to achieve convergence. Given the high computational complexity of the DFT algorithms, accelerating such calculations for novel molecular and materials systems becomes a desirable but challenging task. To tackle this challenge, machine learning methods, such as quantum tensor networks (Li et al., 2022; Gong et al., 2023; Schutt et al., 2019; Yu et al., 2023; Unke et al., 2021), provide a highly promising approach for accelerating the DFT algorithms.
These networks directly predict the final Hamiltonian matrix given the input 3D geometries, resulting in significant acceleration of calculations by orders of magnitude. Unlike invariant chemical properties, Hamiltonian matrices obey intrinsic block-by-block matrix equivariance. This equivariance can be represented by the rotation Wigner-D matrix, which may be of higher order than ordinary rotations in 3D space. In order to make physically meaningful predictions, it is important to design quantum tensor network architectures that preserve this equivariance property. To perform a systematic and in-depth study of this new task, there is a clear need to generate large-scale quantum tensor datasets and benchmarks. Currently, the only quantum Hamiltonian datasets are the MD17 (Schutt et al., 2019; Gastegger et al., 2020) and mixed MD17 (Yu et al., 2023) datasets, which contain data for a single molecule and four molecules, respectively. To provide a much larger and more realistic dataset, we generate a new quantum tensor dataset named QH9. This dataset contains Hamiltonian matrices for 130,831 stable molecular geometries and 2,399 molecular dynamics trajectories. In order to provide comprehensive studies for quantum tensor networks, we have designed four specific tasks. The first two tasks, QH-stable-iid and QH-stable-ood, aim to explore the performance of the networks in both in-distribution and out-of-distribution scenarios, specifically focusing on stable molecular geometries. The QH-dynamic-geo task follows the setting of the mixed MD17, containing the same molecule with different geometries in the training, validation, and test sets. On the other hand, the QH-dynamic-mol task splits the trajectories based on different molecules. Finally, we evaluate the transferability of the trained models on molecules with larger sizes, thereby testing the models' ability to generalize beyond the training dataset. To demonstrate the quality of the predicted Hamiltonian matrix, we use four metrics.
These metrics are based on the Mean Absolute Error (MAE) of the predicted Hamiltonian matrix \(\mathbf{H}\), as well as the derived properties such as orbital energies \(\mathbf{\epsilon}\) and electronic wavefunction \(\psi\). Furthermore, to evaluate the quality of the predicted Hamiltonian in accelerating DFT calculations, we calculate the DFT optimization ratio by taking the model predictions as DFT initialization. ## 2 Background and Related Works ### Density Functional Theory (DFT) Modeling the quantum states of physical systems is a central topic in computational quantum physics and chemistry. It aims to solve the Schrodinger equation (Schrodinger, 1926), which describes the electronic states as \[\hat{H}\Psi\left(\mathbf{r}_{1},\cdots,\mathbf{r}_{n}\right)=E\Psi\left(\mathbf{r}_{1}, \cdots,\mathbf{r}_{n}\right), \tag{1}\] where \(\Psi\left(\mathbf{r}_{1},\cdots,\mathbf{r}_{n}\right)\) is the \(n\)-electron wavefunction and the \(\mathbf{r}_{i}\) are the 3D electron coordinates. Electronic eigenvalues and wavefunctions play an important role in calculating numerous crucial physical properties, including the energy gap between the Highest Occupied Molecular Orbital (HOMO) and the Lowest Unoccupied Molecular Orbital (LUMO), _i.e._ the HOMO-LUMO gap, and charge density. However, because the input Hilbert space expands exponentially with the number of electrons, the computational cost to directly calculate many-electron wavefunctions is extremely high. Therefore, various methods are proposed to approximate the solutions, such as the Hartree-Fock (HF) method (Szabo and Ostlund, 2012) that approximates the wavefunction itself, or density functional theory (Hohenberg and Kohn, 1964) that approximates the electron density. While the HF method scales with the number of electrons \(n\) as \(O(n^{4})\), DFT scales with \(O(n^{3})\), so DFT is better suited for large-scale systems.
DFT is based on the key discovery that the total energy and thus all ground-state properties of a system are uniquely determined by the ground-state electron density (Kohn and Sham, 1965). Both of these approaches divide an \(n\)-electron system into a set of \(n\) non-interacting one-electron wavefunctions \(\psi_{i}(\mathbf{r}_{i})\), also called molecular orbitals in molecular systems. These one-electron orbitals can then be approximated by a linear combination of basis functions \(\phi_{j}(\mathbf{r})\) as \(\psi_{i}(\mathbf{r})=\sum_{j}C_{ij}\phi_{j}(\mathbf{r})\). The basis functions can be represented in analytical forms, such as Slater-type orbitals (STOs), Gaussian-type orbitals (GTOs), or plane waves, under numerical approximations for obtaining the coefficients matrix \(\mathbf{C}\). With these approximations, the original Schrodinger Equation (1) for electrons can be transformed into a matrix form as \[\mathbf{H}\mathbf{C}_{i}=\mathbf{\epsilon}_{i}\mathbf{S}\mathbf{C}_{i}, \tag{2}\] where \(\mathbf{H}\) is the Hamiltonian matrix, \(\mathbf{S}\) is the overlap matrix, and \(\mathbf{\epsilon}_{i}\) is the energy for the \(i\)-th orbital. The Hamiltonian matrix can be decomposed into the sum \[\mathbf{H}=\mathbf{H}_{eN}+\mathbf{H}_{ee}+\mathbf{H}_{XC}, \tag{3}\] which describes electron-ion interactions (\(\mathbf{H}_{eN}\)), electron-electron interactions (\(\mathbf{H}_{ee}\), including kinetic energy and electron-electron Coulomb repulsion energy), and exchange-correlation energy (\(\mathbf{H}_{XC}\)). These matrices take the electron density \(\mathbf{\rho}(\mathbf{r})\) as an input to evaluate the Hamiltonian matrix. The exchange-correlation energy functional used in this paper was B3LYP (Lee et al., 1988; Becke, 1993), which is a hybrid functional that includes both the exchange energy from the HF method as well as a correlation potential. 
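Equation (2) is a generalized eigenvalue problem, which standard linear-algebra routines solve directly once \(\mathbf{H}\) and \(\mathbf{S}\) are known. A toy \(2\times 2\) illustration (the matrices here are arbitrary symmetric examples, not actual DFT matrices):

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric Hamiltonian and overlap matrices (S must be positive definite).
H = np.array([[-1.0, -0.2],
              [-0.2, -0.5]])
S = np.array([[1.0, 0.1],
              [0.1, 1.0]])

# Solve H C_i = eps_i S C_i; eigenvalues come back in ascending order and the
# columns of C are the orbital coefficient vectors, S-orthonormal (C^T S C = I).
eps, C = eigh(H, S)
```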
We implement the GTO basis set Def2SVP (Weigend and Ahlrichs, 2005) in a post-HF method that incorporates aspects of DFT, namely, an exchange-correlation potential, in order to more accurately capture electron-electron interactions compared to the HF method, which uses a mean field approximation of electron density. Equation (2) is satisfied for the final Hamiltonian matrix and its coefficient matrix once self-consistency is achieved using direct inversion in the iterative subspace (DIIS) (Pulay, 1980, 1982). The equation is solved iteratively by building and solving the Hamiltonian and coefficient matrices, constructing an error vector based on a linear combination of energy differences in the previous steps, then diagonalizing and recalculating \(\mathbf{H}\) until the error vector is below a convergence threshold. ### Group Equivariance and Equivariant Matrices In many quantum chemistry problems, the molecular property to be predicted (e.g., energy and force) is internally invariant or equivariant to transformations in SE(3) group, including rotations and translations. Formally, for an \(n\)-atom molecule whose 3D atom coordinates are \(\mathbf{r}_{1},...,\mathbf{r}_{n}\), any transformation in SE(3) group can be described as changing the 3D atom coordinates to \(\mathbf{R}\mathbf{r}_{1}+\mathbf{t},...,\mathbf{R}\mathbf{r}_{n}+\mathbf{t}\). Here, the translation vector \(\mathbf{t}\in\mathbb{R}^{3}\) is an arbitrary 3D vector, and the rotation matrix \(\mathbf{R}\in\mathbb{R}^{3\times 3}\) satisfies that \(\mathbf{R}^{T}\mathbf{R}=\mathbf{I},|\mathbf{R}|=1\). 
Let \(\mathbf{f}(\cdot)\) map the 3D atom coordinates to an \((2\ell+1)\)-dimensional prediction target vector, we say \(\mathbf{f}\) is order-\(\ell\) SE(3)-equivariant if \[\mathbf{f}(\mathbf{R}\mathbf{r}_{1}+\mathbf{t},...,\mathbf{R}\mathbf{r}_{n}+\mathbf{t})=D^{\ell}(\mathbf{R}) \mathbf{f}(\mathbf{r}_{1},...,\mathbf{r}_{n}) \tag{4}\] holds for any rotation matrix \(\mathbf{R}\) and translation vector \(\mathbf{t}\), where \(D^{\ell}(\mathbf{R})\in\mathbb{C}^{(2\ell+1)\times(2\ell+1)}\) is the order-\(\ell\) Wigner-D matrix of \(\mathbf{R}\). To accurately predict SE(3)-equivariant properties, an effective approach is to develop neural network models that are designed to maintain the same equivariance relations between inputs and outputs as in Equation (4). Recently, many studies have proposed SE(3)-equivariant neural network architectures by using SE(3)-invariant feature encoding (Schutt et al., 2018; Gasteiger et al., 2020, 2021; Liu et al., 2022), tensor product operations (Thomas et al., 2018; Brandstetter et al., 2022; Liao and Smidt, 2023), or atomic cluster expansion framework (Batatia et al., 2022; Drautz, 2019; Dusson et al., 2022; Kovacs et al., 2021). Different from vector-like molecular properties, the Hamiltonian matrix \(\mathbf{H}\) has a much more complicated SE(3) equivariance pattern that is associated with the intrinsic angular momentum of the atomic orbital pairs. In computational quantum chemistry algorithms such as DFT, the Hamiltonian matrix \(\mathbf{H}\) can be used to represent the interactions between these atomic orbitals, and the block \(\mathbf{H}_{ij}\) in Hamiltonian matrix represents the interactions between the atomic orbitals \(i\) in atom \(a_{i}\) with angular momentum \(\ell_{i}\) and atomic orbitals \(j\) in atom \(a_{j}\) with angular momentum \(\ell_{j}\), and the shape of this block \(\mathbf{H}_{ij}\) is \((2\ell_{i}+1)\times(2\ell_{j}+1)\). 
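For a pair of \(p\)-type orbitals (\(\ell_{i}=\ell_{j}=1\)), the Wigner-D matrices acting on such a \(3\times 3\) block are ordinary rotation matrices (up to the ordering convention of real spherical harmonics), and the block-wise transformation can be checked numerically on a toy rank-one block (our illustration, with the right-hand factor written as the transpose of the real orthogonal representation):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random proper rotation matrix via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

# Build a toy l_i = l_j = 1 block from two vector (order-1 equivariant) features.
# If u -> R u and v -> R v under rotation, then H_ij = u v^T -> R H_ij R^T.
u, v = rng.standard_normal(3), rng.standard_normal(3)
H_ij = np.outer(u, v)
H_ij_rot = np.outer(R @ u, R @ v)

assert np.allclose(H_ij_rot, R @ H_ij @ R.T)
```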
Usually, the atomic orbitals are arranged sequentially for the orbitals in the same atom and with the same angular momentum. For example, \(\mathbf{H}_{ij}\) can be located within the \(s_{i}\)-th to \((s_{i}+2\ell_{i})\)-th row, and the \(s_{j}\)-th to \((s_{j}+2\ell_{j})\)-th column of Hamiltonian matrix \(\mathbf{H}\). Specifically, its SE(3) equivariance can be described as \[\mathbf{H}_{ij}\left(\mathbf{\rho}(\mathbf{R}\mathbf{r}+\mathbf{t})\right)=D^{\ell_{i}}(\mathbf{R})\mathbf{ H}_{ij}\left(\mathbf{\rho}(\mathbf{r})\right)D^{\ell_{j}}(\mathbf{R}), \tag{5}\] where \(\mathbf{\rho}(\mathbf{r})\) is the electronic density at position \(\mathbf{r}\) and Hamiltonian matrix \(\mathbf{H}\) is a function of the electronic density \(\mathbf{\rho}(\mathbf{r})\) in the DFT algorithm. In other words, the SE(3) equivariance of different submatrices in \(\mathbf{H}\) has different mathematical forms, which is much more complicated than the SE(3) equivariance of vector-like molecular properties. Hence, it is much more challenging to develop SE(3)-equivariant neural network architectures for the prediction of Hamiltonian matrices. Nowadays, only a few studies (Li et al., 2022; Gong et al., 2023; Yu et al., 2023; Unke et al., 2021) have made initial exploration in this direction. ### Datasets for Quantum Chemistry To facilitate the usage of machine learning models to predict chemistry properties and accelerate simulations, numerous quantum chemistry datasets have been built to provide extensive and faithful data. Here, we introduce several existing datasets that have been constructed for different tasks respectively, including molecular property prediction, catalyst modeling, molecular force field prediction, and molecular Hamiltonian matrix prediction. 
For molecular property prediction, the QM7 (Blum and Reymond, 2009) dataset was initially constructed using 7,165 molecules from the organic molecule database GDB-13 (Blum and Reymond, 2009), with each selected molecule having no more than 7 heavy atoms. The primary purpose of creating the QM7 dataset is to provide atomization energies as the target molecular property. Then QM9 (Ramakrishnan et al., 2014; Schutt et al., 2018) was built based on GDB-17 (Ruddigkeit et al., 2012) to provide 134k stable small organic molecules with no more than 9 heavy atoms in each molecule. Moreover, it provides 13 different important quantum chemistry properties, including HOMO and LUMO energies. Based on the molecules from PubChem (Wang et al., 2009, 2017; Kim et al., 2019, 2021, 2023), PubChemQC (Nakata and Shimazaki, 2017) provides 3M ground-state molecular structures as well as the HOMO-LUMO gaps and excitation energies for 2M molecules. In addition to the molecular property datasets, OC20 (Chanussot et al., 2021) and OC22 (Tran et al., 2023) were developed to provide data on the interactions of catalysts with material surfaces. They provide the geometries of the initial structures to predict the final structures or energies, as well as relaxation trajectories with energies and atomic forces. For the molecular force field prediction datasets, MD17 (Chmiela et al., 2017) and MD22 (Chmiela et al., 2023) contain atomic forces for molecular and supramolecular trajectories, respectively, providing valuable data for developing machine learning methods. The last category is Hamiltonian matrix datasets. MD17 (Schutt et al., 2019; Gastegger et al., 2020) provides Hamiltonian matrices for single-molecule dynamics trajectories, enabling the study of Hamiltonian matrices for molecules with various geometries. Building upon this dataset, mixed MD17 (Yu et al., 2023) combines four molecular trajectories from MD17 to study Hamiltonian matrix prediction with multiple molecules.
Alongside the increasing interest in Hamiltonian matrix prediction, there is a growing need for datasets that include Hamiltonian matrices for a greater number of molecules to facilitate subsequent studies. ## 3 Datasets, Tasks, Methods, and Metrics ### Datasets **Dataset Generation.** For the QH9 dataset, we use the open-source software PySCF (Sun et al., 2018, 2020) to conduct computational quantum chemistry calculations. The QH9 dataset consists of two subsets. The first is the QH-stable dataset, containing Hamiltonian matrices for 130,831 molecules with their geometries. The stable molecular geometries come from a subset of the QM9 dataset, which is widely used in molecular property prediction tasks (Schutt et al., 2018; Gasteiger et al., 2020; Liu et al., 2022; Wang et al., 2022). The second is the QH-dynamic dataset, which contains molecular trajectories for \(2,399\) molecules, with each trajectory containing \(60\) geometries. To obtain accurate Hamiltonian matrices for this dataset, we set the hyper-parameters of the DFT algorithms to a tight level. Specifically, we set the grid density level to 3 to calculate accurate electronic density, and the SCF convergence criteria are set to an SCF tolerance of \(10^{-13}\) and a gradient threshold of \(3.16\times 10^{-5}\) to ensure that the final states achieve tight convergence. For the density functional, we select the B3LYP exchange-correlation functional to conduct DFT calculations, and the GTO orbital basis Def2SVP is selected to approximate the electronic wavefunctions. To accelerate and achieve the convergence of the SCF algorithm, we use the DIIS algorithm, taking the 8 previous steps into account. For the QH9-dynamic dataset, molecular dynamics simulations are conducted under the microcanonical ensemble, where the number of particles, volume, and energy remain constant (NVE). The temperature is set to 300 K, and the time step for recording the molecular trajectory is set to \(2.419\times 10^{-3}\) fs.
**Dataset Statistics.** The statistics, including the numbers of molecules and geometries for QH9-stable and QH9-dynamic, are presented in Table 1. These molecules contain no more than 9 heavy atoms, drawn from four elements: carbon (C), nitrogen (N), oxygen (O), and fluorine (F). The distributions of molecule size for QH9-stable and QH9-dynamic are shown in Figure 1(a) and Figure 1(c). Meanwhile, the percentage of molecules in QH9-stable with different numbers of heavy atoms is shown in Figure 1(b). ### Tasks To comprehensively evaluate the quantum Hamiltonian prediction performance, we define the following tasks based on the obtained stable and dynamic geometries in the QH9 dataset. **QH-stable-iid.** We first randomly divide the obtained stable geometries in QH9 into three subsets, including \(80\%\) for training, \(10\%\) for validation, and \(10\%\) for testing. This serves as the basic evaluation task for predicting quantum Hamiltonian matrices. **QH-stable-ood.** We further split the stable geometries in QH9 by molecular size based on the number of constituting atoms. The training set consists of molecules with \(3\) to \(20\) atoms, maintaining a similar number of training samples as in the QH-stable-iid split. The validation set includes molecules with \(21\) to \(22\) atoms, while the testing set has molecules with \(23\) to \(29\) atoms. This task allows for an evaluation of the model's generalization ability under an out-of-distribution training setup.
\begin{table} \begin{tabular}{l|c c c} \hline \hline Task & \# Total geometries & \# Molecules & \# Training/validation/testing geometries \\ \hline QH-stable-iid & \(130,831\) & \(130,831\) & \(104,664/13,083/13,084\) \\ QH-stable-ood & \(130,831\) & \(130,831\) & \(104,001/17,495/9,335\) \\ QH-dynamic-geo & \(143,940\) & \(2,399\) & \(119,950/11,995/11,995\) \\ QH-dynamic-mol & \(143,940\) & \(2,399\) & \(115,140/14,340/14,460\) \\ \hline \hline \end{tabular} \end{table} Table 1: The statistics of our defined four tasks. Figure 1: The dataset statistics on QH9-stable and QH9-dynamic, including molecule size distribution and percentage of molecules with different numbers of heavy atoms. **QH-dynamic-geo.** For this split and the following QH-dynamic-mol split, there are \(2,399\) molecular dynamics trajectories, while each trajectory includes \(60\) geometries. In QH-dynamic-geo, the split is performed geometry-wise. Specifically, for each molecule, \(60\) geometries are randomly divided into \(50\) for training, \(5\) for validation, and \(5\) for testing. Here, the molecules in the test set are visible during training, but their geometric structures differ from those used in training. **QH-dynamic-mol.** In the QH-dynamic-mol split, the \(2,399\) molecules are divided into training, validation, and testing subsets in a ratio of \(0.8/0.1/0.1\). Importantly, in contrast to the QH-dynamic-geo setup, all \(60\) geometries corresponding to a specific molecule are grouped together and assigned to the same subset. This setup introduces a more challenging task than QH-dynamic-geo since the geometries in the testing set correspond to different molecules than those in training. ### Methods To predict the quantum Hamiltonian matrix, several quantum tensor networks have been proposed. SchNorb (Schutt et al., 2019) uses pairwise distances and directions as the input geometric information to predict the final Hamiltonian matrix.
However, SchNorb lacks the ability to ensure matrix equivariance and relies on data augmentation techniques to encourage equivariance. Another network, DeepH (Li et al., 2022), uses invariant local coordinate systems and a global coordinate system to handle the equivariance challenge. It uses the geometric features within these invariant local coordinate systems to predict invariant Hamiltonian matrix blocks. Next, as a post-processing step, DeepH applies a rotation using the Wigner-D matrix to transform the Hamiltonian matrix blocks from the local coordinate systems back to the global coordinate system. Currently, DeepH is applied to predicting Hamiltonian matrices for materials. PhiSNet (Unke et al., 2021) uses an equivariant model architecture that inherently guarantees matrix equivariance. However, the current implementation of PhiSNet is limited to supporting a single molecule. This limitation arises from the matrix prediction module in PhiSNet, which predicts matrices of a fixed size for the same molecule. Therefore, the equivariant quantum tensor network QHNet (Yu et al., 2023) is currently selected as the main baseline method in the QH9 benchmark. QHNet has an extendable expansion module that is built upon intermediate full orbital matrices, enabling it to effectively handle different molecules. This flexibility allows QHNet to accommodate various molecules in the QH9 benchmark. ### Metrics To evaluate the quality of the predicted Hamiltonian matrix, we adopt several metrics that measure both approximation accuracy and computational efficiency. **MAE on Hamiltonian matrix H.** This metric calculates the Mean Absolute Error (MAE) between the predicted Hamiltonian matrix and the ground-truth labels from the DFT calculation. Each Hamiltonian matrix consists of diagonal blocks and non-diagonal blocks, representing the interactions within individual atoms and the interactions between pairs of atoms, respectively.
When the atom pair is distant, the values in the Hamiltonian matrix blocks are typically close to zero. Consequently, as the molecules increase in size, the proportion of distant atom pairs also increases, causing the overall mean value of the Hamiltonian matrix to decrease. Hence, in the subsequent experiments, we compare the MAEs of the diagonal and non-diagonal blocks separately as well as the total MAE on the Hamiltonian matrix. **MAE on occupied orbital energies \(\epsilon\)**. Orbital energy, which includes the Highest Occupied Molecular Orbital (HOMO) and Lowest Unoccupied Molecular Orbital (LUMO) energies, is a highly significant chemical property. It can be determined by diagonalizing the Hamiltonian matrix using Equation 2. Hence, this metric can serve as a measure to reflect the quality of the predicted Hamiltonian matrix in accurately deducing the desired property. Specifically, it calculates the MAE on all the occupied molecular orbital energies \(\epsilon\) derived from the predicted and the ground-truth Hamiltonian matrix. **Cosine similarity of orbital coefficients \(\psi\).** Electronic wavefunctions can describe the quantum states of molecular systems and are used to derive a range of chemical properties. In order to measure the similarity between the ground-truth wavefunctions and the predicted wavefunctions, we calculate the cosine similarity of the coefficients for the occupied molecular orbitals \(\psi\). The corresponding coefficients \(\mathbf{C}\) are derived from the predicted and ground-truth Hamiltonian matrix shown in Equation 2. **Optimization step ratio.** Besides the metrics assessing molecular properties, this optimization step ratio metric is introduced to measure the quality of the predicted Hamiltonian matrix in accelerating DFT calculation. Specifically, it calculates the ratio of the number of optimization steps between initializing with the predicted Hamiltonian matrix and using a random initialization. 
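Given predicted and ground-truth Hamiltonians together with the overlap matrix \(\mathbf{S}\), the first three metrics can be sketched as follows (an illustrative NumPy/SciPy sketch, not the benchmark's evaluation code; `hamiltonian_metrics` and the sign-invariant cosine similarity are our own conventions):

```python
import numpy as np
from scipy.linalg import eigh

def hamiltonian_metrics(H_pred, H_true, S, n_occ):
    """MAE on H, MAE on occupied orbital energies, and mean |cosine similarity|
    of occupied orbital coefficients, derived via Equation 2."""
    mae_H = np.abs(H_pred - H_true).mean()

    # Solve H C = eps S C for both the predicted and ground-truth matrices.
    eps_pred, C_pred = eigh(H_pred, S)
    eps_true, C_true = eigh(H_true, S)

    mae_eps = np.abs(eps_pred[:n_occ] - eps_true[:n_occ]).mean()

    # Eigenvector signs are arbitrary, hence the absolute value.
    num = np.abs(np.sum(C_pred[:, :n_occ] * C_true[:, :n_occ], axis=0))
    den = (np.linalg.norm(C_pred[:, :n_occ], axis=0)
           * np.linalg.norm(C_true[:, :n_occ], axis=0))
    cos_psi = (num / den).mean()
    return mae_H, mae_eps, cos_psi
```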
When the Hamiltonian matrix is accurately predicted, the Self-Consistent Field (SCF) algorithm is close to the convergence condition, leading to a significant reduction in the number of optimization steps. ## 4 Experiments **Setup.** To assess how deep learning approaches perform on the proposed dataset, we conduct experiments on the four designed tasks, as described in Section 3.2. To be more specific, we evaluate the performance of QHNet (Yu et al., 2023), a recently proposed SE(3)-equivariant network specifically designed for efficient and accurate quantum Hamiltonian matrix prediction. QHNet is known for its effectiveness and efficiency in handling the task at hand, making it a suitable testing method for our benchmark evaluation. For quantitative evaluation, we use the metrics as introduced in Section 3.4. Our implementation is based on PyTorch (Paszke et al., 2019), PyG (Fey and Lenssen, 2019), and e3nn (Geiger et al., 2022). We train models on either (1) a single 48GB Nvidia GeForce RTX A6000 GPU and Intel Xeon Silver 4214R CPU, or (2) a single Nvidia A100 GPU and Intel Xeon Gold 6258R CPU. Following the model setup in QHNet, in all implemented models, we employ five node-wise interaction layers to aggregate messages from neighboring nodes to update the node irreducible representations. We train all models with a total training step of either \(210,000\) or \(260,000\) using a batch size of \(32\). To expedite the convergence of model training, following the QHNet setup, we implement a learning rate scheduler. The scheduler gradually increases the learning rate from \(0\) to a maximum value of \(5\times 10^{-4}\) over the first \(1,000\) warm-up steps. Subsequently, the scheduler linearly reduces the learning rate, ensuring it reaches \(1\times 10^{-7}\) by the final step. 
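The warm-up-then-linear-decay schedule described above can be written in closed form as a function of the training step (a sketch of the schedule as described; the benchmark's actual scheduler implementation may differ, and `total` is 210,000 or 260,000 depending on the run):

```python
def learning_rate(step, warmup=1_000, total=260_000, lr_max=5e-4, lr_final=1e-7):
    """Linear warm-up from 0 to lr_max over `warmup` steps,
    then linear decay to lr_final at step `total`."""
    if step <= warmup:
        return lr_max * step / warmup
    frac = (step - warmup) / (total - warmup)  # 0 just after warm-up, 1 at the end
    return lr_max + (lr_final - lr_max) * frac
```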
**Overall performance.** We first evaluate the overall performance of the model on the four defined tasks by measuring the accuracy of the predicted Hamiltonian matrices on the testing set. As summarized in Table 2, the employed QHNet models achieve a reasonably low MAE in predicting the Hamiltonian matrices on all proposed tasks. For reference, QHNet can achieve an MAE of \(83.12\times 10^{-6}E_{h}\) on the mixed MD17 dataset, which has a setup similar to our QH-dynamic-geo task. In addition to MAE on Hamiltonian matrices, the trained models also achieve low errors on the predicted occupied orbital energies and orbital coefficients. This aligns with previously reported results showing that QHNet is effective in predicting the Hamiltonian matrices for multiple molecules (Yu et al., 2023). Notably, compared to the existing Hamiltonian matrix datasets, such as MD17 (Chmiela et al., 2017) and mixed MD17 (Yu et al., 2023), our proposed tasks involve predicting Hamiltonian matrices for significantly more molecules. Overall, we anticipate that the proposed new datasets and corresponding tasks can serve as more challenging and realistic testbeds for future research in Hamiltonian matrix prediction.

**Investigation on out-of-distribution generalization.** Since we maintain a similar number of training samples for QH-stable-iid and QH-stable-ood, it is feasible to compare the performance of these two settings to investigate the out-of-distribution challenge in predicting Hamiltonian matrices.
\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{3}{c}{\(\mathbf{H}\) \([10^{-6}E_{h}]\downarrow\)} & \multirow{2}{*}{\(\boldsymbol{\epsilon}\) \([10^{-6}E_{h}]\downarrow\)} & \multirow{2}{*}{\(\boldsymbol{\psi}\) \([10^{-2}]\uparrow\)} \\
 & & diagonal & non-diagonal & all & & \\
\hline
QH-stable-iid & QHNet & \(130.99\) & \(83.95\) & \(87.24\) & \(2055.53\) & \(94.46\) \\
QH-stable-ood & QHNet & \(135.63\) & \(80.26\) & \(83.22\) & \(893.91\) & \(92.50\) \\
\hline
QH-dynamic-geo & QHNet & \(80.91\) & \(75.53\) & \(75.89\) & \(475.14\) & \(96.67\) \\
QH-dynamic-mol & QHNet & \(299.49\) & \(112.43\) & \(126.69\) & \(2208.69\) & \(91.27\) \\
\hline \hline
\end{tabular}
\end{table} Table 2: The overall performance on the testing set on the defined four tasks. The unit for the Hamiltonian \(\mathbf{H}\) and eigenenergies \(\boldsymbol{\epsilon}\) is Hartree, denoted by \(E_{h}\).

It is worth noting that we cannot directly compare the performance on their respective test sets, as reported in Table 2, to demonstrate the out-of-distribution generalizability challenge. This is because the molecules in the QH-stable-ood test set have a larger number of atoms on average than those in QH-stable-iid. As explained in Section 3.4, molecules with larger size typically have more distant atom pairs, thus leading to a lower overall mean value of the Hamiltonian matrix. Hence, numerical results on molecules with different sizes are not directly comparable. To examine the presence of the out-of-distribution issue in the Hamiltonian prediction task, we adopt an alternative evaluation strategy. To be specific, we assess models that have been trained respectively on the QH-stable-iid and QH-stable-ood training sets, employing the same set of samples for evaluation in each instance. Specifically, we use the intersecting set of the QH-stable-iid and QH-stable-ood testing sets as our evaluation dataset.
Clearly, the samples contained within this evaluation set are previously unseen during the training phase of either model, thereby maintaining the integrity of the assessment. The evaluation set contains \(923\) molecules with \(23\) to \(29\) atoms. Under this experimental setup, the primary challenge faced by the model trained on the QH-stable-ood training set stems from the novelty of molecular sizes during the evaluation phase. On the other hand, the model trained on the QH-stable-iid training set benefits from having been exposed to such molecular sizes during training. In Table 3, we denote these two models as trained under the out-of-distribution (OOD) and in-distribution (ID) training schemas, respectively. By comparing the performance on the identical evaluation set, it becomes apparent that the model employing the ID training schema outperforms its OOD-trained counterpart across all metrics, including Hamiltonian MAE and the predicted orbital energies and coefficients. Such a performance gap demonstrates that the out-of-distribution issue in molecular size is a valid concern, particularly when extending trained models to molecular sizes not encountered during training.

**Geometry-wise _vs._ molecule-wise generalization.** We further explore geometry-wise and molecule-wise generalizability by analyzing the difficulty differences between the QH-dynamic-geo and QH-dynamic-mol tasks. We consider the results in Table 2 for these two tasks to be comparable given that both models are trained with a similar number of geometry structures. We note that the model in the QH-dynamic-geo task demonstrates numerically better test performance than the model in the QH-dynamic-mol task. This is consistent with our intention when designing the tasks. Specifically, in QH-dynamic-geo, although the geometric structures in the test set are different, the molecules themselves are not entirely novel to the model due to the exposure during the training phase.
In comparison, the QH-dynamic-mol task presents a more challenging and demanding scenario. In particular, the test set geometries in QH-dynamic-mol correspond to entirely different molecules than those seen during training. This task requires the model to generalize from its learned patterns to unseen molecular structures. To summarize, both tasks serve as valuable testbeds for evaluating the model's generalization ability, and our analysis shows that the QH-dynamic-mol task, which requires extrapolating to entirely new molecular structures, is notably more demanding.

**Accelerating the DFT calculation.** We further measure the quality of the predicted Hamiltonian matrix by evaluating its ability to accelerate the DFT calculation. As introduced in Section 3.4, we compute the ratio of optimization steps required when initializing with the predicted Hamiltonian matrix as compared to a random initialization. In this experiment, following our data collection process, we use PySCF (Sun et al., 2018) to perform the DFT calculation using the B3LYP exchange-correlation functional and the def2SVP basis set. We select DIIS as the SCF algorithm for the DFT calculation and set a grid density level of \(3\) to ensure an accurate DFT calculation. For each dataset, we compute the average optimization step ratio for \(50\) randomly selected molecules. As shown in Table 4, when initializing from the predicted Hamiltonian matrices given by QHNet, fewer optimization steps are required to reach the converged Hamiltonian matrix, which indicates that the predicted Hamiltonian matrix is close to the convergence condition. This experimental result demonstrates that machine learning approaches are helpful in accelerating the DFT calculation.

\begin{table}
\begin{tabular}{l l c c c c c}
\hline \hline
\multirow{2}{*}{Training schema} & \multirow{2}{*}{Models} & \multicolumn{3}{c}{\(\mathbf{H}\) \([10^{-6}E_{h}]\downarrow\)} & \multirow{2}{*}{\(\boldsymbol{\epsilon}\) \([10^{-6}E_{h}]\downarrow\)} & \multirow{2}{*}{\(\boldsymbol{\psi}\) \([10^{-2}]\uparrow\)} \\
 & & diagonal & non-diagonal & all & & \\
\hline
ID & QHNet & \(91.87\) & \(59.69\) & \(61.42\) & \(663.05\) & \(94.28\) \\
OOD & QHNet & \(135.58\) & \(80.07\) & \(83.04\) & \(827.65\) & \(92.77\) \\
\hline \hline
\end{tabular}
\end{table} Table 3: The performance of in-distribution (ID) training and out-of-distribution (OOD) training on the constructed evaluation set for the OOD investigation.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
Dataset & Models & Optimization step ratio \(\downarrow\) \\
\hline
QH-stable-iid & QHNet & \(0.718\) \\
QH-stable-ood & QHNet & \(0.723\) \\
QH-dynamic-geo & QHNet & \(0.674\) \\
QH-dynamic-mol & QHNet & \(0.733\) \\
\hline \hline
\end{tabular}
\end{table} Table 4: The performance of accelerating the DFT calculation.

## 5 Conclusion

We are interested in accelerating the computation of quantum Hamiltonian matrices, which fundamentally determine the quantum states of physical systems and their chemical properties. While various invariant and equivariant deep learning methods have been developed recently, existing quantum Hamiltonian datasets consist of Hamiltonian matrices of molecular dynamics trajectories for only a single molecule and four molecules, respectively. To significantly expand the size and variety of such datasets, we generate a much larger dataset based on the QM9 molecules.
Our dataset provides precise Hamiltonian matrices for 130,831 stable molecular geometries and 2,399 molecular dynamics trajectories with \(60\) geometries in each trajectory. Extensive and carefully designed experiments are conducted to demonstrate the quality of our generated data. ## Acknowledgements This work was supported in part by National Science Foundation grant IIS-2006861, CCF-1553281, DMR-2119103, DMR-1753054, DMR-2103842, and IIS-2212419. Acknowledgment is also made to the donors of the American Chemical Society Petroleum Research Fund for partial support of this research.
2305.08307
Fusion Blossom: Fast MWPM Decoders for QEC
The Minimum-Weight Perfect Matching (MWPM) decoder is widely used in Quantum Error Correction (QEC) decoding. Despite its high accuracy, existing implementations of the MWPM decoder cannot catch up with quantum hardware, e.g., 1 million measurements per second for superconducting qubits. They suffer from a backlog of measurements that grows exponentially and as a result, cannot realize the power of quantum computation. We design and implement a fast MWPM decoder, called Parity Blossom, which reaches a time complexity almost proportional to the number of defect measurements. We further design and implement a parallel version of Parity Blossom called Fusion Blossom. Given a practical circuit-level noise of 0.1%, Fusion Blossom can decode a million measurement rounds per second up to a code distance of 33. Fusion Blossom also supports stream decoding mode that reaches a 0.7 ms decoding latency at code distance 21 regardless of the measurement rounds.
Yue Wu, Lin Zhong
2023-05-15T02:31:06Z
http://arxiv.org/abs/2305.08307v1
# Fusion Blossom: Fast MWPM Decoders for QEC

###### Abstract

The Minimum-Weight Perfect Matching (MWPM) decoder is widely used in Quantum Error Correction (QEC) decoding. Despite its high accuracy, existing implementations of the MWPM decoder cannot catch up with quantum hardware, e.g., 1 million measurements per second for superconducting qubits. They suffer from a backlog of measurements that grows exponentially and as a result, cannot realize the power of quantum computation. We design and implement a fast MWPM decoder, called Parity Blossom, which reaches a time complexity almost proportional to the number of defect measurements. We further design and implement a parallel version of Parity Blossom called Fusion Blossom. Given a practical circuit-level noise of 0.1%, Fusion Blossom can decode a million measurement rounds per second up to a code distance of 33. Fusion Blossom also supports a stream decoding mode that reaches a 0.7 ms decoding latency at code distance 21 regardless of the number of measurement rounds.

## I Introduction

Quantum error correction (QEC) is essential for fault-tolerant quantum computing. The decoder of QEC must be fast enough to avoid the exponential backlog effect discussed by Terhal [1]. That is, it must process syndrome bits at least as fast as the quantum hardware generates them. Such fast decoders are known as _online_ decoders. Also, a decoder design should be _scalable_ to support large code distances in order to reach the desired logical error rate. No scalable online MWPM decoder has been reported. Fowler, Adam and Lloyd [2] reported an almost linear-time MWPM decoder without a publicly accessible implementation. Higgott and Gidney recently reported an open-sourced, almost linear-time MWPM decoder [3]. Since they are sequential algorithms, they will eventually fail to reach the throughput requirement at some large code distance.
Fowler also suggested an idea to parallelize the MWPM decoder [4], without providing any empirical data regarding its performance. We design Fusion Blossom as an online MWPM decoder that scales to arbitrarily large code distance \(d\), using parallelization. Fusion Blossom is inspired by recently reported parallel realizations of the Union-Find (UF) decoder [5, 6, 7, 8] and by the relationship between the UF decoder and the MWPM decoder revealed in [9]. Fusion Blossom drastically speeds up QEC decoding to sub-microsecond per measurement round by using parallel CPU cores. Taking a rotated surface code with 0.1% circuit-level noise on a 64-core CPU as an example, it can decode up to \(d=33\) with a throughput of one million rounds per second using batch decoding. Using stream decoding, it achieves a constant 0.7 ms average latency at \(d=21\) regardless of the number of measurement rounds. To the best of our knowledge, Fusion Blossom is the first publicly available parallel MWPM decoder [10], implemented in Rust with Python binding [11].

Fusion Blossom rests on two key ideas. First, it recursively divides a decoding problem into two sub-problems that can be solved independently and efficiently fuses their solutions, according to a tree structure computed offline. Second, it leverages a fast sequential MWPM decoder called Parity Blossom, which implements a novel variant of the blossom algorithm. Parity Blossom leverages a property of the _syndrome graph_ [9], where the MWPM problem is defined: the syndrome graph is constructed from a much sparser graph called the _decoding graph_. Parity Blossom works on the decoding graph to solve the MWPM problem for the syndrome graph. In an impressive parallel work, Higgott and Gidney [3] present Sparse Blossom, an implementation of the blossom algorithm that shares the key idea of Parity Blossom: identifying tight edges using the decoding graph, and the same mathematical foundation.
Sparse Blossom features several novel optimizations that are not used by Parity Blossom. This paper presents the following contributions that complement those by Sparse Blossom.

* The mathematical foundation behind Sparse Blossom and Parity Blossom (§III).
* Fusion Blossom, a parallel MWPM decoder that could be based on either Sparse Blossom or Parity Blossom (§IV).
* A unified framework for implementing matching-based decoders including novel mathematically grounded optimizations (§V).

We evaluate Parity Blossom and Fusion Blossom in §VI and discuss related work in §VII. Our implementation is open-source and available at [10].

## II Background

We first define the necessary data structures for decoding a surface code. More information can be found in [9].

### _Quantum Error Correction (QEC) Codes_

We aim at decoding codes that can be represented by a data structure called a _model graph_. Such codes include many of the topological codes [12]. Such a code consists of data qubits and stabilizers. Data qubits store the quantum information while stabilizers allow errors in data qubits to be observed classically: an error in a data qubit will impact the measurement outcomes of the adjacent stabilizers that are designed to detect this type of error.

_Model Graph._ Following [9], we represent such a QEC code with a _model graph_ \(G_{M}=(V_{M},E_{M})\). A vertex \(v\in V_{M}\) corresponds to a stabilizer measurement result. Each edge \(e\in E_{M}\) corresponds to an independent error source and connects with the two vertices that correspond to the measurement outcomes of the stabilizers adjacent to this error source. We add a _virtual vertex_ to an edge if the corresponding error source only connects with a single vertex. This results in a two-dimensional graph as shown in Fig. 1(2). The model graph is sparse because \(|E_{M}|\) is \(O(|V_{M}|)\). Edges in the model graph are weighted.
The weight of an edge can be computed from the error model of the corresponding error source \(P(e)\) as \(w_{e}=\log\left(\frac{1-P(e)}{P(e)}\right)\). The model graph can be generalized to be three-dimensional to account for erroneous stabilizer measurements, as shown in Fig. 4. The third dimension comes from multiple rounds of measurement, with each round represented by the two-dimensional model graph as shown in Fig. 1(2). Vertices corresponding to the measurement outcomes of the same stabilizer in two consecutive rounds are connected with a new edge, which represents the potential measurement error.

_Error Pattern & Syndromes._ When an independent error source in a code experiences an error, it will "flip" the measurement outcomes of the two adjacent stabilizers. Because a stabilizer is adjacent to multiple independent error sources, its measurement outcome is determined by the parity of the number of erroneous sources: only if an odd number of adjacent sources experience an error will the stabilizer have a defect measurement outcome. Because \(E_{M}\) denotes the set of independent error sources in the code, \(\mathcal{E}\subseteq E_{M}\) denotes the subset that experiences an error, or _error pattern_. \(P(\mathcal{E}),\forall\mathcal{E}\subseteq E_{M}\) indicates the probability that \(\mathcal{E}\) happens. It is the _error model_ of the code and can be obtained by characterizing the quantum hardware. One can compute it from the error model for each independent error source, \(P(e)\), as below:

\[P(\mathcal{E})=\prod_{e\in\mathcal{E}}P(e)\prod_{f\in E_{M}\setminus\mathcal{E}}(1-P(f))\propto\prod_{e\in\mathcal{E}}\frac{P(e)}{(1-P(e))} \tag{1}\]

Given an error pattern \(\mathcal{E}\), \(S(\mathcal{E})\) denotes the set of vertices that correspond to the defect measurement outcomes in the decoding graph, and it is known as the _syndrome_ of \(\mathcal{E}\).
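Since \(W(\mathcal{E})=\sum_{e\in\mathcal{E}}w_{e}=-\log\prod_{e\in\mathcal{E}}\frac{P(e)}{1-P(e)}\), minimizing the total weight is exactly maximizing the pattern probability. A small self-contained sketch of this relationship (the error-source names and rates are invented for illustration):

```python
import math

def edge_weight(p):
    """w_e = log((1 - P(e)) / P(e)): rarer error sources get larger weights."""
    return math.log((1.0 - p) / p)

# Independent error sources and their physical error rates (illustrative values).
P = {"e1": 0.01, "e2": 0.001, "e3": 0.05}

def pattern_prob(pattern):
    """P(E) for an error pattern E, per Eq. (1)."""
    prob = 1.0
    for e, p in P.items():
        prob *= p if e in pattern else (1.0 - p)
    return prob

def pattern_weight(pattern):
    """W(E) = sum of edge weights over the pattern's edges."""
    return sum(edge_weight(P[e]) for e in pattern)
```

Here \(P(\mathcal{E})=Z\,e^{-W(\mathcal{E})}\) with \(Z=\prod_{e}(1-P(e))\), so patterns with smaller total weight are strictly more probable.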
Given the syndrome \(\mathcal{S}\) and the model graph, a decoder seeks to find an error pattern that produces \(\mathcal{S}\). The _decoding graph_ is the model graph with the syndrome \(\mathcal{S}\) marked, as shown in Fig. 1(3). The Union-Find decoder [13] uses the decoding graph [9] to find an error pattern that can produce the syndrome.

_Most-Likely Error Decoder._ A Most-Likely Error (MLE) decoder tries to find the most likely error pattern that generates the syndrome \(\mathcal{S}\).

\[\arg\max_{\mathcal{E}|S(\mathcal{E})=\mathcal{S}}P(\mathcal{E})=\arg\max_{\mathcal{E}|S(\mathcal{E})=\mathcal{S}}\prod_{e\in\mathcal{E}}\frac{P(e)}{1-P(e)}\]

The MLE decoding problem then becomes a problem for the decoding graph: find a subset of edges \(\mathcal{E}\subseteq E_{M}\) that generates the observed syndrome \(S(\mathcal{E})=\mathcal{S}\) while _maximizing_ \(P(\mathcal{E})=\prod_{e\in\mathcal{E}}\frac{P(e)}{(1-P(e))}\). Note that we use \(\mathcal{E}\) as error pattern and subset of edges interchangeably because they represent the same thing. Since graph problems are more commonly stated in terms of summed weights, we can equivalently recast the problem as _minimizing_ \(W(\mathcal{E})=\sum_{e\in\mathcal{E}}w_{e}\).

\[\arg\min_{\mathcal{E}|S(\mathcal{E})=\mathcal{S}}\sum_{e\in\mathcal{E}}w_{e}\]

_MWPM Decoder._ The Minimum-Weight Perfect Matching (MWPM) decoder is an exact MLE decoder when the error model can be precisely represented by a model graph. Unlike the Union-Find decoder, the MWPM decoder uses the _syndrome graph_, \(G(V,E)\), which is generated from the decoding graph by creating an edge between any two defect vertices and removing normal vertices \(v\in V_{M}\setminus\mathcal{S}\) (and their incident edges). That is, \(V=\mathcal{S}\) and \(E=\{(u,v)\,|\,u,v\in\mathcal{S}\}\). The weight of an edge in the syndrome graph is calculated as that of a minimum-weight path between its two endpoints in the decoding graph.
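The syndrome-graph construction just described can be sketched directly: run a shortest-path search from each defect vertex of the decoding graph and connect every defect pair with that path weight. The toy weighted graph and defect set below are invented for illustration:

```python
import heapq

# Decoding graph as a weighted adjacency list (a small illustrative line graph).
adj = {
    0: [(1, 1.0)],
    1: [(0, 1.0), (2, 2.0)],
    2: [(1, 2.0), (3, 1.0)],
    3: [(2, 1.0)],
}

def dijkstra(src):
    """Shortest-path weights from src on the decoding graph."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def syndrome_graph(defects):
    """Complete graph on defect vertices; edge weight = minimum path weight."""
    edges = {}
    for u in defects:
        dist = dijkstra(u)
        for v in defects:
            if u < v:
                edges[(u, v)] = dist[v]
    return edges
```

Since every pair of defects gets an edge, this construction alone already costs quadratic time in the number of defects, which is the overhead Parity Blossom later avoids.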
As its name suggests, the MWPM decoder finds an MLE error pattern by finding a minimum-weight perfect matching for the syndrome graph. We illustrate the workflow of the MWPM decoder in Fig. 1. Because the fastest known algorithm to solve the MWPM problem for a general graph is the blossom algorithm [14], most implementations of the MWPM decoder use off-the-shelf MWPM libraries such as Kolmogorov's blossom V library [15] and the Lemon library by Dezso _et al._ [16]. These implementations must go through all the stages in Fig. 1. Because the syndrome graph is complete, i.e., \(|E|=O(|V|^{2})\), even the fastest implementations known [17, 18] have a time complexity of \(O(\sqrt{|V|}\,|E|)=O(|V|^{2.5})\), scaling faster than the number of defect stabilizer measurements \(|V|\). The key idea behind Parity Blossom is that it removes the stage of building the syndrome graph. As a result, Parity Blossom reaches an average runtime of almost \(O(|V|)\) given a sufficiently low error rate. In doing so, unlike the blossom algorithm, Parity Blossom does not work for general graphs but only for decoding graphs representing syndromes of QEC codes. We derive this insight from our prior work [9], which shows that the UF decoder can be considered as an approximation of the MWPM decoder. Like the UF decoder, Parity Blossom uses the decoding graph.

### _Blossom Algorithm in General_

We next describe the blossom algorithm [14]. We elide details that are irrelevant to our contributions. The blossom algorithm formulates the MWPM problem as an integer linear-programming (ILP) problem. Given any graph \(G=(V,E)\) and edge weights \(w_{e},\forall e\in E\), the MWPM problem seeks a perfect matching \(x_{e},\forall e\in E\) with minimum total weight \(\sum_{e\in E}w_{e}x_{e}\). A solution is represented by \(x_{e},\forall e\in E\), with all selected edges \(x_{e}=1\) and others \(x_{e}=0\).
A perfect matching requires that for every vertex \(v\in V\), there is a unique edge with \(x_{e}=1\) incident to \(v\), and all other incident edges have \(x_{e}=0\). There is no constraint on the incident edges for a virtual vertex. The blossom algorithm solves the above ILP problem by first relaxing the integer constraint, yielding a linear-programming (LP) problem. It then adds some more constraints to the LP problem so that all optimal ILP solutions are optimal LP solutions [19]. It solves the following LP problem.

\[\min\sum_{e\in E}w_{e}x_{e} \tag{1}\]

subject to

\[\sum_{e\in\delta(v)}x_{e}=1\qquad\forall v\in V \tag{1a}\]
\[\sum_{e\in\delta(S)}x_{e}\geqslant 1\qquad\forall S\in\mathcal{O} \tag{1b}\]
\[x_{e}\geqslant 0\qquad\forall e\in E \tag{1c}\]

where \(\mathcal{O}=\{S|S\subseteq V\land|S|>1\land|S|=1\mod 2\}\) and \(\delta(S)=\{e|e=(u,v)\in E\land((u\in S\wedge v\notin S)\lor(u\notin S\wedge v\in S))\}\). \(e\in\delta(S)\) is called a _hair_ of \(S\) and has one and only one incident vertex inside \(S\). The blossom algorithm creatively exploits the _dual_ formulation of the same problem.

\[\max\sum_{v\in V}y_{v}+\sum_{S\in\mathcal{O}}y_{S} \tag{2}\]

subject to

\[w_{e}-\sum_{v\in e}y_{v}-\sum_{S\in\mathcal{O}|e\in\delta(S)}y_{S}\geqslant 0\qquad\forall e\in E \tag{2a}\]
\[y_{S}\geqslant 0\qquad\forall S\in\mathcal{O} \tag{2b}\]
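For reference (standard LP duality, not stated explicitly in the excerpt above), an optimal primal–dual pair for (1) and (2) is characterized by complementary slackness:

\[x_{e}>0\ \Longrightarrow\ w_{e}=\sum_{v\in e}y_{v}+\sum_{S\in\mathcal{O}|e\in\delta(S)}y_{S}\qquad\forall e\in E\]
\[y_{S}>0\ \Longrightarrow\ \sum_{e\in\delta(S)}x_{e}=1\qquad\forall S\in\mathcal{O}\]

The blossom algorithm keeps the dual variables feasible throughout and only matches along tight edges, so terminating with a perfect matching certifies optimality.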
**Theorem: Non-negative Vertex Dual.** The vertex dual variables \(y_{v}\), \(\forall v\in V\), are non-negative during the whole process of finding the solution by the blossom algorithm.

With **Theorem: Non-negative Vertex Dual**, we can simplify the LP problem for QEC decoding as follows. We define a set that includes both blossoms and single vertices \(\mathcal{O}^{\star}=\{S|S\subseteq V,|S|=1\mod 2\}\).

\[\max\sum_{S\in\mathcal{O}^{\star}}y_{S} \tag{3}\]

subject to

\[w_{e}-\sum_{S\in\mathcal{O}^{\star}|e\in\delta(S)}y_{S}\geqslant 0\qquad\forall e\in E \tag{3a}\]
\[y_{S}\geqslant 0\qquad\forall S\in\mathcal{O} \tag{3b}\]

**Definition: Circle.** Given a vertex \(v\) of the decoding graph and a distance \(d\geqslant 0\), the Circle \(C(v,d)\) is the set of points of the decoding graph whose distance to \(v\) is at most \(d\).

**Definition: Cover.** Given a blossom \(S\), it covers the set of points defined by the union of Circles centered at \(\forall v\in S\) with \(d=\sum_{D\in\mathcal{D}_{v}(S)}y_{D}\). That is,

\[\text{Cover}(S)=\cup_{v\in S}C(v,\sum_{D\in\mathcal{D}_{v}(S)}y_{D}).\]

That is, \(\text{Cover}(S)\) consists of Circles around \(\forall v\in S\). The _boundary_ of a Cover consists of points of the Cover that are not inside any of its Circles.
Because a Circle consists of a finite number of edges, \(\text{Cover}(S)\) also consists of a finite number of edges. For a blossom of a single vertex \(v\), its Cover is simply \(\text{Cover}(v)=C(v,y_{v})\). We emphasize that blossoms are defined on the syndrome graph while their Covers are defined on the decoding graph. As a result, the notion of Cover is an important bridge between the decoding and syndrome graphs.

#### III-B2 Obstacle Detection on Decoding Graph

Because the dual phase detects obstacles based on the syndrome graph, we must find a way to do so on the decoding graph. The key insight and theoretical result of this work is the next theorem, which shows exactly how to do it. First of all, we note that detecting obstacles from dual constraints (2b) is independent of the choice of syndrome vs. decoding graphs. Therefore, we only need to focus on those from dual constraints (2a). Second, because obstacles only occur on edges between different _Nodes_, the algorithm only needs to watch those edges to detect obstacles. Formally, we have

**Theorem: Tight Edge Detection (Cover).** There exists a tight edge between two different nodes \(S_{1}\) and \(S_{2}\) if and only if \(\text{Cover}(S_{1})\) and \(\text{Cover}(S_{2})\) overlap. That is,

\[\exists e=(v_{1},v_{2})\in E,\ v_{1}\in S_{1}\ \wedge\ v_{2}\in S_{2}\ \wedge\ w_{e}=\sum_{S\in\mathcal{O}^{*}|e\in\delta(S)}y_{S}\ \Longleftrightarrow\ \text{Cover}(S_{1})\cap\text{Cover}(S_{2})\neq\varnothing\]

An obstacle of (2a) is detected if such a tight edge exists and \(\Delta y_{S_{1}}+\Delta y_{S_{2}}>0\). That is, it can be detected by examining Covers of nodes on the decoding graph.

### _Parity Blossom_

The key idea of Parity Blossom, as well as Sparse Blossom [3], is to detect obstacles using the decoding graph, leveraging the result of **Theorem: Tight Edge Detection (Cover)**. Therefore, Parity Blossom, like Sparse Blossom, uses the existing design of the primal phase, e.g., that of Blossom V [15].
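For the special case of two single-vertex blossoms \(u\) and \(v\), the theorem reduces to a distance check: \(\text{Cover}(u)\) and \(\text{Cover}(v)\) are balls of radii \(y_{u}\) and \(y_{v}\), so they overlap — i.e., the syndrome-graph edge \((u,v)\) becomes tight — exactly when \(y_{u}+y_{v}\geqslant\mathrm{dist}(u,v)\). A toy sketch of this check (the vertices and distances are invented; in practice the distances come from the decoding graph):

```python
# Pairwise decoding-graph distances between defect vertices (assumed
# precomputed, e.g. with a shortest-path search); hard-coded toy values.
dist = {("u", "v"): 4.0, ("u", "w"): 10.0, ("v", "w"): 7.0}

def covers_overlap(a, b, y):
    """Single-vertex blossoms: covers overlap iff y[a] + y[b] >= dist(a, b),
    which is exactly the tight-edge condition for syndrome-graph edge (a, b)."""
    key = (a, b) if (a, b) in dist else (b, a)
    return y[a] + y[b] >= dist[key]

# Grow both duals at unit speed: the obstacle appears at t = dist(u, v) / 2.
y = {"u": 0.0, "v": 0.0, "w": 0.0}
t = 0.0
while not covers_overlap("u", "v", y):
    t += 0.5
    y["u"] += 0.5
    y["v"] += 0.5
```

Only covers of distinct nodes need to be watched, which is why the sparse decoding graph suffices and the complete syndrome graph never has to be built.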
Only in the dual phase do they eschew the use of the syndrome graph. We will describe our implementation of Parity Blossom in §V. Using the decoding graph to detect obstacles is more advantageous than using the syndrome graph given a low physical error rate \(p\ll 1\). As explained in §II-A, generating the syndrome graph itself already takes quadratic time \(O(|V|^{2})\). On the decoding graph, however, large _Covers_ are exponentially unlikely in their size, so the average time complexity scales roughly with \(O(|V|)\). Note that when \(|V|\) is small or when \(p\) is large, it might be faster to use the syndrome graph.

## IV Fusion Blossom

We next describe a parallel algorithm for solving the MWPM problem for QEC, called _Fusion Blossom_. Fusion Blossom recursively divides a decoding problem into sub-problems that can be solved independently and then recursively "fuses" their solutions to produce the solution to the original problem. We represent this recursive division/fusion as a full binary tree, called a _fusion tree_. Every leaf in the fusion tree invokes an MWPM solver, while other nodes fuse the solutions from their two children, also leveraging the MWPM solver. In our implementation, the MWPM solver is Parity Blossom. We next provide the mathematical formulation of division and fusion in §IV-A and §IV-B, respectively. We discuss how Fusion Blossom can make tradeoffs between decoding time and latency in §IV-C.

### _Division_

As illustrated by Fig. 3(1), a carefully selected set of vertices \(V_{b}\subset V_{M}\), e.g., a minimum vertex cut [22] of the decoding graph, can divide a decoding graph into two disjoint graphs that include \(V_{1}\) and \(V_{2}\), respectively. The only requirement of \(V_{b}\) is that there is no edge in the decoding graph that connects vertices from both \(V_{1}\) and \(V_{2}\).
With \(V_{b}\), we can create two sub-problems, one working on the subgraph covering vertices \(V_{1}\cup V_{b}\) and the other covering \(V_{2}\cup V_{b}\), as illustrated by Figs. 3(2) to 3(4). Each of the sub-problems treats a vertex from \(V_{b}\) as a virtual vertex that can be matched arbitrarily many times. This effectively relaxes the parity constraints on vertices \(V_{b}\) in the sub-problems, which will be tightened later by fusion. For the \(i\)-th sub-problem, \(i\in\{1,2\}\), the primal and dual formulations as in Eq. 1 and 3, respectively, have \(E\) and \(\mathcal{O}^{*}\) as follows.

\[E_{i}=\{e|e=(u,v)\in E\ \wedge\ u,v\in V_{i}\cup V_{b}\}\]
\[\mathcal{O}^{*}_{i}=\{S|S\in\mathcal{O}^{*}\ \wedge\ S\subseteq V_{i}\}\]

The process of division stops when the subproblem is sufficiently small for invoking the MWPM solver directly. We call such subproblems _leaf problems_ and the corresponding subgraphs _leaf partitions_.

### _Fusion_

After the sub-problems are solved independently, their solutions form an intermediate state for the original problem in terms of the values of the primal and dual variables. The fusion operation invokes the MWPM solver to find a solution to the original problem starting with this intermediate state.

_Correctness._ We next show that the intermediate state is indeed a valid state for the blossom algorithm. For the primal variables, we remove the matchings to the temporary boundary vertices \(V_{b}\). Those matched pairs break into alternating trees and search for new matchings. Except for those, the matchings within \(V_{1}\) or \(V_{2}\) are preserved. We also create an alternating tree for each defect vertex in \(V_{b}\). We simply keep the existing dual variables, as shown in Fig. 3(4) to Fig. 3(5), given

**Theorem: Feasible Dual Variables.** Solutions for the two disjoint sub-problems determine the values of the dual variables \(y_{S},S\in\mathcal{O}_{1}^{*}\cup\mathcal{O}_{2}^{*}\).
These values plus setting \(y_{S}\) to 0 for \(S\in\mathcal{O}^{*}\setminus(\mathcal{O}_{1}^{*}\cup\mathcal{O}_{2}^{*})\) constitute a feasible solution to the original dual problem.

_Speed._ We estimate the average time complexity of the fusion operation to be no worse than \(O(p|V_{b}|)\), where \(p\ll 1\) is the physical error rate. Note the expected number of defect vertices in \(V_{b}\) is also \(O(p|V_{b}|)\). Our estimate is based on two intuitions. First, the fusion operation only needs to break about the same number of matched pairs from the sub-problem solution to match the defect vertices in \(V_{b}\). Second, due to the objective of minimum weight, it is more likely to find matches for these vertices close to \(V_{b}\). We note this estimate is independent of the size of the sub-problems \(|V_{1}|\) and \(|V_{2}|\). We confirm this independence empirically in §VI-B3.

### _Schedule Design: Leaf Partitions and Fusion Tree_

When the leaf partitions are properly chosen, there can be multiple ways to fuse their solutions, allowing different tradeoffs between decoding time and latency. In this case, the fusion tree defines the space for scheduling leaf and fusion operations. One particularly relevant case is illustrated in Fig. 4, where the decoding graph is a stream of measurement rounds, each a two-dimensional graph. In this case, a leaf partition is simply a subgraph that consists of \(M\) consecutive measurement rounds. When the measurement rounds of a leaf partition become available, it invokes the MWPM solver to produce a solution. In an online system, as the measurement rounds stream in, the leaf partitions finish one by one. However, there are many ways in which their solutions can be fused, with three examples shown in Fig. 5, each making a different trade-off between decoding time and latency. As illustrated in Fig. 4, we define

* Decoding Time: \(T\), the time from when decoding starts to when it finishes.
* Latency: \(L\), the time from when all measurements are ready to when decoding finishes.
* Measurement Rounds: \(N\), the number of rounds of stabilizer measurements.
* Leaf Partition Size: \(M\), the number of measurement rounds in each leaf partition.

We note that the throughput of the system is related to the decoding time as \(N/T\): how many rounds it can decode per unit time.

**Batch Decoding.** Prior studies generally assumed that the syndrome of all \(N\) rounds of measurement is available at the time of decoding, which is known as batch decoding (see Fig. 4 (center)). For batch decoding, the decoding latency and time are the same, i.e., \(L=T\), and the decoding time is determined by the longest path from a leaf to the root given enough parallel resources. Therefore, the _balanced tree_, as shown in Fig. 5 (left), is preferable since its longest path (from leaf to root) is the shortest.

**Stream Decoding.** In contrast to batch decoding, stream decoding starts as soon as enough rounds of measurement are ready for a leaf node (see Fig. 4 (right)). As a result, the decoding latency can be substantially shorter than the decoding time, i.e., \(L<T\). More importantly, to determine the decoding Figure 4: A 3D decoding graph (left), its Batch (center) and Stream (right) Decoding using three CPU cores. The Batch decoding starts when all \(N=80\) rounds of measurements are ready; it fuses solutions according to the Balanced tree in Fig. 5. The Stream decoding starts decoding whenever \(M=10\) rounds of measurements for a leaf node are ready; it fuses solutions according to the Linear tree in Fig. 5. Figure 3: Fusion Blossom example. The two sub-problems solve their local MWPMs (2-4) individually, in parallel. The fusion operation first (5) recovers the temporary boundary vertices and then (6) evolves the intermediate state to a global MWPM.
latency, one can no longer simply consider paths between leaves and the root but must add the time when the rounds of measurement for a leaf are ready (assuming those for the first leaf are ready at time zero). To minimize the decoding latency, one must balance the path length plus the ready time for all paths, allowing a shorter path for a later leaf. For example, when there are enough parallel resources that are fast enough, the path between the last leaf and the root determines the decoding latency. In this case, the _linear tree_ (Fig. 5 (center)) is preferable. With the balanced and linear trees as the two extreme cases in mind, we can create a continuum of trees between them called _mixed trees_. Given the parallel resources and decoding setup, e.g., \(M\) and \(N\), one must examine this continuum to find the tree that achieves the shortest decoding latency. To construct a mixed tree, one selects a height in the balanced tree, keeps balanced sub-trees below the height but constructs a linear tree above it. The higher this height, the smaller the path difference between earlier and later leaves. For the balanced and linear trees, this mix height is the root and a leaf, respectively. Fig. 5 (right) shows the mixed tree with a mix height of one. In our latency evaluation (§VI-B2), we use the mixed tree that minimizes the decoding latency. We note that the mix height can be determined dynamically: the decoder can start with a balanced tree and switch to a linear tree to optimize the performance of the system.

## V Implementation

We next describe our implementation of Parity Blossom and Fusion Blossom, including the major optimization ideas. We implemented these algorithms in Rust with 12k lines of code.

### _Unified Framework for Matching Decoders_

A key idea behind our implementation is to use a unified framework for the blossom algorithm and its variants, including Parity Blossom, Union-Find [13], and more, as illustrated by Fig. 6.
As the blossom algorithm iterates between the primal and dual phases as shown in Fig. 2, our unified framework implements them in modules with a narrow, well-defined interface with each other, marked as red and blue in Figs. 2 and 6. The interface allows any primal module to work with any dual module. This framework serves three purposes. (_i_) First, it shows how these variants and the blossom algorithm are related. (_ii_) Second, it allows code reuse between their implementations. For example, Parity Blossom optimizes the dual module (Parity) to work on the decoding graph instead of the syndrome graph (§V-B1). It uses the same primal module (Standard) as the original blossom algorithm, with slight modifications to support virtual vertices (§V-B2). For another example, the UF decoder and Parity Blossom share the same Parity dual module. The UF decoder uses its own primal module (Union-Find) that computes the direction approximately, compared to the blossom algorithm [9]. (_iii_) Third, this framework reveals the existence of previously unknown variants that can achieve different trade-offs between accuracy and speed when applied to QEC. It also allows such new variants to be easily implemented. For example, one can design a new primal module based on the Standard primal module, which sets a limit on the size of alternating trees. Once an alternating tree reaches the size limit, the module treats it as an invalid cluster like the Union-Find primal module. When the size limit is infinite, the decoder is identical to Parity Blossom; when the limit is zero, it is identical to the UF decoder. As a result, by adjusting the size limit, we can produce a continuum of decoders between the UF decoder and Parity Blossom, making different tradeoffs between decoding accuracy and speed.

### _Parity Blossom_

As shown in Fig. 6, Parity Blossom uses the Parity Dual Module and the Standard Primal Module.
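The narrow interface between primal and dual modules described above can be sketched as a pair of Rust traits. This is an illustrative sketch only: the trait, method, and type names below are ours, not the actual crate API.

```rust
// Illustrative sketch of the primal/dual interface in the unified
// framework. All names here are hypothetical, not the real API.

/// What a dual phase reports back to the primal phase.
#[derive(Debug, PartialEq)]
enum DualReport {
    /// Dual variables grew without hitting an obstacle.
    Unbounded,
    /// An obstacle: two nodes meet on a tight edge.
    Conflict(usize, usize),
}

/// Dual phase: grow/shrink dual variables until an obstacle appears.
trait DualModule {
    /// Set the grow direction of a node's dual variable.
    fn set_direction(&mut self, node: usize, delta: i64);
    /// Grow as much as possible and report the first obstacle, if any.
    fn grow_until_obstacle(&mut self) -> DualReport;
}

/// Primal phase: resolve obstacles (augment a match, form a blossom, ...).
trait PrimalModule {
    fn resolve(&mut self, report: DualReport, dual: &mut dyn DualModule);
}

/// A trivial dual module, just to show that any primal module can be
/// paired with any dual module through this interface.
struct NoopDual;

impl DualModule for NoopDual {
    fn set_direction(&mut self, _node: usize, _delta: i64) {}
    fn grow_until_obstacle(&mut self) -> DualReport {
        DualReport::Unbounded
    }
}

fn main() {
    let mut dual = NoopDual;
    dual.set_direction(0, 1);
    assert_eq!(dual.grow_until_obstacle(), DualReport::Unbounded);
    println!("interface sketch compiles and runs");
}
```

A size-limited primal module of the kind described above would implement `PrimalModule` and fall back to Union-Find-style cluster handling once an alternating tree exceeds its limit.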
#### V-B1 Parity Dual Module

Because Parity Blossom uses the decoding graph to detect _Obstacles_ via **Theorem: Tight Edge Detection (Cover)**, the Parity dual module must efficiently track the Covers of nodes. Our first implementation idea comes from the UF decoder [23]: we maintain the boundary edges for each Cover. This is efficient and sufficient for updating Covers because when dual variables are adjusted according to \(\Delta\vec{y}\), the boundary edges of their Covers change. Our second idea removes the implementation complications resulting from the fact that some vertices of the decoding graph may belong to multiple Covers. Zero edges, representing erasure errors [20, 21], specifically contribute to this complication because their vertices can belong to many Covers. To ensure that a vertex belongs to at most one "Cover", our idea is to use Pseudo-Covers, derived from Covers as follows. Figure 5: Fusion Trees. The leaves need to be fused recursively into a single root, which represents a global MWPM. Because a parent depends on the children, the paths between leaves and the root determine the decoding time and latency. Figure 6: Unified Framework for Matching Decoders. **Definition: Pseudo-Cover.** All Covers with a single vertex are Pseudo-Covers. That is, when the blossom algorithm starts, all the Covers on the decoding graph are Pseudo-Covers, each of them with a single defect vertex. (_i_) At the beginning of a dual phase, for a node with \(\Delta y_{S}<0\), its Pseudo-Cover is derived by removing all boundary non-defect vertices. (_ii_) The Pseudo-Cover for a node \(S\) with \(\Delta y_{S}>0\) is derived by modifying how its Cover grows. When adding a vertex to a growing Pseudo-Cover, the growth stops if the vertex is already inside another Pseudo-Cover. We denote the Pseudo-Cover of a node \(S\) by \(\overline{\text{Cover}}(S)\). Since a vertex belongs to at most one \(\overline{\text{Cover}}\), its memory usage is constant.
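The growth rule in (_ii_) can be sketched on an adjacency-list decoding graph, with an ownership array enforcing that each vertex belongs to at most one Pseudo-Cover. This is a simplified sketch of the rule under our own data layout, not the actual implementation.

```rust
// Sketch of Pseudo-Cover growth: a vertex is claimed by at most one
// node, so a growing Pseudo-Cover stops at vertices already owned by
// another node. The data layout is illustrative.

/// Grow the Pseudo-Cover of `node` by one vertex layer.
/// `adj[v]` lists the neighbors of vertex `v`; `owner[v]` is the node
/// whose Pseudo-Cover currently contains `v`, if any.
fn grow_one_layer(
    adj: &[Vec<usize>],
    owner: &mut [Option<usize>],
    node: usize,
    frontier: &[usize],
) -> Vec<usize> {
    let mut next = Vec::new();
    for &v in frontier {
        for &u in &adj[v] {
            if owner[u].is_none() {
                owner[u] = Some(node); // claim the vertex
                next.push(u);
            }
            // if owner[u] is another node, growth stops here:
            // the two Pseudo-Covers meet on edge (v, u)
        }
    }
    next
}

fn main() {
    // A path graph 0-1-2-3-4 with defect vertices at 0 and 4.
    let adj = vec![vec![1], vec![0, 2], vec![1, 3], vec![2, 4], vec![3]];
    let mut owner = vec![Some(0), None, None, None, Some(1)];
    let f0 = grow_one_layer(&adj, &mut owner, 0, &[0]); // claims vertex 1
    let f1 = grow_one_layer(&adj, &mut owner, 1, &[4]); // claims vertex 3
    let f0 = grow_one_layer(&adj, &mut owner, 0, &f0); // claims vertex 2
    let f1 = grow_one_layer(&adj, &mut owner, 1, &f1); // blocked at vertex 2
    assert_eq!(owner[2], Some(0));
    assert!(f1.is_empty()); // node 1's growth stopped: the covers met
    assert_eq!(f0, vec![2]);
    println!("pseudo-covers met on edge (2, 3)");
}
```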
Also, since the incident vertices of an edge \(e=(u,v)\) each belong to at most one \(\overline{\text{Cover}}\), there are at most two covered segments on \(e\). That is, the memory usage of an edge is also constant. The next theorem says that Pseudo-Covers can also be used to detect tight edges. **Theorem: Tight Edge Detection (Pseudo-Cover).** There exists a tight edge between two different nodes \(S_{1}\) and \(S_{2}\) with \(\Delta y_{S_{1}}+\Delta y_{S_{2}}>0\) if and only if there exist two different nodes \(S_{3}\) and \(S_{4}\) with \(\Delta y_{S_{3}}+\Delta y_{S_{4}}>0\) whose Pseudo-Covers meet on a decoding graph edge. That is, \[\exists S_{1},S_{2},\ e=(v_{1},v_{2})\in E,\ v_{1}\in S_{1},\ v_{2}\in S_{2},\ \Delta y_{S_{1}}+\Delta y_{S_{2}}>0,\ w_{e}=|e\cap\text{Cover}(S_{1})|+|e\cap\text{Cover}(S_{2})|\] if and only if \[\exists S_{3},S_{4},\ e^{\prime}=(v_{3},v_{4})\in E,\ v_{3}\in S_{3},\ v_{4}\in S_{4},\ \Delta y_{S_{3}}+\Delta y_{S_{4}}>0,\ w_{e^{\prime}}=|e^{\prime}\cap\overline{\text{Cover}}(S_{3})|+|e^{\prime}\cap\overline{\text{Cover}}(S_{4})|.\] To support a constant-overhead reset between decoding shots, each edge carries a timestamp, and the global timestamp is incremented by 1 on a global reset. An edge is invalid if its timestamp does not match the global one. Only when an invalid edge is being accessed is it reset, with its timestamp updated to the global timestamp.

## VI Evaluation

We evaluate our implementations of Parity Blossom and Fusion Blossom with both macro and micro benchmarks.
The evaluation answers the following questions.

* Correctness: Are they exact MWPM decoders?
* Throughput: How many rounds of measurement can be decoded per unit time?
* Latency: How long does it take from when the last round of measurement arrives to when decoding finishes?
* Scalability: How does throughput change with code distance?

We verify the correctness of our implementations by comparing against the blossom V library [15] over millions of randomized test cases with tractable code distances up to \(19\). We focus on throughput, latency, and scalability in the rest of this section.

### _Setup_

#### VI-A1 Noise Model

We use the circuit-level noise model [25] with a physical error rate of \(0.1\%\). We use the rotated surface code shown in Fig. 1(1). It has \(n=d^{2}\) data qubits and \((d^{2}-1)/2\) Z (X) stabilizers. Given a syndrome of \(N\) noisy rounds of measurement, the Z (X) decoding graph has \((N+1)(d^{2}-1)/2\) ordinary vertices and \((N+1)(d+1)\) virtual vertices, a total of \((N+1)(d+1)^{2}/2\). Since the X and Z decoding graphs can be decoded independently, we only use the Z decoding graph for evaluation. For simplicity, we use the format \(N\times d\times d\) to represent the code.

#### VI-A2 Measurement

We evaluate the decoding speed on a Linux server with dual Intel Xeon Platinum 8375C CPUs, a total of 64 cores, each supporting two hyper-threads. The server is an M6i instance from AWS (Amazon Web Services). Our results do not include the initialization time, during which the one-time, expensive memory allocation is performed. Once initialized, the decoder works on 100 simulation shots consecutively. Between two shots, the decoder is reset with a constant overhead, which is included in the result. Each shot by default includes \(10^{5}\) rounds of measurement.

#### VI-A3 Baseline

For Sparse Blossom, we use the authors' own implementation through its Python binding [26] with the same setup as above, with batch optimization enabled.
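The decoding-graph sizes quoted in the noise-model setup above can be sanity-checked with a few lines; the function below is our own illustration, not part of the decoder.

```rust
// Check the Z decoding-graph sizes for N rounds and code distance d:
// ordinary vertices (N+1)(d^2-1)/2, virtual vertices (N+1)(d+1),
// and the stated total (N+1)(d+1)^2/2.

fn z_graph_sizes(n_rounds: u64, d: u64) -> (u64, u64, u64) {
    assert!(d % 2 == 1, "surface code distance is odd");
    let ordinary = (n_rounds + 1) * (d * d - 1) / 2;
    let virtual_v = (n_rounds + 1) * (d + 1);
    let total = (n_rounds + 1) * (d + 1) * (d + 1) / 2;
    (ordinary, virtual_v, total)
}

fn main() {
    let (ordinary, virtual_v, total) = z_graph_sizes(4, 5);
    assert_eq!((ordinary, virtual_v, total), (60, 30, 90));
    // the quoted total is exactly ordinary + virtual for any odd d,
    // since (d^2 - 1)/2 + (d + 1) = (d + 1)^2 / 2
    assert_eq!(ordinary + virtual_v, total);
    println!("4 x 5 x 5 Z graph: {} vertices", total);
}
```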
For the traditional MWPM decoder, we use the blossom V library [15] with the following optimizations. It pre-computes a complete graph of \(V_{M}\) offline to reduce the runtime overhead of constructing the syndrome graph. It also eliminates edges in the complete graph if their weight is higher than that of matching the two endpoint vertices to the virtual boundary, because such edges would never be selected in an MWPM.

#### VI-A4 Metrics

Given the decoding time \(T\) and measurement rounds \(N\), we define

* Throughput: \(N/T\), decoded rounds per unit time.
* Decoding time per measurement round: \(T/N\), the inverse of throughput. Since this is easier to compare with the measurement cycle of quantum hardware, we use it in lieu of throughput in the figures.

### _Results_

#### VI-B1 Throughput

We use batch decoding (Fig. 4) for all throughput evaluations, assuming the syndrome is ready when the decoding begins. Instead of throughput, we report data in its inverse, i.e., decoding time per round (\(T/N\)). We first show the advantage of using the decoding graph over the syndrome graph, confirming the findings also reported by Higgott and Gidney in [3]. We benchmark the throughput on a single thread. As shown in Fig. 7(a), the decoding time of Parity Blossom and Sparse Blossom scales almost linearly with the number of qubits \(n=d^{2}\), which is the theoretical lower bound. In contrast, the traditional MWPM decoder based on the blossom V library scales poorly with the number of qubits \(n\). Not surprisingly, Parity Blossom is roughly 4x slower than Sparse Blossom in this case, because we have not incorporated some important optimizations (§V-B1). Second, when the number of rounds \(N\) grows in Fig. 7(b), both Parity Blossom and Sparse Blossom see the decoding time per round increase [3], due to increasing pressure on the memory hierarchy.
Surprisingly, that of Fusion Blossom remains steady as \(N\) grows and beats that of Parity Blossom at large \(N\geqslant 10^{3}\), even though Fusion Blossom is not supposed to enjoy any algorithmic advantage over Parity Blossom with a single thread. This is because Fusion Blossom divides the problem equally into small ones, and solving a small problem enjoys better cache locality. On the other hand, a small \(M\) incurs more fusion operations and more overhead. Therefore, we empirically find the optimal \(M=100\) and use it as the default leaf partition size. Figure 8: Parallel decoding time (lower the better). Figure 7: Decoding time with a single thread (lower the better). Not surprisingly, Fusion Blossom beats all serial MWPM decoders when more threads are available. As shown in Fig. 8(a), the throughput of Fusion Blossom increases almost linearly with the number of threads, until the maximum number of hyper-threads (128) supported by the processor. At that point, it reaches the minimum decoding time per round (0.3 µs). Using all 128 hyper-threads in the processor, Fusion Blossom can decode up to \(d=33\) at \(p=0.1\%\) with less than 1 µs decoding time per round, as shown in Fig. 8(b).

#### VI-B2 Latency

We observe a constant latency regardless of the measurement rounds \(N\) in stream decoding (Fig. 4), compared to the linearly growing latency in batch decoding. We emulate a stabilizer measurement cycle of 1 µs, which is similar to that of state-of-the-art superconducting quantum hardware [27]. We use a mixed fusion tree in which each balanced subtree at the mix height has 50 leaves. Each leaf deals with \(M=20\) rounds of measurement. As shown in Fig. 9, the average latency is roughly 0.7 ms regardless of the number of measurement rounds \(N\). For batch decoding, as predicted in Fig. 4, the latency scales linearly with \(N\).
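The constant-latency behavior of stream decoding with a linear fusion tree can be reproduced with a toy timing model. This is a simplified serial model with made-up timing parameters, not our measured pipeline: leaf \(i\)'s rounds are assumed ready at \((i+1)\cdot M\cdot t_{\text{round}}\), and each leaf solve and fusion is processed in order.

```rust
// Toy model of stream decoding with a linear fusion tree: as long as a
// leaf solve plus a fusion fits within the time to collect M rounds,
// the latency past the last measurement stays constant in N.
// All timing parameters are illustrative.

fn stream_latency(n_leaves: u64, m: u64, t_round: u64, t_leaf: u64, t_fuse: u64) -> u64 {
    let mut finish = 0; // time when the running root is up to date
    for i in 0..n_leaves {
        let ready = (i + 1) * m * t_round; // leaf i's rounds available
        let start = ready.max(finish);
        // solve the leaf, then fuse it into the running root
        finish = start + t_leaf + if i == 0 { 0 } else { t_fuse };
    }
    finish - n_leaves * m * t_round // latency past the last round
}

fn main() {
    // 10 rounds per leaf, 10 time units per round; solve 50, fuse 20.
    let short = stream_latency(10, 10, 10, 50, 20);
    let long = stream_latency(1000, 10, 10, 50, 20);
    assert_eq!(short, 70); // t_leaf + t_fuse
    assert_eq!(long, 70); // unchanged: the latency is independent of N
    println!("latency: {} time units regardless of stream length", long);
}
```

When the per-leaf work exceeds the round-collection time, the model instead accumulates a backlog and the latency grows with the stream length, mirroring the batch-decoding behavior.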
#### VI-B3 Fusion Time

Given a low physical error rate, a fusion operation only changes a small region around the boundary vertices on average. Thus, the fusion time should only increase with \(|V_{b}|=O(d^{2})\), as confirmed by Fig. 10(a), but not with \(M\), the number of rounds, as confirmed by Fig. 10(b). Moreover, because a fusion operation may recursively invoke the children's MWPM solvers until leaf partitions are reached, the structure of the corresponding subtree of the fusion tree impacts fusion time. Fig. 10(c) shows this with the fusion time for both balanced and linear trees, where the X axis is the number of leaf partitions in the subtree. Interestingly, the mean fusion time of the balanced tree increases with the number of leaf partitions while that of the linear tree largely remains constant. This, again, is because a fusion operation only changes a small region around the boundary vertices on average. As a result, the operation is most likely to involve two leaf partitions next to each other and increasingly unlikely to involve partitions that are farther away from each other. In the balanced tree, the operation must travel through the entire height of the tree to reach any two leaf partitions, even if they are next to each other. In contrast, in the linear tree, the operation is exponentially less likely to travel each additional level down the tree. For the example in Fig. 10(c) (center), fusion operation 14 is exponentially less likely to involve lower-numbered leaf partitions.

#### VI-B4 Scalability

Finally, we show that given enough (\(\Omega(d^{2.68})\)) parallel resources, e.g., cores, Fusion Blossom can meet the throughput requirement for any code distance \(d\) when the physical error rate \(p\) is well below the threshold \(p_{\text{th}}\), using both analysis and empirical data. Let \(K\) denote the number of threads, each handling a \((N/K)\times d\times d\) partition on its own core.
Note that the threads may reside in different machines, accessing shared memory via the network with a constant-factor slowdown, e.g., using shared-memory rack-scale distributed systems like [28]. The decoding time of each thread scales with \(O(d^{2.68}N/K)\), according to Fig. 7(a). The solutions from the \(K\) concurrent threads can be fused with a balanced tree, in \(O(d^{2.34}\log K)\) time, according to Fig. 10(a). The decoding time per round has a complexity of \(O(d^{2.68}/K+d^{2.34}\log K/N)\). Thus, given a lower bound of \(K=\Omega(d^{2.68})\) and \(N=\Omega(d^{2.34}\log K)\), the decoding time per round will be bounded. This analysis assumes \(N\) grows polynomially with \(d\). This is reasonable because the lifetime of a logical qubit scales exponentially with \(d\), i.e., \(\max N\propto(p_{\text{th}}/p)^{(d+1)/2}\) [2]. Note that the scaling factors of \(O(d^{2.68})\) and \(O(d^{2.34})\) are empirically derived from code distances up to \(100\) (Figs. 7(a) and 10(a)), which corresponds to about \(10^{4}\) physical qubits for each logical qubit, orders of magnitude higher than what is considered to be practical in the near future. We note that \(K=\Omega(d^{2.68})\) does not mean \(d^{2.68}\) threads (or cores) are necessary. When estimating how many cores are needed for a large \(d\), one can empirically derive the number for a small \(d\) and then extrapolate based on the scaling of \(d^{2.68}\). For example, to estimate how many cores are necessary to decode \(d=51\) with \(p=0.1\%\) using the setup in Section VI-A, one can pick a data point in Fig. 8(a) where \(d=21\) roughly needs 20 cores to meet the throughput requirement. We can then estimate that roughly \(20\times(51/21)^{2.68}\approx 216\) cores are necessary for \(d=51\).

## VII Related Work

As mentioned in §I, Sparse Blossom is a contemporary work closely related to Parity Blossom, sharing the key idea of solving the MWPM problem using the decoding graph.
We provide the first rigorous mathematical foundation for this idea and contribute new implementation optimizations. Related to Fusion Blossom, Fowler [4] presented a parallel design of the MWPM decoder. It partitions the qubits into parallel decoding units of customized hardware. Each decoding unit handles a sufficiently large number of qubits so that the inter-unit communication is relatively rare. Figure 9: Latency. Figure 10: Fusion time with a single thread (lower the better). Paradoxically, its success requires both a large number of decoding units (for lower decoding time) and a large number of qubits in each unit (for lower communication overhead). To the best of our knowledge, perhaps not surprisingly, no implementation or empirical data has been reported for this design. Fusion Blossom, on the other hand, eliminates communication between the partitions and only synchronizes them during the fusion operations. This minimizes the need for communication and is scalable. There is a body of literature that seeks to parallelize solving the MWPM problem for general graphs. This literature, however, does not exploit the special structure of the QEC decoding problem as we do. As a result, its algorithms have a larger time complexity than Parity Blossom and Fusion Blossom when applied to QEC decoding. For example, Peterson and Karalekas [29] designed and implemented a distributed MWPM algorithm with \(O(|V|^{4})\) time complexity. Recently, there has been growing interest in approximate algorithms for QEC decoding that sacrifice decoding accuracy to gain speed, e.g., parallelization with the parallel-window technique [30, 6, 7] and fast decoders with cryogenic chips [31, 32, 33, 34]. Perhaps the most relevant is the (weighted) Union-Find (UF) decoder [35, 23], for which various designs [5] and implementations [8] have been reported. The key idea of Parity Blossom draws inspiration from how the UF decoder approximates the MWPM decoder [9].
## Acknowledgments This work was supported in part by Yale University and NSF MRI Award #2216030. The authors are grateful for the insightful discussion with Shruti Puri.
---

arXiv:2306.07564 - Causality Criteria from Stability Analysis at Ultra-High Boost
Shuvayu Roy, Sukanya Mitra
Published 2023-06-13
http://arxiv.org/abs/2306.07564v3
# Causality Criteria from Stability Analysis at Ultra-High Boost

###### Abstract

In this work, we have exclusively employed the linear stability analysis at ultra-high boost on two well-known stable-causal theories - second-order MIS and first-order BDNK, to identify the region of parameter space over which they are frame-invariantly stable and obey causal signal propagation. It has been shown that at near-luminal boost, stability criteria alone can provide the causality constraints on transport coefficients, which are identical to the asymptotic causality conditions, without actually going to the asymptotic limit of the theories. Thus, we present an alternative approach to derive the causality constraints, which is more appropriate for low-energy effective theories like relativistic hydrodynamics.

_Introduction-_ Hydrodynamics is an effective theory that describes the dynamical evolution of the conserved quantities (the state variables of the system) in the low-energy, long-wavelength limit [1] and has long served as a powerful tool to study the collective behaviour of a system [2; 3]. Its chronological development began with ideal hydrodynamics, where the fluid is in its equilibrium state, and subsequently went on to include dissipative corrections for out-of-equilibrium scenarios. These corrections are formulated by a systematic build-up of gradients on the fundamental hydrodynamic fields [4]. For each order of the hydrodynamic gradient expansion, the transport coefficients from the underlying microscopic theory enter the hydrodynamic evolution equations as a dynamical input of the system interaction. Based on these foundations, it is possible to derive a number of alternative hydrodynamic theories following different approaches. However, a reliable, pathology-free theory needs to guarantee two major benchmark criteria.
First, the signal propagation predicted by the equations of motion of the theory must be subluminal, and second, its equilibrium state should be stable against fluctuations, i.e., the fluctuations must not grow indefinitely with time. Now, in a number of works it has been established that the group velocity of the propagating mode exceeding the speed of light for some frequency range does not violate causality, as long as it is subluminal in the infinite-frequency (wavenumber) limit [5; 6]. This necessary condition for causality is called the asymptotic causality condition, which has been widely used to check the causal validity of a hydrodynamic theory [7; 8; 9]. But the conceptual anomaly with this approach is that the hydrodynamic gradient expansion has been shown to be a divergent series with factorial growth of large-order corrections, indicating a zero radius of convergence [10; 11]. Given the situation, an alternate definition of causality is imperative. On the other hand, the stability of a relativistic system has been known to behave distinctly depending upon the observer's frame of reference [12]. This issue has been recently addressed in [13; 14], where it has been argued that frame-invariant stability is possible only if the theory respects causality. The objective of the current work is to employ the stability invariance of a theory to establish its causality constraints. The non-triviality again comes from the fact that checking linear stability in arbitrary reference frames to identify the invariantly stable parameter space can be a cumbersome job. In this work, for two well-known stable-causal theories, we have demonstrated that linear stability analysis in a reference frame boosted to a near-luminal speed can alone provide the stability-invariant parameter space in the spatially homogeneous limit of the theory, and hence can be used to determine the causal domain of the theory as well.
In [15], this identification has been observed from a kinetic theory derivation of a stable-causal first-order theory. Here, we show that one can solely use the low-wavenumber stability analysis to produce the exact results of asymptotic causality. Since relativistic hydrodynamics is a low-energy effective theory, and in this work we derive the causality criteria without departing from the low-energy limit, we believe this approach provides us with a more appropriate definition of causality.

_Basic setup-_ In this work, hydrodynamic stability has been analysed in a generalised Lorentz frame with an arbitrary boost velocity for both the second-order Müller-Israel-Stewart (MIS) theory [16; 17; 18] and the recently proposed first-order stable-causal (BDNK) theory [19; 20; 21; 22]. We linearize the conservation equations for small perturbations of fluid variables around their hydrostatic equilibrium, \(\psi(t,x)=\psi_{0}+\delta\psi(t,x)\), with the fluctuations expressed in plane wave solutions via a Fourier transformation \(\delta\psi(t,x)\to e^{i(kx-\omega t)}\delta\psi(\omega,k)\) (subscript 0 indicates global equilibrium). The background fluid is considered to be boosted along the x-axis with a constant velocity \(\mathbf{v}\), \(u_{0}^{\mu}=\gamma(1,\mathbf{v},0,0)\) with \(\gamma=1/\sqrt{1-\mathbf{v}^{2}}\). The corresponding velocity fluctuation is \(\delta u^{\mu}=(\gamma\mathbf{v}\delta u^{x},\gamma\delta u^{x},\delta u^{y},\delta u^{z})\), which gives \(u_{0}^{\mu}\delta u_{\mu}=0\) to maintain the velocity normalization. In the following analysis, we present the leading-order stability analysis (in the spatially homogeneous limit \(k\to 0\)) for both theories in the conformal, chargeless limit.

_Identifying stability invariant parameter space from ultra-high boost-_ First, we discuss the case of MIS theory, where the energy-momentum tensor takes the form \(T^{\mu\nu}=\epsilon u^{\mu}u^{\nu}+P\Delta^{\mu\nu}+\pi^{\mu\nu}\).
The conservation of the energy-momentum tensor \(\partial_{\mu}T^{\mu\nu}=0\) and the relaxation equation of the shear viscous flow \(\pi^{\mu\nu}=-\tau_{\pi}\Delta^{\mu\nu}_{\alpha\beta}D\pi^{\alpha\beta}-2\eta\sigma^{\mu\nu}\) together give us the equations of motion to be linearized. In the transverse or shear channel, the leading term of the frequency (\(\omega\)) solution in the wavenumber (\(k\)) expansion is a single non-hydro, non-propagating mode, \(\omega^{\perp}_{\rm MIS}=-i/[\gamma(\tau_{\pi}-\tilde{\eta}\mathbf{v}^{2})]\). Demanding stability, i.e., that the imaginary part of the frequency be negative, yields the stability criterion \(\tau_{\pi}/\tilde{\eta}>\mathbf{v}^{2}\) [6]. For the sound channel, the leading-order single non-propagating mode turns out to be \(\omega^{\parallel}_{\rm MIS}=-i(1-\frac{\mathbf{v}^{2}}{3})/\gamma[\tau_{\pi}(1-\frac{\mathbf{v}^{2}}{3})-\frac{4\tilde{\eta}}{3}\mathbf{v}^{2}]\). For the range of boost velocity \(0\leq\mathbf{v}<1\), the stability condition becomes \(\tau_{\pi}/\tilde{\eta}>\frac{4}{3}\mathbf{v}^{2}/(1-\frac{\mathbf{v}^{2}}{3})\). In both channels, the right-hand sides of the inequalities for \(\tau_{\pi}/\tilde{\eta}\) are monotonically increasing functions of \(\mathbf{v}\) within the mentioned range; they allow only positive values of \(\tau_{\pi}\) and give the strictest bound as \(\mathbf{v}\to 1\). So we infer that the allowed parameter space over the transport coefficients \(\eta\) and \(\tau_{\pi}\), set by the stability criteria in the spatially homogeneous limit (\(k\to 0\)) for any boost velocity \(\mathbf{v}\), is always a subset of the same for any lower value of \(\mathbf{v}\).
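Evaluating both bounds at the near-luminal limit makes the strictest constraints explicit:

\[\lim_{\mathbf{v}\to 1}\mathbf{v}^{2}=1\ \Rightarrow\ \frac{\tau_{\pi}}{\tilde{\eta}}>1\ \ \text{(shear)}\,\qquad\lim_{\mathbf{v}\to 1}\frac{\tfrac{4}{3}\mathbf{v}^{2}}{1-\tfrac{\mathbf{v}^{2}}{3}}=\frac{4/3}{2/3}=2\ \Rightarrow\ \frac{\tau_{\pi}}{\tilde{\eta}}>2\ \ \text{(sound)}.\]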
Hence, we conclude here that the \(\mathbf{v}\to 1\) bound (\(\tau_{\pi}>\tilde{\eta}\) for the shear channel and \(\tau_{\pi}>2\tilde{\eta}\) for the sound channel) provides the necessary and sufficient region in the parameter space where the system is stable in the spatially homogeneous limit for all reference frames (\(0\leq\mathbf{v}<1\)). So, checking stability alone in a reference frame with ultra-high boost (\(\mathbf{v}\to 1\)) suffices to conclude that the theory is frame-invariantly stable. Next, we discuss the case of BDNK theory, for which the energy-momentum tensor takes the form \(T^{\mu\nu}=(\epsilon+\epsilon_{1})u^{\mu}u^{\nu}+(P+P_{1})\Delta^{\mu\nu}+(u^{\mu}W^{\nu}+u^{\nu}W^{\mu})+\pi^{\mu\nu}\), with the first-order dissipative field corrections \(\epsilon_{1}=\mathcal{E}\frac{D\epsilon}{\epsilon_{0}+P_{0}}+\mathcal{E}(\partial\cdot u)\), \(P_{1}=\frac{\mathcal{E}}{3}\frac{D\epsilon}{\epsilon_{0}+P_{0}}+\frac{\mathcal{E}}{3}(\partial\cdot u)\), \(W^{\mu}=\theta[\frac{\nabla^{\mu}T}{T}+Du^{\mu}]\) and \(\pi^{\mu\nu}=-2\eta\sigma^{\mu\nu}\). The shear channel analysis is identical to that of MIS theory with the replacement \(\tau_{\pi}=\theta/(\epsilon_{0}+P_{0})\) [21]. However, the situation becomes significantly more mathematically involved in the sound channel. The leading-order \(\omega\) solution in the \(k\)-expansion gives rise to the quadratic dispersion relation \(a\omega^{2}+b\omega+c=0\), with \(a=\gamma^{2}[\tilde{\mathcal{E}}\tilde{\theta}-\frac{2}{3}\tilde{\mathcal{E}}(2\tilde{\eta}+\tilde{\theta})\mathbf{v}^{2}+\frac{1}{9}\tilde{\theta}(\tilde{\mathcal{E}}-4\tilde{\eta})\mathbf{v}^{4}]\), \(b=i\gamma[(\tilde{\mathcal{E}}+\tilde{\theta})-\frac{1}{3}(\tilde{\theta}+\tilde{\mathcal{E}}+4\tilde{\eta})\mathbf{v}^{2}]\) and \(c=(\mathbf{v}^{2}/3-1)\). This dispersion polynomial gives rise to two non-propagating, non-hydro modes whose stability has been analyzed using the Routh-Hurwitz (R-H) stability test [23].
The stability criteria constrain the parameter space for the BDNK sound channel through the two following inequalities, \[\mathcal{E}\theta\left(1-\frac{\mathbf{v}^{2}}{3}\right)^{2}-\frac{4}{3}\eta\mathbf{v}^{2}\left(\mathcal{E}+\frac{\mathbf{v}^{2}}{3}\theta\right)>0\, \tag{1}\] \[(\mathcal{E}+\theta)\left(1-\frac{\mathbf{v}^{2}}{3}\right)-\frac{4}{3}\eta\mathbf{v}^{2}>0. \tag{2}\] Eqs. (1) and (2) simultaneously allow only the parameter space set by \[\frac{\theta}{\eta}>\frac{4}{3}\frac{\mathbf{v}^{2}}{\left(1-\mathbf{v}^{2}/3\right)^{2}}\ \,\ \ \ \ \ \frac{\mathcal{E}}{\eta}>\frac{4}{9}\frac{\mathbf{v}^{4}}{\left(1-\mathbf{v}^{2}/3\right)^{2}}. \tag{3}\] The right-hand sides of both inequalities are monotonically increasing functions of \(\mathbf{v}\), which allow only positive values of \(\mathcal{E}\) and \(\theta\), with lower bounds ranging from \(0\) to \(\eta\) and \(0\) to \(3\eta\) respectively as \(\mathbf{v}\) ranges from \(0\) to \(1\). Following these conditions, Fig. 1 shows that the stability region of the parameter space for \(\mathbf{v}\to 1\) is contained in the same for any lower value of \(\mathbf{v}\). So, identical to the situation of MIS theory, for BDNK theory as well, the stability condition at \(\mathbf{v}\to 1\) is a necessary and sufficient condition for stability to hold in the spatially homogeneous limit for all possible boost velocities \(0\leq\mathbf{v}<1\). The detailed dispersion polynomials for both theories are given in the supplementary material. Given the above analysis for MIS and BDNK theories, we establish our first key finding here. For relativistic dissipative hydrodynamic theories like BDNK and MIS, performing stability analysis at ultra-high boost velocity (\(\mathbf{v}\to 1\)) alone suffices to conclude the stability invariance of the theory. Stability analysis at any other boost velocity lacks this confirmation.
The stable parameter space at \(\mathbf{v}\to 1\) is a necessary and sufficient region of the theory for stability invariance to hold at the spatially homogeneous limit.

Figure 1: Linearly stable parameter space for the BDNK sound channel for different \(\mathbf{v}\) values at \(\eta/T^{3}=0.3\).

_Causality from stability analysis-_ In this section, we prove that the stability criteria at the \(\mathbf{v}\to 1\) limit alone are enough to provide the region of parameter space over which each individual theory is causal. The idea is that, since it has been proven for theories like MIS and BDNK that the stability conditions at \(\mathbf{v}\to 1\) identify the region of parameter space where the system is frame-invariantly stable, and since stability invariance requires the causality properties of the theory to be respected according to the arguments put forward in [13; 14], the stability constraints at ultra-high boost automatically lead us to the causal region of the parameter space. For MIS theory, the stability conditions at the \(\mathbf{v}\to 1\) limit for the shear and sound channels give us \(\frac{\tau_{\pi}}{\tilde{\eta}}>1\) and \(\frac{\tau_{\pi}}{2\tilde{\eta}}>1\), respectively. It can be shown that the expressions on the left-hand sides of the inequalities for both channels are functions of the squares of the respective asymptotic group velocities \(v_{g}=\lim_{k\to\infty}\Big{|}\frac{\partial\text{Re}(\omega)}{\partial k}\Big{|}\), namely \((v_{g}^{2})^{\perp}=\tilde{\eta}/\tau_{\pi}\) and \((v_{g}^{2})^{\parallel}=\frac{4\tilde{\eta}}{3\tau_{\pi}}+\frac{1}{3}\). These expressions for both channels finally reduce to \(0<v_{g}^{2}<1\), and therefore the stability criteria at \(\mathbf{v}\to 1\) boil down to the asymptotic causality condition \(0<v_{g}^{2}<1\) for the MIS theory in the parameter range \(\eta,\tau_{\pi}>0\).
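The equivalence just stated is elementary enough to verify by brute force; the sketch below (illustrative, not from the paper) scans positive \((\tilde{\eta},\tau_{\pi})\) values and confirms that the MIS asymptotic causality windows \(0<v_{g}^{2}<1\) coincide with the \(\mathbf{v}\to 1\) stability bounds:

```python
# Sketch (illustrative, not from the paper): brute-force check that, for
# eta_t, tau > 0, the MIS asymptotic causality windows 0 < v_g^2 < 1
# coincide with the v -> 1 stability bounds tau > eta_t and tau > 2*eta_t.
def mis_vg2(eta_t, tau):
    """(shear, sound) asymptotic group velocities squared."""
    return eta_t/tau, 4.0*eta_t/(3.0*tau) + 1.0/3.0

def causal(eta_t, tau):
    shear, sound = mis_vg2(eta_t, tau)
    return 0 < shear < 1 and 0 < sound < 1

def stable_v_to_1(eta_t, tau):
    return tau > eta_t and tau > 2.0*eta_t

# grid chosen so tau never sits exactly on the boundaries tau = eta_t
# or tau = 2*eta_t (avoids floating-point ties)
grid = [(e/10.0, (2*k + 1)/20.0) for e in range(1, 30) for k in range(0, 60)]
assert all(causal(e, t) == stable_v_to_1(e, t) for e, t in grid)
```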
For BDNK theory, the shear channel stability condition at \(\mathbf{v}\to 1\) gives \(\frac{\theta}{\eta}>1\), which is again the asymptotic causality condition \(0<v_{g}^{2}<1\) with \(v_{g}^{2}=\frac{\eta}{\theta}\). Next, for the BDNK sound channel, we attempt to solve the inequalities (1) and (2) that serve as stability criteria in a boosted frame. Stability inequality (1) can be recast as \[\left\{\left(1/\mathbf{v}^{2}\right)-x_{1}\right\}\left\{\left(1/\mathbf{v}^{2}\right)-x_{2}\right\}>0\, \tag{4}\] where \(x_{1},x_{2}\) are the roots of the equation \[(\mathcal{E}\theta)x^{2}-\frac{2}{3}\mathcal{E}(2\eta+\theta)x+\frac{1}{9}\theta(\mathcal{E}-4\eta)=0. \tag{5}\] Inequality (4) has two possible solutions, \(x_{1},x_{2}<\frac{1}{\mathbf{v}^{2}}\) or \(x_{1},x_{2}>\frac{1}{\mathbf{v}^{2}}\). Since \(|\mathbf{v}|\) ranges from \(0\) to \(1\) and hence \(1/\mathbf{v}^{2}\) ranges from \(1\) to \(\infty\), the second solution turns out to be unphysical. The first and only physically acceptable solution then gives us the strictest bound \(x_{1},x_{2}<1\), corresponding to the limit \(\mathbf{v}\to 1\). Now, incorporating the second stability inequality (2) restricts the allowed region to only positive values of \(\mathcal{E}\) and \(\theta\). This restriction (along with \(\eta>0\)) leads to a positive discriminant of (5), which forces both roots \(x\) to be real, among which at least one root is always positive in our stable parameter space at \(\mathbf{v}\to 1\). As will be shown explicitly in the next section by a large-\(k\) analysis of the theory, the quadratic equation satisfied by \(v_{g}^{2}\) for the BDNK sound channel is exactly identical to (5); hence the inequalities (1) and (2) condense down together to give \(v_{g}^{2}<1\) with at least one \(v_{g}^{2}>0\), which produces two subluminal propagating modes.
So, our stability analysis at ultra-high boost independently identifies the causal parameter space of the theory, which exactly reproduces the results of the asymptotic causality analysis without going to the large-\(k\) limit. _Causality from large \(k\) analysis-_ Now, let us analyze the situation of causality in the high-\(k\) regime itself and compare how accurately the subluminal parameter space has been predicted by the stability analysis at ultra-high boost. The idea is that, at the large-\(k\) limit, an expansion of the form \(\omega=v_{g}k+\sum_{n=0}^{\infty}c_{n}k^{-n}\) is used [8] as a solution of the dispersion equation, from which a polynomial over the asymptotic group velocity \(v_{g}\) can be obtained. Next, we test the Schur stability of the polynomial [25] to determine whether its roots are subluminal and, if they are, how the parameter space is constrained by them. A polynomial \(P(z)\) of degree \(d\) is called "Schur stable" if its roots lie within the unit disc around the origin of the complex plane. This can be tested by introducing the Mobius transformation \(w=(z+1)/(z-1)\), which maps the unit disc about the origin of the complex plane into the left half plane, i.e., \(\text{Re}(w)<0\) if \(|z|<1\). So, \(P(z)\) is Schur stable if and only if the transformed polynomial of the same degree, \(Q(w)=(w-1)^{d}P\left(\frac{w+1}{w-1}\right)\), is Hurwitz stable. This method is extremely efficient, especially in cases where a direct extraction of roots from the polynomial is too complicated. For the shear channels, the Schur stability conditions that can give rise to subluminal propagating modes are \(\tau_{\pi}-\tilde{\eta}>0\) and \(\tau_{\pi}+\tilde{\eta}>0\) for MIS, and \(\theta-\eta>0\) and \(\theta+\eta>0\) for BDNK. In both cases, the first conditions are identically the stability conditions obtained at \(\mathbf{v}\to 1\), and the second conditions follow immediately once the first are satisfied.
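For a real quadratic, the Mobius route to Schur stability can be written in closed form, since \(Q(w)=(w-1)^{2}P\big(\frac{w+1}{w-1}\big)\) has coefficients \(P(1)\), \(2(a_{2}-a_{0})\) and \(P(-1)\). The sketch below (illustrative, not from the paper; the text's additional positive-discriminant requirement on \(P(z)\) is not imposed here) checks the sign criterion against direct root extraction for the BDNK sound-channel polynomial (6):

```python
# Sketch: Schur stability via the Mobius map, specialized to a real
# quadratic P(z) = a2 z^2 + a1 z + a0. Q(w) = (w-1)^2 P((w+1)/(w-1)) has
# coefficients q2 = P(1), q1 = 2(a2 - a0), q0 = P(-1); P is Schur stable
# iff all three share one sign (the Hurwitz test for a quadratic).
import cmath

def schur_stable(a2, a1, a0):
    q2 = a2 + a1 + a0            # P(1)
    q1 = 2.0*(a2 - a0)
    q0 = a2 - a1 + a0            # P(-1)
    return (q2 > 0 and q1 > 0 and q0 > 0) or (q2 < 0 and q1 < 0 and q0 < 0)

def roots_in_unit_disc(a2, a1, a0):
    d = cmath.sqrt(a1*a1 - 4.0*a2*a0)
    return all(abs(z) < 1 for z in ((-a1 + d)/(2*a2), (-a1 - d)/(2*a2)))

def bdnk_sound_coeffs(E, th, eta):
    """Coefficients of the quadratic (6) in z = v_g^2."""
    return E*th, -(2.0/3.0)*E*(th + 2*eta), (1.0/9.0)*th*(E - 4*eta)

# sample parameter sets (illustrative): the two tests always agree
for E, th, eta in [(10.0, 10.0, 1.0), (1.0, 0.1, 1.0), (2.0, 10.0, 1.0)]:
    c = bdnk_sound_coeffs(E, th, eta)
    assert schur_stable(*c) == roots_in_unit_disc(*c)
```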
For the propagating modes of the MIS sound channel, the Schur stability conditions are given by \(\tau_{\pi}-2\tilde{\eta}>0\) and \(\tau_{\pi}+\tilde{\eta}>0\). Again, the first one is the \(\mathbf{v}\to 1\) stability criterion, and the second is its obvious implication. So, we conclude that for both shear channels and the MIS sound channel, the \(\mathbf{v}\to 1\) stability region exactly reproduces the causal parameter space. The situation in the BDNK sound channel is comparatively quite non-trivial. The \(v_{g}^{2}\) values are to be extracted from the following quadratic polynomial with \(z=v_{g}^{2}\), \[P(z)=(\mathcal{E}\theta)z^{2}-\frac{2}{3}\mathcal{E}(\theta+2\eta)z+\frac{1}{9}\theta(\mathcal{E}-4\eta)=0\, \tag{6}\] whose Schur stability needs to be checked to find the causal parameter space. Its Mobius transformation again turns out to be a quadratic polynomial, \[Q(w)= \left(\frac{\mathcal{E}\theta}{3}-\mathcal{E}\eta-\frac{\eta\theta}{3}\right)w^{2}+\frac{2}{3}\theta\left(\eta+2\mathcal{E}\right)w+\left(\frac{4\mathcal{E}\theta}{3}+\mathcal{E}\eta-\frac{\eta\theta}{3}\right)=0\, \tag{7}\] whose Hurwitz stability requires all three coefficients of Eq. (7) to be of the same sign, either positive or negative (along with a positive discriminant of \(P(z)\) to ensure that all non-real roots of \(v_{g}^{2}\) on the complex plane are excluded). In Fig. 2, the parameter space for which both roots satisfy \(|v_{g}^{2}|<1\) is plotted for both the positive and the negative conventions. The regions IA (red, crisscrossed), IB (blue, crisscrossed) and IC (black, solid-filled) are located within quadrants where both \(\theta\) and \(\mathcal{E}\) are of the same sign and indicate the regions of the parameter space where all the coefficients of (7) are positive.
The regions IIA (yellow, striped), IIB (green, striped) and IIC (black, solid-filled) are located within quadrants with \(\theta\) and \(\mathcal{E}\) of opposite signs and denote the convention where all coefficients of (7) are negative. Together, all of these regions (IA-C, IIA-C) provide the full causal parameter space given by (6). Furthermore, the signs of the coefficients of (6) indicate that the regions IC and IIC bounded by \(\mathcal{E}>4\eta,\mathcal{E}<0,-2\eta<\theta<0\) give \(-1<v_{g}^{2}<0\) for both roots and hence, fail to generate any propagating mode. The rest of the regions (IA-B, IIA-B) correspond to at least one \(0<v_{g}^{2}<1\) and hence at least two subluminal propagating modes. The regions IA and IIA cover the parameter space with the additional constraints \(\mathcal{E}<0,\mathcal{E}>4\eta,\theta>0,\theta<-2\eta\), which give us both \(v_{g}^{2}\) values between \(0\) and \(1\) and hence, four subluminal propagating modes. The remaining two regions, IB and IIB, belong to the parameter space constrained by \(0<\mathcal{E}<4\eta\), which corresponds to \(-1<v_{g}^{2}<0\) for one root and \(0<v_{g}^{2}<1\) for the other, indicating the presence of two subluminal propagating modes besides the existence of two non-propagating modes. Now comes a crucial identification; we observe that the causal parameter space in the first quadrant covered by the regions IA and IB together exactly agrees with the stable region at \(\mathbf{v}\to 1\) and hence, with the frame-invariantly stable parameter space as well. This can be readily checked by realizing that the Schur condition from (7), \(-\frac{\eta\theta}{3}-\mathcal{E}\eta+\frac{\mathcal{E}\theta}{3}>0\) is exactly identical to the stability constraint (1) at \(\mathbf{v}\to 1\). 
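The mode counting used to label these regions can be reproduced directly from the roots of (6); the sketch below (illustrative parameter values, not from the paper) recovers the four-mode content of region IA and the two-mode content of region IB:

```python
# Sketch (illustrative parameter values): count subluminal propagating
# modes from the roots of the quadratic (6) in z = v_g^2. A real root with
# 0 < z < 1 yields two subluminal propagating modes; a root with
# -1 < z < 0 yields non-propagating modes instead.
import math

def vg2_roots(E, th, eta):
    a2, a1, a0 = E*th, -(2.0/3.0)*E*(th + 2*eta), (1.0/9.0)*th*(E - 4*eta)
    disc = a1*a1 - 4.0*a2*a0
    if disc < 0:
        return None                          # complex v_g^2: excluded
    r = math.sqrt(disc)
    return (-a1 + r)/(2*a2), (-a1 - r)/(2*a2)

def subluminal_propagating_modes(E, th, eta):
    roots = vg2_roots(E, th, eta)
    if roots is None:
        return 0
    return 2*sum(1 for z in roots if 0 < z < 1)

# E > 4*eta with E, th > 0: both roots in (0,1) -> four modes (region IA)
assert subluminal_propagating_modes(10.0, 10.0, 1.0) == 4
# 0 < E < 4*eta: one root in (0,1), one in (-1,0) -> two modes (region IB)
assert subluminal_propagating_modes(2.0, 10.0, 1.0) == 2
```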
The other two Schur conditions, \(\theta(\eta+2\mathcal{E})>0\) and \(\frac{4\mathcal{E}\theta}{3}+\mathcal{E}\eta-\frac{\eta\theta}{3}>0\), along with a positive discriminant of (6), further restrict the region for propagating modes exclusively to within the \(\theta>0,\mathcal{E}>0\) quadrant, which exactly resembles the role played by (2) with \(\mathbf{v}\to 1\) in defining the stable parameter space. So, the entire causal parameter space obtained from the asymptotic equation (6) (by Schur convention I, all coefficients \(>0\)) is fully identified by the stable region at ultra-high boost depicted in Fig. 1. In this context, we refer to the results obtained in [22], where the large wave-number causality constraint is given solely by region IA with four subluminal propagating modes. The analysis there misses region IB, where two subluminal propagating modes are present along with two non-propagating modes. We duly point out that this missing region is stable in every reference frame (Fig. 1), which guarantees that it respects causality, since covariant stability is possible only for causal systems [13; 14]. So, we conclude that, because of the complexity involved, it is indeed difficult to analytically extract the full causal parameter space from the large-\(k\) dispersion polynomial. However, the method of stability analysis at \(\mathbf{v}\to 1\) presented in this work is much more effective in pointing out the full stable and causal parameter space unambiguously. We finally point out that for regions IIA and IIB, where \(\theta\) and \(\mathcal{E}\) are of opposite signs, the system is unstable in all reference frames. As mentioned in the stability arguments of [13], there can be other regions of the parameter space, like IIA and IIB, where causality holds but the system is invariantly unstable in all reference frames.
The stability criteria at ultra-high boost strictly give us the parameter space where the theory is causal as well as stable in all reference frames. _Conclusion -_ We have shown here, for the first time, for two well-known stable-causal hydrodynamic theories, viz. MIS and BDNK, an alternate way to derive the region of parameter space over which the theories are frame-invariantly stable at leading order in \(k\) and necessarily causal. Despite inherent differences in their construction, our analysis reveals that linearized stability analysis at ultra-high boost accurately leads us to the results of the asymptotic causality conditions under which both theories are frame-invariantly stable, without going to the large-\(k\) limit. Since the whole analysis is performed at the low-\(k\) limit, this approach liberates us from going to a non-perturbative high-\(k\) regime that seems outside the domain of validity of a low-energy effective theory like relativistic hydrodynamics. Moreover, in the presence of technical non-trivialities in solving the asymptotic causality equations, our method of stability check at \(\mathbf{v}\to 1\) is more effective and simpler in detecting the causal parameter space.

Figure 2: The subluminal parameter space for the BDNK sound channel from Schur stability at \(\eta/T^{3}=0.3\).

The stability analysis done here is at the spatially homogeneous limit of the theory (\(k\to 0\)). The analysis with larger values of \(k\) is under progress. The causality criteria considered here are asymptotic causality criteria, which are necessary but not sufficient conditions [26]. A more rigorous study of causality requires a study of characteristics [27; 28], which will be explored in our future endeavors. _Conventions and notations:-_ Throughout the manuscript, we have used natural units (\(\hbar=c=k_{B}=1\)) and flat space-time with mostly positive metric signature \(\eta^{\mu\nu}=\text{diag}\,(-1,1,1,1)\).
The notations used read: \(D\equiv u^{\mu}\partial_{\mu}\), \(\nabla^{\mu}=\Delta^{\mu\nu}\partial_{\nu}\), \(\sigma^{\mu\nu}=\Delta^{\mu\nu\alpha\beta}\partial_{\alpha}u_{\beta}\) with \(\Delta^{\mu\nu\alpha\beta}=\frac{1}{2}\Delta^{\mu\alpha}\Delta^{\nu\beta}+\frac{1}{2}\Delta^{\mu\beta}\Delta^{\nu\alpha}-\frac{1}{3}\Delta^{\mu\nu}\Delta^{\alpha\beta}\) and \(\Delta^{\mu\nu}=\eta^{\mu\nu}+u^{\mu}u^{\nu}\); \(\epsilon\equiv\text{energy density}\), \(P\equiv\text{pressure}\), \(u^{\mu}\equiv\text{hydrodynamic four-velocity}\), \(\tau_{\pi}\equiv\text{relaxation time}\) of the shear-viscous flow, \(\eta\equiv\text{shear viscous coefficient}\). From the constraints of the second law of thermodynamics, \(\eta\) must always be positive [24]. The scaling notation \(\tilde{x}\) denotes \(x/(\epsilon_{0}+P_{0})\). _Acknowledgements.-_ We duly acknowledge Sayantani Bhattacharyya and Victor Roy for useful discussions, valuable inputs and critical reading of the manuscript. We also acknowledge Anirban Dinda for valuable discussions. S.R. would like to acknowledge Archisman Bhattacharjee and Najmul Haque for their technical inputs. The authors acknowledge financial support from the Department of Atomic Energy, India.
2307.10331
Epilegomena to the study of semiclassical orthogonal polynomials
In his monograph [Classical and quantum orthogonal polynomials in one variable, Cambridge University Press, 2005 (paperback edition 2009)], Ismail conjectured that certain structure relations involving the Askey-Wilson operator characterize proper subsets of the set of all $\mathcal{D}_q$-classical orthogonal polynomials, here to be understood as the Askey-Wilson polynomials and their limit cases. In this paper we give two characterization theorems for $\mathcal{D}_q$-semiclassical (and classical) orthogonal polynomials in consonance with the pioneering works by Maroni [Ann. Mat. Pura. Appl. (1987)] and Bonan, Lubinsky, and Nevai [SIAM J. Math. Anal. 18 (1987)] for the standard derivative, re-establishing in this context the perfect "symmetry" between the standard derivative and the Askey-Wilson operator. As an application, we present a sequence of $\mathcal{D}_q$-semiclassical orthogonal polynomials of class two that disproves Ismail's conjectures. Further results are presented for Hahn's operator.
K. Castillo, D. Mbouna
2023-07-19T14:02:31Z
http://arxiv.org/abs/2307.10331v2
# Epilegomena to the study of semiclassical orthogonal polynomials ###### Abstract. In his monograph [Classical and quantum orthogonal polynomials in one variable, Cambridge University Press, 2005 (paperback edition 2009)], Ismail conjectured that certain structure relations involving the Askey-Wilson operator characterize proper subsets of the set of all \(\mathcal{D}_{q}\)-classical orthogonal polynomials, here to be understood as the Askey-Wilson polynomials and their limit cases. In this paper we give two characterization theorems for \(\mathcal{D}_{q}\)-semiclassical (and classical) orthogonal polynomials in consonance with the pioneering works by Maroni [Ann. Mat. Pura. Appl. (1987)] and Bonan, Lubinsky, and Nevai [SIAM J. Math. Anal. 18 (1987)] for the standard derivative, re-establishing in this context the perfect "symmetry" between the standard derivative and the Askey-Wilson operator. As an application, we present a sequence of \(\mathcal{D}_{q}\)-semiclassical orthogonal polynomials of class two that disproves Ismail's conjectures. Further results are presented for Hahn's operator. 2010 Mathematics Subject Classification: 33D45 ## 1. Introduction The term semiclassical for a special class of orthogonal polynomials was coined in 1984 by Hendriksen and van Rossum (see [13]) during the Laguerre Symposium held at Bar-le-Duc, whose main speaker was Dieudonne. However, in the first line of his monumental work entitled _"Une theorie algebrique des polynomes orthogonaux. Applications aux polynomes orthogonaux semi-classiques"_ (see [25]), Maroni referred to Shohat1 (see [28]) as _"l'inventeur des polynomes semi-classiques"_ (the inventor of semiclassical polynomials). It was at the hands of Maroni that semiclassical orthogonal polynomials have become such a highly developed topic, although, as he himself points out, these sequences of orthogonal polynomials (OP) have always been present in certain structure relations which are as old as orthogonal polynomials themselves.
Let us recall one of the best known problems in this regard. According to Al-Salam and Chihara (see [2, p. 69]), Askey raised the question of characterizing OP, \((P_{n})_{n\geq 0}\), satisfying \[\phi\,P_{n}^{\prime}=\sum_{j=-M}^{N}a_{n,j}P_{n+j}\qquad(a_{n,j}\in\mathbb{C};\,M,N\in\mathbb{N}), \tag{1}\] \(\phi\) being a polynomial which does not depend on \(n\). Footnote 1: On his recent visit to Portugal, C. Brezinski showed us the only known photo of J. Shohat, which will appear in his book on the history of orthogonal polynomials written in collaboration with M. Redivo-Zaglia. In [2] it was proved that the only OP that satisfy (1) for \(M=N=1\) are the old classical polynomials, i.e., Jacobi, Bessel, Hermite, and Laguerre polynomials. However, for arbitrary \(M\) and \(N\) the answer leads beyond the classical families; for instance, there exist orthogonal polynomials \(P_{n}^{\alpha,\beta,M_{0},M_{1}}\) satisfying \[\frac{\mathrm{d}P_{n}^{\alpha,\beta,M_{0},M_{1}}}{\mathrm{d}x}(x)=\sum_{j=-3}^{3}c_{n,j}P_{n+j}^{\alpha,\beta,M_{0},M_{1}}(x)\qquad(c_{n,j}\in\mathbb{C}). \tag{2}\] In what follows, the \(\mathcal{D}_{q}\)-classical orthogonal polynomials are to be understood as the Askey-Wilson polynomials and their limit cases. Recall that the Askey-Wilson polynomials (see [17, Section 14.1]) \[p_{n}(x;a,b,c,d\,|\,q)=a^{-n}\,(ab,ac,ad;q)_{n}\,{}_{4}\phi_{3}\left(\left.\begin{matrix}q^{-n},&abcdq^{n-1},&ae^{i\theta},&ae^{-i\theta}\\ &ab,&ac,&ad\end{matrix}\right|\,q,\ q\right),\] where \(x=\cos\theta\), are the \(q\)-analogues of the Wilson polynomials.
(If we take \(a=q^{\alpha/2+1/4}\), \(b=q^{\alpha/2+3/4}\), \(c=-a\), and \(d=-b\), we get the continuous \(q\)-Jacobi polynomials. If we take \(c=d=0\), we get the Al-Salam-Chihara polynomials.) In this sense, there are in the literature two well-known conjectures posed by Ismail (see [14, Conjecture 24.7.8] and [14, Conjecture 24.7.9]). Conjecture 1.1.: _Let \((P_{n})_{n\geq 0}\) be a sequence of orthogonal polynomials and let \(\phi\) be a polynomial which does not depend on \(n\). If_ \[\phi\,\mathcal{D}_{q}\,P_{n}=\sum_{j=-1}^{1}\,a_{n,j}P_{n+j}\qquad(a_{n,j}\in\mathbb{C}), \tag{3}\] _then \(P_{n}\) is a multiple of the continuous \(q\)-Jacobi polynomials or Al-Salam-Chihara polynomials, or special or limiting cases of them. The same conclusion holds if_ \[\phi\,\mathcal{D}_{q}\,P_{n}=\sum_{j=-M}^{N}a_{n,j}P_{n+j}\qquad(a_{n,j}\in\mathbb{C};\,M,N\in\mathbb{N}). \tag{4}\] Conjecture 1.2.: _Let \((P_{n})_{n\geq 0}\) be a sequence of orthogonal polynomials and \(\phi\) be a polynomial of degree at most \(4\). Then \((P_{n})_{n\geq 0}\) satisfies_ \[\phi\,\mathcal{D}_{q}^{2}P_{n}=\sum_{j=-M}^{N}a_{n,j}P_{n+j}\qquad(a_{n,j}\in\mathbb{C};\,M,N\in\mathbb{N}), \tag{5}\] _if and only if \(P_{n}\) is a multiple of \(p_{n}(x;a,b,c,d\,|\,q)\) for some parameters \(a,b,c,d\)._ In [1], Al-Salam proved Conjecture 1.1 for \(\phi=1\) by characterizing the continuous \(q\)-Hermite polynomials (see [17, Section 14.26]). In [8], we prove that the Al-Salam-Chihara polynomials (see [17, Section 14.8]), with nonzero parameters \(a\) and \(b\) such that \(a/b=q^{\pm 1/2}\), are the only OP satisfying (3) for \(\deg\phi=1\). We also prove that the Chebyshev polynomials of the first kind and the continuous \(q\)-Jacobi polynomials (see [17, Section 14.10]) are the only ones satisfying (3) for \(\deg\phi=2\).
Moreover, in [6, Proposition 2.1] we prove that the continuous dual \(q\)-Hahn polynomials (see [17, Section 14.3]), with parameters \(a=1\), \(b=-1\), \(c=q^{1/4}\), and \(q\) replaced by \(q^{1/2}\), satisfy (4) with \(M=2\) and \(N=1\), which disproves the second part of Conjecture 1.1. On the other hand, Conjecture 1.2 is claimed to be positively solved in [16], but the authors only proved, partially, the case \(M=2\) and \(N=2\). Recently we noted that the above conjectures are related to the theory of \(\mathcal{D}_{q}\)-semiclassical orthogonal polynomials. Indeed, in Section 3, we characterize \(\mathcal{D}_{q}\)-semiclassical (and classical) orthogonal polynomials from the structure relation (4), in consonance with the works by Maroni [21] and Bonan, Lubinsky, and Nevai [5] for the standard derivative, re-establishing in this context the perfect "symmetry" between the standard derivative and the Askey-Wilson operator. As an application of these results, in Section 4, we present an example of \(\mathcal{D}_{q}\)-semiclassical orthogonal polynomials that disproves Conjecture 1.1. In Section 4 we also show that the OP that disproves Conjecture 1.1 also disproves Conjecture 1.2. Finally, in Section 5, we explore our ideas when, instead of the Askey-Wilson operator, we consider the Hahn operator, which is related to the structure relation of another conjecture posed by Ismail (see [14, Conjecture 24.7.7]). But first some preliminary definitions and basic results are needed. ## 2. Preliminary results Let \(\mathcal{P}^{*}\) be the set of all linear forms on \(\mathcal{P}\) and let \(\mathcal{P}_{n}\) be the subspace of \(\mathcal{P}\) of all polynomials with degree less than or equal to \(n\). Set \(\mathcal{P}_{-1}=\{0\}\). A free system in \(\mathcal{P}\) is a sequence \((Q_{n})_{n\geq 0}\) such that \(Q_{n}\in\mathcal{P}_{n}\setminus\mathcal{P}_{n-1}\) for each \(n\).
A free system \((P_{n})_{n\geq 0}\) is called OP with respect to \(\mathbf{u}\in\mathcal{P}^{*}\) if \[\langle\mathbf{u},P_{n}P_{m}\rangle=h_{n}\delta_{n,m}\quad(m=0,1,\ldots;\,h_{n}\in\mathbb{C}\setminus\{0\}),\] \(\langle\mathbf{u},f\rangle\) being the action of \(\mathbf{u}\) on \(f\). \(\mathbf{u}\) is called regular if there exists an OP with respect to it. Recall that a (monic) OP, \((P_{n})_{n\geq 0}\), satisfies the following recurrence relation: \[xP_{n}(x)=P_{n+1}(x)+B_{n}P_{n}(x)+C_{n}P_{n-1}(x)\qquad(B_{n}\in\mathbb{C},\,C_{n+1}\in\mathbb{C}\setminus\{0\}), \tag{6}\] with initial conditions \(P_{-1}=0\) and \(P_{0}=1\). Hence it follows that \[B_{n}=\frac{\langle\mathbf{u},xP_{n}^{2}(x)\rangle}{\langle\mathbf{u},P_{n}^{2}(x)\rangle},\qquad C_{n}=\frac{\langle\mathbf{u},P_{n}^{2}\rangle}{\langle\mathbf{u},P_{n-1}^{2}\rangle}.\] (Of course, there is no loss of generality in assuming \(C_{0}=0\).) Since the elements of \(\mathcal{P}^{*}\) are completely determined by their action on a system of generators of \(\mathcal{P}\), we say that \(\mathbf{u}=\mathbf{v}\) if and only if \[\mathbf{u}_{n}=\langle\mathbf{u},x^{n}\rangle=\langle\mathbf{v},x^{n}\rangle\] for all \(n\in\mathbb{N}\). In the set \(\mathcal{P}^{*}\), addition and multiplication by scalars can be defined by \[\langle\mathbf{u}+\mathbf{v},x^{n}\rangle=\langle\mathbf{u},x^{n}\rangle+\langle\mathbf{v},x^{n}\rangle\,,\] \[\langle c\mathbf{u},x^{n}\rangle=c\,\langle\mathbf{u},x^{n}\rangle\qquad(c\in\mathbb{C}),\] for all \(n\in\mathbb{N}\). \(\mathcal{P}^{*}\), endowed with these operations, is a vector space over \(\mathbb{C}\). In \(\mathcal{P}^{*}\), the additive identity is denoted by \(\mathbf{0}\) and called the zero. The zero is therefore defined by the relation \(\langle\mathbf{0},x^{n}\rangle=0\) for all \(n\in\mathbb{N}\). Note that \(f\,\mathbf{u}=g\,\mathbf{u}=\mathbf{0}\)\((f,g\in\mathcal{P})\) if and only if \(\mathbf{u}=\mathbf{0}\).
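The recurrence (6) above is also convenient computationally; the following sketch (illustrative, not from the paper) iterates it with exact rational arithmetic for the choice \(B_{n}=0\), \(C_{1}=1/2\), \(C_{n}=1/4\) (\(n\geq 2\)), which is known to generate the monic Chebyshev polynomials of the first kind:

```python
# Sketch (illustrative, not from the paper): iterate the three-term
# recurrence (6), x P_k = P_{k+1} + B_k P_k + C_k P_{k-1}, with exact
# rationals. Polynomials are stored as coefficient lists, lowest degree
# first. B_n = 0, C_1 = 1/2, C_n = 1/4 gives monic Chebyshev (first kind).
from fractions import Fraction as F

def monic_op(n, B, C):
    """Coefficient list of the monic OP P_n generated by recurrence (6)."""
    p_prev, p = [], [F(1)]                    # P_{-1} = 0, P_0 = 1
    for k in range(n):
        xp = [F(0)] + p                       # multiply P_k by x
        pad_p = p + [F(0)]*(len(xp) - len(p))
        pad_q = p_prev + [F(0)]*(len(xp) - len(p_prev))
        p_prev, p = p, [a - F(B(k))*b - F(C(k))*c
                        for a, b, c in zip(xp, pad_p, pad_q)]
    return p

B = lambda k: 0
C = lambda k: F(1, 2) if k == 1 else F(1, 4)

# monic Chebyshev of the first kind: T2 -> x^2 - 1/2, T3 -> x^3 - (3/4) x
assert monic_op(2, B, C) == [F(-1, 2), F(0), F(1)]
assert monic_op(3, B, C) == [F(0), F(-3, 4), F(0), F(1)]
```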
The left multiplication of \(\mathbf{u}\) by \(f\in\mathcal{P}\), denoted by \(f\mathbf{u}:\mathcal{P}\to\mathbb{C}\), is the form defined by \[\langle f\mathbf{u},x^{n}\rangle=\langle\mathbf{u},fx^{n}\rangle,\] for all \(n\in\mathbb{N}\). The division of \(\mathbf{u}\) by a polynomial, denoted by \((x-c)^{-1}\mathbf{u}:\mathcal{P}\to\mathbb{C}\), is the form defined by \[\left\langle(x-c)^{-1}\mathbf{u},f\right\rangle=\left\langle\mathbf{u},\frac{f(x)-f(c)}{x-c}\right\rangle\qquad(c\in\mathbb{C};\,f\in\mathcal{P}).\] Define also \(\delta_{c}:\mathcal{P}\to\mathbb{C}\) by \(\delta_{c}f(x)=f(c)\). We check at once that \[(x-c)((x-c)^{-1}\mathbf{u})=\mathbf{u},\qquad(x-c)^{-1}((x-c)\mathbf{u})=\mathbf{u}-\mathbf{u}_{0}\,\delta_{c}.\] \(\mathcal{P}\) may be endowed with an appropriate strict inductive limit topology such that the algebraic and the topological dual spaces of \(\mathcal{P}\) coincide (see [29, Chapter 13]), that is, \[\mathcal{P}^{*}=\mathcal{P}^{\prime}. \tag{7}\] Given a free system \((Q_{n})_{n\geq 0}\), the corresponding dual basis is a sequence of linear forms \(\mathbf{a}_{n}:\mathcal{P}\to\mathbb{C}\) such that \[\langle\mathbf{a}_{n},Q_{m}\rangle=\delta_{n,m},\] and so, for an OP \((P_{n})_{n\geq 0}\), \(\mathbf{a}_{n}\) is explicitly given by \[\mathbf{a}_{n}=\frac{P_{n}}{\langle\mathbf{u},P_{n}^{2}\rangle}\mathbf{u}.\] In the sense of the weak dual topology, it is easy to explicitly build expansions in the dual space. Indeed, \[\mathbf{u}=\sum_{j=0}^{\infty}\left\langle\mathbf{u},Q_{j}\right\rangle\mathbf{a}_{j}\qquad(\mathbf{u}\in\mathcal{P}^{\prime});\] this will be essential in the sequel. For more details we refer the reader to [25] (see also [10]). The Askey-Wilson average operator \(\mathcal{S}_{q}:\mathcal{P}\to\mathcal{P}\) is defined by (see [14, p. 301]) \[\mathcal{S}_{q}f(x(s))=\frac{f\big{(}x(s+1/2)\big{)}+f\big{(}x(s-1/2)\big{)}}{2}\] for every polynomial \(f\).
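The action of \(\mathcal{S}_{q}\) (and of the Askey-Wilson operator \(\mathcal{D}_{q}\), assumed here to be the standard divided-difference operator on the lattice \(x(s)=(q^{s}+q^{-s})/2\)) can be spot-checked numerically. For \(f(x)=x^{2}\) one finds exactly \(\mathcal{D}_{q}x^{2}=\gamma_{2}\,x\) and \(\mathcal{S}_{q}x^{2}=\alpha_{2}x^{2}+(1-\alpha_{2})/2\), with \(\alpha_{n},\gamma_{n}\) as in (8) below; a sketch (illustrative values of \(q\) and \(s\)):

```python
# Sketch (assumptions flagged in the text above): numerically verify the
# leading-coefficient identities for the Askey-Wilson operators on the
# lattice x(s) = (q^s + q^{-s})/2, testing f(x) = x^2 at a sample point.
import math

q = 0.36
sq = math.sqrt(q)

def xval(u):                                  # x evaluated at q^s = u
    return (u + 1.0/u)/2.0

def Dq(f, t):                                 # divided-difference operator
    xp, xm = xval(t*sq), xval(t/sq)           # x(s + 1/2), x(s - 1/2)
    return (f(xp) - f(xm))/(xp - xm)

def Sq(f, t):                                 # averaging operator
    return (f(xval(t*sq)) + f(xval(t/sq)))/2.0

t = 2.0                                       # t = q^s for an arbitrary s
x = xval(t)
alpha2 = (q + 1.0/q)/2.0                      # alpha_n at n = 2, cf. (8)
gamma2 = (q - 1.0/q)/(sq - 1.0/sq)            # gamma_n at n = 2, cf. (8)

assert abs(Dq(lambda y: y*y, t) - gamma2*x) < 1e-12
assert abs(Sq(lambda y: y*y, t) - (alpha2*x*x + (1 - alpha2)/2.0)) < 1e-12
```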
It is easy to see that \(\mathcal{D}_{q}\,x^{n}=\gamma_{n}x^{n-1}+(\)lower degree terms) and \(\mathcal{S}_{q}\,x^{n}=\alpha_{n}x^{n}+(\)lower degree terms) for all \(n\in\mathbb{N}\), where we have set \[\alpha_{n}=\frac{q^{n/2}+q^{-n/2}}{2},\qquad\gamma_{n}=\frac{q^{n/2}-q^{-n/2}} {q^{1/2}-q^{-1/2}}. \tag{8}\] Set \(\gamma_{-1}=-1\) and \(\alpha_{-1}=\alpha\). For every \(\mathbf{u}\in\mathcal{P}^{*}\) and \(f\in\mathcal{P}\), \(\mathbf{D}_{q}:\mathcal{P}^{*}\to\mathcal{P}^{*}\) and \(\mathbf{S}_{q}:\mathcal{P}^{*}\to\mathcal{P}^{*}\) are defined by transposition: \[\langle\mathbf{D}_{q}\mathbf{u},f\rangle=-\langle\mathbf{u},\mathcal{D}_{q}f \rangle,\qquad\langle\mathbf{S}_{q}\mathbf{u},f\rangle=\langle\mathbf{u}, \mathcal{S}_{q}f\rangle.\] The next definition extends the definition of classical linear forms given by Geronimus [12] and Maroni (see [26, Proposition 2.1]). Definition 2.1.: [9, Definition 3.1]_\(\mathbf{u}\in\mathcal{P}^{*}\) _is called \(\mathbf{D}_{q}\)-classical if it is regular and there exist \(\phi\in\mathcal{P}_{2}\setminus\mathcal{P}_{-1}\) and \(\psi\in\mathcal{P}_{1}\setminus\mathcal{P}_{-1}\) such that_ \[\mathbf{D}_{q}(\phi\mathbf{u})=\mathbf{S}_{q}(\psi\mathbf{u}). \tag{9}\] \((\)_We will call it simply classical when no confusion can arise.\()\)_ Observe that (9) condenses all the information of a sequence of classical orthogonal polynomials in the first three non-constant polynomials of said sequence. The next theorem gives tractable necessary and sufficient conditions for the existence of solutions of (9), characterizing the linear form \(\mathbf{u}\) and, in particular, solving the question of the existence of classical OP. Theorem 2.1.: [9, Theorem 4.1] _Suppose that \(\mathbf{u}\in\mathcal{P}^{*}\) satisfies (9) with \(\phi(x)=ax^{2}+bx+c\) and \(\psi(x)=dx+e\). 
Then \(\mathbf{u}\) is regular if and only if_ \[d_{n}\neq 0,\qquad\phi^{[n]}\left(-\frac{e_{n}}{d_{2n}}\right)\neq 0,\] _for all \(n\in\mathbb{N}\), where \(d_{n}=a\gamma_{n}+d\alpha_{n}\), \(e_{n}=b\gamma_{n}+e\alpha_{n}\), \(\alpha_{n}\) and \(\gamma_{n}\) being defined by (8), and_ \[\phi^{[n]}(x)=\big{(}d(\alpha^{2}-1)\gamma_{2n}+a\alpha_{2n}\big{)}\big{(}x^{2} -1/2\big{)}+\big{(}b\alpha_{n}+e(\alpha^{2}-1)\gamma_{n}\big{)}x+c+a/2.\] OP with respect to (\(\mathbf{D}_{q}\)-)classical linear forms are called (\(\mathcal{D}_{q}\)-)classical polynomials. Unlike when dealing with the standard derivative, in the case of Definition 2.1 it is still an open problem to describe the solutions of (9) (see [10, Theorem 3.2]). Theorem 2.2.: _[_14_, Theorem 20.1.3]_ _The equation_ \[f(x)\mathcal{D}_{q}^{2}\,y+g(x)\mathcal{S}_{q}\mathcal{D}_{q}\,y+h(x)\,y= \lambda_{n}\,y \tag{10}\] _has a polynomial solution \(P_{n}\in\mathcal{P}_{n}\setminus\mathcal{P}_{n-1}\) if and only if \(P_{n}\) is a multiple of \(p_{n}(x;a,b,c,d\,|\,q)\) for some parameters \(a,b,c,d\) including limiting cases as one or more of the parameters tends to \(\infty\). In all these cases \(f\), \(g\), \(h\), and \(\lambda_{n}\) reduce to_ \[f(x) =-q^{-1/2}(2(1+\sigma_{4})x^{2}-(\sigma_{1}+\sigma_{3})x-1+\sigma _{2}-\sigma_{4}),\] \[g(x) =\frac{2}{1-q}(2(\sigma_{4}-1)x+\sigma_{1}-\sigma_{3}),\qquad h(x )=0,\] \[\lambda_{n} =\frac{4q(1-q^{-n})(1-\sigma_{4}q^{n-1})}{(1-q)^{2}},\] _or a special or limiting case of it, \(\sigma_{j}\) being the jth elementary symmetric function of the Askey-Wilson parameters._ Let us recall some useful operations. Lemma 2.1.: _[_9_, Lemma 2.1]_ _Let \(f,g\in\mathcal{P}\) and \(\mathbf{u}\in\mathcal{P}^{*}\). 
Then the following hold:_ \[\mathcal{D}_{q}\big{(}fg\big{)}=\big{(}\mathcal{D}_{q}f\big{)}\big{(}\mathcal{S}_{q}g\big{)}+\big{(}\mathcal{S}_{q}f\big{)}\big{(}\mathcal{D}_{q}g\big{)}, \tag{11}\] \[\mathcal{S}_{q}(fg)=(\mathcal{D}_{q}f)(\mathcal{D}_{q}g)\,\mathcal{U}_{2}+\big{(}\mathcal{S}_{q}f\big{)}\big{(}\mathcal{S}_{q}g\big{)}, \tag{12}\] \[f\mathcal{D}_{q}g=\mathcal{D}_{q}\,\big{(}(\mathcal{S}_{q}f-\alpha^{-1}\,\mathcal{U}_{1}\mathcal{D}_{q}f)g\big{)}-\alpha^{-1}\mathcal{S}_{q}(g\mathcal{D}_{q}f), \tag{13}\] \[\alpha\mathbf{D}_{q}(f\mathbf{u})=(\alpha\mathcal{S}_{q}f-\,\mathcal{U}_{1}\mathcal{D}_{q}f)\,\mathbf{D}_{q}\mathbf{u}+\mathcal{D}_{q}f\,\mathbf{S}_{q}\mathbf{u}, \tag{14}\] \[\alpha\mathbf{S}_{q}(f\mathbf{u})=(\alpha^{2}\,\mathcal{U}_{2}-\,\mathcal{U}_{1}^{2})\mathcal{D}_{q}f\;\mathbf{D}_{q}\mathbf{u}+(\alpha\mathcal{S}_{q}f+\,\mathcal{U}_{1}\mathcal{D}_{q}f)\mathbf{S}_{q}\mathbf{u}, \tag{15}\] \[f\mathbf{D}_{q}\mathbf{u}=\mathbf{D}_{q}\,(\mathcal{S}_{q}f\;\mathbf{u})-\mathbf{S}_{q}\,(\mathcal{D}_{q}f\;\mathbf{u})\,, \tag{16}\] \[f\mathbf{S}_{q}\mathbf{u}=\mathbf{S}_{q}\,(\mathcal{S}_{q}f\;\mathbf{u})-\mathbf{D}_{q}\,(\mathcal{U}_{2}\mathcal{D}_{q}f\;\mathbf{u})\,, \tag{17}\] \[\alpha\mathbf{D}_{q}^{n}\mathbf{S}_{q}\mathbf{u}=\alpha_{n+1}\mathbf{S}_{q}\mathbf{D}_{q}^{n}\mathbf{u}+\gamma_{n}\,\mathcal{U}_{1}\mathbf{D}_{q}^{n+1}\mathbf{u}\qquad(n\in\mathbb{N}), \tag{18}\] _where \(\mathcal{U}_{1}(x)=(\alpha^{2}-1)x\) and \(\mathcal{U}_{2}(x)=(\alpha^{2}-1)(x^{2}-1)\)._ The following result clarifies which are the \(\mathcal{D}_{q}\)-classical orthogonal polynomials.
Proposition 2.1.: _The \(\mathcal{D}_{q}\)-classical sequences of orthogonal polynomials are the sequences of Askey-Wilson polynomials \((p_{n}(x;a,b,c,d\,|\,q))_{n=0}^{\infty}\) for some parameters \(a,b,c,d\) including limiting cases as one or more of the parameters tends to \(\infty\)._ Proof.: This follows from Theorem 2.2, after showing the equivalence between (9) and (10) with \(h=0\) (see [11, Theorem 5]). (The interested reader can also prove this easily following the proof of [26, Proposition 2.8].) From Definition 2.1 we introduce \(\mathbf{D}_{q}\)-semiclassical linear forms in a natural way (see, for instance, [26, Section 3]). Definition 2.2.: _We call a linear form, in \(\mathcal{P}^{*}\), \(\mathbf{D}_{q}\)-semiclassical if it is regular, not \(\mathbf{D}_{q}\)-classical, and there exist two polynomials \(\phi\) and \(\psi\) with at least one of them nonzero, such that (9) holds._ (_We will call it simply semiclassical when no confusion can arise._) Under the conditions of Definition 2.2, necessarily both \(\phi\) and \(\psi\) are nonzero and \(\deg\psi\geq 1\). The class of \(\mathbf{u}\) is the positive integer \[s=\min_{(\phi,\psi)\in\mathcal{P}_{\mathbf{u}}}\max\{\deg\phi-2,\deg\psi-1\},\] where \(\mathcal{P}_{\mathbf{u}}\) is the set of all pairs \((\phi,\psi)\) of nonzero polynomials such that (9) holds. (Note that when \(s=0\) we have the classical linear forms.) The pair \((\phi,\psi)\in\mathcal{P}_{\mathbf{u}}\) at which the class of \(\mathbf{u}\) is attained is unique up to a constant factor. OP with respect to a (\(\mathbf{D}_{q}\)-)semiclassical form of class \(s\) are called (\(\mathcal{D}_{q}\)-)semiclassical orthogonal polynomials of class \(s\). We end this section with the following definition.
Definition 2.3.: _We call a pair of polynomials \((\phi,\psi)\), \(\phi(x)=a_{p}\,x^{p}+(\)lower degree terms\()\) and \(\psi(x)=b_{q}\,x^{q}+(\)lower degree terms\()\)\((p\in\mathbb{N},q\in\mathbb{N}\setminus\{0\})\), admissible if \(p-1\neq q\) or \(a_{p}\,\gamma_{n}+b_{q}\,\alpha_{n-1}\neq 0\) whenever \(p-1=q\)._ Remark 2.1.: _We emphasize that from the point of view of [9], in this paper we are working with the particular lattice \(x(s)=(q^{-s}+q^{s})/2\). If we consider the lattice \(x(s)=\mathfrak{c}_{6}\), in the notation of [9], we have \(\alpha_{n-1}=1\) and \(\gamma_{n}=n\), and so Definition 2.3 reduces to the admissibility condition for the standard derivative_ (_see [25, p. 119] and [27, p. 46]_)_._ ## 3. Characterization theorems The next theorem characterizes \(\mathcal{D}_{q}\)-classical and \(\mathcal{D}_{q}\)-semiclassical orthogonal polynomials from the structure relation (4). Theorem 3.1.: _Let \(\mathbf{u}\in\mathcal{P}^{\prime}\) be regular and let \((P_{n})_{n\geq 0}\) denote the corresponding sequence of orthogonal polynomials. The following conditions are equivalent:_ * _There exist three nonzero polynomials,_ \(\phi\)_,_ \(\psi\) _and_ \(\rho\)_,_ \((\psi,\rho)\) _being an admissible pair, such that_ \[\mathbf{D}_{q}(\phi\mathbf{u})=\psi\mathbf{u},\qquad\mathbf{S}_{q}(\phi\mathbf{u})=\rho\mathbf{u}.\] * _There exist_ \(s\in\mathbb{N}\)_, complex numbers_ \((a_{n,j})_{j=0}^{n}\)_, and a polynomial_ \(\phi\) _such that_ \[\phi\mathcal{D}_{q}P_{n}=\sum_{j=n-s}^{n+\deg\phi-1}a_{n,j}P_{j}, \tag{19}\] _with_ \(a_{n,n-s}\neq 0\) _for all_ \(n\geq s\)_._ Proof.: \(i)\implies ii)\): Write \(\rho(x)=a_{r}\,x^{r}+\left(\)lower degree terms\(\right)\) and \(\psi(x)=b_{s}\,x^{s}+\left(\)lower degree terms\(\right)\) (\(r\in\mathbb{N}\setminus\{0\},s\in\mathbb{N}\)). Set \(d_{n}=a_{r}\alpha_{n-1}+b_{s}\gamma_{n}\).
Clearly, \[\phi\mathcal{D}_{q}P_{n}=\sum_{j=0}^{n+\deg\phi-1}a_{n,j}P_{j},\] where \[a_{n,j}=\frac{\left\langle\mathbf{u},\phi P_{j}\mathcal{D}_{q}P_{n}\right\rangle }{\left\langle\mathbf{u},P_{j}^{2}\right\rangle}.\] From (13) we get \[\left\langle\mathbf{u},P_{j}^{2}\right\rangle a_{n,j} =\left\langle\mathbf{u},\phi P_{j}\mathcal{D}_{q}P_{n}\right\rangle =\left\langle\phi\mathbf{u},P_{j}\mathcal{D}_{q}P_{n}\right\rangle\] \[=-\left\langle\mathbf{u},\left(\psi\left(\mathcal{S}_{q}P_{j}- \alpha^{-1}\mathsf{U}_{1}\mathcal{D}_{q}P_{j}\right)+\alpha^{-1}\rho\mathcal{ D}_{q}P_{j}\right)P_{n}\right\rangle.\] There is no loss of generality in assuming \(r-1\leq s\). For \(r-1<s\), we have \[-\alpha\left\langle\mathbf{u},P_{j}^{2}\right\rangle a_{n,j}=\left\{\begin{array} []{ll}a_{r}\alpha_{n-s-1}\left\langle\mathbf{u},P_{n}^{2}\right\rangle,&j=n-s,\\ 0,&j<n-s.\end{array}\right.\] and for \(r-1=s\), we get \[-\alpha\left\langle\mathbf{u},P_{j}^{2}\right\rangle a_{n,j}=\left\{ \begin{array}{ll}d_{n-s}\left\langle\mathbf{u},P_{n}^{2}\right\rangle,&j=n-s,\\ 0,&j<n-s.\end{array}\right.\] Hence \(a_{n,n-s}\neq 0\), for \(n\geq s\) and \(ii)\) follows. \(ii)\implies i)\): Let \((\mathbf{a}_{n})_{n\geq 0}\) be the dual basis associated to \((P_{n})_{n\geq 0}\). 
Note that \(ii)\) yields \[\left\langle\mathbf{D}_{q}(\phi\mathbf{a}_{n}),P_{j}\right\rangle =-\left\langle\mathbf{a}_{n},\phi\mathcal{D}_{q}P_{j}\right\rangle =-\sum_{l=j-s}^{j+\deg\phi-1}a_{j,l}\left\langle\mathbf{a}_{n},P_{l}\right\rangle\] \[=\left\{\begin{array}{ll}-a_{j,n},&n-\deg\phi+1\leq j\leq n+s, \\ 0,&\text{otherwise.}\end{array}\right.\] Writing \[\mathbf{D}_{q}(\phi\mathbf{a}_{n})=\sum_{j=0}^{\infty}\left\langle\mathbf{D}_{ q}(\phi\mathbf{a}_{n}),P_{j}\right\rangle\mathbf{a}_{j},\] in the sense of the weak dual topology in \(\mathcal{P}^{\prime}\), and taking into account that \(\left\langle\mathbf{u},P_{n}^{2}\right\rangle\mathbf{a}_{n}=P_{n}\mathbf{u}\), we get \[\mathbf{D}_{q}(\phi P_{n}\mathbf{u})=R_{n+s}\mathbf{u},\quad R_{n+s}=-\left\langle \mathbf{u},P_{n}^{2}\right\rangle\sum_{j=n-\deg\phi+1}^{n+s}\frac{a_{j,n}}{ \left\langle\mathbf{u},P_{j}^{2}\right\rangle}P_{j}.\] (Note that \(R_{n+s}\) is a polynomial of degree \(n+s\).) Taking \(n=0\) and \(n=1\) in the above expression, we have \[\mathbf{D}_{q}(\phi\mathbf{u})=R_{s}\mathbf{u}, \tag{20}\] \[\mathbf{D}_{q}(\phi P_{1}\mathbf{u})=R_{s+1}\mathbf{u}. \tag{21}\] From (21), and using (14) and (20), we obtain \[\alpha R_{s+1}\mathbf{u} =\alpha\mathbf{D}_{q}(\phi P_{1}\mathbf{u})=\big{(}\alpha\mathcal{ S}_{q}P_{1}-\mathtt{U}_{1}\mathcal{D}_{q}P_{1}\big{)}\mathbf{D}_{q}(\phi \mathbf{u})+\mathcal{D}_{q}P_{1}\mathbf{S}_{q}(\phi\mathbf{u})\] \[=(x-\alpha B_{0})R_{s}\mathbf{u}+\mathbf{S}_{q}(\phi\mathbf{u}).\] Hence \[\mathbf{S}_{q}(\phi\mathbf{u})=\big{(}\alpha R_{s+1}-(x-\alpha B_{0})R_{s} \big{)}\mathbf{u}. \tag{22}\] Note that \(\alpha R_{s+1}-(x-\alpha B_{0})R_{s}\neq 0\). To obtain a contradiction, suppose that the last assertion is false. Consequently, \(\phi\mathbf{u}=0\) with \(\phi\neq 0\) and \(\mathbf{u}\) regular, which is impossible. We claim that \(\big{(}R_{s},\alpha R_{s+1}-(x-\alpha B_{0})R_{s}\big{)}\) is an admissible pair. 
According to Definition 2.3, this is equivalent to showing that \(d_{n}=a_{r}\alpha_{n-1}+b_{s}\gamma_{n}\neq 0\), where \[a_{r}=-\frac{\big{\langle}\mathbf{u},P_{0}^{2}\big{\rangle}}{\langle\mathbf{u },P_{s}^{2}\rangle}a_{s,0},\quad b_{s}=-\alpha\frac{\big{\langle}\mathbf{u},P _{1}^{2}\big{\rangle}}{\big{\langle}\mathbf{u},P_{s+1}^{2}\big{\rangle}}a_{s+ 1,1}-a_{r}.\] For \(\deg{(\alpha R_{s+1}-(x-\alpha B_{0})R_{s})}<s+1\), we have \(b_{s}=0\), and so \(d_{n}=a_{r}\alpha_{n-1}\neq 0\). Assume \(\deg{(\alpha R_{s+1}-(x-\alpha B_{0})R_{s})}=s+1\). (Note that in this case \(b_{s}\neq 0\).) We now claim \[\phi\mathcal{S}_{q}P_{n}=\sum_{j=n-s-1}^{n+\deg{\phi}}\widetilde{a}_{n,j}P_{j},\quad\widetilde{a}_{n,n-s-1}=-\alpha a_{n,n-s}C_{n-s}+a_{n-1,n-s-1}C_{n}, \tag{23}\] where we have assumed that \((P_{n})_{n\geq 0}\) satisfies (6). Indeed, apply \(\phi\mathcal{D}_{q}\) to (6) to obtain \[\phi(x)\mathcal{S}_{q}P_{n}(x) =\phi(x)\left(-\alpha x\mathcal{D}_{q}P_{n}(x)+\mathcal{D}_{q}P_{ n+1}(x)+B_{n}\mathcal{D}_{q}P_{n}(x)+C_{n}\mathcal{D}_{q}P_{n-1}(x)\right)\] \[=-\alpha x\sum_{j=n-s}^{n+\deg{\phi}-1}a_{n,j}P_{j}(x)+\sum_{j=n-s +1}^{n+\deg{\phi}}a_{n+1,j}P_{j}(x)\] \[\quad+B_{n}\sum_{j=n-s}^{n+\deg{\phi}-1}a_{n,j}P_{j}(x)+C_{n}\sum _{j=n-s-1}^{n+\deg{\phi}-2}a_{n-1,j}P_{j}(x),\] and (23) follows by using (6). We also claim that \[a_{n,n-s}=\Big{(}k_{1}q^{n/2}+k_{2}q^{-n/2}\Big{)}\prod_{j=n-s+1}^{n}C_{j}, \qquad n\geq s. \tag{24}\] \[2\widetilde{a}_{n,n-s-1}=-(q^{1/2}-q^{-1/2})\left(k_{1}q^{n/2}-k_{2}q^{-n/2} \right)\prod_{j=n-s}^{n}C_{j},\qquad n\geq s+1, \tag{25}\] where \(k_{1}\) and \(k_{2}\) are complex numbers. 
Indeed, apply \(\phi\mathcal{S}_{q}\) to (6) and use (12) to obtain \[\mathbb{U}_{2}(x)\phi(x)\mathcal{D}_{q}P_{n}(x)+\alpha x\phi(x) \mathcal{S}_{q}P_{n}(x)\] \[=\phi(x)\mathcal{S}_{q}P_{n+1}(x)+B_{n}\phi(x)\mathcal{S}_{q}P_{n }(x)+C_{n}\phi(x)\mathcal{S}_{q}P_{n-1}(x).\] Combining (19), (23) and (6), we obtain \[\sum_{j=n-s-2}^{n+\deg\phi+1}r_{n,j}P_{j}=0.\] Since \((P_{n})_{n\geq 0}\) is a free system, we have \(r_{n,j}=0\) for all \(j\). By identifying the coefficient of \(P_{n-s-2}\), we find \[0=r_{n,n-s-2}=(\alpha^{2}-1)a_{n,n-s}C_{n-s}C_{n-s-1}+\alpha\widetilde{a}_{n,n -s-1}C_{n-s-1}-\widetilde{a}_{n-1,n-s-2}C_{n}.\] Using the expression of \(\widetilde{a}_{n,n-s-1}\) given in (23), we get the following second order linear homogeneous equation: \[y(n)-2\alpha y(n-1)+y(n-2)=0, \tag{26}\] where \[y(n)=\frac{a_{n,n-s}}{\prod_{j=n-s+1}^{n}C_{j}},\qquad n\geq s.\] Note that \(q^{1/2}\) and \(q^{-1/2}\) are the solutions of the characteristic equation of (26) and, therefore, we find \[y(n)=k_{1}q^{n/2}+k_{2}q^{-n/2},\] and (24) follows. Moreover, from the expression of \(\widetilde{a}_{n,n-s-1}\) given in (23), (25) follows. Finally, using (24) and (25), we obtain \[d_{n} =-\frac{\left<\mathbf{u},P_{0}^{2}\right>}{\left<\mathbf{u},P_{s }^{2}\right>}a_{s,0}\alpha_{n-1}+\frac{\left<\mathbf{u},P_{0}^{2}\right>}{ \left<\mathbf{u},P_{s+1}^{2}\right>}\widetilde{a}_{s+1,0}\gamma_{n}\] \[=\frac{\left<\mathbf{u},P_{0}^{2}\right>}{\left<\mathbf{u},P_{s +1}^{2}\right>}\left(-a_{s,0}C_{s+1}\alpha_{n-1}+\widetilde{a}_{s+1,0}\gamma_ {n}\right)\] \[=-\frac{1}{2}\left(2\alpha_{n-1}(k_{1}q^{s/2}+k_{2}q^{-s/2})+(q^ {n/2}-q^{-n/2})(k_{1}q^{(s+1)/2}-k_{2}q^{-(s+1)/2})\right)\] \[=-\alpha\big{(}k_{1}q^{(n+s)/2}+k_{2}q^{-(n+s)/2}\big{)}\] \[=-\alpha\,a_{n+s,n}\prod_{j=1}^{n+s}C_{j}^{-1}\neq 0,\] and so \((R_{s},\alpha R_{s+1}-(x-\alpha B_{0})R_{s})\) is an admissible pair. Thus, \(i)\) follows from (20) and (22), and the theorem is proved. 
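The passage from (26) to \(y(n)=k_{1}q^{n/2}+k_{2}q^{-n/2}\) in the proof above rests on the fact that \(2\alpha=q^{1/2}+q^{-1/2}\), so the characteristic polynomial \(t^{2}-2\alpha t+1\) of (26) factors as \((t-q^{1/2})(t-q^{-1/2})\). A quick numerical sketch; the sample values of \(q\), \(k_{1}\), \(k_{2}\) are arbitrary:

```python
# Check that q^{1/2} and q^{-1/2} are the characteristic roots of (26) and
# that y(n) = k1*q^{n/2} + k2*q^{-n/2} solves it; q, k1, k2 are arbitrary.
q = 0.81
alpha = (q**0.5 + q**-0.5) / 2       # so that 2*alpha = q^{1/2} + q^{-1/2}
for t in (q**0.5, q**-0.5):
    assert abs(t**2 - 2*alpha*t + 1) < 1e-12
k1, k2 = 0.3, -1.7
y = lambda n: k1*q**(n/2) + k2*q**(-n/2)
for n in range(2, 10):
    assert abs(y(n) - 2*alpha*y(n-1) + y(n-2)) < 1e-9
```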
Remark 3.1.: _A regular linear form \(\mathbf{u}\) satisfying Theorem 3.1\(i)\) is classical or semiclassical. Indeed, using (18) and (16) we get_ \[\mathbf{D}_{q}(\rho\mathbf{u})=\mathbf{D}_{q}\mathbf{S}_{q}(\phi\mathbf{u})=(2\alpha-\alpha^{-1})\mathbf{S}_{q}\mathbf{D}_{q}(\phi\mathbf{u})+\alpha^{-1}\mathcal{U}_{1}\mathbf{D}_{q}^{2}(\phi\mathbf{u})\] \[=(2\alpha-\alpha^{-1})\mathbf{S}_{q}(\psi\mathbf{u})+\alpha^{-1}\mathcal{U}_{1}\mathbf{D}_{q}(\psi\mathbf{u})\] \[=(2\alpha-\alpha^{-1})\mathbf{S}_{q}(\psi\mathbf{u})+\alpha^{-1}(\alpha\mathbf{D}_{q}(\mathcal{U}_{1}\psi\mathbf{u})-(\alpha^{2}-1)\mathbf{S}_{q}(\psi\mathbf{u}))\] \[=\alpha\mathbf{S}_{q}(\psi\mathbf{u})+\mathbf{D}_{q}(\mathcal{U}_{1}\psi\mathbf{u}).\] _Thus \(\mathbf{D}_{q}\big{(}(\rho-\mathcal{U}_{1}\psi)\mathbf{u}\big{)}=\alpha\mathbf{S}_{q}(\psi\mathbf{u})\) as claimed. Theorem 3.1 is the analogue of [21, Theorem 3.1] \((\)see also [25]\()\), from which a distributional version of [5, Theorem 1.1] follows._ Although Theorem 3.1 can be very useful in many situations, it is not very precise regarding the classical or semiclassical character of the linear form. The following theorem is more precise in this sense, but in return we lose the direct connection with equation (9). Theorem 3.2.: _Let \(\mathbf{u}\in\mathcal{P}^{\prime}\) be regular and let \((P_{n})_{n\geq 0}\) denote the corresponding sequence of orthogonal polynomials satisfying (6). Suppose that there exist a non-negative integer \(s\), complex numbers \((a_{n,j})_{j=0}^{n}\), and a polynomial \(\phi\) such that_ \[\phi\mathcal{D}_{q}P_{n}=\sum_{j=n-s}^{n+\deg\phi-1}a_{n,j}P_{j}, \tag{27}\] _with \(a_{n,n-s}\neq 0\) for all \(n\geq s\). Then there exist \(\Phi\in\mathcal{P}_{s+1}\setminus\mathcal{P}_{-1}\) and \(\Psi\in\mathcal{P}_{s}\setminus\mathcal{P}_{-1}\) such that_ \[\Phi\,\mathbf{D}_{q}\mathbf{u}=\Psi\,\mathbf{S}_{q}\mathbf{u}, \tag{28}\] _where \(\Phi\) and \(\Psi\) never have more than \(s-1\) common zeros.
If \(\Phi\) and \(\Psi\) have \(s-1\) common zeros, then \(\mathbf{u}\) is \(\mathbf{D}_{q}\)-classical. Otherwise, \(\mathbf{u}\) is \(\mathbf{D}_{q}\)-semiclassical of class \(s-1-r\), \(r\) being the number of common zeros of \(\Phi\) and \(\Psi\)._ Proof.: We can now proceed analogously to the proof of Theorem 3.1 to obtain \[\mathbf{D}_{q}(\phi P_{n}\mathbf{u})=-Q_{n+s}\mathbf{u},\quad Q_{n+s}=\big{<} \mathbf{u},P_{n}^{2}\big{>}\sum_{j=n-\deg\phi+1}^{n+s}\frac{a_{j,n}}{\big{<} \mathbf{u},P_{j}^{2}\big{>}}P_{j}.\] (Note that \(Q_{n+s}\) is a polynomial of degree \(n+s\).) Taking \(n=0\) and \(n=1\), in the above expression, we have \[\mathbf{D}_{q}(\phi\mathbf{u})=-Q_{s}\mathbf{u}, \tag{29}\] \[\mathbf{D}_{q}(\phi P_{1}\mathbf{u})=-Q_{s+1}\mathbf{u}. \tag{30}\] Using (29) and (14), we have \[-\alpha Q_{s+1}(x)\mathbf{u} =\alpha\mathbf{D}_{q}\big{(}P_{1}(x)\phi\mathbf{u}\big{)}=(x- \alpha B_{0})\mathbf{D}_{q}(\phi\mathbf{u})+\mathbf{S}_{q}(\phi\mathbf{u})\] \[=-(x-\alpha B_{0})Q_{s}(x)\mathbf{u}+\mathbf{S}_{q}(\phi\mathbf{u }),\] and so \[((x-\alpha B_{0})Q_{s}(x)-\alpha Q_{s+1}(x))\mathbf{u}=\mathbf{S}_{q}(\phi \mathbf{u}).\] Applying \(\mathbf{D}_{q}\) to the above equation, and using (18), (29), and (16), we can assert that \[-\alpha\mathbf{D}_{q}(((x-\alpha B_{0})Q_{s}(x)-\alpha Q_{s+1}(x)) \mathbf{u})\] \[=-\alpha\mathbf{D}_{q}\mathbf{S}_{q}\big{(}\phi\mathbf{u}\big{)}=-( 2\alpha^{2}-1)\mathbf{S}_{q}\mathbf{D}_{q}\big{(}\phi\mathbf{u}\big{)}- \mathfrak{U}_{1}\mathbf{D}_{q}^{2}\big{(}\phi\mathbf{u}\big{)}\] \[=(2\alpha^{2}-1)\mathbf{S}_{q}\big{(}Q_{s}(x)\mathbf{u}\big{)}+ \mathfrak{U}_{1}\mathbf{D}_{q}\big{(}Q_{s}(x)\mathbf{u}\big{)}=\alpha^{2} \mathbf{S}_{q}\big{(}Q_{s}(x)\mathbf{u}\big{)}+\alpha\mathbf{D}_{q}\big{(} \mathfrak{U}_{1}Q_{s}(x)\mathbf{u}\big{)}.\] Therefore, \[\mathbf{D}_{q}\big{(}(Q_{s+1}(x)-(\alpha x-B_{0})Q_{s}(x))\mathbf{u}\big{)}= \mathbf{S}_{q}(Q_{s}(x)\mathbf{u}), \tag{31}\] and \(\mathbf{u}\) is classical or semiclassical of class at 
most \(s-1\). Let us now rewrite (31) in the form (28) to distinguish between cases. Using (14) and (15), (31) becomes \[(\alpha\mathcal{S}_{q}R_{s+1}-\mathfrak{U}_{1}\mathcal{D}_{q}R_{s+1}+\big{(}\mathfrak{U}_{1}^{2}-\alpha^{2}\mathfrak{U}_{2}\big{)}\mathcal{D}_{q}Q_{s})\mathbf{D}_{q}\mathbf{u}\] \[=\big{(}\mathfrak{U}_{1}\mathcal{D}_{q}Q_{s}+\alpha\mathcal{S}_{q}Q_{s}-\mathcal{D}_{q}R_{s+1}\big{)}\mathbf{S}_{q}\mathbf{u},\] where \(R_{s+1}(x)=Q_{s+1}(x)-(\alpha x-B_{0})Q_{s}(x)\). Consequently, (28) follows with \[\Phi=\alpha\mathcal{S}_{q}R_{s+1}-\mathfrak{U}_{1}\mathcal{D}_{q}R_{s+1}+(\mathfrak{U}_{1}^{2}-\alpha^{2}\mathfrak{U}_{2})\mathcal{D}_{q}Q_{s}, \tag{32}\] \[\Psi=\mathfrak{U}_{1}\mathcal{D}_{q}Q_{s}+\alpha\mathcal{S}_{q}Q_{s}-\mathcal{D}_{q}R_{s+1}. \tag{33}\] Of course, \(\Phi(x)\neq 0\) and \(\Psi(x)\neq 0\), since otherwise \(\mathbf{u}=\mathbf{0}\), which contradicts the regularity of \(\mathbf{u}\). Without loss of generality, let us assume \(\Phi(x)=(x-1)\Psi(x)\). From (28), and using (16) and (17), we get \[\mathbf{D}_{q}\big{(}1/2(\alpha\,x-1)\mathbf{u}\big{)}=\mathbf{S}_{q}\mathbf{u}.\] By Theorem 2.1, this leads to a contradiction with the regularity of \(\mathbf{u}\) (in the notation of Theorem 2.1, \(a=d=0\) and so \(d_{n}=0\) therein), and the first part of the theorem follows. Now suppose that \(\Phi=\rho_{r}\phi\) and \(\Psi=\rho_{r}\psi\), \(r<s\), where \(\rho_{r}\in\mathcal{P}_{r}\), \(\phi\in\mathcal{P}_{s-r+1}\) and \(\psi\in\mathcal{P}_{s-r}\).
Hence (28) reduces to \(\phi\,\mathbf{D}_{q}\mathbf{u}=\psi\,\mathbf{S}_{q}\mathbf{u}\) and, therefore, using (16) and (17), we have \[\mathbf{D}_{q}\left((\mathcal{S}_{q}\phi+\mathfrak{U}_{2}\mathcal{D}_{q}\psi)\,\mathbf{u}\right)=\mathbf{S}_{q}\left((\mathcal{S}_{q}\psi+\mathcal{D}_{q}\phi)\,\mathbf{u}\right).\] Since \(\mathcal{S}_{q}\phi\) has degree at most \(s-r+1\), and \(\mathcal{S}_{q}\psi\) and \(\mathcal{D}_{q}\phi\) have degree at most \(s-r\), \(\mathbf{u}\) is classical whenever \(r=s-1\) or semiclassical of class at most \(s-r-1\) whenever \(r<s-1\). Assume that (28) holds with \(\Phi\) and \(\Psi\) being coprime, i.e., \(r=0\). To obtain a contradiction, suppose that \(\mathbf{u}\) is semiclassical of class at most \(s-2\): there exist \(\phi\in\mathcal{P}_{s}\) and \(\psi\in\mathcal{P}_{s-1}\) such that (9) holds. Taking into account (14) and (15), (9) holds if and only if \[\widetilde{\Phi}\,\mathbf{D}_{q}\mathbf{u}=\widetilde{\Psi}\,\mathbf{S}_{q}\mathbf{u}, \tag{34}\] where \(\widetilde{\Phi}=\alpha\mathcal{S}_{q}\phi-\mathfrak{U}_{1}\mathcal{D}_{q}\phi+(\mathfrak{U}_{1}^{2}-\alpha^{2}\mathfrak{U}_{2})\mathcal{D}_{q}\psi\) and \(\widetilde{\Psi}=\alpha\mathcal{S}_{q}\psi+\mathfrak{U}_{1}\mathcal{D}_{q}\psi-\mathcal{D}_{q}\phi\). Combining (28) with (34) yields \[(\widetilde{\Psi}\,\Phi-\widetilde{\Phi}\,\Psi)\,\mathbf{D}_{q}\mathbf{u}=\mathbf{0}.\] By the regularity of \(\mathbf{u}\), and the fact that \(\Phi\) and \(\Psi\) are coprime, \(\Phi=a\widetilde{\Phi}\) and \(\Psi=a\widetilde{\Psi}\) (\(a\in\mathbb{C}\setminus\{0\}\)), and so \[Q_{s}=a\,\psi,\] which is impossible, since \(\deg Q_{s}=s\) while \(\deg\psi\leq s-1\). Thus \(\mathbf{u}\) is semiclassical of class \(s-1\). The same conclusion can be drawn for \(r\neq 0\) and the theorem is proved. ## 4. Counterexamples As an application of a particular case of Theorem 3.2, we disprove Conjecture 1.1 and Conjecture 1.2.
Proposition 4.1.: _Let \((P_{n})_{n\geq 0}\) be a sequence of orthogonal polynomials satisfying (6) with_ \[B_{n}=0,\qquad C_{n}=\frac{1}{4}\big{(}1-(-1)^{n}q^{n/2}\big{)}\big{(}1-(-1)^{n}q^{(n-1)/2}\big{)}.\] _Then \((P_{n})_{n\geq 0}\) is \(\mathcal{D}_{q}\)-semiclassical of class two and the corresponding linear form satisfies (9) with_ \[\phi(x)=-\frac{1}{8}(1-q^{-1})^{2}\big{(}4x^{4}-(q+5)x^{2}+q+1\big{)},\] \[\psi(x)=\frac{1}{4}(q-1)q^{-3/2}x(4x^{2}-3-q).\] Proof.: We claim that \((P_{n})_{n\geq 0}\) satisfies \[\mathcal{S}_{q}P_{n}=\alpha_{n}P_{n}+b_{n}C_{n-1}P_{n-2}, \tag{35}\] \[\mathtt{U}_{2}\mathcal{D}_{q}P_{n}=a_{n}P_{n+1}+c_{n}P_{n-1}+d_{n}P_{n-3}, \tag{36}\] with \[a_{n}=(\alpha^{2}-1)\gamma_{n},\] \[b_{n}=-\frac{1}{2}\big{(}1-(-1)^{n}q^{n/2}\big{)}\big{(}(-1)^{n}-q^{-(n-1)/2}\big{)},\] \[c_{n}=b_{n+1}C_{n}-\alpha b_{n}C_{n-1}-(\alpha^{2}-1)\gamma_{n}C_{n},\qquad d_{n}=(b_{n-1}C_{n}-\alpha b_{n}C_{n-1})C_{n-2}.\] Indeed, we prove this by induction on \(n\). For \(n=1\), RHS of (35) gives \(\alpha_{1}P_{1}(x)+b_{1}C_{0}P_{-1}(x)=\alpha x\), while LHS gives \(\mathcal{S}_{q}P_{1}(x)=\mathcal{S}_{q}x=\alpha x\). Similarly, for \(n=1\), LHS of (36) gives \(\mathtt{U}_{2}\mathcal{D}_{q}P_{1}=\mathtt{U}_{2}\), while RHS gives \[a_{1}P_{2}(x)+c_{1}P_{0}(x)+d_{1}P_{-1}(x)=a_{1}P_{2}(x)+c_{1}\] \[=(\alpha^{2}-1)(x^{2}-C_{1})+b_{2}C_{1}-\alpha b_{1}C_{0}-(\alpha^{2}-1)C_{1}\] \[=(\alpha^{2}-1)(x^{2}-1).\] Assuming (35) and (36) hold, with \(k\) instead of \(n\), for \(k=1,2,\ldots,n\), we will prove it for \(k=n+1\).
Apply \(\mathcal{S}_{q}\) to (6), and use (12), to obtain \[\mathcal{S}_{q}(P_{n+1}(x)+C_{n}P_{n-1}(x))=\mathcal{S}_{q}(xP_{n}(x))=\mathtt{ U}_{2}(x)\mathcal{D}_{q}P_{n}(x)+\alpha x\mathcal{S}_{q}P_{n}(x).\] From (36) for \(n\) and (35) for \(n-1\) and \(n\), we get \[\mathcal{S}_{q}P_{n+1}(x) =a_{n}P_{n+1}(x)+c_{n}P_{n-1}(x)+d_{n}P_{n-3}(x)\] \[\quad+\alpha x(\alpha_{n}P_{n}(x)+b_{n}C_{n-1}P_{n-2}(x))\] \[\quad-C_{n}(\alpha_{n-1}P_{n-1}(x)+b_{n-1}C_{n-2}P_{n-3}(x)). \tag{37}\] Now using (6), (37) becomes \[\mathcal{S}_{q}P_{n+1} =(a_{n}+\alpha\alpha_{n})P_{n+1}\] \[+(c_{n}+\alpha\alpha_{n}C_{n}+\alpha b_{n}C_{n-1}-\alpha_{n-1}C_{ n})P_{n-1}\] \[+(d_{n}+\alpha b_{n}C_{n-1}C_{n-2}-b_{n-1}C_{n}C_{n-2})P_{n-3}.\] The reader should convince himself that the following relations hold: \[\alpha_{n+1} =a_{n}+\alpha\alpha_{n},\] \[b_{n+1}C_{n+1} =c_{n}+\alpha\alpha_{n}C_{n}+\alpha b_{n}C_{n-1}-\alpha_{n-1}C_{n},\] \[0 =d_{n}+\alpha b_{n}C_{n-1}C_{n-2}-b_{n-1}C_{n}C_{n-2}.\] This gives \(\mathcal{S}_{q}P_{n+1}=\alpha_{n+1}P_{n+1}+b_{n+1}C_{n}P_{n-1}\), and (35) holds for \(n+1\). Similarly, apply \(\mathfrak{U}_{2}\mathcal{D}_{q}\) to (6), and use (11), to obtain \[\mathfrak{U}_{2}(x)\mathcal{D}_{q}(P_{n+1}(x)+C_{n}P_{n-1}(x))=\mathfrak{U}_{2 }(x)\mathcal{D}_{q}(xP_{n}(x))=\mathfrak{U}_{2}(x)(\mathcal{S}_{q}P_{n}(x)+ \alpha x\mathcal{D}_{q}P_{n}(x))\] or, using (35) for \(n\), \[\mathfrak{U}_{2}(x)\mathcal{D}_{q}P_{n+1}(x) =\mathfrak{U}_{2}(x)\mathcal{S}_{q}P_{n}(x)+\alpha x\mathfrak{U}_ {2}(x)\mathcal{D}_{q}P_{n}(x)-C_{n}\mathfrak{U}_{2}(x)\mathcal{D}_{q}P_{n-1}(x)\] \[=\mathfrak{U}_{2}(x)\big{(}\alpha_{n}P_{n}(x)+b_{n}C_{n-1}P_{n-2} (x)\big{)}+\alpha x\mathfrak{U}_{2}(x)\mathcal{D}_{q}P_{n}(x)\] \[-C_{n}\mathfrak{U}_{2}(x)\mathcal{D}_{q}P_{n-1}(x). 
\tag{38}\] From (6) it follows that \[\mathfrak{U}_{2}P_{n}=(\alpha^{2}-1)(P_{n+2}+(C_{n+1}+C_{n}-1)P_{n}+C_{n}C_{n- 1}P_{n-2}).\] Combining the above equation with (36) for \(n-1\) and \(n\), (38) becomes \[\mathfrak{U}_{2}\mathcal{D}_{q}P_{n+1} =\big{(}(\alpha^{2}-1)\alpha_{n}+\alpha a_{n}\big{)}P_{n+2}\] \[\quad+\big{(}(\alpha^{2}-1)\alpha_{n}(C_{n}+C_{n+1}-1)+(\alpha^{2} -1)b_{n}C_{n-1}+\alpha a_{n}C_{n+1}\] \[\quad+\alpha c_{n}-a_{n-1}C_{n}\big{)}P_{n}\] \[\quad+\big{(}(\alpha^{2}-1)\alpha_{n}C_{n}C_{n-1}+(\alpha^{2}-1)b _{n}C_{n-1}(C_{n-1}+C_{n-2}-1)\] \[\quad+\alpha c_{n}C_{n-1}+\alpha d_{n}-c_{n-1}C_{n}\big{)}P_{n-2}\] \[\quad+\big{(}\alpha d_{n}C_{n-3}-d_{n-1}C_{n}+(\alpha^{2}-1)b_{n} C_{n-1}C_{n-2}C_{n-3}\big{)}P_{n-4}.\] The reader again should convince himself that the following relations hold: \[a_{n+1} =(\alpha^{2}-1)\alpha_{n}+\alpha a_{n},\] \[c_{n+1} =(\alpha^{2}-1)\alpha_{n}(C_{n}+C_{n+1}-1)+(\alpha^{2}-1)b_{n}C_{n- 1}+\alpha a_{n}C_{n+1}\] \[\qquad+\alpha c_{n}-a_{n-1}C_{n},\] \[d_{n+1} =(\alpha^{2}-1)\alpha_{n}C_{n}C_{n-1}+(\alpha^{2}-1)b_{n}C_{n-1} (C_{n-1}+C_{n-2}-1)\] \[\qquad+\alpha c_{n}C_{n-1}+\alpha d_{n}-c_{n-1}C_{n},\] \[0 =\alpha d_{n}C_{n-3}-d_{n-1}C_{n}+(\alpha^{2}-1)b_{n}C_{n-1}C_{n- 2}C_{n-3}.\] We thus get \[\mathtt{U}_{2}\mathcal{D}_{q}P_{n+1}=a_{n+1}P_{n+2}+c_{n+1}P_{n}+d_{n+1}P_{n-2},\] as claimed. Observe from (36) that \((P_{n})_{n\geq 0}\) satisfies the hypotheses of Theorem 3.2 with \(B_{n}=0\) and \(\phi=\mathtt{U}_{2}\). 
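The two structure relations claimed at the start of the proof, \(\mathcal{S}_{q}P_{n}=\alpha_{n}P_{n}+b_{n}C_{n-1}P_{n-2}\) and \(\mathfrak{U}_{2}\mathcal{D}_{q}P_{n}=a_{n}P_{n+1}+c_{n}P_{n-1}+d_{n}P_{n-3}\), can also be confirmed numerically by running the recurrence (6) and evaluating \(\mathcal{D}_{q}\) and \(\mathcal{S}_{q}\) on the lattice \(x=(z+z^{-1})/2\). A sketch, assuming the standard values \(\alpha_{n}=(q^{n/2}+q^{-n/2})/2\) and \(\gamma_{n}=(q^{n/2}-q^{-n/2})/(q^{1/2}-q^{-1/2})\) from (8); the value of \(q\) and the test points are arbitrary:

```python
import math

# Numerical check of the structure relations of Proposition 4.1, with P_n
# generated by x P_n = P_{n+1} + C_n P_{n-1} (B_n = 0); q, z are arbitrary.
q = 0.36
rq = math.sqrt(q)
alpha = (rq + 1/rq) / 2
x = lambda z: (z + 1/z) / 2

def C(n):   return (1 - (-1)**n*q**(n/2)) * (1 - (-1)**n*q**((n-1)/2)) / 4
def aln(n): return (q**(n/2) + q**(-n/2)) / 2                 # alpha_n
def gam(n): return (q**(n/2) - q**(-n/2)) / (rq - 1/rq)       # gamma_n
def b(n):   return -(1 - (-1)**n*q**(n/2)) * ((-1)**n - q**(-(n-1)/2)) / 2
def a(n):   return (alpha**2 - 1) * gam(n)
def c(n):   return b(n+1)*C(n) - alpha*b(n)*C(n-1) - (alpha**2 - 1)*gam(n)*C(n)
def d(n):   return (b(n-1)*C(n) - alpha*b(n)*C(n-1)) * C(n-2)

def P(n, t):
    # evaluate P_n(t) by the three-term recurrence
    if n == 0:
        return 1.0
    p_prev, p = 1.0, t
    for k in range(1, n):
        p_prev, p = p, t*p - C(k)*p_prev
    return p

U2 = lambda t: (alpha**2 - 1) * (t**2 - 1)
for z in (1.4, 2.2):
    for n in range(3, 7):
        Sq = (P(n, x(rq*z)) + P(n, x(z/rq))) / 2
        Dq = (P(n, x(rq*z)) - P(n, x(z/rq))) / (x(rq*z) - x(z/rq))
        # S_q P_n = alpha_n P_n + b_n C_{n-1} P_{n-2}
        assert abs(Sq - (aln(n)*P(n, x(z)) + b(n)*C(n-1)*P(n-2, x(z)))) < 1e-8
        # U_2 D_q P_n = a_n P_{n+1} + c_n P_{n-1} + d_n P_{n-3}
        rhs = a(n)*P(n+1, x(z)) + c(n)*P(n-1, x(z)) + d(n)*P(n-3, x(z))
        assert abs(U2(x(z))*Dq - rhs) < 1e-8
```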
Note also that \[C_{1}=\frac{1}{2}(1+q^{1/2}),\qquad C_{2}=\frac{1}{4}(1-q)(1-q^{1/2}),\] \[C_{3}=\frac{1}{4}(1+q)(1+q^{3/2}),\qquad C_{4}=\frac{1}{4}(1-q^{2})(1-q^{3/2}).\] Under the notation of Theorem 3.2 and its proof, we get \[Q_{3}(x)=\frac{c_{1}}{C_{1}}P_{1}(x)+\frac{d_{3}}{C_{3}C_{2}C_{1}}P_{3}(x)=\frac{1}{4}(q-1)q^{-3/2}x(4x^{2}-3-q), \tag{39}\] \[Q_{4}(x)=\frac{c_{2}}{C_{2}}P_{2}(x)+\frac{d_{4}}{C_{4}C_{3}C_{2}}P_{4}(x)=\frac{1}{8}(q-1)q^{-2}(8x^{4}-8x^{2}+1-q^{2}),\] \[R_{4}(x)=Q_{4}(x)-\alpha xQ_{3}(x)=-\frac{1}{8}(1-q^{-1})^{2}(4x^{4}-(q+5)x^{2}+q+1). \tag{40}\] Taking into account that \(\mathcal{S}_{q}x=\alpha x\), \(\mathcal{D}_{q}x^{2}=2\alpha x\), \[\mathcal{S}_{q}x^{2}=(2\alpha^{2}-1)x^{2}+1-\alpha^{2},\] \[\mathcal{D}_{q}x^{3}=(4\alpha^{2}-1)x^{2}+1-\alpha^{2},\quad\mathcal{S}_{q}x^{3}=\alpha(4\alpha^{2}-3)x^{3}+3\alpha(1-\alpha^{2})x,\] \[\mathcal{D}_{q}x^{4}=4\alpha(2\alpha^{2}-1)x^{3}+4\alpha(1-\alpha^{2})x,\] \[\mathcal{S}_{q}x^{4}=(8\alpha^{4}-8\alpha^{2}+1)x^{4}+2(1-\alpha^{2})(4\alpha^{2}-1)x^{2}+(1-\alpha^{2})^{2},\] from (32) and (33), we have \[\Phi(x)=-\frac{1}{16}(q-1)^{2}q^{-3/2}(8qx^{4}-2(q^{2}+4q+1)x^{2}+(q+1)^{2}),\] \[\Psi(x)=\frac{1}{4}(q^{1/2}-q^{-1/2})(4qx^{2}-3q-1)x.\] Finally, since \(\Phi\) and \(\Psi\) are coprime, by Theorem 3.2, (31), (39) and (40), the result follows. Remark 4.1.: _The structure relation (36) is of type (4) with \(\phi=\mathfrak{U}_{2}\), \(M=3\), and \(N=1\). Consequently, the semiclassical orthogonal polynomials given in Proposition 4.1 disprove Conjecture 1.1._ Corollary 4.1.: _Assume the hypotheses and notation of Proposition 4.1.
Then \((P_{n})_{n\geq 0}\) satisfies_ \[(\alpha^{2}-1)^{2}(x^{2}-\alpha^{2})(1-x^{2})\mathcal{D}_{q}^{2}P_{n}(x)=-(\alpha^{2}-1)^{2}\gamma_{n}\gamma_{n-1}P_{n+2}(x)+d_{n,1}P_{n}(x)\] \[+d_{n,2}P_{n-2}(x)+d_{n,3}P_{n-4}(x)+d_{n,4}P_{n-6}(x), \tag{41}\] _with_ \[d_{n,1}=a_{n}c_{n+1}+a_{n-1}c_{n}-2\alpha(\alpha^{2}-1)(\alpha_{n}^{2}-1)(C_{n+1}+C_{n}-1)\] \[\quad-4\alpha^{2}(\alpha^{2}-1)\alpha_{n-1}b_{n}C_{n-1},\] \[d_{n,2}=a_{n}d_{n+1}+c_{n}c_{n-1}+a_{n-3}d_{n}-2\alpha(\alpha^{2}-1)(\alpha_{n}^{2}-1)C_{n}C_{n-1}\] \[\quad-4\alpha^{2}(\alpha^{2}-1)\alpha_{n-1}b_{n}C_{n-1}(C_{n-1}+C_{n-2}-1)-2\alpha(\alpha^{2}-1)b_{n}b_{n-2}C_{n-1}C_{n-3},\] \[d_{n,3}=c_{n}d_{n-1}+c_{n-3}d_{n}-4\alpha^{2}(\alpha^{2}-1)\alpha_{n-1}b_{n}C_{n-1}C_{n-2}C_{n-3}\] \[\quad-2\alpha(\alpha^{2}-1)b_{n}b_{n-2}C_{n-1}C_{n-3}(C_{n-3}+C_{n-4}-1),\] \[d_{n,4}=-4(\alpha^{2}-1)q^{-(2n-3)/2}C_{n}C_{n-1}C_{n-2}C_{n-3}C_{n-4}C_{n-5}.\] Proof.: From the previous result, we apply \(\mathfrak{U}_{2}\mathcal{D}_{q}\) to (36), and use (11), to get \[\mathfrak{U}_{2}\mathcal{S}_{q}\mathfrak{U}_{2}\mathcal{D}_{q}^{2}P_{n}+\mathfrak{U}_{2}\mathcal{D}_{q}\mathfrak{U}_{2}\mathcal{S}_{q}\mathcal{D}_{q}P_{n}=\mathfrak{U}_{2}\mathcal{D}_{q}(a_{n}P_{n+1}+c_{n}P_{n-1}+d_{n}P_{n-3}),\] and since \(\mathcal{S}_{q}\mathfrak{U}_{2}=\alpha^{2}\mathfrak{U}_{2}+\mathfrak{U}_{1}^{2}\) and \(\mathcal{D}_{q}\mathfrak{U}_{2}=2\alpha\mathfrak{U}_{1}\), we use again (36) to obtain \[(\alpha^{2}\mathfrak{U}_{2}+\mathfrak{U}_{1}^{2})\mathfrak{U}_{2}\mathcal{D}_{q}^{2}P_{n}+2\alpha\mathfrak{U}_{1}\mathfrak{U}_{2}\mathcal{S}_{q}\mathcal{D}_{q}P_{n}=a_{n}a_{n+1}P_{n+2}\] \[+(a_{n}c_{n+1}+a_{n-1}c_{n})P_{n}+(a_{n}d_{n+1}+c_{n}c_{n-1}+a_{n-3}d_{n})P_{n-2}\] \[+(c_{n}d_{n-1}+d_{n}c_{n-3})P_{n-4}+d_{n}d_{n-3}P_{n-6}.
\tag{42}\] On the other hand, it is known from [9, Lemma 2.1] that \[\alpha\mathcal{S}_{q}^{2}P_{n}=\mathcal{S}_{q}(\mathfrak{U}_{1}\mathcal{D}_{q}P_{n})+\mathfrak{U}_{2}\mathcal{D}_{q}^{2}P_{n}+\alpha P_{n}=\alpha^{2}\mathfrak{U}_{2}\mathcal{D}_{q}^{2}P_{n}+\alpha\mathfrak{U}_{1}\mathcal{S}_{q}\mathcal{D}_{q}P_{n}+\alpha P_{n},\] where the second equality holds thanks to (12). Now we apply \(\mathcal{S}_{q}\) to (35) using again the same equation and the above equation in order to obtain \[\alpha\mathfrak{U}_{2}(x)\mathcal{D}_{q}^{2}P_{n}(x)+\mathfrak{U}_{1}(x)\mathcal{S}_{q}\mathcal{D}_{q}P_{n}(x)=(\alpha_{n}^{2}-1)P_{n}(x)+2\alpha\alpha_{n-1}b_{n}C_{n-1}P_{n-2}(x)\] \[+b_{n}b_{n-2}C_{n-1}C_{n-3}P_{n-4}(x). \tag{43}\] Therefore, (41) holds by combining (42) with (43) in order to eliminate \(\mathcal{S}_{q}\mathcal{D}_{q}P_{n}\) and by using (6), with \[d_{n,4}=d_{n}d_{n-3}-2\alpha(\alpha^{2}-1)b_{n}b_{n-2}C_{n-1}C_{n-3}C_{n-4}C_{n-5}.\] In addition, \(b_{n}=2C_{n}q^{-(n-1)/2}\) and \(d_{n}=(q-1)q^{-n/2}C_{n}C_{n-1}C_{n-2}\). Therefore we obtain \[d_{n,4}=-4(\alpha^{2}-1)q^{-(2n-3)/2}\prod_{j=0}^{5}C_{n-j}\neq 0.\] The result is then proved. Remark 4.2.: _The structure relation (41) is of type (4) with \(\phi(x)=(\alpha^{2}-1)^{2}(x^{2}-\alpha^{2})(1-x^{2})\), \(M=6\), and \(N=2\). Consequently, the semiclassical orthogonal polynomials given in Proposition 4.1 disprove Conjecture 1.2._ ## 5. Further results: \(D_{q,\omega}\)-semiclassical orthogonal polynomials Although this paper was originally intended to deal with the Askey-Wilson operator, the ideas developed above allow working with other operators. The results of this section are related to the structure relation that appears in [14, Conjecture 24.7.7], and warn the reader of the existence of semiclassical OP in such a problem.
Recall that given complex numbers \(q\) and \(\omega\), Hahn's operator \(D_{q,\omega}:\mathcal{P}\to\mathcal{P}\) is defined by \[D_{q,\omega}f(x):=\frac{f(qx+\omega)-f(x)}{(q-1)x+\omega},\] where we have fixed \(q\) and \(\omega\) such that \[|1-q|+|\omega|\neq 0,\qquad q\not\in\{0\}\cup\left\{e^{2ij\pi/n}\,|\,\,1\leq j\leq n-1,n\in\mathbb{N}\setminus\{0,1\}\right\}. \tag{44}\] For every \(\mathbf{u}\in\mathcal{P}^{*}\) and \(f\in\mathcal{P}\), \(D_{q,\omega}\) induces \(\mathbf{D}_{q,\omega}:\mathcal{P}^{*}\to\mathcal{P}^{*}\) defined by \[\langle\mathbf{D}_{q,\omega}\mathbf{u},f\rangle=-q^{-1}\langle\mathbf{u},D_{q,\omega}^{*}f\rangle,\] where \(D_{q,\omega}^{*}=D_{1/q,-\omega/q}\). Definition 5.1.: _[_4_, p. 487]_ _\(\mathbf{u}\in\mathcal{P}^{*}\) is called \(\mathbf{D}_{q,\omega}\)-classical if it is regular and there exist \(\phi\in\mathcal{P}_{2}\setminus\mathcal{P}_{-1}\) and \(\psi\in\mathcal{P}_{1}\setminus\mathcal{P}_{-1}\) such that_ \[\mathbf{D}_{q,\omega}\left(\phi\,\mathbf{u}\right)=\psi\,\mathbf{u}. \tag{45}\] (_We will call it simply classical when no confusion can arise._) Definition 5.2.: _[_4_, p. 855]_ _We call a linear form, in \(\mathcal{P}^{*}\), \(\mathbf{D}_{q,\omega}\)-semiclassical if it is regular, not \(\mathbf{D}_{q,\omega}\)-classical, and there exist two polynomials \(\phi\) and \(\psi\) with at least one of them nonzero, such that (45) holds._ (_We will call it simply semiclassical when no confusion can arise._) OP with respect to a (\(\mathbf{D}_{q,\omega}\)-)semiclassical form of class \(s\) are called (\(D_{q,\omega}\)-)semiclassical of class \(s\). Under the conditions of Definition 5.2, we define the class of \(\mathbf{u}\) as in Section 2. The next theorem is the analogue of Theorem 2.1. Here we use the standard notation \[[n]_{q}=\frac{q^{n}-1}{q-1}.\] Theorem 5.1.: _[_3_, Theorem 1.2]_ _Suppose that \(\mathbf{u}\in\mathcal{P}^{*}\) satisfies (45) with \(\phi(x)=ax^{2}+bx+c\) and \(\psi(x)=dx+e\).
Then \(\mathbf{u}\) is regular if and only if_ \[d_{n}\neq 0,\qquad\phi\left(-\frac{e_{n}}{d_{2n}}\right)\neq 0,\] _for all \(n\in\mathbb{N}\), where \(d_{n}=d\,q^{n}+a[n]_{q}\) and \(e_{n}=eq^{n}+(\omega d_{n}+b)[n]_{q}\). Moreover, \((P_{n})_{n\geq 0}\) satisfies (6) with_ \[B_{n}=\omega[n]_{q}+\frac{[n]_{q}e_{n-1}}{d_{2n-2}}-\frac{[n+1]_{q}e_{n}}{d_{2n}},\] \[C_{n+1}=-\frac{q^{n}[n+1]_{q}d_{n-1}}{d_{2n-1}d_{2n+1}}\,\phi\left(-\frac{e_{n}}{d_{2n}}\right).\] In this context we also have an analogue of Theorem 3.2 for semiclassical orthogonal polynomials of class one. Theorem 5.2.: _Let \(\mathbf{u}\in\mathcal{P}^{\prime}\) be regular and let \((P_{n})_{n\geq 0}\) denote the corresponding sequence of orthogonal polynomials satisfying (6). Suppose that there exist complex numbers \(c\), \((a_{n})_{n\geq 0}\), \((b_{n})_{n\geq 0}\) \((b_{n}\neq 0)\), and \((c_{n})_{n\geq 0}\) such that_ \[(x-c)D_{q,\omega}P_{n}(x)=a_{n}P_{n}(x)+(b_{n}x+c_{n})P_{n-1}(x). \tag{46}\] _Then_ \[\mathbf{D}_{1/q,-\omega/q}\big{(}(x-c)\mathbf{u}\big{)}=-\frac{qb_{2}}{C_{2}}(x-\lambda_{+})(x-\lambda_{-})\mathbf{u},\] _where_ \[\lambda_{\pm}=\frac{1}{2}(B_{0}+B_{1})+\frac{c-B_{0}}{2b_{2}C_{1}}C_{2}\pm\frac{1}{2}\left((B_{0}-B_{1}-\frac{c-B_{0}}{b_{2}C_{1}}C_{2})^{2}+4C_{1}\right)^{1/2}.\] _If \(q(\omega+qc-\lambda_{+})(\omega+qc-\lambda_{-})=-C_{2}/b_{2}\), then \(\mathbf{u}\) is \(\mathbf{D}_{1/q,-\omega/q}\)-classical. More precisely, \((P_{n})_{n\geq 0}\) are the Al-Salam-Carlitz polynomials. Otherwise, \(\mathbf{u}\) is \(\mathbf{D}_{1/q,-\omega/q}\)-semiclassical of class one.
Moreover, if_ \[b_{2}C_{1}^{2}=(B_{0}-c)(b_{2}(B_{1}-c)C_{1}-(B_{0}-c)C_{2}), \tag{47}\] _then_ \[\mathbf{u}=(x-c)^{-1}\mathbf{v}+\delta_{c},\] \(\mathbf{v}\) _being the linear form corresponding to the Al-Salam-Carlitz polynomials._ Proof.: As in the proof of Theorem 3.2, from (6) and (46) we get \[\mathbf{D}_{1/q,-\omega/q}\big{(}(x-c)\mathbf{a}_{n}\big{)}=-q(a_{n}+b_{n})\mathbf{a}_{n}-q(c_{n+1}+b_{n+1}B_{n})\mathbf{a}_{n+1}-q\,b_{n+2}C_{n+1}\mathbf{a}_{n+2},\] in the sense of the weak dual topology in \(\mathcal{P}^{\prime}\), \((\mathbf{a}_{n})_{n\geq 0}\) being the dual basis associated to \((P_{n})_{n\geq 0}\). Taking \(n=0\) in the above expression we have \[\mathbf{D}_{1/q,-\omega/q}\big{(}(x-c)\mathbf{u}\big{)}=\varphi(x)\mathbf{u}, \tag{48}\] where \(\varphi\) is a polynomial of degree two given by \[\varphi=-\frac{q}{C_{1}C_{2}}((a_{0}+b_{0})C_{1}C_{2}+(c_{1}+b_{1}B_{0})C_{2}P_{1}+b_{2}C_{1}P_{2}).\] Taking \(n=0\), \(n=1\), and \(n=2\) in (46) we get \[a_{0}+b_{0}=0,\qquad c_{1}+b_{1}B_{0}=B_{0}-c,\qquad b_{2}=q+1+\frac{(B_{0}-c)(qB_{0}-B_{1}+\omega)}{C_{1}}.\] Hence \(C_{2}\varphi(x)=-q\,b_{2}(x-\lambda_{+})(x-\lambda_{-})\), and the first part of the theorem follows. Assume that \(\varphi(\omega+q\,c)=1\).
Recall that (see [3, (2.10)]) \[\mathbf{D}_{1/q,-\omega/q}(f\mathbf{u})=D_{1/q,-\omega/q}f\,\mathbf{u}+f\,((x-\omega)/q)\,\mathbf{D}_{1/q,-\omega/q}\mathbf{u}\qquad(f\in\mathcal{P}).\] Using this identity, (48) becomes \[(x-qc-\omega){\bf D}_{1/q,-\omega/q}{\bf u}=q(\varphi(x)-1){\bf u}.\] Since \(\varphi(\omega+qc)=1\), \(x-qc-\omega\) and \(q(\varphi(x)-1)\) have a common zero at \(x=qc+\omega\), and therefore there exists a polynomial of degree one, \(Q_{1}\), such that \(q(\varphi(x)-1)=(x-qc-\omega)Q_{1}(x)\), which gives \[(x-qc-\omega)\Big{(}({\bf D}_{1/q,-\omega/q}{\bf u})-Q_{1}(x){\bf u}\Big{)}=0,\] and so \[{\bf D}_{1/q,-\omega/q}{\bf u}=\frac{1}{(q^{-1}-1)rs}(x-r-s-\omega/(1-q)){\bf u},\] for some nonzero complex numbers \(r\) and \(s\), i.e. \(Q_{1}(x)=1/((q^{-1}-1)rs)(x-r-s-\omega/(1-q))\). This last equation is of type (45) with \(\phi=1\) and \(\psi(x)=1/((q^{-1}-1)rs)(x-r-s-\omega/(1-q))\). We claim that \({\bf u}\) is regular. Indeed, by Theorem 5.1, \(c=1\), \(b=0\), \(a=0\), \(d=1/((q^{-1}-1)rs)\), and \(e=-(r+s+\omega/(1-q))/((q^{-1}-1)rs)\). Hence \[d_{n}=q^{-n}/((q^{-1}-1)rs)\neq 0,\qquad\phi=1\neq 0.\] Moreover, by Theorem 5.1, we get \[B_{n}=-q^{-1}\omega[n]_{q^{-1}}+\frac{[n]_{q^{-1}}e_{n-1}}{d_{2n-2}}-\frac{[n+1]_{q^{-1}}e_{n}}{d_{2n}}=\omega/(1-q)+(r+s)q^{n},\] \[C_{n+1}=-\frac{q^{-n}[n+1]_{q^{-1}}d_{n-1}}{d_{2n-1}d_{2n+1}}=-rs(1-q^{n+1})q^{n},\] and finally \(P_{n}(x)=s^{n}U_{n}^{(r/s)}((x-\omega/(1-q))/s\,|\,q)\), where \((U_{n}^{(a)}(\cdot\,|\,q))_{n\geq 0}\) are the Al-Salam-Carlitz polynomials (see [17, Section 14.24]). Assume now that \(\varphi(\omega+qc)\neq 1\). Hence \(x-\omega-qc\) and \(\varphi(x)-1\) are coprime. Using the same _argumentum ad absurdum_ as in Theorem 3.2, we see that \({\bf u}\) is a \({\bf D}_{1/q,-\omega/q}\)-semiclassical form of class one.
Now, from (48), and using (47), we get \[\lambda_{+}=c,\qquad\lambda_{-}=c-\left(\left(B_{0}-B_{1}-\frac{c-B_{0}}{b_{ 2}C_{1}}C_{2}\right)^{2}+4C_{1}\right)^{1/2}.\] Then (48) becomes \[{\bf D}_{1/q,-\omega/q}\big{(}(x-c){\bf u}\big{)}=-\frac{qb_{2}}{C_{2}}(x-c)( x-c+\Delta^{1/2}){\bf u},\] where \(\Delta=\left(B_{0}-B_{1}-\frac{c-B_{0}}{b_{2}C_{1}}C_{2}\right)^{2}+4C_{1}\) or, equivalently, \[{\bf D}_{1/q,-\omega/q}((x-c){\bf u})=\frac{1}{(q^{-1}-1)rs}(x-c)(x-r-s- \omega/(1-q)){\bf u},\] where \(r\) and \(s\) are nonzero complex numbers such that \((q-1)rs=C_{2}/b_{2}\) and \(r+s=c-\omega/(1-q)-\Delta^{1/2}\). Define \({\bf v}=(x-c){\bf u}\). Hence \[{\bf D}_{1/q,-\omega/q}{\bf v}=\frac{1}{(q^{-1}-1)rs}(x-r-s-\omega/(1-q)){\bf v}.\] As above, by Theorem 5.1, \[d_{n}=\frac{q^{-n}}{(q^{-1}-1)rs}\neq 0,\qquad\phi=1\neq 0.\] Moreover, also by Theorem 5.1, \(\mathbf{v}\) is the linear form corresponding to the Al-Salam-Carlitz polynomials, and the theorem follows. The next proposition gives an explicit example of semiclassical orthogonal polynomials of class one satisfying (46), which prevents the reader from making any conjectures related to classical polynomials when faced with a relation of type (1) after changing the standard derivative by Hahn's operator. Proposition 5.1.: _Fix \(\omega,q\in\mathbb{C}\) such that (44) hold. Fix \(a,b\in\mathbb{C}\) such that_ \[-a\neq b,\qquad b\neq 0,\qquad a+(-1)^{n}b-(a+b)q^{n}\neq 0,\] _for all \(n\in\mathbb{N}\). Let \((P_{n})_{n\geq 0}\) be a sequence of orthogonal polynomials satisfying (6) with_ \[B_{n}=\frac{\omega}{1-q},\qquad C_{n}=(a+b)\left(\frac{a+(-1)^{n}b}{a+b}-q^{n} \right)q^{n}. 
\tag{49}\] _Then \((P_{n})_{n\geq 0}\) is a \(D_{1/q,-\omega/q}\)-semiclassical of class one and its corresponding linear form \(\mathbf{u}\in\mathcal{P}^{\prime}\) satisfies_ \[\mathbf{D}_{1/q,-\omega/q}\left(\left(x-\frac{\omega}{1-q}\right)\mathbf{u}\right)\] \[=\frac{1}{(a+b)(q-1)}\left(\frac{1}{q}\left(x-\frac{\omega}{1-q}\right)^{2}+b- a+(a+b)q\right)\mathbf{u}.\] Proof.: We claim that \((P_{n})_{n\geq 0}\) satisfies (46) with \[c=\frac{\omega}{1-q},\qquad a_{n}=\frac{1+(-1)^{n+1}}{(1-q)(a+b)}\,b,\qquad b_ {n}=[n]_{q}-a_{n},\qquad c_{n}=-cb_{n}.\] Indeed, the proof is by induction on \(n\). For \(n=1\), LHS of (46) gives \[(x-c)D_{q,\omega}P_{1}(x)=x-c,\] while RHS gives \[a_{1}P_{1}(x)+b_{1}(x-c)P_{0}(x)=(a_{1}+b_{1})(x-c)=x-c.\] Assuming (46) to hold, with \(k\) instead of \(n\), for \(k=1,2,\ldots,n\), we will prove it for \(k=n+1\). Apply \((x-c)D_{q,\omega}\) to (6) to obtain \[(x-c)D_{q,\omega}\big{(}(x-c)P_{n}(x)\big{)}=(x-c)D_{q,\omega}\big{(}P_{n+1}( x)+C_{n}P_{n-1}(x)\big{)}.\] Using the identity (see [3, (2.9)]) \[D_{q,\omega}(fg)=g(qx+\omega)D_{q,\omega}f+fD_{q,\omega}g\qquad(f,g\in \mathcal{P}),\] we get \[(x-c)D_{q,\omega}P_{n+1}(x)=q(x-c)^{2}D_{q,\omega}P_{n}(x)+(x-c)P_{n}(x)-(x-c )C_{n}D_{q,\omega}P_{n-1}(x).\] Now using successively (46), with \(k\) instead of \(n\), for \(k=n\) and \(k=n-1\) and (6) it follows that \[(x-c)D_{q,\omega}P_{n+1}(x) =q(x-c)(a_{n}P_{n}(x)+b_{n}(x-c)P_{n-1}(x))\] \[\quad+(x-c)P_{n}(x)-(a_{n-1}P_{n-1}(x)+b_{n-1}(x-c)P_{n-2}(x))C_{n}\] \[=qa_{n}(P_{n+1}(x)+C_{n}P_{n-1}(x))+qb_{n}(x-c)(P_{n}(x)+C_{n-1}P _{n-2}(x))\] \[\quad+(x-c)P_{n}(x)-(a_{n-1}P_{n-1}(x)+b_{n-1}(x-c)P_{n-2}(x))C_{n}\] \[=qa_{n}P_{n+1}(x)+(qa_{n}-a_{n-1})C_{n}P_{n-1}(x)+(1+qb_{n})(x-c)P _{n}(x)\] \[\quad+(qb_{n}C_{n-1}-b_{n-1}C_{n})(x-c)P_{n-2}(x)\] \[=a_{n-1}P_{n+1}(x)+(1-a_{n-1}+q(a_{n}+b_{n}))(x-c)P_{n}(x)\] \[\quad+(qb_{n}C_{n-1}-b_{n-1}C_{n})(x-c)P_{n-2}(x).\] The reader should convince himself that the following relations hold: 
\(a_{n-1}=a_{n+1}\), \(1-a_{n-1}+q(a_{n}+b_{n})=b_{n+1}\), and \(qb_{n}C_{n-1}-b_{n-1}C_{n}=0\). Thus (46) holds for \(n+1\), and our claim follows. Note also that \[b_{2}=q+1,\qquad C_{1}=(a-b-(a+b)q)q,\qquad C_{2}=(a+b)(1-q^{2})q^{2}.\] Under the notation of Theorem 5.2, we get \[\lambda_{\pm}=c\pm(a-b-(a+b)q)^{1/2}q^{1/2},\] and so \[q(\omega+qc-\lambda_{+})(\omega+qc-\lambda_{-})\] \[=(b-a+(a+b)q)q^{2}\neq(-b-a+(a+b)q)q^{2}=-\frac{C_{2}}{b_{2}},\] because \(\omega+q\,c=c\). Thus, from Theorem 5.2, the result follows. If we replace, in Theorem 3.1, the Askey-Wilson operator by the Hahn operator, then \(\mathbf{S}_{q}\) becomes the identity, as in the case of the standard derivative, and Theorem 3.1\(i)\) reduces to \(\mathbf{D}_{q,\omega}(\rho\mathbf{u})=\psi\mathbf{u}\). In this context, an analogue of Theorem 3.1 appears in Smaili's PhD thesis under the supervision of Maroni (see [20, Theorem 1.1]). ## Acknowledgements The authors thank T. H. Koornwinder for several helpful comments. The authors also thank P. Maroni for his comments, and for offering them an original printed copy of [21] during his last visit to Portugal. This work was supported by the Centre for Mathematics of the University of Coimbra-UIDB/00324/2020, funded by the Portuguese Government through FCT/ MCTES. The first author thanks CMUP, University of Porto, for their support and hospitality. The second author also thanks CMUP for their support. CMUP is a member of LASI, which is financed by national funds through FCT, under the projects with reference UIDB/00144/2020 and UIDP/00144/2020.
2306.10018
A unified immersed finite element error analysis for one-dimensional interface problems
It has been noted that the traditional scaling argument cannot be directly applied to the error analysis of immersed finite elements (IFE) because, in general, the spaces on the reference element associated with the IFE spaces on different interface elements via the standard affine mapping are not the same. By analyzing a mapping from the involved Sobolev space to the IFE space, this article is able to extend the scaling argument framework to the error estimation for the approximation capability of a class of IFE spaces in one spatial dimension. As demonstrations of the versatility of this unified error analysis framework, the manuscript applies the proposed scaling argument to obtain optimal IFE error estimates for a typical first-order linear hyperbolic interface problem, a second-order elliptic interface problem, and the fourth-order Euler-Bernoulli beam interface problem, respectively.
Slimane Adjerid, Tao Lin, Haroun Meghaichi
2023-05-26T02:26:12Z
http://arxiv.org/abs/2306.10018v1
# A Unified Immersed Finite Element Error Analysis for One-Dimensional Interface Problems

###### Abstract

It has been noted that the traditional scaling argument cannot be directly applied to the error analysis of immersed finite elements (IFE) because, in general, the spaces on the reference element associated with the IFE spaces on different interface elements via the standard affine mapping are not the same. By analyzing a mapping from the involved Sobolev space to the IFE space, this article is able to extend the scaling argument framework to the error estimation for the approximation capability of a class of IFE spaces in one spatial dimension. As demonstrations of the versatility of this unified error analysis framework, the manuscript applies the proposed scaling argument to obtain optimal IFE error estimates for a typical first-order linear hyperbolic interface problem, a second-order elliptic interface problem, and the fourth-order Euler-Bernoulli beam interface problem, respectively.

**keywords:** Immersed finite elements, error analysis, Bramble-Hilbert lemma, scaling argument.

## 1 Introduction

Partial differential equations with discontinuous coefficients arise in many areas of science and engineering such as heat transfer, acoustics, structural mechanics, and electromagnetism. The discontinuity of the coefficients results in multiple challenges in the design and the analysis of numerical methods, and it is an active area of research in the finite element, finite volume, and finite difference communities. The immersed finite element (IFE) methods can use an interface-independent mesh to solve an interface problem. Many publications concern IFE methods using either linear [21, 22, 23], bilinear [18, 25], or trilinear polynomials [17, 33].
IFE methods have been applied to a variety of problems, such as parabolic interface problems [19, 27, 29], hyperbolic interface problems [6], the acoustic interface problem [7, 32] and Stokes and Navier-Stokes interface problems [3, 13, 20, 34]. IFE methods with higher degree polynomials have also been explored [2, 4, 5, 13, 16, 28]. In particular, Adjerid and Lin [5] constructed IFE spaces of arbitrary degree and analyzed their approximation capabilities. In [7, 32], Adjerid and Moon discussed IFE methods for the following acoustic interface problem \[\left\{\begin{aligned} p_{t}(x,t)&=\rho(x)c(x)^{2}v_{x}(x,t),&x\in(a,\alpha)\cup(\alpha,b),\\ \rho(x)v_{t}(x,t)&=p_{x}(x,t),&x\in(a,\alpha)\cup(\alpha,b),\\ [v]_{x=\alpha}&=[p]_{x=\alpha}=0,\end{aligned}\right. \tag{1}\] where \(\rho,c\) are equal to \(\rho_{+},c_{+}\) on the interval \((\alpha,b)\) and to \(\rho_{-},c_{-}\) on \((a,\alpha)\). Assuming that the exact solution \((v,p)\) has sufficient regularity in \((a,\alpha)\) and \((\alpha,b)\), respectively, we can follow the idea in [31] to show that the exact solution satisfies the following so-called extended jump conditions: \[\frac{\partial^{k}}{\partial x^{k}}p(\alpha^{+},t)=r_{k}^{p}\frac{\partial^{k}}{\partial x^{k}}p(\alpha^{-},t),\qquad\frac{\partial^{k}}{\partial x^{k}}v(\alpha^{+},t)=r_{k}^{v}\frac{\partial^{k}}{\partial x^{k}}v(\alpha^{-},t),\qquad k=0,1,\ldots, \tag{2}\] for certain positive constants \(r_{k}^{p}\) and \(r_{k}^{v}\). In [7, 32], IFE spaces based on polynomials of degree up to 4 were developed with these extended jump conditions, and these IFE spaces were used with a discontinuous Galerkin (DG) method to solve the above acoustic interface problem with pertinent initial and boundary conditions. Numerical examples presented in [7, 32] demonstrated the optimal convergence of this DG IFE method, but we have not seen any error analysis for it in the related literature.
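The constants \(r_{k}^{p},r_{k}^{v}\) are not written out here, but for \(k=1\) they can be read off directly from (1), since the zeroth-order jump conditions hold for every \(t\); the following short computation is our own sketch of this standard argument (the general case in [31] proceeds recursively by differentiating (1) in \(x\) and \(t\)):

```latex
% Since [p]_{x=\alpha}=0 for every t, also [p_t]_{x=\alpha}=0, and the first
% equation of (1), p_t = \rho c^2 v_x, gives
\rho_{+}c_{+}^{2}\,v_{x}(\alpha^{+},t)=\rho_{-}c_{-}^{2}\,v_{x}(\alpha^{-},t)
\quad\Longrightarrow\quad
r_{1}^{v}=\frac{\rho_{-}c_{-}^{2}}{\rho_{+}c_{+}^{2}}.
% Likewise [v_t]_{x=\alpha}=0 together with \rho v_t = p_x gives
\frac{p_{x}(\alpha^{+},t)}{\rho_{+}}=\frac{p_{x}(\alpha^{-},t)}{\rho_{-}}
\quad\Longrightarrow\quad
r_{1}^{p}=\frac{\rho_{+}}{\rho_{-}}.
```

Both ratios are positive, consistent with the statement following (2).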
Extended jump conditions have also been used in the development of higher degree IFE spaces for solving other interface problems [2, 4, 5, 13, 16, 28]. This motivates us to look for a unified framework for the error analysis of methods based on IFE spaces constructed with extended jump conditions such as those in (2). As an initial effort, our focus here is on one-dimensional interface problems. One challenge in error estimation for IFE methods is that the scaling argument commonly used in error estimation for traditional finite element methods cannot be directly applied. In the standard scaling argument, local finite element spaces on elements in a sequence of meshes are mapped to the same finite element space on the reference element via an affine transformation. However, the same affine transformation will map the local IFE spaces on interface elements in a sequence of meshes to different IFE spaces on the reference element because of the variation of the interface location in the reference element; see the illustration in Figure 1. A straightforward application of the scaling argument to the analysis of the approximation capability of an IFE space will result in error bounds of the form \(C(\check{\alpha})h^{r}\), i.e., the constant factor \(C(\check{\alpha})\) in the derived error bounds depends on the location of the interface in the reference element, and this kind of error bound cannot be used to show the convergence of the related IFE method unless one can show that the constant factor \(C(\check{\alpha})\) is bounded for all \(\check{\alpha}\) in the reference element, which, to the best of our knowledge, is difficult to establish. Alternative analysis techniques such as multi-point Taylor expansions are used instead [5], but these become awkward for higher degree IFE spaces, particularly so for higher degree IFE spaces in higher dimensions.
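The phenomenon illustrated in Figure 1 is easy to reproduce; the minimal sketch below (our own illustration, with a uniform mesh on \((0,1)\) and the hypothetical interface \(\alpha=1/3\)) computes the relative position \(\check{\alpha}\) of the interface in its interface element and shows that it oscillates, rather than settles, under refinement.

```python
# Relative interface position in the reference element for uniform meshes
# of (0, 1); a self-contained illustration (interface alpha = 1/3 assumed).
from fractions import Fraction

def reference_interface(alpha, n):
    """Image of alpha in the reference element [0, 1] under the standard
    affine map from the interface element of the uniform n-cell mesh."""
    h = Fraction(1, n)
    k = int(alpha / h)          # interface element is (k*h, (k+1)*h)
    return (alpha - k * h) / h  # check{alpha} = (alpha - x_k) / h

alpha = Fraction(1, 3)
positions = [reference_interface(alpha, n) for n in (2, 4, 8, 16, 32)]
print(positions)  # alternates between 2/3 and 1/3: no limit under refinement
```

Since \(\check{\alpha}\) keeps changing with the mesh, any error constant \(C(\check{\alpha})\) obtained from a naive scaling argument cannot be discarded as mesh-independent.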
To circumvent this predicament of the classical scaling argument, we introduce a mapping between the related Sobolev space and the IFE space by using weighted averages of the derivatives in terms of the coefficients in the jump conditions. We show that the Sobolev norm of the error of this mapping can be bounded by the related Sobolev semi-norm. This essential property enables us to establish a Bramble-Hilbert type lemma for the IFE spaces, and, to the best of our knowledge, this is the first result that makes the scaling argument applicable in the error analysis of a class of IFE methods. To demonstrate the versatility of this unified error analysis framework, we apply it to establish, for the first time, the optimal approximation capability of the IFE space designed for the acoustic interface problem (1). Similarly, we apply this immersed scaling argument to the IFE space designed for an elliptic interface problem considered in [5] as well as the IFE space for the Euler-Bernoulli beam interface problem considered in [24, 26, 35], leading to much simpler and more elegant proofs. The paper is organized as follows. In Section 2, we introduce the notation and spaces used in the rest of the paper. In Section 3, we restrict ourselves to the study of IFE functions on the interval \([0,1]\) and show that they share several properties with polynomials: they have the same maximum number of roots, they admit a Lagrange basis, and they satisfy inverse and trace inequalities. In Section 4, we define the notion of uniformly bounded RIFE operators and show how the scaling argument becomes applicable through an immersed Bramble-Hilbert lemma. In Section 5, we study the convergence of the DG-IFE method for the acoustic interface problem (1). In Section 6, we give shorter and simpler proofs of the optimal convergence of IFE methods for the second-order elliptic interface problem as well as the fourth-order Euler-Bernoulli beam interface problem.
## 2 Preliminaries

Throughout the article, we will consider a bounded open interval \(I=(a,b)\) with \(\left|a\right|,\left|b\right|<\infty\), and let \(\alpha\in I\) be the interface point dividing \(I\) into two open intervals \(I^{-}=(a,\alpha),I^{+}=(\alpha,b)\). This convention extends to any other open interval \(B\subseteq\mathbb{R}\) with \(B^{-}=B\cap(-\infty,\alpha)\) and \(B^{+}=B\cap(\alpha,\infty)\). For every bounded open interval \(B\) not containing \(\alpha\), let \(W^{m,p}(B)\) be the usual Sobolev space on \(B\) equipped with the norm \(\left\|\cdot\right\|_{m,p,B}\) and the seminorm \(\left|\cdot\right|_{m,p,B}\). We are particularly interested in the case of \(p=2\) corresponding to the Hilbert space \(H^{m}(B)=W^{m,2}(B)\), and we will use \(\left\|\cdot\right\|_{m,B}\) and \(\left|\cdot\right|_{m,B}\) to denote \(\left\|\cdot\right\|_{m,2,B},\left|\cdot\right|_{m,2,B}\) respectively for convenience.

Figure 1: The relative position of the interface (on the right) changes as we refine the mesh (on the left).

We will use \((\cdot,\cdot)_{B}\) and \((\cdot,\cdot)_{w,B}\) to denote the classical and the weighted \(L^{2}\) inner product defined as \[(f,g)_{B}=\int_{B}f(x)g(x)\ dx,\qquad(f,g)_{w,B}=\int_{B}w(x)f(x)g(x)\ dx,\qquad w(x)>0,\ \forall\ x\in B.\] Given a positive finite sequence \(r=(r_{i})_{i=0}^{\tilde{m}},\ \tilde{m}\geq 0\) and an open interval \(B\) containing \(\alpha\), we introduce the following piecewise Sobolev space: \[\mathcal{H}_{\alpha,r}^{m+1}(B)=\left\{u\mid u_{|B^{\pm}}\in H^{m+1}(B^{\pm}),u^{(k)}(\alpha^{+})=r_{k}u^{(k)}(\alpha^{-}),\ \forall k=0,1,\ldots,m\right\},\ 0\leq m\leq\tilde{m}.
\tag{3}\] The norms, semi-norms and the inner product that we will use on \(\mathcal{H}_{\alpha,r}^{m+1}(B)\) are \[\left\|\cdot\right\|_{m+1,B}=\sqrt{\left\|\cdot\right\|_{m+1,B^{-}}^{2}+\left\|\cdot\right\|_{m+1,B^{+}}^{2}},\ \ \left|\cdot\right|_{m,B}=\sqrt{\left|\cdot\right|_{m,B^{-}}^{2}+\left|\cdot\right|_{m,B^{+}}^{2}},\ \ \left(f,g\right)_{w,B}=\left(f,g\right)_{w,B^{-}}+\left(f,g\right)_{w,B^{+}}.\] We note, by the Sobolev embedding theory, that \(\mathcal{H}_{\alpha,r}^{m+1}(B)\) is a subspace of \[\mathcal{C}_{\alpha,r}^{m}(\overline{B})=\left\{u\mid u_{|B^{\pm}}\in C^{m}(\overline{B^{\pm}}),u^{(k)}(\alpha^{+})=r_{k}u^{(k)}(\alpha^{-}),\ \forall k=0,1,\ldots,m\right\},\ 0\leq m\leq\tilde{m}. \tag{4}\] By dividing \(I\) into \(N\) sub-intervals, we obtain the following partition of \(I\): \[I_{k}=(x_{k-1},x_{k}),\quad\mathcal{T}_{\!h}=\{I_{k}\}_{k=1}^{N},\quad a=x_{0}<x_{1}<\cdots<x_{N}=b,\qquad h=\max_{1\leq k\leq N}(x_{k}-x_{k-1}).\] We will assume that there is \(k_{0}\in\{1,2,\ldots,N\}\) such that \(x_{k_{0}-1}<\alpha<x_{k_{0}}\), which is equivalent to \(\alpha\in\overset{\circ}{I}_{k_{0}}\). We define the discontinuous immersed finite element space \(\mathcal{W}_{\alpha,r}^{m}(\mathcal{T}_{\!h})\) on the interval \(I\) as \[\mathcal{W}_{\alpha,r}^{m}(\mathcal{T}_{\!h})=\left\{\varphi\mid\varphi_{|I_{k}}\in\mathcal{P}^{m}(I_{k})\text{ for }k\in\{1,\ldots,N\}\backslash\{k_{0}\}\text{ and }\varphi_{|I_{k_{0}}}\in\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})\right\}, \tag{5}\] where \(\mathcal{P}^{m}(I_{k})\) is the space of polynomials of degree at most \(m\) on \(I_{k}\) and \(\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})\) is the local immersed finite element (_LIFE_) space defined as: \[\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})=\left\{\varphi\in\mathcal{C}_{\alpha,r}^{m}(I_{k_{0}})\mid\varphi_{|I_{k_{0}}^{s}}\in\mathcal{P}^{m}\left(I_{k_{0}}^{s}\right),\ s=+,-\right\},\ 0\leq m\leq\tilde{m}.
\tag{6}\] In discussions from now on, given a function \(v\) in \(\mathcal{H}_{\alpha,r}^{m+1}\) (or \(\mathcal{C}_{\alpha,r}^{m}\) or \(\mathcal{V}_{\alpha,r}^{m}\)), its derivative is understood in the piecewise sense unless specified otherwise. By definition, we can readily verify that \(\mathcal{V}_{\alpha,r}^{m-1}(I_{k_{0}})\subset\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})\) for a given finite sequence \(r=(r_{i})_{i=0}^{\tilde{m}},\ \tilde{m}\geq 1\). In order to study the LIFE space \(\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})\), we will investigate the properties of the corresponding reference IFE (_RIFE_) space \(\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) on the reference interval \(\check{I}=[0,1]\) with an interface point \(\check{\alpha}\in(0,1)\). Our goal is to extend the scaling argument to such IFE spaces and use the IFE scaling argument to show that IFE spaces such as \(\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})\) have the optimal approximation capability, i.e., every function in \(\mathcal{H}_{\alpha,r}^{m+1}(I_{k_{0}})\) can be approximated by functions from the IFE space \(\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})\) at the optimal convergence rate. Following the convention in the error analysis literature for finite element methods, we will often use a generic constant \(C\) in estimates whose value varies depending on the context, but this generic constant is independent of \(h\) and the interface \(\alpha\in I_{k_{0}}\) or \(\check{\alpha}\in\check{I}\) unless otherwise declared.

## 3 Properties of the RIFE space \(\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\)

For a given function \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\), we will write \(\check{\varphi}=(\check{\varphi}_{-},\check{\varphi}_{+})\) where \(\check{\varphi}_{s}=\check{\varphi}_{|\check{I}^{s}}\in\mathcal{P}^{m}(\check{I}^{s})\) for \(s=+,-\).
Additionally, we will use \(\check{\varphi}_{s}^{(k)}(\check{\alpha})\) to denote \(\lim_{x\to\check{\alpha}^{s}}\check{\varphi}_{s}^{(k)}(x),\ s=\pm\) for a given integer \(k\geq 0\). For clarity, we will use \(s^{\prime}\) to denote the dual of \(s\), i.e., if \(s=\pm\), then \(s^{\prime}=\mp\). **Lemma 1**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+},\check{\alpha}\in(0,1)\) and \(s\in\{+,-\}\). The following statements hold_ 1. _For every_ \(\check{\varphi}_{s}\in\mathcal{P}^{m}(\check{I}^{s})\) _there is a unique_ \(\check{\varphi}_{s^{\prime}}\in\mathcal{P}^{m}(\check{I}^{s^{\prime}})\) _such that_ \(\check{\varphi}=(\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\)_._ 2. _The dimension of_ \(\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) _is_ \(m+1\)_._ 3. _The set_ \(\{\mathcal{N}^{k}_{\check{\alpha},r}\}_{k=0}^{m}\)_, where_ \[\mathcal{N}^{k}_{\check{\alpha},r}(x)=\left\{\begin{array}{ll}(x-\check{\alpha})^{k},&x\in\check{I}^{-},\\ r_{k}(x-\check{\alpha})^{k},&x\in\check{I}^{+},\end{array}\right.\] (7) _forms a basis of_ \(\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) _and will be referred to as the canonical basis._ Proof.: We will prove the statements in order: 1. Let \(\check{\varphi}_{\pm}\in\mathcal{P}^{m}(\check{I}^{\pm})\), then \((\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) if and only if \[\check{\varphi}^{(k)}_{\mp}(\check{\alpha})=(r_{k})^{\mp 1}\,\check{\varphi}^{(k)}_{\pm}(\check{\alpha}),\qquad k=0,1,\ldots,m,\] which uniquely defines a polynomial \(\check{\varphi}_{\mp}\in\mathcal{P}^{m}(\check{I}^{\mp})\): \[\check{\varphi}_{\mp}(x)=\sum_{k=0}^{m}\frac{(r_{k})^{\mp 1}\,\check{\varphi}^{(k)}_{\pm}(\check{\alpha})}{k!}(x-\check{\alpha})^{k}.\] 2.
We have shown that the map \(\check{\varphi}_{-}\mapsto\check{\varphi}\) is well defined and injective, which implies that the map \(\check{\varphi}\mapsto\check{\varphi}_{-}\) is surjective since every \(\check{\varphi}_{-}\in\mathcal{P}^{m}(\check{I}^{-})\) can be extended to \(\check{\varphi}\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\). Hence, \(\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) is isomorphic to \(\mathcal{P}^{m}(\check{I}^{-})\), implying that the dimension of \(\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) is \(m+1\). 3. We only need to show that \(\{\mathcal{N}^{k}_{\check{\alpha},r}\}_{k=0}^{m}\) is linearly independent: assume that \(\check{\varphi}=\sum_{k=0}^{m}c_{k}\mathcal{N}^{k}_{\check{\alpha},r}\equiv 0\); then \(\check{\varphi}_{-}\equiv 0\), which implies that \(c_{k}=0\) for all \(k=0,1,\ldots,m\). The results in Lemma 1 allow us to introduce an extension operator that maps \(\check{\varphi}_{s}\) to \(\check{\varphi}_{s^{\prime}}\). **Definition 1**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+},\check{\alpha}\in(0,1)\) and \(s\in\{+,-\}\). We define the extension operator \(\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}:\mathcal{P}^{m}(\check{I}^{s})\rightarrow\mathcal{P}^{m}(\check{I}^{s^{\prime}})\) that maps every \(\check{\varphi}_{s}\in\mathcal{P}^{m}(\check{I}^{s})\) to \(\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}(\check{\varphi}_{s})=\check{\varphi}_{s^{\prime}}\) such that \((\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\)._ By Lemma 1, the extension operator \(\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}\) is well-defined and linear. Furthermore, by Lemma 1 again, this extension operator is also invertible. Consequently, the dimension of the RIFE space is the same as the dimension of the traditional polynomial space of the same degree. Next, we will estimate the operator norm of \(\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}\).
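The construction behind Lemma 1 and Definition 1 is concrete enough to check with exact arithmetic before turning to norm estimates. The sketch below (an illustration with hypothetical data \(m\), \(\check{\alpha}\), \(r\), and \(\check{\varphi}_{-}\), not taken from the paper) builds the Taylor extension \(\check{\varphi}_{+}=\mathcal{E}^{m,+}_{\check{\alpha},r}(\check{\varphi}_{-})\) in the monomial basis and verifies the \(m+1\) jump conditions defining \(\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\).

```python
# Exact check of the extension E^{m,+}_{alpha,r} from Definition 1.
# Polynomials are monomial coefficient lists; all data below (m, alpha,
# r, phi_minus) are illustrative choices, not taken from the paper.
from fractions import Fraction
from math import comb, factorial

def deriv(c):
    """Coefficient list of p'(x) given that of p(x)."""
    return [j * c[j] for j in range(1, len(c))] or [Fraction(0)]

def deriv_at(c, k, x0):
    """Value of p^{(k)}(x0)."""
    for _ in range(k):
        c = deriv(c)
    return sum(cj * x0**j for j, cj in enumerate(c))

m, alpha = 3, Fraction(2, 5)
r = [Fraction(1), Fraction(2), Fraction(3), Fraction(5)]           # r_0..r_m
phi_minus = [Fraction(1), Fraction(-1), Fraction(0), Fraction(4)]  # on I^-

# Taylor extension phi_+(x) = sum_k r_k phi_-^{(k)}(alpha) (x-alpha)^k / k!,
# expanded into monomial coefficients via the binomial theorem.
phi_plus = [Fraction(0)] * (m + 1)
for k in range(m + 1):
    a_k = r[k] * deriv_at(phi_minus, k, alpha) / factorial(k)
    for j in range(k + 1):
        phi_plus[j] += a_k * comb(k, j) * (-alpha) ** (k - j)

# (phi_minus, phi_plus) satisfies all m+1 jump conditions of V^m_{alpha,r}:
for k in range(m + 1):
    assert deriv_at(phi_plus, k, alpha) == r[k] * deriv_at(phi_minus, k, alpha)
print("extension verified for m =", m)
```

Because the arithmetic is rational, the jump conditions hold exactly, mirroring the uniqueness argument in the proof of Lemma 1.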
Let \(\check{h}_{-}=\check{\alpha},\ \check{h}_{+}=1-\check{\alpha}\) be the lengths of the sub-intervals \(\check{I}^{\pm}\) formed by \(\check{\alpha}\). First, let us consider the following example \(\check{\varphi}=(\check{\varphi}_{-},\check{\varphi}_{+})=\mathcal{N}^{m}_{\bar {\alpha},r}\) defined by (7), we have \[\left\|\mathcal{E}^{m,s^{\prime}}_{\bar{\alpha},r}(\check{\varphi}_{-})\right\| _{0,I^{+}}=\left\|\check{\varphi}_{+}\right\|_{0,I^{+}}=\left|r_{m}\right| \left(\frac{\check{h}_{+}}{\check{h}_{-}}\right)^{m}\sqrt{\frac{\check{h}_{+}}{ \check{h}_{-}}}\left\|\check{\varphi}_{-}\right\|_{0,I^{-}},\qquad\text{where }\check{h}_{-}= \check{\alpha},\ \check{h}_{+}=1-\check{\alpha}.\] Hence, if \(h_{-}>h_{+}\), we get \(\left\|\check{\varphi}_{+}\right\|_{0,I^{+}}\leq\left|r_{m}\right|\left\|\check {\varphi}_{-}\right\|_{0,I^{-}}\). In the following lemma, we will show that a similar result holds for all \(\check{\varphi}\in\mathcal{V}^{m}_{\bar{\alpha},r}(\check{I})\). Consequently, for every interface position \(\check{\alpha}\in I\), one of the two extension operators \(\mathcal{E}^{m,s^{\prime}}_{\bar{\alpha},r},\ s^{\prime}=-,+\) will be bounded independently of \(\check{\alpha}\). **Lemma 2**.: _There exists a constant \(C>0\) that depends on \(m\) such that for every \((\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}^{m}_{\bar{\alpha},r}( \check{I})\), we have_ \[\left\|\mathcal{E}^{m,s^{\prime}}_{\bar{\alpha},r}\check{\varphi}_{s}\right\|_{ 0,I^{\prime}}=\left\|\check{\varphi}_{s^{\prime}}\right\|_{0,I^{\prime}}\leq C \sqrt{\frac{\check{h}_{s^{\prime}}}{\check{h}_{s}}}\left(\max_{0\leq i\leq m}r^{ \tilde{s}}_{i}\right)\max\left(1,\left(\frac{\check{h}_{s^{\prime}}}{\check{h}_{s} }\right)^{m}\right)\left\|\check{\varphi}_{s}\right\|_{0,I^{*}},\qquad s=+,-, \tag{8}\] _where \(\tilde{s}=\mp 1\) for \(s=\pm\). 
In particular, if \(\check{h}_{s}\geq\check{h}_{s^{\prime}}\), we have_ \[\left\|\check{\varphi}_{s^{\prime}}\right\|_{0,\check{I}^{s^{\prime}}}\leq C\sqrt{\check{h}_{s^{\prime}}}\left(\max_{0\leq i\leq m}r^{\tilde{s}}_{i}\right)\left\|\check{\varphi}_{s}\right\|_{0,\check{I}^{s}}. \tag{9}\] Proof.: First, we note that (9) is a straightforward consequence of (8). Here, we only need to prove (8) for \(s=-\) since the case \(s=+\) can be proven similarly. For every \((\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\), we first define \(\hat{\varphi}_{-}\in\mathcal{P}^{m}([0,1])\) as \(\hat{\varphi}_{-}(\xi)=\check{\varphi}_{-}(\check{h}_{-}\xi)\) which yields \[\hat{\varphi}_{-}^{(i)}(1)=\check{h}_{-}^{i}\check{\varphi}_{-}^{(i)}(\check{\alpha}),\quad i=0,1,\ldots,m. \tag{10}\] Now, let us write \(\check{\varphi}_{+}\) as a finite Taylor sum around \(\check{\alpha}\) and use \(\check{\varphi}_{+}^{(i)}(\check{\alpha})=r_{i}\check{\varphi}_{-}^{(i)}(\check{\alpha})\) to obtain: \[\check{\varphi}_{+}(x)=\sum_{i=0}^{m}\check{\varphi}_{+}^{(i)}(\check{\alpha})\frac{(x-\check{\alpha})^{i}}{i!}=\sum_{i=0}^{m}r_{i}\check{\varphi}_{-}^{(i)}(\check{\alpha})\frac{(x-\check{\alpha})^{i}}{i!}.\] Using (10), we can replace \(\check{\varphi}_{-}^{(i)}(\check{\alpha})\) by \(\check{h}_{-}^{-i}\hat{\varphi}_{-}^{(i)}(1)\): \[\check{\varphi}_{+}(x)=\sum_{i=0}^{m}r_{i}\hat{\varphi}_{-}^{(i)}(1)\check{h}_{-}^{-i}\frac{(x-\check{\alpha})^{i}}{i!}. \tag{11}\] We square and integrate (11), then we apply the change of variables \(z=x-\check{\alpha}\) to get \[\|\check{\varphi}_{+}\|_{0,\check{I}^{+}}^{2}=\int_{\check{\alpha}}^{1}\left(\sum_{i=0}^{m}r_{i}\hat{\varphi}_{-}^{(i)}(1)\check{h}_{-}^{-i}\frac{(x-\check{\alpha})^{i}}{i!}\right)^{2}\ dx=\int_{0}^{\check{h}_{+}}\left(\sum_{i=0}^{m}r_{i}\hat{\varphi}_{-}^{(i)}(1)\left(\frac{z}{\check{h}_{-}}\right)^{i}\frac{1}{i!
}\right)^{2}\ dz.\] We can bound \(r_{i}\) and \(|\hat{\varphi}_{-}^{(i)}(1)|\) by their maximum values for \(0\leq i\leq m\) and we can bound \(\left(\frac{z}{\check{h}_{-}}\right)^{i}\) by \[\left(\frac{z}{\check{h}_{-}}\right)^{i}\leq\max\left(1,\left(\frac{\check{h}_{+}}{\check{h}_{-}}\right)^{m}\right).\] We also have \(\sum_{i=0}^{m}\frac{1}{i!}\leq e\). Using these bounds, we get \[\|\check{\varphi}_{+}\|_{0,\check{I}^{+}}^{2}\leq\left(\max_{0\leq i\leq m}r_{i}\right)^{2}\left(\max_{0\leq i\leq m}|\hat{\varphi}_{-}^{(i)}(1)|\right)^{2}\max\left(1,\left(\frac{\check{h}_{+}}{\check{h}_{-}}\right)^{m}\right)^{2}\int_{0}^{\check{h}_{+}}e^{2}\ dz=\left(\max_{0\leq i\leq m}r_{i}\right)^{2}\left(\max_{0\leq i\leq m}|\hat{\varphi}_{-}^{(i)}(1)|\right)^{2}\max\left(1,\left(\frac{\check{h}_{+}}{\check{h}_{-}}\right)^{m}\right)^{2}\check{h}_{+}e^{2}. \tag{12}\] Since \(\mathcal{P}^{m}([0,1])\) is a finite dimensional space, all norms are equivalent. In particular, there is a constant \(C(m)\) such that \[\left(\max_{0\leq i\leq m}|p^{(i)}(1)|\right)\leq C(m)\left\|p\right\|_{0,[0,1]},\ \forall p\in\mathcal{P}^{m}([0,1]),\] which leads to \[\left(\max_{0\leq i\leq m}|\hat{\varphi}_{-}^{(i)}(1)|\right)^{2}\leq C(m)\left\|\hat{\varphi}_{-}\right\|_{0,[0,1]}^{2}. \tag{13}\] By using a change of variables, we can show that \[\left\|\hat{\varphi}_{-}\right\|_{0,[0,1]}^{2}=\frac{1}{\check{h}_{-}}\left\|\check{\varphi}_{-}\right\|_{0,\check{I}^{-}}^{2}. \tag{14}\] Finally, we combine (12), (13) and (14) to get \[\left\|\mathcal{E}_{\check{\alpha},r}^{m,+}\check{\varphi}_{-}\right\|_{0,\check{I}^{+}}^{2}=\|\check{\varphi}_{+}\|_{0,\check{I}^{+}}^{2}\leq C(m)\left(\max_{0\leq i\leq m}r_{i}\right)^{2}\left(\frac{\check{h}_{+}}{\check{h}_{-}}\right)\max\left(1,\left(\frac{\check{h}_{+}}{\check{h}_{-}}\right)^{m}\right)^{2}\|\check{\varphi}_{-}\|_{0,\check{I}^{-}}^{2}\] which is (8) for \(s=-\).
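The exact norm identity for \(\mathcal{N}^{m}_{\check{\alpha},r}\) stated before Lemma 2 can be confirmed in rational arithmetic. The sketch below (with illustrative values of \(m\), \(\check{\alpha}\), and \(r_{m}\)) checks the squared form of \(\|\check{\varphi}_{+}\|_{0,\check{I}^{+}}=r_{m}\left(\check{h}_{+}/\check{h}_{-}\right)^{m}\sqrt{\check{h}_{+}/\check{h}_{-}}\,\|\check{\varphi}_{-}\|_{0,\check{I}^{-}}\), using \(\int_{0}^{\check{\alpha}}(x-\check{\alpha})^{2m}\,dx=\check{h}_{-}^{2m+1}/(2m+1)\) and its analogue on \((\check{\alpha},1)\).

```python
# Exact check of the norm identity for the canonical basis function
# N^m_{alpha,r}; the values of m, alpha, r_m are illustrative only.
from fractions import Fraction

m, alpha, r_m = 4, Fraction(3, 10), Fraction(7, 2)
h_minus, h_plus = alpha, 1 - alpha

# ||phi_-||^2 = int_0^alpha (x-alpha)^{2m} dx and likewise on (alpha, 1):
sq_minus = h_minus ** (2 * m + 1) / (2 * m + 1)
sq_plus = r_m ** 2 * h_plus ** (2 * m + 1) / (2 * m + 1)

# Squared form of ||phi_+|| = r_m (h_+/h_-)^m sqrt(h_+/h_-) ||phi_-||:
ratio = h_plus / h_minus
assert sq_plus == r_m ** 2 * ratio ** (2 * m) * ratio * sq_minus
print("norm identity holds exactly")
```

This also makes visible why the bounded direction matters: the factor \((\check{h}_{+}/\check{h}_{-})^{m+1/2}\) blows up only when extending from the smaller sub-interval to the larger one.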
Next, we will use the bounds on the extension operator \(\mathcal{E}_{\check{\alpha},r}^{m,s}\) to establish inverse inequalities which are independent of \(\check{\alpha}\) for the RIFE space. **Lemma 3**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\). Then there exists \(C(m,r)>0\) independent of \(\check{\alpha}\) such that for every \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\) we have_ \[\left|\check{\varphi}\right|_{i,\tilde{I}}\leq C(m,r)\left\|\check{\varphi} \right\|_{0,\tilde{I}},\ 0\leq i\leq m+1. \tag{15}\] Proof.: The estimate given in (15) obviously holds for \(i=0\) and \(i=m+1\). Without loss of generality, assume that \(\tilde{h}_{-}\geq\tilde{h}_{+}\), this implies that \(\tilde{h}_{-}\geq\frac{1}{2}\). Then, using the classical inverse inequality [11] we have \[\left\|\check{\varphi}_{-}^{\prime}\right\|_{0,\tilde{I}^{-}}\leq C\tilde{h}_ {-}^{-1}\left\|\check{\varphi}_{-}\right\|_{0,\tilde{I}^{-}}\leq 2C\left\| \check{\varphi}_{-}\right\|_{0,\tilde{I}^{-}}. \tag{16}\] By the Taylor expansion of \(\check{\varphi}_{+}^{\prime}(x)\) at \(x=\check{\alpha}\), we have \[\check{\varphi}_{+}^{\prime}=\mathcal{E}_{\check{\alpha},\tau(r)}^{m-1,+} \left(\frac{\mathrm{d}}{\mathrm{d}x}\check{\varphi}_{-}\right), \tag{17}\] where \(\tau:(r_{0},r_{1},\ldots,r_{m})\mapsto(r_{1},\ldots,r_{m})\) is the shift operator. By (9) and the inverse inequality given in (16), we have \[\left\|\check{\varphi}_{+}^{\prime}\right\|_{0,\tilde{I}^{+}}\leq C(m)\left( \max_{1\leq i\leq m}r_{i}\right)\left\|\check{\varphi}_{-}\right\|_{0,\tilde{ I}^{-}}\leq C(m,r)\left\|\check{\varphi}_{-}\right\|_{0,\tilde{I}^{-}}. \tag{18}\] Therefore, we have \[\left|\check{\varphi}\right|_{1,\tilde{I}}=\left\|\check{\varphi}^{\prime} \right\|_{0,\tilde{I}}\leq C(m,r)\left\|\check{\varphi}_{-}\right\|_{0,\tilde{ I}^{-}}\leq C(m,r)\left\|\check{\varphi}\right\|_{0,\tilde{I}}\] which proves (15) for \(i=1\). 
Applying similar arguments, we can prove (15) for other values of \(i\). Since \(\check{\varphi}_{+}=\mathcal{E}_{\check{\alpha},r}^{m,+}\left(\check{\varphi}_{-}\right)\), the formula in (17) leads to the following identity about the permutation of the classical differential operator and the extension operator: \[\frac{\mathrm{d}}{\mathrm{d}x}\mathcal{E}_{\check{\alpha},r}^{m,+}\left(\check{\varphi}_{-}\right)=\check{\varphi}_{+}^{\prime}=\mathcal{E}_{\check{\alpha},\tau(r)}^{m-1,+}\left(\frac{\mathrm{d}}{\mathrm{d}x}\check{\varphi}_{-}\right),\ \forall m\geq 1. \tag{19}\] As a piecewise function \(\check{\varphi}=(\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\), the value of \(\check{\varphi}\) at \(\check{\alpha}\) is not defined in general since the two-sided limits \(\check{\varphi}_{-}(\check{\alpha})\) and \(\check{\varphi}_{+}(\check{\alpha})\) could be different if \(r_{0}\neq 1\). However, if \(\check{\varphi}_{s}(\check{\alpha})=0\) then \(\check{\varphi}_{s^{\prime}}(\check{\alpha})=0\) for \(s=+,-\). Furthermore, the multiplicity of \(\check{\alpha}\) as a root of \(\check{\varphi}_{-}\) is the same as its multiplicity as a root of \(\check{\varphi}_{+}\). This observation motivates us to define \(\check{\alpha}\) as a root of \(\check{\varphi}\) of multiplicity \(d\) if \(\check{\alpha}\) is a root of \(\check{\varphi}_{-}\) of multiplicity \(d\). The following theorem shows that the number of roots of a non-zero function \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\) counting multiplicities cannot exceed \(m\) (similar to a polynomial of degree \(m\)); this theorem will be crucial to establish the existence of a Lagrange-type basis in \(\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\) and to construct an immersed Radau projection later in Section 5. In the discussions below, we will omit the phrase "counting multiplicities" for the sake of conciseness. 
For example, we say that \((x-2)^{2}\) has two roots in \(\mathbb{R}\). **Theorem 1**.: _For \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), and \(\check{\alpha}\in(0,1)\), every non-zero \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\) has at most \(m\) roots._ Proof.: We start from the base case \(m=0\) and then proceed by induction. Let \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{0}(\tilde{I})\). If \(\check{\varphi}\not\equiv 0\) then \(\check{\varphi}=c\mathcal{N}_{\check{\alpha},r}^{0}\) for some \(c\neq 0\). In this case, \(\check{\varphi}\) has no roots since \(\mathcal{N}_{\check{\alpha},r}^{0}\) has no roots. Now, assume that for every positive sequence \(q\), the number of roots of any non-zero \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},q}^{m-1}(\tilde{I})\) is at most \(m-1\). Next, we will show that for a given positive sequence \(r\), every function \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\) has at most \(m\) roots by contradiction: Assume that \(\check{\varphi}=(\check{\varphi}_{-},\check{\varphi}_{+})\in\mathcal{V}_{\check{\alpha},r}^{m}(\tilde{I})\) is a non-zero function that has \(j\) distinct roots \(\{\xi_{i}\}_{i=1}^{j}\) of multiplicities \(\{d_{i}\}_{i=1}^{j}\) such that \(D=d_{1}+d_{2}+\cdots+d_{j}>m\). Therefore, \(\xi_{j}>\check{\alpha}\) and \(\xi_{1}\leq\check{\alpha}\) because \(\check{\varphi}_{\pm}\in\mathcal{P}^{m}(\tilde{I}^{s})\). Let \(\xi_{i_{0}}\) be the largest root that is not larger than \(\check{\alpha}\), i.e., \[0\leq\xi_{1}<\xi_{2}<\cdots<\xi_{i_{0}}\leq\check{\alpha}<\xi_{i_{0}+1}<\cdots<\xi_{j}\leq 1.\] By the definition of \(\check{\varphi}=(\check{\varphi}_{-},\check{\varphi}_{+})\), \(\check{\varphi}_{-}\) has \(D_{1}=d_{1}+d_{2}+\cdots+d_{i_{0}}\) roots in \([\xi_{1},\xi_{i_{0}}]\) and \(\check{\varphi}_{+}\) has \(D_{2}=d_{i_{0}+1}+d_{i_{0}+2}+\cdots+d_{j}\) roots in \([\xi_{i_{0}+1},\xi_{j}]\). 
Therefore, \(\check{\varphi}_{-}^{\prime}\) has \(D_{1}-1\) roots in \([\xi_{1},\xi_{i_{0}}]\) and \(\check{\varphi}_{+}^{\prime}\) has \(D_{2}-1\) roots in \([\xi_{i_{0}+1},\xi_{j}]\). It remains to show that \(\check{\varphi}^{\prime}\) has an additional root in \((\xi_{i_{0}},\xi_{i_{0}+1})\). To show this, we consider two cases: * \(\xi_{i_{0}}=\check{\alpha}\): In this case, \(\check{\varphi}\) is continuous and \(\check{\varphi}_{+}(\check{\alpha})=\check{\varphi}_{+}(\xi_{i_{0}+1})=0\). By the mean value theorem, we conclude that \(\check{\varphi}_{+}^{\prime}\) has a root in \((\xi_{i_{0}},\xi_{i_{0}+1})\). * \(\xi_{i_{0}}<\check{\alpha}\): Assume that \(\check{\varphi}^{\prime}(x)>0\) for all \(x\in(\xi_{i_{0}},\xi_{i_{0}+1})\backslash\{\check{\alpha}\}\), then \(\check{\varphi}_{-}(\check{\alpha})>0\). Since \(r_{0}>0\), we have \(\check{\varphi}_{+}(\check{\alpha})>0\). By integrating \(\check{\varphi}^{\prime}_{+}\) from \(\check{\alpha}\) to \(\xi_{i_{0}+1}\), we get \(0=\check{\varphi}_{+}(\xi_{i_{0}+1})>0\), a contradiction. A similar conclusion follows if we assume that \(\check{\varphi}^{\prime}(x)<0\) for all \(x\in(\xi_{i_{0}},\xi_{i_{0}+1})\backslash\{\check{\alpha}\}\). Therefore, \(\check{\varphi}^{\prime}\) changes sign at some \(x_{0}\in(\xi_{i_{0}},\xi_{i_{0}+1})\). Since \(\check{\varphi}^{\prime}\) does not change sign at \(\check{\alpha}\), we have \(x_{0}\neq\check{\alpha}\) and \(\check{\varphi}^{\prime}(x_{0})=0\) (because \(\check{\varphi}^{\prime}\) is continuous at \(x_{0}\)). In either case above, \(\check{\varphi}^{\prime}\) has \((D_{1}-1)+(D_{2}-1)+1=D-1>m-1\) roots in \(\check{I}\) which contradicts the induction hypothesis since \(\check{\varphi}^{\prime}\in\mathcal{V}_{\check{\alpha},\tau(r)}^{m-1}(\check{I})\). Therefore, \(\check{\varphi}\) has at most \(m\) roots. 
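As a quick numerical sanity check of Theorem 1, the sketch below samples random RIFE functions (hypothetical jump coefficients and interface location) and verifies that the number of strict sign changes on a fine grid, which is a lower bound on the number of roots, never exceeds \(m\); the helper assumes the right piece is the Taylor extension of the left piece:

```python
import math

import numpy as np

def eval_rife(c, x, alpha, r):
    """RIFE function: left piece p(t) = sum_j c[j] t^j on [0, alpha];
    right piece is its Taylor extension with jump coefficients r."""
    p = np.polynomial.Polynomial(c)
    z = x - alpha
    right = sum(r[k] * (p if k == 0 else p.deriv(k))(alpha) * z**k / math.factorial(k)
                for k in range(len(c)))
    return np.where(x <= alpha, p(x), right)

m, alpha = 3, 0.45
r = [2.0, 0.5, 4.0, 1.5]            # hypothetical positive jump coefficients
xs = np.linspace(0.0, 1.0, 4001)
rng = np.random.default_rng(1)

for _ in range(100):
    vals = eval_rife(rng.standard_normal(m + 1), xs, alpha, r)
    # each strict sign change brackets at least one root (r_0 > 0 preserves
    # the sign across the interface), so Theorem 1 bounds their number by m
    assert np.sum(vals[:-1] * vals[1:] < 0) <= m
```

Since \(r_{0}>0\), a sign change across the interface cell still brackets a root of one of the two polynomial pieces, so the grid count is a valid lower bound.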
The previous theorem allows us to establish the existence of a Lagrange basis on \(\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) for every choice of nodes and for every degree \(m\), which was proved by Moon in [32] only for the specific cases \(m=1,2,3,4\). **Theorem 2**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), and \(\check{\alpha}\in(0,1)\). Assume \(\xi_{0},\xi_{1},\ldots,\xi_{m}\) are \(m+1\) distinct points in \(\check{I}\), then there is a Lagrange basis \(\{L_{i}\}_{i=0}^{m}\) of \(\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) that satisfies_ \[L_{i}(\xi_{j})=\delta_{i,j},\hskip 28.452756pt0\leq i,j\leq m. \tag{20}\] Proof.: For each \(0\leq i\leq m\), we construct \(\tilde{L}_{i}\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) such that \(\tilde{L}_{i}(\xi_{j})=0\) for all \(j\neq i\) by writing \(\tilde{L}_{i}\) as \[\tilde{L}_{i}=\sum_{k=0}^{m}a_{k}\mathcal{N}_{\check{\alpha},r}^{k},\] for some \(\{a_{k}\}_{k=0}^{m}\) chosen such that \[\tilde{L}_{i}(\xi_{j})=\sum_{k=0}^{m}a_{k}\mathcal{N}_{\check{\alpha},r}^{k}(\xi_{j})=0,\quad\forall j\in\{0,1,\ldots,i-1,i+1,\ldots,m\}. \tag{21}\] The equations (21) form a homogeneous system of \(m\) equations with \(m+1\) unknowns. Therefore, it has a non-zero solution. From Theorem 1, we know that \(\tilde{L}_{i}(\xi_{i})\neq 0\); otherwise, \(\tilde{L}_{i}\) would have \(m+1\) roots. This allows us to define \(L_{i}\) as \[L_{i}(x)=\frac{1}{\tilde{L}_{i}(\xi_{i})}\tilde{L}_{i}(x).\] By (20), \(L_{i},0\leq i\leq m\) are linearly independent. Consequently, \(\{L_{i}\}_{i=0}^{m}\) is a basis for \(\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) since its dimension is \(m+1\) from Lemma 1. 
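The construction in the proof can be carried out numerically. In the sketch below (hypothetical jump coefficients, interface location and nodes), an RIFE function is represented by the coefficients of its left piece, the right piece being its Taylor extension; the Lagrange basis is then obtained by inverting the collocation matrix, which Theorem 1 guarantees to be nonsingular:

```python
import math

import numpy as np

def eval_rife(c, x, alpha, r):
    """RIFE function whose left piece on [0, alpha] is p(t) = sum_j c[j] t^j;
    the right piece is the Taylor extension with jump coefficients r."""
    p = np.polynomial.Polynomial(c)
    if x <= alpha:
        return p(x)
    return sum(r[k] * (p if k == 0 else p.deriv(k))(alpha) * (x - alpha)**k
               / math.factorial(k) for k in range(len(c)))

m, alpha = 2, 0.35
r = [2.0, 0.5, 4.0]                    # hypothetical jump coefficients
nodes = [0.1, 0.5, 0.9]                # m+1 distinct nodes in [0, 1]

# Collocation matrix: column j holds the canonical coefficient basis e_j at the nodes
E = np.eye(m + 1)
V = np.array([[eval_rife(E[j], x, alpha, r) for j in range(m + 1)] for x in nodes])

# Columns of V^{-1} are the coefficient vectors of the Lagrange basis L_i
C = np.linalg.inv(V)
for i in range(m + 1):
    for j, x in enumerate(nodes):
        assert np.isclose(eval_rife(C[:, i], x, alpha, r), float(i == j))
```

Evaluation is linear in the coefficient vector, so the delta property \(L_i(\xi_j)=\delta_{i,j}\) follows directly from \(VC=\mathrm{Id}\).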
In addition to having a Lagrange basis, the RIFE space has an orthogonal basis with respect to \((\cdot,\cdot)_{w,I}\) as stated in the following theorem, in which we also show that if a function \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) is orthogonal to \(\mathcal{V}_{\check{\alpha},r}^{m-1}(\check{I})\) with respect to \((\cdot,\cdot)_{w,I}\), then \(\check{\varphi}\) has exactly \(m\) distinct interior roots, just as for the classical orthogonal polynomials. Although the theorem holds for a general weight \(w\), we will restrict our attention to a piecewise constant function \(w\): \[w(x)=\left\{\begin{array}{ll}w_{-},&x\in\check{I}^{-},\\ w_{+},&x\in\check{I}^{+},\end{array}\right. \tag{22}\] where \(w_{\pm}\) are positive constants. The result of this theorem can also be considered as a generalization of the theorem about the orthogonal IFE basis described in [12] for elliptic interface problems. **Theorem 3**.: _Let \(\tilde{m}\geq m\geq 1,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\), and let \(w:\check{I}\to\mathbb{R}_{+}\) be defined as in (22), then there is a non-zero \(\check{\varphi}\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) such that_ \[(\check{\varphi},\check{\psi})_{w,\check{I}}=\int_{I}w(x)\check{\varphi}(x)\check{\psi}(x)\ dx=0,\quad\forall\check{\psi}\in\mathcal{V}_{\check{\alpha},r}^{m-1}(\check{I}). \tag{23}\] _Furthermore, \(\check{\varphi}\) has exactly \(m\) distinct roots in the interior of \(\check{I}\)._ Proof.: Existence is a classical result of linear algebra. The proof of the second claim follows the same steps used for orthogonal polynomials: Note that \(\check{\varphi}\) has at least one root of odd multiplicity in the interior of \(\check{I}\) since \((\check{\varphi},\mathcal{N}^{0}_{\check{\alpha},r})_{w,I}=0\). Assume that \(\check{\varphi}\) has \(j<m\) distinct roots \(\{\xi_{i}\}_{i=1}^{j}\) of odd multiplicity in the interior of \(\check{I}\). 
Following the ideas in the proof of Theorem 2, we can show that there is \(\check{\psi}_{0}\in\mathcal{V}^{j}_{\check{\alpha},r}(\check{I})\) such that \(\check{\psi}_{0}(\xi_{i})=0\) for \(1\leq i\leq j\). Furthermore, all roots of \(\check{\psi}_{0}\) are simple according to Theorem 1 since the sum of multiplicities cannot exceed \(j\), and \(\check{\psi}_{0}\) changes sign at these roots. This means that \(w\check{\varphi}\check{\psi}_{0}\) does not change sign on \(\check{I}\). As a consequence, \((\check{\varphi},\check{\psi}_{0})_{w,I}\neq 0\) which contradicts the assumption \((\check{\varphi},\check{\psi})_{w,I}=0\) for all \(\check{\psi}\in\mathcal{V}^{m-1}_{\check{\alpha},r}(\check{I})\) since \(\mathcal{V}^{j}_{\check{\alpha},r}(\check{I})\subseteq\mathcal{V}^{m-1}_{\check{\alpha},r}(\check{I})\). For every integer \(m\) with \(\tilde{m}\geq m\geq 1\), we use \(\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I}),\ m\geq 1\) to denote the orthogonal complement of \(\mathcal{V}^{m-1}_{\check{\alpha},r}(\check{I})\) in \(\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) with respect to the weight \(w\), that is \[\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I})=\left\{\check{\varphi}\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\mid(\check{\varphi},\check{\psi})_{w,I}=0,\ \forall\check{\psi}\in\mathcal{V}^{m-1}_{\check{\alpha},r}(\check{I})\right\}.\] According to Theorem 3, one can see that \(\check{\varphi}\mapsto\sqrt{\check{\varphi}(0)^{2}+\check{\varphi}(1)^{2}}\) defines a norm on \(\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I})\), which is one-dimensional. Thus, it is equivalent to the \(L^{2}\) norm and the quantity \(\frac{\sqrt{\check{\varphi}(0)^{2}+\check{\varphi}(1)^{2}}}{\left\|\check{\varphi}\right\|_{0,\check{I}}}\) depends only on \(\check{\alpha},w\) and \(r\) (and not on the choice of \(\check{\varphi}\in\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I})\)). 
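Numerically, an element of \(\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I})\) can be computed as a null vector of a truncated Gram matrix. The sketch below (hypothetical jump coefficients and weights; quadrature by a simple Riemann sum; the right piece is assumed to be the Taylor extension of the left piece) does this and observes the \(m\) interior sign changes predicted by Theorem 3:

```python
import math

import numpy as np

def eval_rife(c, x, alpha, r):
    """RIFE function: left piece p(t) = sum_j c[j] t^j on [0, alpha];
    right piece is its Taylor extension (x may be an array)."""
    p = np.polynomial.Polynomial(c)
    z = x - alpha
    right = sum(r[k] * (p if k == 0 else p.deriv(k))(alpha) * z**k
                / math.factorial(k) for k in range(len(c)))
    return np.where(x <= alpha, p(x), right)

m, alpha = 2, 0.4
r = [2.0, 0.5, 4.0]                     # hypothetical jump coefficients
w_minus, w_plus = 1.0, 3.0              # piecewise-constant weight as in (22)

xs = np.linspace(0.0, 1.0, 20001)
dx = xs[1] - xs[0]
w = np.where(xs <= alpha, w_minus, w_plus)

# Values of the canonical basis e_0, ..., e_m and the weighted Gram matrix
B = np.array([eval_rife(np.eye(m + 1)[j], xs, alpha, r) for j in range(m + 1)])
G = (w * B) @ B.T * dx

# An element of Q^m is a null vector of the first m rows of G
c = np.linalg.svd(G[:m, :])[2][-1]
phi = c @ B

# Theorem 3: phi changes sign exactly m times in the interior of the interval
assert np.sum(phi[:-1] * phi[1:] < 0) == m
```

The truncated system has \(m\) equations in \(m+1\) unknowns, so a null vector always exists; Theorem 3 is what guarantees its roots are interior and simple.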
Furthermore, the following lemma shows that the equivalence constant is independent of the interface location. This result will be crucial later in the analysis of Radau projections. **Lemma 4**.: _Let \(\tilde{m}\geq m\geq 1,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\) and \(w\) be defined as in (22) then, there exist \(C(m,w,r)\) and \(\tilde{C}(m,w,r)>0\) independent of \(\check{\alpha}\) such that for every \(\check{\varphi}\in\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I})\), we have_ \[\sqrt{\check{\varphi}(0)^{2}+\check{\varphi}(1)^{2}}\geq C(m,w,r)\left\|\check{\varphi}\right\|_{0,\check{I}}\geq\tilde{C}(m,w,r)\left\|\check{\varphi}\right\|_{1,\check{I}}. \tag{24}\] Proof.: The inequality on the right follows from the inverse inequality (15) for the IFE functions. For a proof of the inequality on the left, see Appendix A.

## 4 An immersed Bramble-Hilbert lemma and the approximation capabilities of the LIFE space

In this section, we will develop a new version of the Bramble-Hilbert lemma that applies to functions in \(\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\) and its IFE counterpart. This lemma will serve as a fundamental tool for investigating the approximation capability of IFE spaces. In the discussions below, we will use \(\mathbbm{1}_{B}\) for the indicator function of a set \(B\subset\mathbb{R}\) and we define \(w_{i}=r_{i}\mathbbm{1}_{I_{-}}+\mathbbm{1}_{I_{+}}\) for \(i=0,1,\ldots,m\). **Theorem 4**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\), and \(v\in\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\). Assume \((w_{i},v^{(i)})_{I}=0\) for \(i=0,1,\ldots,m\). Then, there exists \(C(i,r)>0\) independent of \(\check{\alpha}\) such that_ \[\left\|v\right\|_{i,I}\leq C(i,r)|v|_{i,I},\ i=0,1,\cdots,m+1. \tag{25}\] Proof.: Let \(v\in\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\) and assume \((w_{i},v^{(i)})_{I}=0\) for \(i=0,1,\ldots,m\). 
Because \(w_{0}\) is such that \(w_{0}v\) is continuous and since \(v_{|\check{I}^{\pm}}\in H^{1}(\check{I}^{\pm})\), we have \(w_{0}v\in H^{1}(\check{I})\). Therefore, for any given \(x,y\in\check{I}\), we have \[w_{0}(x)v(x)-w_{0}(y)v(y)=\int_{y}^{x}w_{0}(z)v^{\prime}(z)\ dz.\] We integrate this identity on \(\check{I}\) with respect to \(x\) and use \((w_{0},v^{(0)})_{I}=0\) to get \[-w_{0}(y)v(y)=\int_{0}^{1}\int_{y}^{x}w_{0}(z)v^{\prime}(z)\ dz\ dx,\quad\forall y\in\check{I}.\] Taking the absolute value and applying the Cauchy-Schwarz inequality, we get \[|v(y)|\leq\frac{1}{\min(1,r_{0})}|w_{0}(y)v(y)|\leq\frac{\max(1,r_{0})}{\min(1,r_{0})}|v|_{1,I}=\max\left(r_{0},r_{0}^{-1}\right)|v|_{1,I},\] which implies \(\left\|v\right\|_{0,I}\leq\max\left(r_{0},r_{0}^{-1}\right)\left|v\right|_{1,I}\). Since \(v^{\prime}\in\mathcal{H}^{m}_{\check{\alpha},\tau(r)}(\check{I})\), where \(\tau\) is the shift operator described in (17), we can use the same reasoning to show that \[\left|v\right|_{1,I}\leq\max\left(r_{1},r_{1}^{-1}\right)\left|v\right|_{2,I}.\] Repeating the same arguments, we can obtain \[\left|v\right|_{i,\check{I}}\leq\max\left(r_{i},r_{i}^{-1}\right)\left|v\right|_{i+1,\check{I}},\ \ i=0,1,\cdots,m \tag{26}\] which leads to (25) with \[C(i,r)=\sqrt{1+\sum_{k=0}^{i}\prod_{j=k}^{i}\max\left(r_{j}^{2},r_{j}^{-2}\right)}.\] **Lemma 5**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\) and \(v\in\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\). Then, there is a unique \(\tilde{\pi}^{m}_{\check{\alpha},r}v\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) that satisfies_ \[\int_{I}w_{i}(x)\frac{d^{i}}{dx^{i}}\left(v(x)-\tilde{\pi}^{m}_{\check{\alpha},r}v(x)\right)\ dx=0,\qquad\forall i=0,1,\ldots,m. 
\tag{27}\] Proof.: For \(v\in\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\), to see that \(\tilde{\pi}^{m}_{\check{\alpha},r}v\) exists and is unique, we consider the problem of finding \(\check{\varphi}\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) such that \[\left(w_{i},\check{\varphi}^{(i)}\right)_{\check{I}}=\left(w_{i},v^{(i)} \right)_{\check{I}},\qquad\text{for }i=0,1,\ldots,m. \tag{28}\] By Lemma 1, we can express \(\check{\varphi}\) in terms of the canonical basis \[\check{\varphi}=\sum_{j=0}^{m}c_{j}\mathcal{N}^{j}_{\check{\alpha},r}.\] Then, by (28), the coefficients \(\mathbf{c}\) of \(\check{\varphi}\) are determined by the linear system \(A\mathbf{c}=\mathbf{b}\), where \(A=(A_{i,j})\) is a triangular matrix with \(A_{i,j}=\left(w_{i},(\mathcal{N}^{j}_{\check{\alpha},r})^{(i)}\right)_{\check {I}},\ 0\leq i,j\leq m\) and diagonal entries \[A_{i,i}=\left(w_{i},(\mathcal{N}^{i}_{\check{\alpha},r})^{(i)}\right)_{\check {I}}=i!(h_{-}+r_{i}h_{+})\neq 0,\ 0\leq i\leq m.\] Therefore, \(A\) is invertible and \(\tilde{\pi}^{m}_{\check{\alpha},r}v=\check{\varphi}\) is uniquely determined by (27). We note that the mapping \(\tilde{\pi}^{m}_{\check{\alpha},r}:v\in\mathcal{H}^{m+1}_{\check{\alpha},r}( \check{I})\mapsto\tilde{\pi}^{m}_{\check{\alpha},r}v\in\mathcal{V}^{m}_{ \check{\alpha},r}(\check{I})\) is linear because of the linearity of integration. We now present an immersed version of the Bramble-Hilbert lemma [10] which can be considered a generalization of the one-dimensional Bramble-Hilbert lemma in the sense that if \(r\equiv 1\), then, this immersed Bramble-Hilbert lemma recovers the classical Bramble-Hilbert lemma. **Lemma 6**.: _Let \(\tilde{m}\geq m\geq i\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\). 
Assume \(\check{P}^{m}_{\check{\alpha},r}:\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{ I})\rightarrow\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) is a linear map that satisfies the following two conditions_ 1. \(\check{P}^{m}_{\check{\alpha},r}\) _is a projection on_ \(\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) _in the sense that_ \[\check{P}^{m}_{\check{\alpha},r}\check{\varphi}=\check{\varphi},\quad\forall \check{\varphi}\in\mathcal{V}^{m}_{\check{\alpha},r}(\check{I}).\] (29) 2. _There exists an integer_ \(j\)_,_ \(0\leq j\leq m+1\) _such that_ \(\check{P}^{m}_{\check{\alpha},r}\) _is bounded with respect to the norm_ \(\left\|\cdot\right\|_{j,\check{I}}\) _as follows:_ \[\left\|\check{P}^{m}_{\check{\alpha},r}v\right\|_{i,\check{I}}\leq C\left\|v \right\|_{j,\check{I}},\qquad\forall v\in\mathcal{H}^{m+1}_{\check{\alpha},r}( \check{I}).\] (30) _Then, there exists \(C(m,r)>0\) independent of \(\check{\alpha}\), such that_ \[\left\|v-\check{P}^{m}_{\check{\alpha},r}v\right\|_{i,\check{I}}\leq C(m,r) \left(1+\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}\right) \left|v\right|_{m+1,\check{I}},\quad v\in\mathcal{H}^{m+1}_{\check{\alpha},r}( \check{I}), \tag{31}\] _where_ \[\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}=\sup\left\{ \left\|\check{P}^{m}_{\check{\alpha},r}v\right\|_{i,\check{I}}\mid v\in \mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\text{ and }\left\|v\right\|_{j,\check{I}}=1\right\}.\] Proof.: Since \(\check{P}^{m}_{\bar{\alpha},r}\) is a projection in the sense of (29), we have \(\check{P}^{m}_{\bar{\alpha},r}\check{\pi}^{m}_{\bar{\alpha},r}v=\check{\pi}^{m}_ {\bar{\alpha},r}v\). 
Using the triangle inequality and (30), we obtain \[\left\|v-\check{P}^{m}_{\bar{\alpha},r}v\right\|_{i,I} \leq\left\|v-\check{\pi}^{m}_{\bar{\alpha},r}v\right\|_{i,I}+ \left\|\check{P}^{m}_{\bar{\alpha},r}\left(v-\check{\pi}^{m}_{\bar{\alpha},r}v \right)\right\|_{i,I},\] \[\leq\left(1+\left\|\check{P}^{m}_{\bar{\alpha},r}\right\|_{i,j,I }\right)\left\|v-\check{\pi}^{m}_{\bar{\alpha},r}v\right\|_{j,I}\leq\left(1+ \left\|\check{P}^{m}_{\bar{\alpha},r}\right\|_{i,j,I}\right)\left\|v-\check{ \pi}^{m}_{\bar{\alpha},r}v\right\|_{m+1,I},\qquad\forall v\in\mathcal{H}^{m+1 }_{\bar{\alpha},r}(\check{I}).\] Then, applying Lemma 5 and Theorem 4 to the right hand side of the above estimate leads to (31). Next, we extend the results of Lemma 6 to the physical interface element \(I_{k_{0}}=[x_{k_{0}-1},x_{k_{0}-1}+h]\). Following the tradition in finite element analysis, for every function \(\varphi\) defined on the interface element \(I_{k_{0}}\), we can map it to a function \(\mathcal{M}\varphi=\check{\varphi}\) defined on the reference interval \(\check{I}\) by the standard affine transformation: \[\mathcal{M}\varphi(\xi)=\check{\varphi}(\xi)=\varphi(x_{k_{0}-1}+h\xi),\ \ \ \xi\in\check{I}=[0,1]. \tag{32}\] Furthermore, given a mapping \(P^{m}_{\alpha,r}:\mathcal{H}^{m+1}_{\alpha,r}(I_{k_{0}})\to\mathcal{V}^{m}_{ \alpha,r}(I_{k_{0}})\), we can use this affine transformation to introduce a mapping \(\check{P}^{m}_{\bar{\alpha},r}:\mathcal{H}^{m+1}_{\bar{\alpha},r}(\check{I}) \to\mathcal{V}^{m}_{\bar{\alpha},r}(\check{I})\) such that \[(\check{P}^{m}_{\bar{\alpha},r}\check{v})(\xi)=(P^{m}_{\alpha,r}v)(x_{k_{0}-1 }+h\xi)=(P^{m}_{\alpha,r}v)(x)\ \ \text{with}\ \ \xi\in\check{I}\ \ \text{or}\ \ x=x_{k_{0}-1}+h\xi\in I_{k_{0}}. 
\tag{33}\] It can be verified that the mappings \(\mathcal{M},P^{m}_{\alpha,r}\) and \(\check{P}^{m}_{\check{\alpha},r}\) satisfy the commutative relation \[\mathcal{M}\circ P^{m}_{\alpha,r}=\check{P}^{m}_{\check{\alpha},r}\circ\mathcal{M}. \tag{34}\] We now use the immersed Bramble-Hilbert lemma in the scaling argument to obtain estimates for the projection error \(v-P^{m}_{\alpha,r}v\). **Theorem 5**.: _Let \(\tilde{m}\geq m\geq i\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\). Assume \(P^{m}_{\alpha,r}:\mathcal{H}^{m+1}_{\alpha,r}(I_{k_{0}})\to\mathcal{V}^{m}_{\alpha,r}(I_{k_{0}})\) is a linear operator such that \(\check{P}^{m}_{\check{\alpha},r}\) defined by (33) satisfies the assumption of Lemma 6 for an integer \(j\) with \(0\leq j\leq m+1\). Then, there exists \(C(m,r)>0\) independent of \(\alpha\) such that_ \[|v-P^{m}_{\alpha,r}v|_{i,I_{k_{0}}}\leq Ch^{m+1-i}\left(1+\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}\right)|v|_{m+1,I_{k_{0}}},\ \ \ v\in\mathcal{H}^{m+1}_{\alpha,r}(I_{k_{0}}). \tag{35}\] Proof.: The proof follows the same argument as for the classical case \(r_{k}=1,\ k=0,1,\ldots,m\)[14]. We start by applying the change of variables \(x\mapsto h^{-1}(x-x_{k_{0}-1})\) to obtain \[|v|_{m+1,I_{k_{0}}}=h^{\frac{1}{2}}h^{-m-1}|\check{v}|_{m+1,\check{I}},\qquad|v-P^{m}_{\alpha,r}v|_{i,I_{k_{0}}}=h^{\frac{1}{2}}h^{-i}|\check{v}-\check{P}^{m}_{\check{\alpha},r}\check{v}|_{i,\check{I}}. \tag{36}\] Next, we combine Lemma 6 and (36) to obtain \[|v-P^{m}_{\alpha,r}v|_{i,I_{k_{0}}}=h^{\frac{1}{2}-i}|\check{v}-\check{P}^{m}_{\check{\alpha},r}\check{v}|_{i,\check{I}}\leq h^{\frac{1}{2}-i}C(m,r)\left(1+\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}\right)|\check{v}|_{m+1,\check{I}}=h^{\frac{1}{2}-i}C(m,r)\left(1+\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}\right)h^{m+\frac{1}{2}}|v|_{m+1,I_{k_{0}}}=C(m,r)h^{m+1-i}\left(1+\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}\right)|v|_{m+1,I_{k_{0}}}.\] Nevertheless, the estimate (35) does not directly lead to the convergence of \(P^{m}_{\alpha,r}v\) to \(v\) as \(h\to 0\) unless we can show that \(\left\|\check{P}^{m}_{\check{\alpha},r}\right\|_{i,j,\check{I}}\) is uniformly bounded with respect to \(\check{\alpha}\in\check{I}\), and this can be addressed by the _uniform boundedness_ of \(P^{m}_{\alpha,r}\) defined as follows. **Definition 2**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in\check{I}\), and let \(\{P_{\check{\alpha},r}^{m}\}_{0<\check{\alpha}<1}\) be a collection of projections in the sense of (29) such that \(\check{P}_{\check{\alpha},r}^{m}:\mathcal{H}_{\check{\alpha},r}^{m+1}(\check{I})\rightarrow\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\). 
We call \(\{\check{P}_{\check{\alpha},r}^{m}\}_{0<\check{\alpha}<1}\) a uniformly bounded collection of RIFE projections provided that there exists a constant \(C>0\) independent of \(\check{\alpha}\) and an integer \(j\) with \(0\leq j\leq m+1\) such that_ \[\left\|\check{P}_{\check{\alpha},r}^{m}v\right\|_{0,\check{I}}<C\left\|v \right\|_{j,\check{I}},\ \forall v\in\mathcal{H}_{\check{\alpha},r}^{m+1}(\check{I}),\qquad\forall \check{\alpha}\in(0,1), \tag{37}\] _and the associated collection of maps \(\{P_{\alpha,r}^{m}\}_{\alpha\in\check{I}_{k_{0}}}\) defined in (33) is called a uniformly bounded collection of LIFE projections._ **Lemma 7**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in\check{I}\). Assume \(\{\check{P}_{\check{\alpha},r}^{m}\}_{0<\check{\alpha}<1}\) is a uniformly bounded collection of RIFE projections. Then, there exists a constant \(C\) independent of \(\check{\alpha}\) such that_ \[\left\|\check{P}_{\check{\alpha},r}^{m}v\right\|_{i,\check{I}}\leq C\left\|v \right\|_{m+1,\check{I}},\ \forall v\in\mathcal{H}_{\check{\alpha},r}^{m+1}(\check{I}),\ 0\leq i\leq m+1. \tag{38}\] Proof.: Assume that \(\{\check{P}_{\check{\alpha},r}^{m}\}_{0<\check{\alpha}<1}\) is a uniformly bounded collection of RIFE projections, then, \[\left\|\check{P}_{\check{\alpha},r}^{m}v\right\|_{0,\check{I}}\leq C\left\|v \right\|_{j,\check{I}}\] for an integer \(j\) with \(0\leq j\leq m+1\). By Lemma 3, we further have \[\left\|\check{P}_{\check{\alpha},r}^{m}v\right\|_{i,\check{I}}\leq c(m,r) \left\|\check{P}_{\check{\alpha},r}^{m}v\right\|_{0,\check{I}}\leq c(m,r)C \left\|v\right\|_{j,\check{I}}\leq c(m,r)C\left\|v\right\|_{m+1,\check{I}},\ \ 0\leq i\leq m+1\] which implies the uniform boundedness stated in (38). Now, we can derive an error bound for a collection of uniformly bounded LIFE projections that implies convergence. 
**Theorem 6**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in\check{I}\). Assume that \(\{P_{\alpha,r}^{m}\}_{\alpha\in\check{I}_{k_{0}}}\) is a uniformly bounded collection of LIFE projections. Then, there exists a constant \(C>0\) independent of \(\alpha\) and \(h\) such that_ \[|v-P_{\alpha,r}^{m}v|_{i,I_{k_{0}}}\leq Ch^{m+1-i}|v|_{m+1,I_{k_{0}}},\quad\forall v\in\mathcal{H}_{\alpha,r}^{m+1}(I_{k_{0}}),\quad\forall i=0,1,\ldots,m. \tag{39}\] Proof.: By Lemma 7, we know that \(\{P_{\alpha,r}^{m}\}_{\alpha\in\check{I}_{k_{0}}}\) satisfies (30) for \(i=0,1,2,\ldots,m\) with \(j=m+1\). Consequently, we have \[\left\|\check{P}_{\check{\alpha},r}^{m}\right\|_{i,m+1,\check{I}}=\sup\left\{\left\|\check{P}_{\check{\alpha},r}^{m}v\right\|_{i,\check{I}}\mid v\in\mathcal{H}_{\check{\alpha},r}^{m+1}(\check{I})\text{ and }\left\|v\right\|_{m+1,\check{I}}=1\right\}\leq C.\] Then, applying these to (35) established in Theorem 5 yields (39). The simplest example of a uniformly bounded collection of LIFE projections is the \(L^{2}\) projection \(\check{\Pr}_{\check{\alpha},r}^{m}:\mathcal{H}_{\check{\alpha},r}^{m+1}(\check{I})\rightarrow\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\) defined by \[\left(q,\check{\Pr}_{\check{\alpha},r}^{m}v\right)_{\check{I}}=\left(q,v\right)_{\check{I}},\qquad\forall q\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I}).\] Choosing \(q=\check{\Pr}_{\check{\alpha},r}^{m}v\) and applying the Cauchy-Schwarz inequality, we get \(\left\|\check{\Pr}_{\check{\alpha},r}^{m}v\right\|_{0,\check{I}}\leq\left\|v\right\|_{0,\check{I}}\leq\left\|v\right\|_{m+1,\check{I}}\). By Lemma 7, \(\{\check{\Pr}_{\check{\alpha},r}^{m}\}_{\alpha\in\check{I}_{k_{0}}}\) is a uniformly bounded collection of projections. Consequently, by Theorem 6, we can obtain the following optimal approximation capability of the LIFE space. **Corollary 1**.: _Let \(\tilde{m}\geq m\geq 0,\{r_{k}\}_{k=0}^{\tilde{m}}\subset\mathbb{R}_{+}\), \(\check{\alpha}\in(0,1)\). 
Then, there exists a constant \(C>0\) independent of \(\alpha\) and \(h\) such that_ \[\min_{q\in\mathcal{V}_{\alpha,r}^{m}(I_{k_{0}})}|q-v|_{0,I_{k_{0}}}\leq Ch^{m+1}|v|_{m+1,I_{k_{0}}},\quad\forall v\in\mathcal{H}_{\alpha,r}^{m+1}(I_{k_{0}}). \tag{40}\]

## 5 Analysis of an IFE-DG method for a class of hyperbolic systems with discontinuous coefficients

In this section, we employ the results of Section 3 and Section 4 to analyse an IFE-DG method for the acoustic interface problem (1). To our knowledge, the analysis of such a method for hyperbolic systems has so far not been considered in the literature, unlike the IFE methods for elliptic problems. The main challenge that time-dependent problems present is the plethora of possible jump coefficients. For instance, the jump coefficients \(r_{k}^{p},r_{k}^{u}\) described in (2) for the acoustic interface problem are given by: \[r_{2k}^{p}=r_{2k}^{u}=\left(\frac{c_{-}}{c_{+}}\right)^{2k},\qquad r_{2k+1}^{p}=\frac{\rho_{+}}{\rho_{-}}\left(\frac{c_{-}}{c_{+}}\right)^{2k},\qquad r_{2k+1}^{u}=\frac{\rho_{-}}{\rho_{+}}\left(\frac{c_{-}}{c_{+}}\right)^{2k+2},\quad k=0,1,\ldots \tag{41}\] The nature of the jump coefficients in (41) makes the study of this particular IFE space extremely tedious, as observed in [32]. Fortunately, the theory developed in Section 3 and Section 4 is general and applies to any choice of positive jump coefficients.

### Problem statement and preliminary results

Let \(I=(a,b)\) be a bounded interval containing \(\alpha\), and let \(\rho_{\pm},c_{\pm}\) be positive constants describing the density and the sound speed in \(I^{\pm}\), respectively. Now, we consider the acoustic interface problem on \(I\) \[\mathbf{u}_{t}(x,t)+A(x)\mathbf{u}_{x}(x,t)=0,\qquad x\in I\backslash\{\alpha\},\qquad t>0,\] (42a) where \(\mathbf{u}=(p,u)^{T}\) is the pressure-velocity couple and \[A_{|I^{\pm}}=A_{\pm}=\begin{pmatrix}0&\rho_{\pm}c_{\pm}^{2}\\ \rho_{\pm}^{-1}&0\end{pmatrix}. 
\tag{42b}\] The matrices \(A_{\pm}\) can be decomposed as \(A_{\pm}=P_{\pm}\Lambda_{\pm}P_{\pm}^{-1}\), where \(\Lambda_{\pm}=\text{diag}(-c_{\pm},c_{\pm})\). Using this eigen-decomposition, we define \(A_{\pm}^{+}=P_{\pm}\text{diag}(0,c_{\pm})P_{\pm}^{-1}\), \(A_{\pm}^{-}=P_{\pm}\text{diag}(-c_{\pm},0)P_{\pm}^{-1}\), and \(|A_{\pm}|=P_{\pm}\text{diag}(c_{\pm},c_{\pm})P_{\pm}^{-1}\) to be the positive part, the negative part, and the absolute value of \(A_{\pm}\), respectively. The acoustic interface problem that we are considering here is subject to the following homogeneous inflow boundary conditions \[A_{-}^{+}\mathbf{u}(a,t)=A_{+}^{-}\mathbf{u}(b,t)=0,\qquad t\geq 0, \tag{42c}\] initial conditions \[\mathbf{u}(x,0)=\mathbf{u}_{0}(x),\quad x\in I,\] (42d) and interface condition \[\mathbf{u}(\alpha^{-},t)=\mathbf{u}(\alpha^{+},t),\qquad t\geq 0.\] (42e) In the remainder of this section, let \(S_{\pm}=\text{diag}(\rho_{\pm}^{-1}c_{\pm}^{-2},\rho_{\pm})\) and \(S(x)=S_{\pm}\) if \(x\in I^{\pm}\), then \[S_{\pm}A_{\pm}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}=\tilde{A}. \tag{43}\] Now, we can multiply (42a) by \(S\) and write the acoustic interface problem as \[S(x)\mathbf{u}_{t}(x,t)+\tilde{A}\mathbf{u}_{x}(x,t)=0,\qquad x\in I\backslash\{\alpha\},\qquad t>0. 
\tag{44}\] Lombard and Piraux [30], and Moon [32] have shown that by successively differentiating \(\llbracket\mathbf{u}(\cdot,t)\rrbracket_{\alpha}=0\) in time and using (44), where \(\llbracket\cdot\rrbracket\) is the jump, we obtain \[\llbracket A(\cdot)^{k}\partial_{x}^{k}\mathbf{u}(\cdot,t)\rrbracket_{\alpha}=0\Longleftrightarrow\frac{\partial^{k}}{\partial x^{k}}\mathbf{u}(\alpha^{+},t)=R_{k}\frac{\partial^{k}}{\partial x^{k}}\mathbf{u}(\alpha^{-},t),\quad R_{k}=A_{+}^{-k}A_{-}^{k},\ k=0,1,\ldots,m.\] Since \(R_{k}\) is diagonal (see part (a) of Lemma 8), the condition \(\mathbf{u}^{(k)}(\alpha^{+},t)=R_{k}\mathbf{u}^{(k)}(\alpha^{-},t)\) is equivalent to \[\frac{\partial^{k}}{\partial x^{k}}p(\alpha^{+},t)=r_{k}^{p}\frac{\partial^{k}}{\partial x^{k}}p(\alpha^{-},t),\qquad\frac{\partial^{k}}{\partial x^{k}}u(\alpha^{+},t)=r_{k}^{u}\frac{\partial^{k}}{\partial x^{k}}u(\alpha^{-},t), \tag{45}\] where \(r_{k}^{p}\) and \(r_{k}^{u}\) are defined in (41). These decoupled interface conditions make the results obtained previously about the approximation capabilities of the LIFE space directly applicable to vector functions in the product spaces \[\mathbb{H}_{\alpha,\mathbf{r}}^{m+1}(I_{k_{0}})=\mathcal{H}_{\alpha,r^{p}}^{m+1}(I_{k_{0}})\times\mathcal{H}_{\alpha,r^{u}}^{m+1}(I_{k_{0}}),\quad\mathbb{V}_{\alpha,\mathbf{r}}^{m}(I_{k_{0}})=\mathcal{V}_{\alpha,r^{p}}^{m}(I_{k_{0}})\times\mathcal{V}_{\alpha,r^{u}}^{m}(I_{k_{0}}),\quad\mathbb{W}_{\alpha,\mathbf{r}}^{m}(\mathcal{T}_{h})=W_{\alpha,r^{p}}^{m}(\mathcal{T}_{h})\times W_{\alpha,r^{u}}^{m}(\mathcal{T}_{h}),\] where \(\mathbf{r}=(r^{p},r^{u})\). 
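The structure in (41) can be checked directly. The sketch below (with hypothetical material parameters on the two sides of the interface) forms \(R_{k}=A_{+}^{-k}A_{-}^{k}\) numerically and compares it with the closed-form diagonal entries:

```python
import numpy as np

# Hypothetical material parameters on the two sides of the interface
rho_m, c_m = 1.0, 2.0      # density, sound speed in I^-
rho_p, c_p = 3.0, 0.5      # density, sound speed in I^+

A_m = np.array([[0.0, rho_m * c_m**2], [1.0 / rho_m, 0.0]])
A_p = np.array([[0.0, rho_p * c_p**2], [1.0 / rho_p, 0.0]])

def R(k):
    # Interface matrix R_k = A_+^{-k} A_-^k
    return np.linalg.matrix_power(np.linalg.inv(A_p), k) @ np.linalg.matrix_power(A_m, k)

def r_closed_form(k):
    # Closed-form jump coefficients (r_k^p, r_k^u) from (41)
    even_ratio = (c_m / c_p) ** (2 * (k // 2))
    if k % 2 == 0:
        return even_ratio, even_ratio
    return (rho_p / rho_m) * even_ratio, (rho_m / rho_p) * even_ratio * (c_m / c_p) ** 2

for k in range(6):
    rp, ru = r_closed_form(k)
    assert np.allclose(R(k), np.diag([rp, ru]))   # R_k is diagonal with entries (41)
```

The check relies only on \(A_{\pm}^{2}=c_{\pm}^{2}\mathrm{Id}_{2}\), which is why the even and odd powers follow the two different patterns in (41).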
Now, we define the following bilinear forms \[M(\mathbf{w},\mathbf{v})=\sum_{k=1}^{N}\left(S\mathbf{v},\mathbf{w}\right)_{I_{k}} \tag{46a}\] \[B(\mathbf{w},\mathbf{v})=\sum_{k=1}^{N}\left(\mathbf{v}^{\prime},\tilde{A}\mathbf{w}\right)_{I_{k}}+\sum_{k=0}^{N}\llbracket\mathbf{v}\rrbracket_{x_{k}}^{T}\,S(x_{k})\hat{\mathbf{w}}(x_{k}), \tag{46b}\] where the numerical flux \(\hat{\mathbf{w}}(x_{k})=A(x_{k})^{+}\mathbf{w}(x_{k}^{-})+A(x_{k})^{-}\mathbf{w}(x_{k}^{+})\) at the interior nodes. At the boundary, we have \(\hat{\mathbf{w}}(x_{N})=A(x_{N})^{+}\mathbf{w}(x_{N}^{-})\), \(\hat{\mathbf{w}}(x_{0})=A(x_{0})^{-}\mathbf{w}(x_{0}^{+})\), \(\llbracket\mathbf{w}\rrbracket_{x_{N}}=\mathbf{w}(x_{N}^{-})\) and \(\llbracket\mathbf{w}\rrbracket_{x_{0}}=-\mathbf{w}(x_{0}^{+})\). Now we define the immersed DG formulation as: Find \(\mathbf{u}_{h}\in C^{1}\left([0,T],\mathbb{W}_{\alpha,\mathbf{r}}^{m}(\mathcal{T}_{h})\right)\) such that \[(\mathbf{u}_{h}(\cdot,0),\mathbf{v}_{h})_{I}=(\mathbf{u}_{0},\mathbf{v}_{h})_{I},\qquad\forall\mathbf{v}_{h}\in\mathbb{W}_{\alpha,\mathbf{r}}^{m}(\mathcal{T}_{h}), \tag{47a}\] \[M(\mathbf{u}_{h,t}(\cdot,t),\mathbf{v}_{h})=B(\mathbf{u}_{h}(\cdot,t),\mathbf{v}_{h}),\qquad\forall\mathbf{v}_{h}\in\mathbb{W}_{\alpha,\mathbf{r}}^{m}(\mathcal{T}_{h}), \tag{47b}\] We note that the discrete weak form (47) and the discrete space \(\mathbb{W}_{\alpha,\mathbf{r}}^{m}(\mathcal{T}_{h})\) are identical to the ones described in the IDPGFE formulation in [32]. Next, we will go through some basic properties of the matrices \(S_{\pm}\) and \(A_{\pm}\); these properties will be used later in the proof of the \(L^{2}\) stability in Lemma 9, in the analysis of the immersed Radau projection and in the convergence estimate. **Lemma 8**.: _Let \(A_{\pm}\) be the matrices defined in (42b) and let \(S_{\pm}=\mathrm{diag}(\rho_{\pm}^{-1}c_{\pm}^{-2},\rho_{\pm})\), then_ 1. 
_For any integer_ \(k\geq 0\)_, the matrix_ \(A_{+}^{-k}A_{-}^{k}\) _is diagonal with positive entries._ (b) _Let_ \(s\in\{+,-\}\)_; then there is an invertible matrix_ \(P_{s}\) _such that_ \(A_{s}=P_{s}\mathrm{diag}(-c_{s},c_{s})P_{s}^{-1}\) _and_ \(S_{s}=P_{s}^{-T}P_{s}^{-1}\)_._ (c) _Let_ \(s\in\{+,-\}\)_; then the matrices_ \(S_{s}A_{s}^{+},\ S_{s}A_{s}^{-}\) _and_ \(S_{s}|A_{s}|\) _are symmetric. Furthermore,_ \(S_{s}A_{s}^{+}\) _is positive semi-definite,_ \(S_{s}A_{s}^{-}\) _is negative semi-definite and_ \(S_{s}|A_{s}|\) _is positive definite._ (d) _Let_ \(s,\tilde{s}\in\{+,-\}\)_, and let_ \(\mathbf{w}\in\mathbb{R}^{2}\)_; then_ \[\left(\left\|A_{s}^{\tilde{s}}\mathbf{w}\right\|^{2}+\left|\mathbf{w}^{T}S_{s}A_{s}^{\tilde{s}^{\prime}}\mathbf{w}\right|=0\right)\Longrightarrow\mathbf{w}=0, \tag{48}\] _where_ \(\tilde{s}^{\prime}\) _is the dual of_ \(\tilde{s}\) _defined at the beginning of Section_ 3_, and_ \(\left\|\cdot\right\|\) _is the Euclidean norm._ (e) _Let_ \(s\in\{+,-\}\)_. Then, there is a constant_ \(C(\rho_{s},c_{s})>0\) _such that_ \[\mathbf{w}^{T}S_{s}A_{s}^{+}\mathbf{w}-\mathbf{w}^{T}S_{s}A_{s}^{-}\mathbf{w}\geq C(\rho_{s},c_{s})\left\|\mathbf{w}\right\|^{2},\qquad\forall\mathbf{w}\in\mathbb{R}^{2}. \tag{49}\] _Proof._ (a) We have \(A_{\pm}^{2}=c_{\pm}^{2}\mathrm{Id}_{2}\), where \(\mathrm{Id}_{2}\) is the \(2\times 2\) identity matrix. Therefore, \[A_{\pm}^{2k}=c_{\pm}^{2k}\mathrm{Id}_{2},\qquad A_{\pm}^{2k+1}=c_{\pm}^{2k}A_{\pm},\qquad k=0,1,\ldots \tag{50}\] Using (50), we immediately obtain \(A_{+}^{-2k}A_{-}^{2k}=\left(\frac{c_{-}}{c_{+}}\right)^{2k}\mathrm{Id}_{2}\) and \(A_{+}^{-2k-1}A_{-}^{2k+1}=\left(\frac{c_{-}}{c_{+}}\right)^{2k}A_{+}^{-1}A_{-}\). Finally, by direct computation, we have \(A_{+}^{-1}A_{-}=\mathrm{diag}\left(\frac{\rho_{+}}{\rho_{-}},\frac{\rho_{-}c_{-}^{2}}{\rho_{+}c_{+}^{2}}\right)\). Hence, \(A_{+}^{-k}A_{-}^{k}=\mathrm{diag}(r_{k}^{p},r_{k}^{u})\), where \(r_{k}^{p}\) and \(r_{k}^{u}\) are defined in (41).
(b) Let \[P_{s}=\frac{1}{\sqrt{2\rho_{s}}}\begin{pmatrix}-c_{s}\rho_{s}&c_{s}\rho_{s}\\ 1&1\end{pmatrix}; \tag{51}\] then, by a simple computation, we can show that \(S_{s}=P_{s}^{-T}P_{s}^{-1}\) and \(A_{s}=P_{s}\mathrm{diag}(-c_{s},c_{s})P_{s}^{-1}\). (c) We have \(S_{s}A_{s}^{+}=P_{s}^{-T}\mathrm{diag}(0,c_{s})P_{s}^{-1}\), where \(P_{s}\) is defined in (51). Therefore, \(S_{s}A_{s}^{+}\) is a symmetric positive semi-definite matrix. The other two claims can be proven similarly. (d) We will only consider the case \(\tilde{s}=+\) here; the other case can be proven similarly. Consider a vector \(\mathbf{w}\in\mathbb{R}^{2}\) that satisfies \[\left\|A_{s}^{+}\mathbf{w}\right\|^{2}+\left|\mathbf{w}^{T}S_{s}A_{s}^{-}\mathbf{w}\right|=0. \tag{52}\] Now, let \(\tilde{\mathbf{w}}=P_{s}^{-1}\mathbf{w}\), where \(P_{s}\) is defined in (51); then (52) implies \[\left\|P_{s}\mathrm{diag}(0,c_{s})\tilde{\mathbf{w}}\right\|^{2}+\left\|\mathrm{diag}(-c_{s},0)\tilde{\mathbf{w}}\right\|^{2}=0.\] Since both terms are non-negative, we have \(\mathrm{diag}(-c_{s},0)\tilde{\mathbf{w}}=0\) and \(P_{s}\mathrm{diag}(0,c_{s})\tilde{\mathbf{w}}=0\). \(P_{s}\) being invertible, we get \(\tilde{\mathbf{w}}=0\). Consequently, \(\mathbf{w}=P_{s}\tilde{\mathbf{w}}=0\). (e) We have by direct computation \[\mathbf{w}^{T}S_{s}(A_{s}^{+}-A_{s}^{-})\mathbf{w}=\mathbf{w}^{T}\begin{pmatrix}\rho_{s}^{-1}c_{s}^{-1}&0\\ 0&\rho_{s}c_{s}\end{pmatrix}\mathbf{w}\geq\min(\rho_{s}c_{s},\rho_{s}^{-1}c_{s}^{-1})\left\|\mathbf{w}\right\|^{2}.\] **Lemma 9**.: _Let \(\mathbf{u}\) be a solution to (42), and let \(\epsilon(t)=\frac{1}{2}\left(\mathbf{u}(\cdot,t),S(\cdot)\mathbf{u}(\cdot,t)\right)_{I}\), then_ \[\epsilon^{\prime}(t)\leq 0,\qquad t\geq 0.\] Proof.: By multiplying (44) by \(\mathbf{u}^{T}\) and integrating on \(I^{\pm}\), we obtain \[\int_{I}\mathbf{u}(x,t)^{T}S\mathbf{u}_{t}(x,t)+\mathbf{u}(x,t)^{T}\tilde{A}\mathbf{u}_{x}(x,t)\ dx=0.\] The matrices \(S\) and \(\tilde{A}\) are symmetric.
Therefore, we rewrite the previous equation as \[\epsilon^{\prime}(t)+\frac{1}{2}\sum_{s=+,-}\int_{I^{s}}\frac{\partial}{\partial x}\left(\mathbf{u}(x,t)^{T}\tilde{A}\mathbf{u}(x,t)\right)dx=0.\] Since \(\mathbf{u}\) is continuous at \(\alpha\) (from (42e)), we have \[\epsilon^{\prime}(t)+\frac{1}{2}\left(\mathbf{u}(b,t)^{T}\tilde{A}\mathbf{u}(b,t)-\mathbf{u}(a,t)^{T}\tilde{A}\mathbf{u}(a,t)\right)=0. \tag{53}\] Now, we can rewrite the term \(\mathbf{u}(b,t)^{T}\tilde{A}\mathbf{u}(b,t)\) as \[\mathbf{u}(b,t)^{T}\tilde{A}\mathbf{u}(b,t)=\mathbf{u}(b,t)^{T}S_{+}A_{+}\mathbf{u}(b,t)=\mathbf{u}(b,t)^{T}S_{+}A_{+}^{+}\mathbf{u}(b,t)+\mathbf{u}(b,t)^{T}S_{+}A_{+}^{-}\mathbf{u}(b,t)=\mathbf{u}(b,t)^{T}S_{+}A_{+}^{+}\mathbf{u}(b,t),\] where the last equality follows from the boundary condition (42c). Since \(SA^{+}\) is symmetric positive semi-definite (see part (c) of Lemma 8), we conclude that \(\mathbf{u}(b,t)^{T}\tilde{A}\mathbf{u}(b,t)\geq 0\). Similarly, we have \(\mathbf{u}(a,t)^{T}\tilde{A}\mathbf{u}(a,t)\leq 0\). Therefore, \[\epsilon^{\prime}(t)\leq 0.\] The previous lemma shows that \(\epsilon(t)\), interpreted as the energy of the system, is non-increasing. This is to be expected since the boundary conditions in (42c) are dissipative (see [9]). Furthermore, if we let \(\epsilon_{h}(t)=\frac{1}{2}M(\mathbf{u}_{h}(\cdot,t),\mathbf{u}_{h}(\cdot,t))\) be the discrete energy, then \[\epsilon^{\prime}_{h}(t)=B(\mathbf{u}_{h},\mathbf{u}_{h})=-\frac{1}{2}\sum_{k=0}^{N}\left[\!\left[\mathbf{u}_{h}\right]\!\right]_{x_{k}}^{T}S(x_{k})|A(x_{k})|\left[\!\left[\mathbf{u}_{h}\right]\!\right]_{x_{k}}\leq 0. \tag{54}\] The proof of (54) follows the same steps as the scalar case described in [15].
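The sign structure in part (c) of Lemma 8 is what drives both the continuous and the discrete energy dissipation above, and it can be verified numerically. A quick sanity check (a sketch with placeholder parameter values; \(A\) is taken as the acoustic flux matrix \(\begin{pmatrix}0&\rho c^{2}\\ 1/\rho&0\end{pmatrix}\) and \(S=\mathrm{diag}(\rho^{-1}c^{-2},\rho)\), the weight for which \(SA\) is symmetric):

```python
import numpy as np

rho, c = 2.5, 1.3                               # placeholder material parameters
A = np.array([[0.0, rho * c**2], [1.0 / rho, 0.0]])
S = np.diag([1.0 / (rho * c**2), rho])

# positive/negative parts of A through its eigen-decomposition
lam, P = np.linalg.eig(A)
lam, P = lam.real, P.real                        # eigenvalues are +/- c, hence real
A_plus  = P @ np.diag(np.maximum(lam, 0.0)) @ np.linalg.inv(P)
A_minus = P @ np.diag(np.minimum(lam, 0.0)) @ np.linalg.inv(P)
A_abs   = P @ np.diag(np.abs(lam))          @ np.linalg.inv(P)

for M in (S @ A_plus, S @ A_minus, S @ A_abs):
    assert np.allclose(M, M.T)                   # all three are symmetric
assert np.all(np.linalg.eigvalsh(S @ A_plus)  >= -1e-12)  # positive semi-definite
assert np.all(np.linalg.eigvalsh(S @ A_minus) <=  1e-12)  # negative semi-definite
assert np.all(np.linalg.eigvalsh(S @ A_abs)   >   0)      # positive definite
```

In particular, \(S|A|\) being positive definite is what makes each jump term in (54) dissipative.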
### The immersed Radau projection and the convergence analysis

We denote by \(\mathcal{R}\mathbf{u}\in\mathbb{W}^{m}_{\alpha,\mathbf{r}}(\mathcal{T}_{h})\) the global Gauss-Radau projection defined as \[B(\mathbf{u}-\mathcal{R}\mathbf{u},\mathbf{v}_{h})=0,\qquad\forall\mathbf{v}_{h}\in\mathbb{W}^{m}_{\alpha,\mathbf{r}}(\mathcal{T}_{h}). \tag{55}\] Although \(\mathcal{R}\mathbf{u}\) is a global projection, it can be constructed on each element independently; the construction of \((\mathcal{R}\mathbf{u})_{|I_{k}}\) for \(k\neq k_{0}\) can be found in [1, 36] for the scalar case and generalizes easily to systems. On the interface element, we define the local immersed Radau projection (_IRP_) operator \(\Pi^{m}_{\alpha,\mathbf{r}}:\mathbb{H}^{m+1}_{\alpha,\mathbf{r}}(I_{k_{0}})\rightarrow\mathbb{V}^{m}_{\alpha,\mathbf{r}}(I_{k_{0}})\) using (34) as \[\Pi^{m}_{\alpha,\mathbf{r}}=\mathcal{M}^{-1}\circ\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\circ\mathcal{M},\] where \(\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}:\mathbb{H}^{m+1}_{\check{\alpha},\mathbf{r}}(\check{I})\rightarrow\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\) is called the reference IRP operator and is defined as the solution to the following system: \[A^{-}_{-}\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}(0)=A^{-}_{-}\check{\mathbf{u}}(0), \tag{56a}\] \[A^{+}_{+}\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}(1)=A^{+}_{+}\check{\mathbf{u}}(1), \tag{56b}\] \[\left(\tilde{A}\mathbf{v}^{\prime},\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right)_{\check{I}}=\left(\tilde{A}\mathbf{v}^{\prime},\check{\mathbf{u}}\right)_{\check{I}},\qquad\forall\mathbf{v}\in\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I}). \tag{56c}\] Next, we establish some basic properties of the IRP, proving that it is well defined and uniformly bounded on the RIFE space \(\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\).
From there, we can show that the IRP error on the LIFE space \(\mathbb{V}^{m}_{\alpha,\mathbf{r}}(I_{k_{0}})\) decays at an optimal rate of \(O(h^{m+1})\) under mesh refinement. **Lemma 10**.: _Let \(A\) be the matrix function defined in (42b) and let \(\mathbf{p}\in\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\); then \(A\mathbf{p}^{\prime}\in\mathbb{V}^{m-1}_{\check{\alpha},\mathbf{r}}(\check{I})\). Furthermore, the map_ \[G:\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\rightarrow\mathbb{V}^{m-1}_{\check{\alpha},\mathbf{r}}(\check{I}),\qquad\mathbf{p}\mapsto A\mathbf{p}^{\prime},\] _is surjective._ Proof.: Let \(\mathbf{p}\in\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\) and let \(\tilde{\mathbf{p}}=A\mathbf{p}^{\prime}\); then, for a fixed \(k\in\{0,1,\ldots,m-1\}\), we have \[\tilde{\mathbf{p}}^{(k)}(\check{\alpha}^{+})=A_{+}\mathbf{p}^{(k+1)}(\check{\alpha}^{+})\] (Using \(\tilde{\mathbf{p}}=A\mathbf{p}^{\prime}\)) \[=A_{+}A^{-k-1}_{+}A^{k+1}_{-}A^{-1}_{-}\tilde{\mathbf{p}}^{(k)}(\check{\alpha}^{-})\] (By construction of \(\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\) and using \(\mathbf{p}^{\prime}=A^{-1}\tilde{\mathbf{p}}\)) \[=A^{-k}_{+}A^{k}_{-}\tilde{\mathbf{p}}^{(k)}(\check{\alpha}^{-}). \tag{57}\] Since (57) holds for every \(k=0,1,\ldots,m-1\), we conclude that \(\tilde{\mathbf{p}}\in\mathbb{V}^{m-1}_{\check{\alpha},\mathbf{r}}(\check{I})\). Now, we show that \(G\) is surjective: by the rank-nullity theorem, it suffices to prove that \(\dim\ker(G)=2\), since \(\dim\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})-\dim\mathbb{V}^{m-1}_{\check{\alpha},\mathbf{r}}(\check{I})=2\). Let \(\mathbf{p}\in\ker(G)\); then \(A\mathbf{p}^{\prime}=0\), and since \(A\) is invertible, we get \(\mathbf{p}^{\prime}=0\), which implies that \(\mathbf{p}\in\mathbb{V}^{0}_{\check{\alpha},\mathbf{r}}(\check{I})\). This shows that \(\dim\ker(G)=\dim\mathbb{V}^{0}_{\check{\alpha},\mathbf{r}}(\check{I})=2\).
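The derivative-shifting identity at the heart of Lemma 10 — if \(\mathbf{p}\) satisfies the ratio matrices \(R_{j}=A_{+}^{-j}A_{-}^{j}\) at the interface, then \(\tilde{\mathbf{p}}=A\mathbf{p}^{\prime}\) satisfies \(R_{k}\) for \(k\leq m-1\) — can be checked with a few lines of linear algebra. The sketch below works directly with one-sided derivative values at the interface (assumed acoustic flux matrices, random data):

```python
import numpy as np

rng = np.random.default_rng(0)

def flux_matrix(rho, c):
    # Assumed acoustic flux matrix, consistent with A^2 = c^2 * Id
    return np.array([[0.0, rho * c**2], [1.0 / rho, 0.0]])

A_m, A_p = flux_matrix(1.0, 2.0), flux_matrix(4.0, 0.5)  # placeholder values

def R(k):
    # ratio matrix R_k = A_+^{-k} A_-^k
    return np.linalg.matrix_power(np.linalg.inv(A_p), k) @ np.linalg.matrix_power(A_m, k)

m = 5
a = rng.standard_normal((m + 1, 2))                  # a[j] = p^(j)(alpha^-), free data
b = np.array([R(j) @ a[j] for j in range(m + 1)])    # b[j] = p^(j)(alpha^+) via R_j

# p~ = A p': its k-th one-sided derivatives at the interface are A_- a[k+1], A_+ b[k+1]
for k in range(m):
    left, right = A_m @ a[k + 1], A_p @ b[k + 1]
    assert np.allclose(right, R(k) @ left)           # p~ satisfies the shifted ratios
```

This is exactly the computation (57), carried out numerically instead of symbolically.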
Following the definition of \(G\), we can re-write (56c) as \[(SG(\mathbf{v}),\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}})_{\check{I}}=(SG(\mathbf{v}),\check{\mathbf{u}})_{\check{I}},\qquad\forall\mathbf{v}\in\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I}).\] For convenience, we will write \((SG(\mathbf{v}),\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}})_{\check{I}}=(G(\mathbf{v}),\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}})_{S,\check{I}}\). Now, since \(G\) maps \(\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\) onto \(\mathbb{V}^{m-1}_{\check{\alpha},\mathbf{r}}(\check{I})\), we can express the condition (56c) as \[(\mathbf{v},\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}})_{S,\check{I}}=(\mathbf{v},\check{\mathbf{u}})_{S,\check{I}},\qquad\forall\mathbf{v}\in\mathbb{V}^{m-1}_{\check{\alpha},\mathbf{r}}(\check{I}). \tag{58}\] **Theorem 7**.: _The system (56) admits exactly one solution._ Proof.: First, we prove that the system admits at most one solution; for that, we only need to show that if \(\check{\mathbf{u}}=0\), then \(\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}=0\). For simplicity, let \(\mathbf{q}=(q_{1},q_{2})^{T}\) be the solution to (56) with \(\check{\mathbf{u}}=0\), and let \(r^{(1)}=r^{p}\) and \(r^{(2)}=r^{u}\); then by (58), we have \[(v,q_{i})_{w_{i},\check{I}}=0,\ \forall v\in\mathcal{V}^{m-1}_{\check{\alpha},r^{(i)}}(\check{I}),\ \text{where}\ w_{i}=S_{i,i},\qquad i=1,2, \tag{59}\] which is equivalent to \(q_{i}\in\mathcal{Q}^{m}_{\check{\alpha},w_{i},r^{(i)}}(\check{I})\). On the other hand, integrating by parts and taking \(\mathbf{v}=\mathbf{q}\) in (56c) (with \(\check{\mathbf{u}}=0\)), we have \[\left(\tilde{A}\mathbf{q}^{\prime},\mathbf{q}\right)_{\check{I}}=\frac{1}{2}\left(\mathbf{q}(1)^{T}S_{+}A_{+}\mathbf{q}(1)-\mathbf{q}(0)^{T}S_{-}A_{-}\mathbf{q}(0)\right)=0.
\tag{60}\] From (56a) and (56b), we have \(A_{+}^{+}\mathbf{q}(1)=A_{-}^{-}\mathbf{q}(0)=0\), so that \(A_{+}\mathbf{q}(1)=A_{+}^{-}\mathbf{q}(1)\) and \(A_{-}\mathbf{q}(0)=A_{-}^{+}\mathbf{q}(0)\). Therefore, the equation (60) becomes \[\frac{1}{2}\left(\mathbf{q}(1)^{T}S_{+}A_{+}^{-}\mathbf{q}(1)-\mathbf{q}(0)^{T}S_{-}A_{-}^{+}\mathbf{q}(0)\right)=0.\] Now, by Lemma 8 part (c), the quantities \(\mathbf{q}(1)^{T}S_{+}A_{+}^{-}\mathbf{q}(1)\) and \(-\mathbf{q}(0)^{T}S_{-}A_{-}^{+}\mathbf{q}(0)\) are non-positive, so \[\mathbf{q}(1)^{T}S_{+}A_{+}^{-}\mathbf{q}(1)=\mathbf{q}(0)^{T}S_{-}A_{-}^{+}\mathbf{q}(0)=0.\] Furthermore, by (56a) and (56b), we have \[\left\|A_{+}^{+}\mathbf{q}(1)\right\|^{2}+\left|\mathbf{q}(1)^{T}S_{+}A_{+}^{-}\mathbf{q}(1)\right|=\left\|A_{-}^{-}\mathbf{q}(0)\right\|^{2}+\left|\mathbf{q}(0)^{T}S_{-}A_{-}^{+}\mathbf{q}(0)\right|=0.\] At this point, we use Lemma 8 part (d) to conclude that \(\mathbf{q}(1)=\mathbf{q}(0)=0\). Therefore, \(q_{i}\) are orthogonal IFE functions (as shown in (59)) that vanish on the boundary. By Theorem 3, we conclude that \(q_{i}\equiv 0\) for \(i=1,2\); equivalently, \(\mathbf{q}\equiv 0\). To finalize the proof, we only need to show that (56) can be written as a square system. Let \(A_{\pm}=P_{\pm}\mathrm{diag}(-c_{\pm},c_{\pm})P_{\pm}^{-1}\) be the eigen-decomposition of \(A_{\pm}\) from Lemma 8 part (b).
Then, (56) can be written as \[\left\{\begin{array}{ll}\left(P_{-}^{-1}\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}}(0)\right)_{1}&=\left(P_{-}^{-1}\check{\mathbf{u}}(0)\right)_{1},\\ \left(P_{+}^{-1}\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}}(1)\right)_{2}&=\left(P_{+}^{-1}\check{\mathbf{u}}(1)\right)_{2},\\ \left(\mathcal{N}_{\check{\alpha},r^{p}}^{j},(\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}})_{1}\right)_{S_{11},\check{I}}&=\left(\mathcal{N}_{\check{\alpha},r^{p}}^{j},\check{p}\right)_{S_{11},\check{I}},&1\leq j\leq m-1,\\ \left(\mathcal{N}_{\check{\alpha},r^{u}}^{j},(\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}})_{2}\right)_{S_{22},\check{I}}&=\left(\mathcal{N}_{\check{\alpha},r^{u}}^{j},\check{u}\right)_{S_{22},\check{I}},&1\leq j\leq m-1,\end{array}\right. \tag{61}\] where \(\check{\mathbf{u}}=(\check{p},\check{u})^{T}\), which is a system of \(2(m+1)\) equations with \(2(m+1)\) variables. Since the homogeneous system admits only the trivial solution, we conclude that (56) has exactly one solution. Next, we show that \(\{\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\}_{0<\check{\alpha}<1}\) is uniformly bounded.
First, let \(\mathbf{p}\in\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m-1}(\check{I})\) be the solution to the following symmetric positive definite system \[\left(\mathbf{v},\mathbf{p}\right)_{S,\check{I}}=\left(\mathbf{v},\check{\mathbf{u}}\right)_{S,\check{I}},\;\forall\mathbf{v}\in\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m-1}(\check{I}), \tag{62}\] and let \(\mathbf{q}=\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}}-\mathbf{p}\); then, by (58) and (62), we have \[\left(\tilde{A}\mathbf{v}^{\prime},\mathbf{q}\right)_{\check{I}}=\left(\tilde{A}\mathbf{v}^{\prime},\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}}-\mathbf{p}\right)_{\check{I}}=0,\qquad\forall\mathbf{v}\in\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m}(\check{I}), \tag{63}\] which can be written as \[\left(\mathbf{v},\mathbf{q}\right)_{S,\check{I}}=0,\qquad\forall\mathbf{v}\in\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m-1}(\check{I}).\] Thus, \(\mathbf{q}\in\mathbb{Q}_{\check{\alpha},S,\mathbf{r}}^{m}(\check{I})=\mathcal{Q}_{\check{\alpha},S_{11},r^{p}}^{m}(\check{I})\times\mathcal{Q}_{\check{\alpha},S_{22},r^{u}}^{m}(\check{I})\). Additionally, by (56a) and (56b), we have \[\left\{\begin{array}{ll}A_{-}^{-}\mathbf{q}(0)=A_{-}^{-}\left(\check{\mathbf{u}}(0)-\mathbf{p}(0)\right),\\ A_{+}^{+}\mathbf{q}(1)=A_{+}^{+}\left(\check{\mathbf{u}}(1)-\mathbf{p}(1)\right).\end{array}\right. \tag{64}\] In the next two lemmas, we prove that \(\left\|\mathbf{p}\right\|_{0,\check{I}}\) and \(\left\|\mathbf{q}\right\|_{0,\check{I}}\) are bounded by appropriate norms of \(\check{\mathbf{u}}\) independently of \(\check{\alpha}\). Both lemmas will be used later in Theorem 8 to prove that \(\{\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\}_{0<\check{\alpha}<1}\) is a uniformly bounded collection of RIFE projections.
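Equation (62) is a square symmetric positive definite system, so \(\mathbf{p}\) exists and is unique. The sketch below illustrates this structure on a simplified stand-in (a plain polynomial space with a piecewise-constant weight across the interface; the IFE jump constraints are dropped): the weighted Gram matrix is SPD, and the projection reproduces members of the space.

```python
import numpy as np

# Placeholder data: interface location and piecewise-constant weight mimicking S
alpha, w_left, w_right = 0.4, 0.3, 2.0
deg = 3                                        # project onto P_deg on [0, 1]

def moment(k):
    # exact integral of w(x) * x^k over [0, 1] with the piecewise-constant weight
    return (w_left * alpha**(k + 1) + w_right * (1.0 - alpha**(k + 1))) / (k + 1)

# weighted Gram matrix G_ij = (x^i, x^j)_w — symmetric positive definite
G = np.array([[moment(i + j) for j in range(deg + 1)] for i in range(deg + 1)])
assert np.all(np.linalg.eigvalsh(G) > 0)

# take f in P_deg: the weighted projection must reproduce it exactly
f = np.array([1.0, -2.0, 0.5, 3.0])            # monomial coefficients of f
rhs = G @ f                                    # right-hand sides (x^i, f)_w, exact
p = np.linalg.solve(G, rhs)
assert np.allclose(p, f)
```

In the actual proof the same mechanism operates on \(\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m-1}(\check{I})\) with the matrix weight \(S\).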
**Lemma 11**.: _Let \(\check{\mathbf{u}}\in\mathbb{H}_{\check{\alpha},\mathbf{r}}^{m+1}(\check{I})\) and let \(\mathbf{p}\in\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m-1}(\check{I})\) be defined by (62); then there is \(C(\rho,c)>0\) independent of \(\check{\alpha}\) such that \(\left\|\mathbf{p}\right\|_{0,\check{I}}\leq C(\rho,c)\left\|\check{\mathbf{u}}\right\|_{0,\check{I}}\)._ Proof.: Let \(\mathbf{v}=\mathbf{p}\) in (62); then \(\left(\mathbf{p},\mathbf{p}\right)_{S,\check{I}}=(\mathbf{p},\check{\mathbf{u}})_{S,\check{I}}\leq C_{1}(\rho,c)\left\|\mathbf{p}\right\|_{0,\check{I}}\left\|\check{\mathbf{u}}\right\|_{0,\check{I}}\), where \(C_{1}(\rho,c)=\max(\rho_{-}^{-1}c_{-}^{-2},\rho_{+}^{-1}c_{+}^{-2},\rho_{-},\rho_{+})\). On the other hand, by construction of \(S\), we have \(\left(\mathbf{p},\mathbf{p}\right)_{S,\check{I}}\geq C_{2}(\rho,c)\left\|\mathbf{p}\right\|_{0,\check{I}}^{2}\), where \(C_{2}(\rho,c)=\min(\rho_{-}^{-1}c_{-}^{-2},\rho_{+}^{-1}c_{+}^{-2},\rho_{-},\rho_{+})\). Therefore, \[\left\|\mathbf{p}\right\|_{0,\check{I}}\leq C(\rho,c)\left\|\check{\mathbf{u}}\right\|_{0,\check{I}}.\] **Lemma 12**.: _Let \(\check{\mathbf{u}}\in\mathbb{H}_{\check{\alpha},\mathbf{r}}^{m+1}(\check{I})\) and let \(\mathbf{p}\in\mathbb{V}_{\check{\alpha},\mathbf{r}}^{m-1}(\check{I})\) be defined by (62). Then, there is \(C(\rho,c,m)>0\) independent of \(\check{\alpha}\) such that_ \[\left\|\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}}-\mathbf{p}\right\|_{0,\check{I}}\leq C(\rho,c,m)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}.\] Proof.: Let \(\mathbf{q}=(q_{1},q_{2})^{T}=\check{\Pi}_{\check{\alpha},\mathbf{r}}^{m}\check{\mathbf{u}}-\mathbf{p}\); then, by (63) with \(\mathbf{v}=\mathbf{q}\), we have \((\tilde{A}\mathbf{q}^{\prime},\mathbf{q})_{\check{I}}=0\), which is equivalent to \[\mathbf{q}(1)^{T}\tilde{A}\mathbf{q}(1)-\mathbf{q}(0)^{T}\tilde{A}\mathbf{q}(0)=0. \tag{65}\] By decomposing \(A_{\pm}=A_{\pm}^{+}+A_{\pm}^{-}\) and using the definition of \(\tilde{A}\) in (43), we can split (65) as \[\mathbf{q}(1)^{T}S_{+}A_{+}^{+}\mathbf{q}(1)+\mathbf{q}(1)^{T}S_{+}A_{+}^{-}\mathbf{q}(1)-\mathbf{q}(0)^{T}S_{-}A_{-}^{+}\mathbf{q}(0)-\mathbf{q}(0)^{T}S_{-}A_{-}^{-}\mathbf{q}(0)=0.
\tag{66}\] Now, from (49), we have \[\mathbf{q}(0)^{T}S_{-}A_{-}^{+}\mathbf{q}(0)-\mathbf{q}(0)^{T}S_{-}A_{-}^{-}\mathbf{q}(0)\geq C_{1}(\rho,c)\left\|\mathbf{q}(0)\right\|^{2}, \tag{67a}\] \[\mathbf{q}(1)^{T}S_{+}A_{+}^{+}\mathbf{q}(1)-\mathbf{q}(1)^{T}S_{+}A_{+}^{-}\mathbf{q}(1)\geq C_{1}(\rho,c)\left\|\mathbf{q}(1)\right\|^{2}. \tag{67b}\] Next, we sum (66), (67a) and (67b) to obtain \[\mathbf{q}(1)^{T}S_{+}A_{+}^{+}\mathbf{q}(1)-\mathbf{q}(0)^{T}S_{-}A_{-}^{-}\mathbf{q}(0)\geq\frac{1}{2}C_{1}(\rho,c)\left(\left\|\mathbf{q}(0)\right\|^{2}+\left\|\mathbf{q}(1)\right\|^{2}\right). \tag{68}\] We substitute (64) in (68) to obtain \[\mathbf{q}(1)^{T}S_{+}A_{+}^{+}(\check{\mathbf{u}}(1)-\mathbf{p}(1))-\mathbf{q}(0)^{T}S_{-}A_{-}^{-}(\check{\mathbf{u}}(0)-\mathbf{p}(0))\geq\frac{1}{2}C_{1}(\rho,c)\left(\left\|\mathbf{q}(0)\right\|^{2}+\left\|\mathbf{q}(1)\right\|^{2}\right). \tag{69}\] Now, we will bound the left-hand side from above. First, we have \[\mathbf{q}(1)^{T}S_{+}A_{+}^{+}(\check{\mathbf{u}}(1)-\mathbf{p}(1))-\mathbf{q}(0)^{T}S_{-}A_{-}^{-}(\check{\mathbf{u}}(0)-\mathbf{p}(0))\leq C_{2}(\rho,c)\left(\left\|\mathbf{q}(1)\right\|\left\|\check{\mathbf{u}}(1)-\mathbf{p}(1)\right\|+\left\|\mathbf{q}(0)\right\|\left\|\check{\mathbf{u}}(0)-\mathbf{p}(0)\right\|\right). \tag{70}\] Since \(\check{\mathbf{u}}-\mathbf{p}\in(H^{1}(\check{I}))^{2}\), there is \(C_{3}>0\) such that \[\max(\left\|\check{\mathbf{u}}(0)-\mathbf{p}(0)\right\|,\left\|\check{\mathbf{u}}(1)-\mathbf{p}(1)\right\|)\leq C_{3}\left(\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}+\left\|\mathbf{p}\right\|_{1,\check{I}}\right).
\tag{71}\] By applying the inverse inequality (15) and Lemma 11 to \(\left\|\mathbf{p}\right\|_{1,\check{I}}\), we obtain \[\max(\left\|\check{\mathbf{u}}(0)-\mathbf{p}(0)\right\|,\left\|\check{\mathbf{u}}(1)-\mathbf{p}(1)\right\|)\leq C_{4}(\rho,c,m)\left(\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}+\left\|\mathbf{p}\right\|_{0,\check{I}}\right)\leq C_{5}(\rho,c,m)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}. \tag{72}\] Now, we substitute (72) and (70) back into (69) and use the inequality \(a^{2}+b^{2}\geq\frac{1}{2}(a+b)^{2}\) to obtain \[(\left\|\mathbf{q}(0)\right\|+\left\|\mathbf{q}(1)\right\|)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}\geq C_{6}(\rho,c,m)\left(\left\|\mathbf{q}(0)\right\|+\left\|\mathbf{q}(1)\right\|\right)^{2},\] which yields \[\left\|\mathbf{q}(0)\right\|+\left\|\mathbf{q}(1)\right\|\leq C_{7}(\rho,c,m)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}. \tag{73}\] To finish the proof, we recall that \(\mathbf{q}\in\mathcal{Q}_{\check{\alpha},S_{11},r^{p}}^{m}(\check{I})\times\mathcal{Q}_{\check{\alpha},S_{22},r^{u}}^{m}(\check{I})\). Therefore, by Lemma 4 and some elementary algebraic manipulations, we have \[\left\|\mathbf{q}\right\|_{0,\check{I}}=\sqrt{\left\|q_{1}\right\|_{0,\check{I}}^{2}+\left\|q_{2}\right\|_{0,\check{I}}^{2}}\] (by definition) \[\leq C_{8}(\rho,c,m)\sqrt{q_{1}(0)^{2}+q_{1}(1)^{2}+q_{2}(0)^{2}+q_{2}(1)^{2}}\] (using Lemma 4) \[\leq C_{9}(\rho,c,m)\left(\left\|\mathbf{q}(0)\right\|+\left\|\mathbf{q}(1)\right\|\right)\leq C_{10}(\rho,c,m)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}},\] (using (73)) which is the desired result. By combining Lemma 11 and Lemma 12, we can show that the norm of \(\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\) can be bounded by a norm of \(\check{\mathbf{u}}\) independently of \(\check{\alpha}\), as described in the following theorem.
We note that \(\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\) maps \(\mathbb{H}^{m+1}_{\check{\alpha},\mathbf{r}}(\check{I})\) to \(\mathbb{V}^{m}_{\check{\alpha},\mathbf{r}}(\check{I})\). Nevertheless, we shall call \(\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\) a RIFE projection since the results from Section 4 in the scalar case apply directly to the vector case here. **Theorem 8**.: _Let \(m\geq 1\) and let \(\check{\mathbf{u}}\in\mathbb{H}^{m+1}_{\check{\alpha},\mathbf{r}}(\check{I})\). Then, there is \(C(\rho,c,m)>0\) independent of \(\check{\alpha}\) such that_ \[\left\|\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right\|_{0,\check{I}}\leq C(\rho,c,m)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}}.\] _That is, \(\{\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\}_{0<\check{\alpha}<1}\) is a uniformly bounded collection of RIFE projections._ Proof.: By the definition of \(\mathbf{p}\) and \(\mathbf{q}\), we have \[\left\|\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right\|_{0,\check{I}}\leq\left\|\mathbf{p}\right\|_{0,\check{I}}+\left\|\mathbf{q}\right\|_{0,\check{I}},\] which, using Lemma 11 and Lemma 12, leads to \[\left\|\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right\|_{0,\check{I}}\leq C(\rho,c,m)\left\|\check{\mathbf{u}}\right\|_{1,\check{I}},\] where \(C(\rho,c,m)>0\) is independent of \(\check{\alpha}\).
**Corollary 2**.: _Let \(\mathbf{u}\in\mathbb{H}^{m+1}_{\alpha,\mathbf{r}}(I_{k_{0}})\); then there is \(C(S_{\pm},A_{\pm},m)>0\) independent of \(\alpha\) and \(h\) such that_ \[\left|\mathbf{u}-\Pi^{m}_{\alpha,\mathbf{r}}\mathbf{u}\right|_{i,I_{k_{0}}}\leq C(S_{\pm},A_{\pm},m)h^{m+1-i}|\mathbf{u}|_{m+1,I_{k_{0}}},\qquad 0\leq i\leq m.\] Proof.: Let \(\check{\mathbf{u}}=(\check{p},\check{u})^{T}=\mathcal{M}\mathbf{u}\) and let \(\tilde{\pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}=(\tilde{\pi}^{m}_{\check{\alpha},r^{p}}\check{p},\tilde{\pi}^{m}_{\check{\alpha},r^{u}}\check{u})^{T}\), where \(\tilde{\pi}^{m}_{\check{\alpha},r^{p}}\) and \(\tilde{\pi}^{m}_{\check{\alpha},r^{u}}\) are defined in Lemma 5. Then, by Theorem 4, we have \[|\check{\mathbf{u}}-\tilde{\pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}|_{i,\check{I}}\leq C(\rho,c,m)|\check{\mathbf{u}}|_{m+1,\check{I}},\qquad i=0,1,\ldots,m+1. \tag{74}\] On the other hand, we have \[\left|\mathbf{u}-\Pi^{m}_{\alpha,\mathbf{r}}\mathbf{u}\right|_{i,I_{k_{0}}}=h^{1-i}\left|\check{\mathbf{u}}-\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right|_{i,\check{I}}\] \[\leq C_{1}(\rho,c,m)h^{1-i}\left(\left|\check{\Pi}^{m}_{\check{\alpha},\mathbf{r}}\left(\check{\mathbf{u}}-\tilde{\pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right)\right|_{0,\check{I}}+\left|\check{\mathbf{u}}-\tilde{\pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right|_{i,\check{I}}\right)\] (Using Lemma 3) \[\leq C_{2}(\rho,c,m)h^{1-i}\left(\left\|\check{\mathbf{u}}-\tilde{\pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right\|_{1,\check{I}}+\left|\check{\mathbf{u}}-\tilde{\pi}^{m}_{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right|_{i,\check{I}}\right)\] (Using Theorem 8) \[\leq C_{3}(\rho,c,m)h^{1-i}\left\|\check{\mathbf{u}}-\tilde{\pi}^{m}_
{\check{\alpha},\mathbf{r}}\check{\mathbf{u}}\right\|_{m+1,\check{I}}\] (From (74)) \[\leq C_{4}(\rho,c,m)h^{m+1-i}|\mathbf{u}|_{m+1,I_{k_{0}}}.\] By summing over all elements, we get a similar bound for the global Radau projection \(\mathcal{R}\mathbf{u}\) of a function \(\mathbf{u}\in\mathbb{H}^{m+1}_{\alpha,\mathbf{r}}(I)\): \[\left\|\mathbf{u}-\mathcal{R}\mathbf{u}\right\|_{i,I}\leq C(S_{\pm},A_{\pm},m)h^{m+1-i}|\mathbf{u}|_{m+1,I},\qquad 0\leq i\leq m. \tag{75}\] **Theorem 9**.: _Let \(\mathbf{u}\) be the solution of problem (42) and let \(\mathbf{u}_{h}\in\mathbb{W}^{m}_{\alpha,\mathbf{r}}(\mathcal{T}_{h})\) be the solution of (47). If \(\mathbf{u}\in C([0,T];\mathbb{H}^{m+2}_{\alpha,\mathbf{r}}(I))\), then there is \(C>0\) independent of \(h\) and \(\alpha\) such that_ \[\left\|\mathbf{u}(\cdot,T)-\mathbf{u}_{h}(\cdot,T)\right\|_{0,I}\leq Ch^{m+1}\left(|\mathbf{u}_{0}|_{m+1,I}+|\mathbf{u}(\cdot,T)|_{m+1,I}+T\max_{0\leq t\leq T}|\mathbf{u}(\cdot,t)|_{m+2,I}\right),\qquad T>0.\] Proof.: Our proof follows the usual methodology used for the non-interface problem (see [15]). We first note that \(\mathcal{R}\mathbf{u}_{t}=\frac{d}{dt}\mathcal{R}\mathbf{u}\) and split the error \(\mathbf{e}=\mathbf{u}_{h}-\mathbf{u}\) as \[\mathbf{e}=\mathbf{g}-\mathbf{z},\qquad\mathbf{z}=\mathbf{u}-\mathcal{R}\mathbf{u},\quad\mathbf{g}=\mathbf{u}_{h}-\mathcal{R}\mathbf{u}.\] It follows from the definition of \(\mathcal{R}\) in (55) that \[B(\mathbf{g},\mathbf{g})=B(\mathbf{g}-\mathbf{z},\mathbf{g})=B(\mathbf{u}_{h}-\mathbf{u},\mathbf{g})=B(\mathbf{e},\mathbf{g}).
\tag{76}\] By combining (76), (47) and (54), we get \[\left(S\mathbf{z}_{t}(\cdot,t),\mathbf{g}(\cdot,t)\right)_{I}=\left(S\mathbf{g}_{t}(\cdot,t),\mathbf{g}(\cdot,t)\right)_{I}-\left(S\mathbf{e}_{t}(\cdot,t),\mathbf{g}(\cdot,t)\right)_{I},\] \[=\frac{1}{2}\frac{d}{dt}\left\|\sqrt{S}\mathbf{g}(\cdot,t)\right\|_{0,I}^{2}-B(\mathbf{e}(\cdot,t),\mathbf{g}(\cdot,t)),\] \[=\frac{1}{2}\frac{d}{dt}\left\|\sqrt{S}\mathbf{g}(\cdot,t)\right\|_{0,I}^{2}-B(\mathbf{g}(\cdot,t),\mathbf{g}(\cdot,t)),\] \[=\frac{1}{2}\frac{d}{dt}\left\|\sqrt{S}\mathbf{g}(\cdot,t)\right\|_{0,I}^{2}+\sigma(t), \tag{77}\] where \(\sigma(t)=-B(\mathbf{g}(\cdot,t),\mathbf{g}(\cdot,t))\geq 0\) by (54). Let \(\kappa(t)=\left\|\sqrt{S}\mathbf{g}(\cdot,t)\right\|_{0,I}\); then, by the Cauchy-Schwarz inequality, \[\left(S\mathbf{z}_{t}(\cdot,t),\mathbf{g}(\cdot,t)\right)_{I}\leq\left\|\mathbf{z}_{t}(\cdot,t)\right\|_{0,I}\kappa(t). \tag{78}\] Following the ideas of the proof of Lemma 10, we can show that \(\mathbf{u}_{t}(\cdot,t)=-A\mathbf{u}_{x}(\cdot,t)\in\mathbb{H}_{\alpha,\mathbf{r}}^{m+1}(I)\) since \(\mathbf{u}(\cdot,t)\in\mathbb{H}_{\alpha,\mathbf{r}}^{m+2}(I)\). Therefore, by (75), there is \(C\) independent of \(h\) and \(\alpha\) such that \[\left\|\mathbf{z}_{t}(\cdot,t)\right\|_{0,I}\leq Ch^{m+1}|\mathbf{u}(\cdot,t)|_{m+2,I}. \tag{79}\] Now, we use (79), (78) and integrate (77) on \([0,T]\) to get \[\frac{1}{2}\kappa(T)^{2}-\frac{1}{2}\kappa(0)^{2}+\int_{0}^{T}\sigma(s)\,ds\leq Ch^{m+1}\int_{0}^{T}\kappa(s)|\mathbf{u}(\cdot,s)|_{m+2,I}\ ds. \tag{80}\] Using a generalized version of Gronwall's inequality (see [8, p. 24]), we get the following bound on \(\kappa(T)\): \[\kappa(T)\leq\kappa(0)+Ch^{m+1}\int_{0}^{T}|\mathbf{u}(\cdot,s)|_{m+2,I}\ ds, \tag{81}\] \[\leq\kappa(0)+Ch^{m+1}T\max_{0\leq t\leq T}|\mathbf{u}(\cdot,t)|_{m+2,I}.
\tag{82}\] We also have \[\kappa(0)=\left\|\sqrt{S}\left(\mathbf{u}_{h}(\cdot,0)-\mathcal{R}\mathbf{u}_{0}\right)\right\|_{0,I}\leq\left\|\sqrt{S}\left(\mathbf{u}_{h}(\cdot,0)-\mathbf{u}_{0}\right)\right\|_{0,I}+\left\|\sqrt{S}\left(\mathbf{u}_{0}-\mathcal{R}\mathbf{u}_{0}\right)\right\|_{0,I}\leq Ch^{m+1}|\mathbf{u}_{0}|_{m+1,I}. \tag{83}\] We substitute (83) into (82) to obtain \[\kappa(T)=\left\|\sqrt{S}\mathbf{g}(\cdot,T)\right\|_{0,I}\leq Ch^{m+1}\left(|\mathbf{u}_{0}|_{m+1,I}+T\max_{0\leq t\leq T}|\mathbf{u}(\cdot,t)|_{m+2,I}\right).\] To finalize the proof, we use the triangle inequality \[\left\|\mathbf{e}(\cdot,T)\right\|_{0,I}\leq\left\|\mathbf{z}(\cdot,T)\right\|_{0,I}+\left\|\mathbf{g}(\cdot,T)\right\|_{0,I}\leq Ch^{m+1}\left(|\mathbf{u}_{0}|_{m+1,I}+|\mathbf{u}(\cdot,T)|_{m+1,I}+T\max_{0\leq t\leq T}|\mathbf{u}(\cdot,t)|_{m+2,I}\right).\]

## Novel proofs for results already established in the literature

In this section, to demonstrate the versatility of the immersed scaling argument established in Section 3 and Section 4, we redo the error estimation for two IFE methods in the literature. One of them is the IFE space for an elliptic interface problem [5], and the other is the IFE space for an interface problem of the Euler-Bernoulli beam [24]. We note that the approximation capabilities of these IFE spaces were already analyzed, but with complex and lengthy procedures. Our discussion here demonstrates that similar error bounds for the optimal approximation capability of these different types of IFE spaces can be readily derived by the unified immersed scaling argument.
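In practice, rate statements such as the \(O(h^{m+1})\) bound of Theorem 9 are confirmed by computing observed orders from errors on a sequence of uniformly refined meshes. A generic sketch (the synthetic errors below stand in for computed IFE errors):

```python
import numpy as np

m, C = 2, 3.7                               # placeholder degree and error constant
hs = np.array([2.0**-k for k in range(3, 8)])
errs = C * hs**(m + 1)                      # replace with computed ||u - u_h||_{0,I}

# halving h between consecutive meshes, the observed order is log2 of the error ratio
orders = np.log2(errs[:-1] / errs[1:])
assert np.allclose(orders, m + 1)           # optimal rate m + 1 recovered
```

The same computation applies to the per-seminorm rates \(O(h^{m+1-i})\) established for the projections in this paper.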
### The \(m\)-th degree IFE space for an elliptic interface problem

In this subsection, we consider the \(m\)-th degree IFE space developed in [5] for solving the following interface problem: \[\left\{\begin{aligned} &-\beta(x)u^{\prime\prime}(x)=f(x),\ x\in(a,\alpha)\cup(\alpha,b),\\ & u(a)=u(b)=0,\end{aligned}\right.\qquad\qquad\beta(x)=\left\{\begin{aligned} &\beta^{-}>0,\ \ \ x\in(a,\alpha),\\ &\beta^{+}>0,\ \ \ x\in(\alpha,b),\end{aligned}\right.\qquad[u]_{\alpha}=[\beta u^{\prime}]_{\alpha}=0. \tag{84}\] Assume that \(f\) is in \(C^{m-1}(I)\), which implies that the solution \(u\in\mathcal{H}^{m+1}_{\alpha,r}(I)\) with \[r_{0}=1,\quad\text{ and }\quad r_{i}=\frac{\beta^{-}}{\beta^{+}}\ \text{ for }\ i=1,2,\ldots,m. \tag{85}\] The discussion in Section 2 suggests the following IFE space for this elliptic interface problem: \[Z^{m}_{\alpha,r}(\mathcal{T}_{h})=H^{1}_{0}(I)\cap W^{m}_{\alpha,r}(\mathcal{T}_{h}), \tag{86}\] which coincides with the one developed in [5] based on the extended jump conditions, where it was proved, by an elementary but complicated multi-point Taylor expansion technique, to have the optimal approximation capability with respect to the \(m\)-th degree polynomials employed in this IFE space. We now reanalyze this IFE space by the immersed scaling argument. The continuity of functions in the IFE space suggests considering the following immersed Lobatto projection \(\mathscr{L}^{m}_{\alpha,r}:\mathcal{H}^{m+1}_{\alpha,r}(I_{k_{0}})\to\mathcal{V}^{m}_{\alpha,r}(I_{k_{0}})\) defined by \[\left\{\begin{aligned} &\mathscr{L}^{m}_{\alpha,r}u(x_{k_{0}-1})=u(x_{k_{0}-1}),\\ &\mathscr{L}^{m}_{\alpha,r}u(x_{k_{0}})=u(x_{k_{0}}),\\ &\left(\mathscr{L}^{m}_{\alpha,r}u,v_{h}\right)_{w,I_{k_{0}}}=(u,v_{h})_{w,I_{k_{0}}},\quad\forall v_{h}\in\mathcal{V}^{m-2}_{\alpha,\tau^{2}(r)}(I_{k_{0}}),\end{aligned}\right.\qquad w(x)=\left\{\begin{aligned} & r_{1},\ \ \ x\in I^{-}_{k_{0}},\\ & 1,\ \ \ x\in I^{+}_{k_{0}},\end{aligned}\right.
\tag{87}\] where \(\tau^{2}=\tau\circ\tau\) and \(\tau\) is the shift operator defined in (17). The related reference immersed Lobatto projection \(\mathscr{L}^{m}_{\check{\alpha},r}:\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\to\mathcal{V}^{m}_{\check{\alpha},r}(\check{I})\) is defined by the diagram (34), that is, \(\mathscr{L}^{m}_{\check{\alpha},r}\check{u}=\mathcal{M}\left(\mathscr{L}^{m}_{\alpha,r}u\right)\), where \(\check{u}=\mathcal{M}u\). For simplicity, let \(\tilde{u}=\mathscr{L}^{m}_{\check{\alpha},r}\check{u}\) for a given \(\check{u}\in\mathcal{H}^{m+1}_{\check{\alpha},r}(\check{I})\), and note that the system (87) is a square system of \(m+1\) equations since the last line can be written as \(m-1\) equations. Therefore, we only need to show that if \(\check{u}\equiv 0\), then \(\tilde{u}\equiv 0\) to prove that \(\mathscr{L}^{m}_{\check{\alpha},r}\) is well defined. **Lemma 13**.: _The reference immersed Lobatto projection \(\mathscr{L}^{m}_{\check{\alpha},r}\) is well defined._ Proof.: Let \(\check{u}\equiv 0\); we will show that \(\tilde{u}=\mathscr{L}^{m}_{\check{\alpha},r}\check{u}\equiv 0\). We have \[\tilde{u}(0)=\tilde{u}(1)=0,\qquad(\tilde{u},v_{h})_{\check{w},\check{I}}=0,\ \forall v_{h}\in\mathcal{V}^{m-2}_{\check{\alpha},\tau^{2}(r)}(\check{I}),\] where \(\check{w}=\mathcal{M}w\).
Using (17), \(\tilde{u}^{\prime\prime}\in\mathcal{V}^{m-2}_{\tilde{\alpha},\tau^{2}(r)}(\tilde{I})\), then \[0=\int_{0}^{1}\tilde{w}(x)\tilde{u}(x)\tilde{u}^{\prime\prime}(x)\ dx =r_{1}\int_{0}^{\tilde{\alpha}}\tilde{u}(x)\tilde{u}^{\prime\prime}(x)\ dx+\int_{\tilde{\alpha}}^{1}\tilde{u}(x)\tilde{u}^{\prime\prime}(x)\ dx =\tilde{u}(\tilde{\alpha})\left[r_{1}\tilde{u}^{\prime}(\tilde{\alpha}^{-})-\tilde{u}^{\prime}(\tilde{\alpha}^{+})\right]-\int_{0}^{1}\tilde{w}(x)[\tilde{u}^{\prime}(x)]^{2}\ dx =-\int_{0}^{1}\tilde{w}(x)[\tilde{u}^{\prime}(x)]^{2}\ dx,\] which implies that \(\tilde{u}\) is zero since \(\tilde{u}(0)=\tilde{u}(1)=0\).

Next, we will show that \(\{\mathscr{L}_{\check{\alpha},r}^{m}\}_{0\leq\check{\alpha}<1}\) is a uniformly bounded collection of RIFE projections in the following lemma.

**Lemma 14**.: _There is a constant \(C(\beta^{+},\beta^{-},m)>0\) independent of \(\check{\alpha}\) such that the following estimate holds for every \(\check{u}\in\mathcal{H}_{\check{\alpha},r}^{m+1}(\check{I})\)_ \[\left\|\mathscr{L}_{\check{\alpha},r}^{m}\check{u}\right\|_{0,\check{I}}\leq C(\beta^{+},\beta^{-},m)\left\|\check{u}\right\|_{1,\check{I}}.\]

Proof.: We write \(\mathscr{L}_{\check{\alpha},r}^{m}\check{u}=q_{1}+q_{2}\), where \(q_{1}\in\mathcal{V}_{\check{\alpha},r}^{1}(\check{I})\) is such that \[q_{1}(0)=\check{u}(0),\quad q_{1}(1)=\check{u}(1),\] and \(q_{2}=\mathscr{L}_{\check{\alpha},r}^{m}(\check{u}-q_{1})\in\mathcal{V}_{\check{\alpha},r}^{m}(\check{I})\). The construction of \(q_{1}\) is straightforward (see [21]) and we have \(\left\|q_{1}\right\|_{0,\check{I}}\leq C(\beta^{+},\beta^{-})\left\|\check{u}\right\|_{1,\check{I}}\). Now, the second term \(q_{2}\) satisfies \[q_{2}(0)=q_{2}(1)=0,\qquad(q_{2},v_{h})_{\check{w},\check{I}}=(\check{u}-q_{1},v_{h})_{\check{w},\check{I}}\,,\ \forall v_{h}\in\mathcal{V}_{\check{\alpha},\tau^{2}(r)}^{m-2}(\check{I}),\] where \(\check{w}=\mathcal{M}w\). Following the proof of Lemma 13, we can choose \(v_{h}=q_{2}^{\prime\prime}\) and integrate by parts to get \[-\left\|\sqrt{\check{w}}\,q_{2}^{\prime}\right\|_{0,\check{I}}^{2}=(\check{u}-q_{1},q_{2}^{\prime\prime})_{\check{w},\check{I}}\,.\] We take the absolute value of each side and apply the Cauchy-Schwarz inequality: \[\left\|q_{2}^{\prime}\right\|_{0,\check{I}}^{2}\leq C(\beta^{+},\beta^{-},m)\left\|\check{u}-q_{1}\right\|_{0,\check{I}}\left\|q_{2}^{\prime\prime}\right\|_{0,\check{I}}.\] The inverse inequality in Lemma 3 implies that \(\left\|q_{2}^{\prime\prime}\right\|_{0,\check{I}}\leq C(\beta^{+},\beta^{-},m)\left\|q_{2}^{\prime}\right\|_{0,\check{I}}\). Hence, \[\left\|q_{2}^{\prime}\right\|_{0,\check{I}}\leq C(\beta^{+},\beta^{-},m)\left\|\check{u}-q_{1}\right\|_{0,\check{I}}\leq C(\beta^{+},\beta^{-},m)\left\|\check{u}\right\|_{1,\check{I}}. \tag{88}\] Since \(q_{2}(0)=q_{2}(1)=0\), we can apply Poincare's inequality to obtain \(\left\|q_{2}\right\|_{0,\check{I}}\leq C\left\|q_{2}^{\prime}\right\|_{0,\check{I}}\). Finally, we have \[\left\|\mathscr{L}_{\check{\alpha},r}^{m}\check{u}\right\|_{0,\check{I}}\leq\left\|q_{1}\right\|_{0,\check{I}}+\left\|q_{2}\right\|_{0,\check{I}}\leq C(m,\beta^{+},\beta^{-})\left\|\check{u}\right\|_{1,\check{I}}.\]

Then, we can use Theorem 6 to derive an error bound for the Lobatto projection \(\mathscr{L}_{\alpha,r}^{m}u\) in the following theorem, which confirms the optimal approximation capability of the IFE space established in [5] by a more complex analysis.

**Theorem 10**.: _There is \(C(\beta^{+},\beta^{-},m)>0\) such that the following estimate holds for every \(u\in\mathcal{H}_{\alpha,r}^{m+1}(I_{k_{0}})\)_ \[|u-\mathscr{L}_{\alpha,r}^{m}u|_{i,I_{k_{0}}}\leq C(\beta^{+},\beta^{-},m)h^{m+1-i}|u|_{m+1,I_{k_{0}}},\quad\forall i=0,1,\ldots,m.\]

Proof.: This follows immediately from Lemma 14 and Theorem 6. 
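To illustrate the estimate of Theorem 10 concretely, the following sketch interpolates, with the lowest-degree (\(m=1\)) IFE functions, a piecewise cubic that satisfies the jump conditions \([u]_{\alpha}=[\beta u^{\prime}]_{\alpha}=0\) of (84), and measures the error on a sequence of meshes. The coefficients \(\beta^{-}=1\), \(\beta^{+}=5\), the interface \(\alpha=0.37\), and the particular exact solution are illustrative choices, not data from [5]. On the interface element, the two linear pieces of the interpolant are coupled by \(\beta^{-}s^{-}=\beta^{+}s^{+}\), which is exactly the condition encoded by \(r_{1}=\beta^{-}/\beta^{+}\).

```python
import numpy as np

bm, bp, alpha = 1.0, 5.0, 0.37   # illustrative beta^-, beta^+ and interface

def u_exact(x):
    # Piecewise cubic with [u]_alpha = [beta u']_alpha = 0
    # (beta u' = 3x^2 on both sides of the interface).
    shift = alpha ** 3 * (1.0 / bm - 1.0 / bp)
    return x ** 3 / bm if x < alpha else x ** 3 / bp + shift

def ife_interp(x, nodes):
    """Piecewise-linear IFE interpolant of u_exact evaluated at x."""
    k = min(int(np.searchsorted(nodes, x, side='right')) - 1, len(nodes) - 2)
    xl, xr = nodes[k], nodes[k + 1]
    ul, ur = u_exact(xl), u_exact(xr)
    if xl < alpha < xr:                          # interface element
        sm = (ur - ul) / ((alpha - xl) + (bm / bp) * (xr - alpha))
        if x <= alpha:
            return ul + sm * (x - xl)
        return ul + sm * (alpha - xl) + (bm / bp) * sm * (x - alpha)
    return ul + (ur - ul) * (x - xl) / (xr - xl)  # ordinary element

xs = np.linspace(0.0, 1.0, 4001)
errs = []
for n in (8, 16, 32, 64):
    nodes = np.linspace(0.0, 1.0, n + 1)
    errs.append(max(abs(u_exact(x) - ife_interp(x, nodes)) for x in xs))

order = np.log2(errs[0] / errs[-1]) / 3.0   # average observed order
print(errs, order)
```

The observed average order approaches \(m+1=2\), matching the \(h^{m+1-i}\) rate (with \(i=0\)) of Theorem 10 even though the interface cuts through an element at every refinement level.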
### Euler-Bernoulli beam interface problem

In this subsection, we apply the immersed scaling argument to reanalyze the cubic IFE space developed in [26] and [35] for solving the following interface problem of the Euler-Bernoulli beam equation: \[\left\{\begin{aligned} &\beta(x)u^{(4)}(x)=f(x),\ x\in(a,\alpha)\cup(\alpha,b)\\ & u(a)=u(b)=0,\\ & u^{\prime}(a)=u^{\prime}(b)=0\end{aligned}\right.\qquad\qquad\qquad\beta(x)=\left\{\begin{aligned} &\beta^{-}>0,& x\in(a,\alpha),\\ &\beta^{+}>0,& x\in(\alpha,b),\end{aligned}\right. \tag{89}\] where the solution \(u\) satisfies the following jump conditions at \(\alpha\) \[[u]_{\alpha}=[u^{\prime}]_{\alpha}=[\beta u^{\prime\prime}]_{\alpha}=[\beta u^{\prime\prime\prime}]_{\alpha}=0.\] First, let \(r=\left(1,1,\frac{\beta^{-}}{\beta^{+}},\frac{\beta^{-}}{\beta^{+}}\right)\) be fixed throughout this subsection. Then, the usual weak form of (89) suggests considering the following IFE method: \[\text{find }u_{h}\in Q_{\alpha,r}^{3}(\mathcal{T}_{h})\ \ \text{such that}\ \ \ (\beta u_{h}^{\prime\prime},v_{h}^{\prime\prime})_{I}=(f,v_{h})_{I},\ \forall v_{h}\in Q_{\alpha,r}^{3}(\mathcal{T}_{h}), \tag{90}\] where \(Q^{3}_{\alpha,r}(\mathcal{T}_{h})=H_{0}^{2}(I)\cap W^{3}_{\alpha,r}(\mathcal{T}_{h})\). We note that the IFE space \(Q^{3}_{\alpha,r}(\mathcal{T}_{h})\) as well as the method described by (90) were discussed in [26] and [35], and an error analysis based on a multipoint Taylor expansion was carried out to establish the optimality of this IFE method in [24]. As another demonstration of the versatility of the immersed scaling argument, we now present an alternative analysis for the optimal approximation capability of this IFE space. This new analysis, based on the framework developed in Section 3 and Section 4, is shorter and cleaner than the one in the literature. 
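Before turning to the analysis, the Hermite-type shape functions of this cubic IFE space on a reference interface element can be computed directly: eight cubic coefficients (four per side of the interface) are determined by the four jump conditions above together with values and first derivatives at the endpoints. The following sketch does this numerically for illustrative data \(\beta^{-}=1\), \(\beta^{+}=4\) and interface point \(0.6\); as a consistency check it verifies that the two shape functions carrying unit endpoint values sum to the constant function \(1\), which trivially satisfies all four jump conditions.

```python
import numpy as np

bm, bp, a = 1.0, 4.0, 0.6    # illustrative beta^-, beta^+ and interface point

def derivative_rows(t):
    # rows mapping cubic coefficients (c0..c3) to value, u', u'', u''' at t
    return (np.array([1.0, t, t * t, t ** 3]),
            np.array([0.0, 1.0, 2 * t, 3 * t * t]),
            np.array([0.0, 0.0, 2.0, 6 * t]),
            np.array([0.0, 0.0, 0.0, 6.0]))

def hermite_ife_basis(bm, bp, a):
    """Solve the 8x8 system: 4 interface jump conditions + 4 Hermite DOFs.
    Unknowns: left cubic coefficients followed by right cubic coefficients."""
    v, d1, d2, d3 = derivative_rows(a)
    M = np.zeros((8, 8))
    M[0, :4], M[0, 4:] = v, -v                 # [u]_a = 0
    M[1, :4], M[1, 4:] = d1, -d1               # [u']_a = 0
    M[2, :4], M[2, 4:] = bm * d2, -bp * d2     # [beta u'']_a = 0
    M[3, :4], M[3, 4:] = bm * d3, -bp * d3     # [beta u''']_a = 0
    v0, d10, _, _ = derivative_rows(0.0)
    v1, d11, _, _ = derivative_rows(1.0)
    M[4, :4] = v0                              # value at 0
    M[5, 4:] = v1                              # value at 1
    M[6, :4] = d10                             # derivative at 0
    M[7, 4:] = d11                             # derivative at 1
    return [np.linalg.solve(M, np.eye(8)[4 + i]) for i in range(4)]

L = hermite_ife_basis(bm, bp, a)
ones = L[0] + L[1]                 # data (1, 1, 0, 0): should give u = 1
print(ones)
```

Each solve returns one shape function with Kronecker Hermite data; the vector `ones` reduces (up to round-off) to the coefficients of the constant \(1\) on both sides, confirming unisolvence of the data set for this choice of parameters.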
As usual, for the discussion of the approximation capability of the IFE space, we consider the interpolation on the reference element \(\check{I}\) and map it to the physical element \(I_{k_{0}}\). To define the interpolation, we let \(\{\sigma_{i}\}_{i=0}^{3}\) be the Hermite degrees of freedom, that is, \[\sigma_{0}(v)=v(0),\quad\sigma_{1}(v)=v(1),\quad\sigma_{2}(v)=v^{\prime}(0),\quad\sigma_{3}(v)=v^{\prime}(1),\qquad\forall v\in H^{2}(\check{I}).\] It is known [26, 35] that there is a basis \(\{L^{i}_{\check{\alpha},r}\}_{i=0}^{3}\) of \(\mathcal{V}^{3}_{\check{\alpha},r}(\check{I})\) that satisfies \[\sigma_{i}(L^{j}_{\check{\alpha},r})=\delta_{i,j},\qquad i,j=0,1,2,3. \tag{91}\] These basis functions can then be used to define an immersed Hermite projection/interpolation operator \(\check{\mathcal{S}}_{\check{\alpha},r}:\mathcal{H}^{4}_{\check{\alpha},r}(\check{I})\to\mathcal{V}^{3}_{\check{\alpha},r}(\check{I})\) such that \(\check{u}_{H}=\check{\mathcal{S}}_{\check{\alpha},r}\check{u}\) and \[\check{u}_{H}=\sum_{i=0}^{3}\sigma_{i}(\check{u})L^{i}_{\check{\alpha},r}. \tag{92}\]

**Lemma 15**.: _Let \(\beta^{\pm}>0\) and \(\check{\alpha}\in(0,1)\), then_ \[-1\leq L^{i}_{\check{\alpha},r}(x)\leq 1,\qquad\forall\ x\in[0,1],\quad i=0,1,2,3. \tag{93}\]

Proof.: See Appendix B.

Now, we are ready to establish that \(\{\check{\mathcal{S}}_{\check{\alpha},r}\}_{0<\check{\alpha}<1}\) is a collection of uniformly bounded RIFE projections.

**Lemma 16**.: _Let \(\beta^{\pm}>0,\check{\alpha}\in(0,1)\). Then there is a constant \(C\) independent of \(\check{\alpha}\) such that the following estimate holds for every \(\check{u}\in\mathcal{H}^{4}_{\check{\alpha},r}(\check{I})\)_ \[\left\|\check{\mathcal{S}}_{\check{\alpha},r}\check{u}\right\|_{0,\check{I}}\leq C\left\|\check{u}\right\|_{2,\check{I}}.\]

Proof.: We know that \(|\sigma_{i}(\check{u})|\leq C\left\|\check{u}\right\|_{2,\check{I}}\) since \(\check{u}\in H^{2}(\check{I})\) and \(H^{2}(\check{I})\) is continuously embedded in \(C^{1}(\check{I})\). 
Now, we apply the triangle inequality to (92) and Lemma 15 to get \[\left\|\check{\mathcal{S}}_{\check{\alpha},r}\check{u}\right\|_{0,\check{I}}\leq C\left\|\check{u}\right\|_{2,\check{I}}\left(\sum_{i=0}^{3}\left\|L^{i}_{\check{\alpha},r}\right\|_{0,\check{I}}\right)\leq 4C\left\|\check{u}\right\|_{2,\check{I}}.\]

Now, let \(\mathcal{S}_{\alpha,r}=\mathcal{M}^{-1}\circ\check{\mathcal{S}}_{\check{\alpha},r}\circ\mathcal{M}\) where \(\mathcal{M}\) is defined in (32). By the commutative diagram in (34), \(\mathcal{S}_{\alpha,r}\) is the local immersed Hermite interpolation. Then, by Lemma 16, \(\left\{\mathcal{S}_{\alpha,r}\right\}_{\alpha\in I_{k_{0}}}\) is a collection of uniformly bounded LIFE projections. Hence, the following theorem follows from Theorem 6.

**Theorem 11**.: _Let \(\beta^{\pm}>0\), \(i\in\{0,1,2,3\}\), \(\alpha\in I_{k_{0}}\). Then, there is a constant \(C\) independent of \(\alpha\) such that the following estimate holds for every \(u\in\mathcal{H}^{4}_{\alpha,r}(I_{k_{0}})\)_ \[\left\|u-\mathcal{S}_{\alpha,r}u\right\|_{i,I_{k_{0}}}\leq Ch^{4-i}|u|_{4,I_{k_{0}}}.\]

This theorem establishes the optimal approximation capability of the IFE space \(Q^{3}_{\alpha,r}(\mathcal{T}_{h})\), which was first derived in [24] with a lengthy and complex procedure.

## 7 Conclusion

In this manuscript, we developed a framework for analyzing the approximation properties of one-dimensional IFE spaces using the scaling argument. We applied this IFE scaling argument to establish the optimal convergence of IFE spaces constructed for solving the acoustic interface problem, the elliptic interface problem and the Euler-Bernoulli beam interface problem, respectively. We are currently working on extending these results to IFE spaces and methods for solving interface problems in two and three dimensions. 
## Appendix A Proof of Lemma 4

Our goal is to show that the ratio \(\frac{\check{\varphi}(0)^{2}+\check{\varphi}(1)^{2}}{\left\|\check{\varphi}\right\|_{0,\check{I}}^{2}}\) is bounded from below by a constant \(c(m,r,w)\) independent of \(\check{\alpha}\). For simplicity, let \(q_{i}\in\mathcal{P}^{m}([0,1])\) be the monomial basis \(q_{i}(x)=x^{i}\) for \(0\leq i\leq m\). Using the equivalence of norms, one can show that there is \(c_{1}(m)>0\) such that \[\min\left(|p(0)|,|p(1)|\right)+\sum_{i=0}^{m-1}\left|(p,q_{i})_{[0,1]}\right|\geq c_{1}(m)\left\|p\right\|_{0,[0,1]},\qquad\forall\ p\in\mathcal{P}^{m}([0,1]). \tag{94}\] Unfortunately, if we extend (94) to \(\mathcal{V}^{m}_{\check{\alpha},r}\), then the constant on the right might depend on \(\check{\alpha}\) and might grow unboundedly as \(\check{\alpha}\to 0^{+}\) or as \(\check{\alpha}\to 1^{-}\). To circumvent this issue, we will use a scaling trick similar to the one used in the proof of Lemma 2. First, we bound \((\check{\varphi}_{s},q_{i})_{\check{I}^{s}}\) as shown in the following lemma.

**Lemma 17**.: _Let \(\tilde{m}\geq m\geq 0\), \(\{r_{k}\}_{k=0}^{m}\subset\mathbb{R}_{+}\) and \(\check{\alpha}\in(0,1)\). There is \(C(m,r,w)>0\) such that if \(h_{s}>h_{s^{\prime}}\), then_ \[|(\check{\varphi}_{s},q_{i})_{\check{I}^{s}}|=\left|\int_{I^{s}}\check{\varphi}_{s}(x)x^{i}dx\right|\leq C(m,r,w)h_{s^{\prime}}\left\|\check{\varphi}_{s}\right\|_{0,\check{I}^{s}},\quad i=0,1,\ldots,m-1,\ \forall\check{\varphi}\in\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I}).\]

Proof.: Since \(\check{\varphi}\in\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I})\), we have \[0=w_{s}(\check{\varphi}_{s},q_{i})_{I^{s}}+w_{s^{\prime}}(\check{\varphi}_{s^{\prime}},\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}(q_{i}))_{I^{s^{\prime}}}.\] Then, by the Cauchy-Schwarz inequality and (9), we have \[|(\check{\varphi},q_{i})_{I^{s}}|=\frac{w_{s^{\prime}}}{w_{s}}\left|(\check{\varphi},\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}(q_{i}))_{I^{s^{\prime}}}\right|\leq C(w)\left\|\check{\varphi}\right\|_{0,\check{I}^{s^{\prime}}}\left\|\mathcal{E}^{m,s^{\prime}}_{\check{\alpha},r}(q_{i})\right\|_{0,\check{I}^{s^{\prime}}}\leq C(m,r,w)\sqrt{h_{s^{\prime}}}\left\|\check{\varphi}\right\|_{0,\check{I}^{s}}\sqrt{h_{s^{\prime}}}\left\|q_{i}\right\|_{0,\check{I}^{s}}\leq C(m,r,w)h_{s^{\prime}}\left\|\check{\varphi}\right\|_{0,\check{I}^{s}}.\]

The previous lemma shows that \((\check{\varphi},q_{i})_{I^{s}}\) will approach \(0\) if \(h_{s}\) approaches \(1\) (so that \(h_{s^{\prime}}=1-h_{s}\) approaches \(0\)). This will allow us to obtain a restricted version of (94).

**Lemma 18**.: _There is \(\delta(m,r,w)\in(0,\frac{1}{2})\) and \(C(m,r)>0\) such that if \(\min(h_{-},h_{+})<\delta(m,r,w)\), then_ \[|\check{\varphi}(0)|+|\check{\varphi}(1)|\geq C(m,r)\left\|\check{\varphi}\right\|_{0,\check{I}},\qquad\forall\check{\varphi}\in\mathcal{Q}^{m}_{\check{\alpha},w,r}(\check{I}).\]

Proof.: We will only discuss the case where \(h_{-}>h_{+}\); the other case can be proved similarly. 
We define \(\hat{\varphi}_{-}\in\mathcal{P}^{m}([0,1])\) as \(\hat{\varphi}_{-}(\xi)=\check{\varphi}_{-}(\check{h}_{-}\xi)\), then by the fact that \(h_{-}\geq 1/2\), \[|\check{\varphi}(0)|+\sum_{i=0}^{m-1}\left|\int_{0}^{h_{-}}\check {\varphi}(x)x^{i}\ dx\right| =|\hat{\varphi}_{-}(0)|+\sum_{i=0}^{m-1}h_{-}^{i+1}\left|\int_{0}^ {1}\hat{\varphi}_{-}(\xi)\xi^{i}\ d\xi\right|\] \[\geq|\hat{\varphi}_{-}(0)|+\sum_{i=0}^{m-1}2^{-m-1}\left|(\hat{ \varphi}_{-},q_{i})_{[0,1]}\right|\geq C(m)\left(|\hat{\varphi}_{-}(0)|+\sum_{i =0}^{m-1}|(\hat{\varphi}_{-},q_{i})_{[0,1]}|\right).\] Then, by (94), \(h_{-}\leq 1\), and (9), we have \[|\check{\varphi}(0)|+\sum_{i=0}^{m-1}\left|\int_{0}^{h_{-}}\check{\varphi}(x)x ^{i}\ dx\right|\geq C(m)\left\|\hat{\varphi}_{-}\right\|_{0,[0,1]}=C(m)h_{-}^{ -1/2}\left\|\hat{\varphi}\right\|_{0,\check{I}^{-}}\geq C(m)\left\|\check{ \varphi}\right\|_{0,\check{I}^{-}}\geq C_{0}(m,r)\left\|\check{\varphi} \right\|_{0,\check{I}}.\] Now, we use Lemma 17 to estimate the inner product on the left hand side: \[\sum_{i=0}^{m-1}\left|\int_{0}^{h_{-}}\check{\varphi}(x)x^{i}\ dx\right|\leq C_{ 1}(m,r,w)h_{+}\left\|\check{\varphi}\right\|_{0,\check{I}^{-}}.\] We combine it with the previous inequality to get \[\left|\bar{\varphi}(0)\right|\geq\left\|\bar{\varphi}\right\|_{0,I}\left(C_{0}(m,r )-C_{1}(m,r,w)h_{+}\right).\] Hence, if \(h_{+}\leq\delta=\min(1,\frac{C_{0}(m,r)}{2C_{1}(m,r,w)})\), then \[\left|\bar{\varphi}(0)\right|\geq\frac{1}{2}C_{0}(m,r)\left\|\bar{\varphi} \right\|_{0,I^{-}} \tag{95}\] A similar argument can be used to show that if \(h_{+}\leq\tilde{\delta}\) (where \(\tilde{\delta}\) could be different than the previous \(\delta\)), then \[\left|\bar{\varphi}(1)\right|\geq\frac{1}{2}\tilde{C}_{0}(m,r)\left\|\bar{ \varphi}\right\|_{0,I^{+}}. \tag{96}\] So far, we have shown that if one of the sub-elements \(\check{I}^{\pm}\) is small enough, then Lemma 4 holds. 
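The norm equivalence (94) that the argument starts from is a finite-dimensional statement and can be probed numerically. The sketch below samples random polynomials \(p\in\mathcal{P}^{m}([0,1])\) (the degree \(m=4\) is an arbitrary illustrative choice) and evaluates the ratio of the two sides of (94), using the exact monomial inner products \((x^{i},x^{j})_{[0,1]}=1/(i+j+1)\); the observed minimum stays bounded away from zero, consistent with the existence of \(c_{1}(m)\).

```python
import numpy as np

m = 4
rng = np.random.default_rng(0)

# Exact L^2(0,1) inner products of monomials: (x^i, x^j) = 1/(i+j+1)
G = np.array([[1.0 / (i + j + 1) for j in range(m + 1)] for i in range(m + 1)])

def ratio(c):
    # c holds the coefficients of p(x) = sum_j c[j] x^j
    p0, p1 = c[0], c.sum()              # p(0) and p(1)
    moments = G[:m] @ c                 # (p, q_i)_{[0,1]} for i = 0, ..., m-1
    lhs = min(abs(p0), abs(p1)) + np.abs(moments).sum()
    return lhs / np.sqrt(c @ G @ c)     # divide by ||p||_{0,[0,1]}

worst = min(ratio(rng.standard_normal(m + 1)) for _ in range(20000))
print(worst)   # an empirical (upper) estimate of c_1(m); strictly positive
```

The left-hand side of (94) vanishes only for \(p\equiv 0\): one vanishing endpoint value together with the \(m\) vanishing moments gives \(m+1\) independent conditions on the \(m+1\) coefficients, which is why the sampled ratio cannot collapse to zero.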
It remains to show that the lemma holds for \(\check{\alpha}\in[\delta,1-\delta]\), for which, we consider the following sequence \(\{\mathcal{O}^{i}_{\check{\alpha},w,r}\}_{i=0}^{m}\) by the Gram-Schmidt process: \[\mathcal{O}^{0}_{\check{\alpha},w,r}=\mathcal{N}^{0}_{\check{\alpha},r}, \quad\mathcal{O}^{i}_{\check{\alpha},w,r}=\mathcal{N}^{i}_{\check{\alpha},r} -\sum_{j=0}^{i-1}\frac{(\mathcal{N}^{i}_{\check{\alpha},r},\mathcal{O}^{j}_{ \check{\alpha},w,r})_{w,I}}{(\mathcal{O}^{j}_{\check{\alpha},w,r},\mathcal{O} ^{j}_{\check{\alpha},w,r})_{w,\check{I}}}\mathcal{O}^{j}_{\check{\alpha},w,r}, \quad i=1,2,\ldots,m. \tag{97}\] Clearly, we have \(\mathcal{O}^{m}_{\check{\alpha},w,r}\in\mathcal{Q}^{m}_{\check{\alpha},w,r}( \check{I})\). The following lemma shows that when \(\mathcal{O}^{m}_{\check{\alpha},w,r}\) is expressed in terms of the canonical basis \(\{\mathcal{N}^{i}_{\check{\alpha},r}\}_{i=0}^{m}\), the coefficients of the expansion are rational functions. **Lemma 19**.: _Let \(\tilde{m}\geq m\geq 0\), \(\{r_{k}\}_{k=0}^{m}\subset\mathbb{R}_{+}\) and \(\check{\alpha}\in(0,1)\), the orthogonal RIFE function \(\mathcal{O}^{m}_{\check{\alpha},w,r}\) defined in (97) satisfies_ \[\mathcal{O}^{m}_{\check{\alpha},w,r}=\sum_{i=0}^{m}R^{i,m}_{w,r}(\check{ \alpha})\mathcal{N}^{i}_{\check{\alpha},r} \tag{98}\] _for some rational functions \(\left\{R^{i,m}_{w,r}\right\}_{i=0}^{m}\) of \(\check{\alpha}\)._ Proof.: We will prove Lemma 19 via strong induction. First, the case \(m=0\) is obvious. Now,we assume that For every \(i=0,1,\ldots,m-1\), there are rational functions \(R^{j,i}_{w,r}\) of \(\check{\alpha}\) such that \[\mathcal{O}^{i}_{\check{\alpha},w,r}=\sum_{j=0}^{i}R^{j,i}_{w,r}(\check{\alpha })\mathcal{N}^{j}_{\check{\alpha},r}. \tag{99}\] To show that \(\mathcal{O}^{m}_{\check{\alpha},w,r}\) satisfies (98), we use the fact that \((\mathcal{N}^{i}_{\check{\alpha},r},\mathcal{N}^{j}_{\check{\alpha},r})_{w, \check{I}}\) is a polynomial in \(\check{\alpha}\). 
Therefore, \[\frac{(\mathcal{N}^{i}_{\check{\alpha},r},\mathcal{O}^{j}_{\check{\alpha},w,r} )_{w,\check{I}}}{(\mathcal{O}^{j}_{\check{\alpha},w,r},\mathcal{O}^{j}_{\check {\alpha},w,r})_{w,\check{I}}} \tag{100}\] is a rational function of \(\check{\alpha}\). Furthermore, by plugging (99) into (97) and rearranging the terms, we get \[\mathcal{O}^{m}_{\check{\alpha},w,r}=\mathcal{N}^{m}_{\check{\alpha},r}-\sum_ {j=0}^{m-1}\frac{(\mathcal{N}^{m}_{\check{\alpha},r},\mathcal{O}^{j}_{\check {\alpha},w,r})_{w,I}}{(\mathcal{O}^{j}_{\check{\alpha},w,r},\mathcal{O}^{j}_{ \check{\alpha},w,r})_{w,I}}\sum_{i=0}^{j}R^{i,j}_{w,r}(\check{\alpha})\mathcal{ N}^{i}_{\check{\alpha},r}=\mathcal{N}^{m}_{\check{\alpha},r}-\sum_{i=0}^{m-1} \underbrace{\left(\sum_{j=i}^{m-1}\frac{(\mathcal{N}^{m}_{\check{\alpha},r}, \mathcal{O}^{j}_{\check{\alpha},w,r})_{w,I}}{(\mathcal{O}^{j}_{\check{\alpha}, w,r},\mathcal{O}^{j}_{\check{\alpha},w,r})_{w,I}}R^{i,j}_{w,r}(\check{\alpha}) \right)}_{:=R^{j,m}_{w,r}(\check{\alpha})}\mathcal{N}^{i}_{\check{\alpha},r}.\] From the strong induction assumption and (100), we conclude that \(R^{j,m}_{w,r}\) is a rational function. **Corollary 3**.: _Given \(m,w_{\pm}\) and \(r\), the function \(\mathcal{J}^{m}_{w,r}:(0,1)\rightarrow\mathbb{R}_{+}\) defined in (101) is a rational function._ Now, we are ready to prove Lemma 4. 
We can rewrite Lemma 18 as: there is \(\delta\in(0,\frac{1}{2})\) that depends on \(m,w\) and \(r\), and a constant \(C_{1}(m,r)\) such that \[\sqrt{\mathcal{O}_{\breve{\alpha},w,r}^{m}(0)^{2}+\mathcal{O}_{\breve{\alpha},w,r}^{m}(1)^{2}}\geq C_{1}(m,r)\left\|\mathcal{O}_{\breve{\alpha},w,r}^{m}\right\|_{0,\breve{I}},\qquad\breve{\alpha}\in(0,\delta)\cup(1-\delta,1).\] For \(\breve{\alpha}\in[\delta,1-\delta]\), the following function is continuous \[\mathcal{J}_{w,r}^{m}:\breve{\alpha}\mapsto\frac{\mathcal{O}_{\breve{\alpha},w,r}^{m}(0)^{2}+\mathcal{O}_{\breve{\alpha},w,r}^{m}(1)^{2}}{\left\|\mathcal{O}_{\breve{\alpha},w,r}^{m}\right\|_{0,\breve{I}}^{2}} \tag{101}\] because both its numerator and denominator are rational functions of \(\breve{\alpha}\) and the denominator is not zero. Therefore, there is \(C_{2}(m,w,r)>0\) such that \[\sqrt{\mathcal{O}_{\breve{\alpha},w,r}^{m}(0)^{2}+\mathcal{O}_{\breve{\alpha},w,r}^{m}(1)^{2}}\geq C_{2}(m,w,r)\left\|\mathcal{O}_{\breve{\alpha},w,r}^{m}\right\|_{0,\breve{I}},\qquad\breve{\alpha}\in[\delta,1-\delta].\] By letting \(C(m,w,r)=\min(C_{1}(m,r),C_{2}(m,w,r))\), we know that \(\mathcal{O}_{\breve{\alpha},w,r}^{m}\) satisfies inequality (24) stated in Lemma 4. Consequently, the estimate in (24) of Lemma 4 holds for every function in \(\mathcal{Q}_{\breve{\alpha},w,r}^{m}(\breve{I})\) because it is a one-dimensional space, and Lemma 4 is proven.

## Appendix B Proof of Lemma 15

Let us start with \(p=L^{0}_{\breve{\alpha},r}\), we have \[p(0)=1,\quad p(1)=0,\quad p^{\prime}(0)=0,\quad p^{\prime}(1)=0.\] By Theorem 1, \(p^{\prime}\) does not change sign in \((0,1)\) since \(p^{\prime}\in\mathcal{V}_{\breve{\alpha},\tau(r)}^{2}(\breve{I})\) and \(p^{\prime}(0)=p^{\prime}(1)=0\). Therefore, \(p\) is monotonically decreasing from \(p(0)=1\) to \(p(1)=0\). The same argument applies to \(L^{1}_{\breve{\alpha},r}\). Next, we show that \(q=L^{2}_{\breve{\alpha},r}\) is bounded between \(0\) and \(1\). 
We have \[q(0)=0,\quad q(1)=0,\quad q^{\prime}(0)=1,\quad q^{\prime}(1)=0.\] By Rolle's theorem, there is \(c\in(0,1)\) such that \(q^{\prime}(c)=0\). By Theorem 1, \(c\) the only root of \(q^{\prime}\) in \((0,1)\). Now, by the generalized Rolle's theorem, there is \(d\in(c,1)\) such that \(q^{\prime\prime}(d^{-})q^{\prime\prime}(d^{+})\leq 0\). If \(d\neq\breve{\alpha}\), then \(q^{\prime\prime}(d^{-})=q^{\prime\prime}(d^{+})=q^{\prime\prime}(d)\) because \(q\) is a polynomial on either sides of \(\breve{\alpha}\). In this case we have \(q^{\prime\prime}(d)=0\). If \(d=\breve{\alpha}\), then \(q^{\prime\prime}(d^{-})q^{\prime\prime}(d^{+})\leq 0\) and jump condition implies \[\frac{\beta^{-}}{\beta^{+}}\big{(}q^{\prime\prime}(\breve{\alpha}^{-})\big{)} ^{2}\leq 0\] from which we have \(q^{\prime\prime}(\breve{\alpha}^{-})=0=q^{\prime\prime}(\breve{\alpha}^{+})\). Hence, \(q^{\prime\prime}(d)=0\). Furthermore, by Theorem 1, \(d\) is the only root of \(q^{\prime\prime}\) since \(q^{\prime\prime}\in\mathcal{V}_{\bar{\alpha},\tau^{2}(r)}^{1}(\bar{I})\). Since \(q^{\prime\prime}\) is a linear polynomial on either sides of \(\breve{\alpha}\), the jump condition satisfied by \(q\) further implies that \(q^{\prime\prime}\) does not change its sign \((0,d)\) and \((d,1)\). Because \(q^{\prime}(0)=1\) and \(q^{\prime}(c)=0\) and \(0<c<d\), we know that \(q^{\prime}\) is decreasing on \((0,d)\) but increasing on \((d,1)\). These further imply \(q^{\prime}(x)\in[0,1]\) for \(x\in[0,c]\) and \(q^{\prime}(x)\leq 0\) for \(x\in[c,d]\); hence, \(q(x)\leq q(c)\) for all \(x\in[0,d]\). Furthermore, since \(q^{\prime}(d)\leq 0,q^{\prime}(1)=0\) and \(q^{\prime}\) is monotonic on \([d,1]\), we know that \(q^{\prime}(x)\leq 0\) for all \(x\in[d,1]\). Hence, \(0=q(1)\leq q(x)\leq q(d)\leq q(c)\) for all \(x\in[d,1]\). Consequently, \(q(c)\geq q(x)\) for \(x\in[0,1]\). 
In addition, since \(q\) has no local minimum point on \((0,d)\), we have \(q(x)\geq\min\{q(0),q(d)\}\geq 0\) for all \(x\in[0,d]\). Thus, \(q(x)\geq 0\ \forall x\in[0,1]\). On the other hand, \[q(x)\leq q(c)=\int_{0}^{c}q^{\prime}(x)dx\leq\int_{0}^{c}1dx=c<1\ \ \forall x\in[0,1].\] The last two estimates lead us to conclude that \(0\leq q(x)=L^{2}_{\bar{\alpha},r}(x)<1\). As for \(L^{3}_{\bar{\alpha},r}\), we note that \[L^{3}_{\bar{\alpha},r}(x)=-L^{2}_{1-\bar{\alpha},\tilde{r}}(1-x),\qquad\text{where }\tilde{r}=\{r_{i}^{-1}\}_{i=0}^{3},\] which leads to \(L^{3}_{\bar{\alpha},r}(x)\in(-1,0]\).
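The bounds proved above can also be checked numerically: the sketch below rebuilds the cubic IFE shape functions on \([0,1]\) by solving the \(8\times 8\) linear system given by the four interface jump conditions and the four Hermite degrees of freedom, then sweeps the interface location and several coefficient ratios (all illustrative choices) and records the largest absolute value attained by any \(L^{i}\) on a sample grid.

```python
import numpy as np

def basis(bm, bp, a):
    # rows of derivatives (value, u', u'', u''') of a cubic at t
    row = lambda t: np.array([[1, t, t * t, t ** 3],
                              [0, 1, 2 * t, 3 * t * t],
                              [0, 0, 2, 6 * t],
                              [0, 0, 0, 6]], dtype=float)
    Ra = row(a)
    M = np.zeros((8, 8))
    M[0:2, :4], M[0:2, 4:] = Ra[0:2], -Ra[0:2]            # [u] = [u'] = 0
    M[2:4, :4], M[2:4, 4:] = bm * Ra[2:4], -bp * Ra[2:4]  # [beta u''] = [beta u'''] = 0
    M[4, :4] = row(0.0)[0]; M[5, 4:] = row(1.0)[0]        # values at 0 and 1
    M[6, :4] = row(0.0)[1]; M[7, 4:] = row(1.0)[1]        # derivatives at 0 and 1
    return [np.linalg.solve(M, np.eye(8)[4 + i]) for i in range(4)]

def evaluate(c, a, xs):
    left = c[0] + c[1] * xs + c[2] * xs ** 2 + c[3] * xs ** 3
    right = c[4] + c[5] * xs + c[6] * xs ** 2 + c[7] * xs ** 3
    return np.where(xs <= a, left, right)

xs = np.linspace(0.0, 1.0, 801)
extreme = 0.0
for a in np.linspace(0.05, 0.95, 19):
    for bm, bp in [(1.0, 1.0), (1.0, 10.0), (10.0, 1.0), (2.0, 3.0)]:
        for c in basis(bm, bp, a):
            extreme = max(extreme, np.max(np.abs(evaluate(c, a, xs))))
print(extreme)
```

The recorded extreme equals \(1\) (attained at the endpoints by the value-type shape functions, cf. \(p(0)=1\) above) and is never exceeded, in line with (93).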
2304.06003
Approximation by a special de la Vallée Poussin type matrix transform mean of Walsh-Fourier series
In this paper, we consider norm convergence for a special matrix-based de la Vallée Poussin-like mean of Fourier series for the Walsh system. We estimate the difference between the named mean above and the corresponding function in norm, and the upper estimation is given by the modulus of continuity of the function.
István Blahota
2023-03-22T15:14:39Z
http://arxiv.org/abs/2304.06003v2
# Approximation by a special de la Vallée Poussin type matrix transform mean of Walsh-Fourier series

###### Abstract.

In this paper, we consider norm convergence for a special matrix-based de la Vallée Poussin-like mean of Fourier series for the Walsh system. We estimate the difference between the named mean above and the corresponding function in norm, and the upper estimation is given by the modulus of continuity of the function.

Key words and phrases: character system, Fourier series, Walsh-Paley system, rate of approximation, modulus of continuity, matrix transform

2020 Mathematics Subject Classification: 42C10

## 1. Definitions and notations

We follow the standard notions of dyadic analysis introduced by F. Schipp, W. R. Wade, P. Simon, and J. Pal [17] and others.

## 2. Definitions and notation

Let \(\mathbb{P}\) be the set of positive natural numbers and \(\mathbb{N}:=\mathbb{P}\cup\{0\}\). Denote by \(\mathbb{Z}_{2}\) the discrete cyclic group of order \(2\); the group operation is addition modulo \(2\), and every subset is open. The normalized Haar measure \(\mu\) on \(\mathbb{Z}_{2}\) is given by \(\mu(\{0\}):=\mu(\{1\}):=1/2\). Let \(G:=\underset{k=0}{\overset{\infty}{\times}}\mathbb{Z}_{2}\); \(G\) is called the Walsh group. The elements of the Walsh group \(G\) are sequences of numbers \(0\) and \(1\), that is, \(x=(x_{0},x_{1},\dots,x_{k},\dots)\) with \(x_{k}\in\{0,1\}\) (\(k\in\mathbb{N}\)). The group operation on \(G\) is the coordinate-wise addition (denoted by \(+\)), the normalized Haar measure \(\mu\) is the product measure and the topology is the product topology. Dyadic intervals are defined in the usual way \[I_{0}(x):=G,\ I_{n}(x):=\{y\in G:y=(x_{0},\dots,x_{n-1},y_{n},y_{n+1},\dots)\}\] for \(x\in G,n\in\mathbb{P}\). They form a base for the neighbourhoods of \(G\). Let \(0:=(0:i\in\mathbb{N})\in G\) denote the null element of \(G\) and \(I_{n}:=I_{n}(0)\) for \(n\in\mathbb{N}\). 
Let \(L_{p}(G)\) denote the usual Lebesgue spaces on \(G\) (with the corresponding norm \(\|.\|_{p}\)), where \(1\leq p<\infty\). For the sake of brevity in notation, we agree to write \(L_{\infty}(G)\) instead of \(C(G)\) and set \(\|f\|_{\infty}:=\sup\{|f(x)|:x\in G\}.\) Of course, it is clear that the space \(L_{\infty}(G)\) is not the same as the space of continuous functions, i.e. it is a proper subspace of it. But since in the case of continuous functions the supremum norm and the \(L_{\infty}(G)\) norm are the same, for convenience we hope the reader will be able to tolerate this simplification in notation. Next, we define the modulus of continuity in \(L_{p}(G),1\leq p\leq\infty\), of a function \(f\in L_{p}(G)\) by \[\omega_{p}(f,\delta):=\sup_{|t|<\delta}\|f(.+t)-f(.)\|_{p},\quad\delta>0,\] with the notation \[|x|:=\sum_{i=0}^{\infty}\frac{x_{i}}{2^{i+1}}\quad\text{for all $x\in G$}.\] The Lipschitz classes in \(L_{p}(G)\) (for each \(\alpha>0\)) are defined as \[\text{Lip}(\alpha,p,G):=\{f\in L_{p}(G):\omega_{p}(f,\delta)=O(\delta^{\alpha} )\text{ as $\delta\to 0$}\}.\] We introduce some concepts of Walsh-Fourier analysis. The Rademacher functions are defined as \[r_{k}(x):=(-1)^{x_{k}}\ (x\in G,k\in\mathbb{N}).\] The Walsh-Paley functions are the product functions of the Rademacher functions. Namely, each natural number \(n\) can be uniquely expressed in the number system based 2, in the form \[n=\sum_{k=0}^{\infty}n_{k}2^{k},\ n_{k}\in\{0,1\}\ (k\in\mathbb{N}),\] where only a finite number of \(n_{k}\)'s different from zero. Let the order of \(n\in\mathbb{P}\) be denoted by \(|n|:=\max\{j\in\mathbb{N}:n_{j}\neq 0\}\). 
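The group structure just described is easy to model directly. The following sketch truncates the sequences of \(G\) to their first \(N\) coordinates (an approximation sufficient for illustration) and implements the coordinate-wise addition, the map \(x\mapsto|x|\), and the Rademacher functions; it also checks that each \(r_{k}\) is multiplicative with respect to \(+\), i.e. a character of the group.

```python
# A truncated model of G: keep the first N coordinates of each sequence.
N = 16

def add(x, y):            # coordinate-wise addition modulo 2
    return tuple((a + b) % 2 for a, b in zip(x, y))

def absval(x):            # |x| = sum_i x_i / 2^(i+1)
    return sum(xi / 2 ** (i + 1) for i, xi in enumerate(x))

def r(k, x):              # Rademacher function r_k(x) = (-1)^(x_k)
    return (-1) ** x[k]

x = (1, 0, 1) + (0,) * (N - 3)
y = (1, 1, 0) + (0,) * (N - 3)

print(absval(x))                       # 1/2 + 1/8 = 0.625
for k in range(N):                     # r_k is a character of (G, +)
    assert r(k, add(x, y)) == r(k, x) * r(k, y)
assert add(x, x) == (0,) * N           # every element is its own inverse
```

The last assertion reflects that \(G\) has exponent \(2\): every element is its own inverse, so \(+\) also serves as subtraction, a fact used freely throughout dyadic analysis.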
Walsh-Paley functions are \(w_{0}:=1\) and for \(n\in\mathbb{P}\) \[w_{n}(x):=\prod_{k=0}^{\infty}r_{k}^{n_{k}}(x)=(-1)^{\sum_{k=0}^{|n|}n_{k}x_{k }}.\] Let \(\mathcal{P}_{n}\) be the collection of Walsh polynomials of order less than \(n\), that is, functions of the form \[P(x)=\sum_{k=0}^{n-1}a_{k}w_{k}(x),\] where \(n\in\mathbb{P}\) and \(\{a_{k}\}\) is a sequence of complex numbers. It is known [10] that the system \((w_{n},n\in\mathbb{N})\) is the character system of \((G,+)\). The \(n\)th Fourier-coefficient, the \(n\)th partial sum of the Fourier series and the \(n\)th Dirichlet kernel is defined by \[\hat{f}(n):=\int_{G}fw_{n}d\mu,\ S_{n}(f):=\sum_{k=0}^{n-1}\hat{f}(k)w_{k},\ D_{n}:=\sum_{k=0}^{n-1}w_{k},\ D_{0}:=0.\] Fejer kernels are defined as the arithmetical means of Dirichlet kernels, that is, \[K_{n}:=\frac{1}{n}\sum_{k=1}^{n}D_{k}.\] Let \(T:=(t_{i,j})_{i,j=1}^{\infty}\) be a doubly infinite matrix of numbers. It is always supposed that matrix \(T\) is lower triangular. Let us define the \((m,n)\)th matrix transform de La Vallee Poussin mean determined by the matrix \(T\) as \[\sigma_{m,n}^{T}(f):=\sum_{k=m}^{n}t_{k,n}S_{k}(f),\] where \(m,n\in\mathbb{P}\) and \(m\leq n\). The \((m,n)\)th matrix transform de La Vallee Poussin kernel is defined as \[K^{T}_{m,n}:=\sum_{k=m}^{n}t_{k,n}D_{k}.\] It is very easy to verify that \[\sigma^{T}_{m,n}(f;x)=\int_{G}f(u)K^{T}_{m,n}(u+x)d\mu(u).\] We introduce the notation \(\Delta t_{k,n}:=t_{k,n}-t_{k+1,n}\), where \(k\in\{1,\dots,n\}\) and \(t_{n+1,n}:=0\). ## 3. Historical overview Matrix transforms means are common generalizations of several well-known summation methods. It follows by simple consideration that the Norlund means, the Fejer (or the \((C,1)\)) and the \((C,\alpha)\) means are special cases of the matrix transform summation method introduced above. 
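These objects are straightforward to compute. In the sketch below, a point of \(G\) is represented by the cell of the partition by \(I_{N}\)-intervals it lies in, encoded as an integer \(c\) whose \(k\)-th bit is \(x_{k}\); with this encoding \(w_{n}(x)=(-1)^{\operatorname{popcount}(n\,\&\,c)}\), and means over the \(2^{N}\) cells evaluate integrals of Walsh polynomials of order less than \(2^{N}\) exactly. The check below confirms that \(D_{2^{n}}\) equals \(2^{n}\) on \(I_{n}\) and \(0\) elsewhere (this is Paley's Lemma, recalled below as Lemma 1), and that a kernel \(K^{T}_{m,n}\) has integral \(1\) whenever the weights sum to \(1\); the uniform weights are an illustrative choice.

```python
import numpy as np

N = 8
pts = np.arange(2 ** N)       # cell c represents x with x_k = (c >> k) & 1

def walsh(n):                 # w_n sampled on the 2^N cells (exact for n < 2^N)
    return np.array([(-1) ** bin(n & c).count('1') for c in pts], dtype=float)

def dirichlet(n):             # D_n = w_0 + ... + w_{n-1}
    return sum(walsh(k) for k in range(n)) if n else np.zeros(2 ** N)

# D_{2^n} equals 2^n on I_n (first n coordinates zero) and 0 elsewhere
for n in (1, 2, 3):
    D = dirichlet(2 ** n)
    assert np.array_equal(D, np.where(pts % 2 ** n == 0, float(2 ** n), 0.0))

# A matrix transform de la Vallee Poussin kernel with uniform weights
# t_{k,7} = 1/4 for k = 4..7; each D_k with k >= 1 has integral 1
# (only w_0 has a nonzero mean), so K integrates to the sum of the weights.
K = sum(0.25 * dirichlet(k) for k in range(4, 8))
print(np.mean(K))             # the discrete mean equals the exact integral here
```

This kernel normalization is exactly why condition (5.1) below is natural: it makes \(\sigma^{T}_{m,n}\) reproduce constants.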
Our paper is motivated by the work of Moricz, Siddiqi [14] on the Walsh-Norlund summation method and the result of Moricz and Rhoades [13] on the Walsh weighted mean method. As special cases, Moricz and Siddiqi obtained the earlier results given by Yano [23], Jastrebova [11] and Skvortsov [19] on the rate of the approximation by Cesaro means. The approximation properties of the Walsh-Cesaro means of negative order were studied by Goginava [9], the Vilenkin case was investigated by Shavardenidze [18] and Tepnadze [20]. A common generalization of these two results of Moricz and Siddiqi [14] and Moricz and Rhoades [13] was given by Nagy and the author [2]. In 2008, Fridli, Manchanda and Siddiqi generalized the result of Moricz and Siddiqi for homogeneous Banach spaces and dyadic Hardy spaces [8]. Recently, the author, Baramidze, Memic, Nagy, Persson, Tephnadze and Wall presented some results with respect to this topic [1],[3], [5],[12]. See [7, 22], as well. For the two-dimensional results see [4, 16, 15]. It is important to note that in the paper of Chripko [6] some methods and results with respect to Jacobi-Fourier series gave us some ideas and used in this paper. ## 4. Auxiliary results To prove Theorem 1 we need the following Lemmas. **Lemma 1** (Paley's Lemma [17], p. 7.).: _For \(n\in\mathbb{N}\)_ \[D_{2^{n}}(x)=\begin{cases}2^{n},&\text{ if }x\in I_{n},\\ 0,&\text{ if }x\notin I_{n}.\end{cases}\] **Lemma 2** ([17], p. 34.).: _For \(j,n\in\mathbb{N},\ j<2^{n}\) we have_ \[D_{2^{n}+j}=D_{2^{n}}+r_{n}D_{j}.\] **Lemma 3** (Yano's Lemma [24]).: _The norm of the Fejer kernel is bounded uniformly. That is, for all \(n\in\mathbb{P}\)_ \[\|K_{n}\|_{1}\leq 2.\] In 2018, Toledo improved this result. **Lemma 4**.: _[_21_]_ \[\sup_{n\in\mathbb{P}}\|K_{n}\|_{1}=\frac{17}{15}.\] **Lemma 5**.: _[_14_]_ _Let \(n\in\mathbb{P},\ g\in\mathcal{P}_{2^{n}}\), \(f\in L_{p}(G)\) (\(1\leq p\leq\infty\)). 
Then_ \[\left\|\int_{G}r_{n}(t)g(t)(f(\cdot+t)-f(\cdot))d\mu(t)\right\|_{p}\leq\frac{1 }{2}\|g\|_{1}\omega_{p}\left(f,2^{-n}\right)\] _holds._ In the next lemma, we give a decomposition of the kernels \(K_{2^{n},2^{n+1}-1}^{T}\). **Lemma 6**.: _Let \(n\) be a positive integer, then we have_ \[K_{2^{n},2^{n+1}-1}^{T} =\sum_{k=0}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}D_{2^{n}}+r_{n}\sum_{k= 1}^{2^{n}-2}\Delta t_{2^{n}+k,2^{n+1}-1}kK_{k}\] \[\quad+r_{n}t_{2^{n+1}-1,2^{n+1}-1}(2^{n}-1)K_{2^{n}-1}\] \[=:\sum_{j=1}^{3}K_{j,n}.\] Proof.: We write \[K_{2^{n},2^{n+1}-1}^{T}=\sum_{l=2^{n}}^{2^{n+1}-1}t_{l,2^{n+1}-1}D_{l}.\] Now, we apply Lemma 2. We get \[\sum_{l=2^{n}}^{2^{n+1}-1}t_{l,2^{n+1}-1}D_{l} =\sum_{k=0}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}D_{2^{n}+k}\] \[=\sum_{k=0}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}D_{2^{n}}+r_{n}\sum_{k= 1}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}D_{k}.\] Using Abel-transform \[\sum_{k=1}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}D_{k} =\sum_{k=1}^{2^{n}-2}\Delta t_{2^{n}+k,2^{n+1}-1}kK_{k}\] \[\quad+\ t_{2^{n+1}-1,2^{n+1}-1}(2^{n}-1)K_{2^{n}-1}.\] Summarizing these it completes the proof of Lemma 6. ## 5. The rate of the approximation **Theorem 1**.: _Let \(f\in L_{p}(G)\)\((1\leq p\leq\infty)\). For every \(n\in\mathbb{P},\ \{t_{k,2^{n+1}-1}:2^{n}\leq k\leq 2^{n+1}-1\}\) be a finite sequence of non-negative numbers such that_ \[\sum_{k=2^{n}}^{2^{n+1}-1}t_{k,2^{n+1}-1}=1 \tag{5.1}\] _be satisfied._ _a) If the finite sequence \(\{t_{k,2^{n+1}-1}:2^{n}\leq k\leq 2^{n+1}-1\}\) is non-decreasing for a fixed \(n\) and_ \[t_{2^{n+1}-1,2^{n+1}-1}=O\left(\frac{1}{2^{n+1}-1}\right), \tag{5.2}\] _or_ _b) if the finite sequence \(\{t_{k,2^{n+1}-1}:2^{n}\leq k\leq 2^{n+1}-1\}\) is non-increasing for a fixed \(n\), then_ \[\left\|\sigma_{2^{n},2^{n+1}-1}^{T}(f)-f\right\|_{p}\leq c\omega_{p}\left(f,2^ {-n}\right)\] _holds._ Proof of Theorem 1.: The proof is carried out in cases where \(1\leq p<\infty\), while the proof of case \(p=\infty\) is similar. 
Recall that by the case \(p=\infty\) we mean that we are considering the space of continuous functions. During our proofs \(c\) denotes a positive constant, which may vary at different appearances. We use condition (5.1), the usual Minkowski inequality and Lemma 6 \[\left\|\sigma_{2^{n},2^{n+1}-1}^{T}(f)-f\right\|_{p} =\left(\int_{G}|\sigma_{2^{n},2^{n+1}-1}^{T}(f;x)-f(x)|^{p}d\mu(x )\right)^{\frac{1}{p}}\] \[=\left(\int_{G}\left|\int_{G}K_{2^{n},2^{n+1}-1}^{T}(u)F(x,u)d \mu(u)\right|^{p}d\mu(x)\right)^{\frac{1}{p}}\] \[\leq\sum_{j=1}^{3}\left(\int_{G}\left|\int_{G}K_{j,n}(u)F(x,u)d \mu(u)\right|^{p}d\mu(x)\right)^{\frac{1}{p}}\] \[=:\sum_{j=1}^{3}I_{j,n}\] with notation \(F(x,u):=f(x+u)-f(x)\). Using generalized Minkowski inequality, Lemma 1 and condition (5.1) for the expressions \(I_{1,n}\), we obtain \[I_{1,n} \leq\sum_{k=0}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}\int_{G}D_{2^{n}}(u) \left(\int_{G}|F(x,u)|^{p}\,d\mu(x)\right)^{\frac{1}{p}}d\mu(u)\] \[\leq\sum_{k=2^{n}}^{2^{n+1}-1}t_{k,2^{n+1}-1}\omega_{p}\left(f,2^ {-n}\right)\] \[\leq\omega_{p}\left(f,2^{-n}\right).\] Now, applying Lemma 4 and Lemma 5 we get \[I_{2,n} \leq\sum_{k=1}^{2^{n}-2}\left|\Delta t_{2^{n}+k,2^{n+1}-1}\right|k\] \[\quad\times\left(\int_{G}\left|\int_{G}r_{n}(u)K_{k}(u)F(x,u)d \mu(u)\right|^{p}d\mu(x)\right)^{\frac{1}{p}}\] \[\leq\sum_{k=1}^{2^{n}-2}\left|\Delta t_{2^{n}+k,2^{n+1}-1}\right| k\frac{1}{2}\|K_{k}\|_{1}\omega_{p}\left(f,2^{-n}\right)\] \[\leq\sum_{k=1}^{2^{n}-2}\left|\Delta t_{2^{n}+k,2^{n+1}-1}\right|k \omega_{p}\left(f,2^{-n}\right).\] We write in case a) \[\sum_{k=1}^{2^{n}-2}|\Delta t_{2^{n}+k,2^{n+1}-1}|k =\sum_{k=1}^{2^{n}-2}(t_{2^{n}+k+1,2^{n+1}-1}-t_{2^{n}+k,2^{n+1}-1 })k\] \[=(2^{n}-2)t_{2^{n+1}-1,2^{n+1}-1}-\sum_{k=1}^{2^{n}-2}t_{2^{n}+k,2 ^{n+1}-1}\] \[\leq(2^{n+1}-1)t_{2^{n+1}-1,2^{n+1}-1}\] and using condition (5.2) \[I_{2,n} \leq(2^{n+1}-1)t_{2^{n+1}-1,2^{n+1}-1}\omega_{p}\left(f,2^{-n}\right)\] \[\leq c\omega_{p}\left(f,2^{-n}\right).\] We estimate the expression 
\(I_{3,n}\) in case a). Lemma 4, Lemma 5 and condition (5.2) yield \[I_{3,n} \leq(2^{n}-1)t_{2^{n+1}-1,2^{n+1}-1}\] \[\quad\times\left(\int_{G}\left|\int_{G}r_{n}(u)K_{2^{n}-1}(u)F(x,u)d\mu(u)\right|^{p}d\mu(x)\right)^{\frac{1}{p}}\] \[\leq(2^{n+1}-1)t_{2^{n+1}-1,2^{n+1}-1}\frac{1}{2}\|K_{2^{n}-1}\| _{1}\omega_{p}\left(f,2^{-n}\right)\] \[\leq(2^{n+1}-1)t_{2^{n+1}-1,2^{n+1}-1}\omega_{p}\left(f,2^{-n}\right)\] \[\leq c\omega_{p}\left(f,2^{-n}\right).\] In case b) we estimate \(I_{2,n}+I_{3,n}\). In this situation \[\sum_{k=1}^{2^{n}-2}\left|\Delta t_{2^{n}+k,2^{n+1}-1}\right|k=\sum_{k=1}^{2^{ n}-2}t_{2^{n}+k,2^{n+1}-1}-(2^{n}-2)t_{2^{n+1}-1,2^{n+1}-1},\] so Lemma 4, Lemma 5 and condition (5.1) imply \[I_{2,n}+I_{3,n} \leq\sum_{k=1}^{2^{n}-2}\left|\Delta t_{2^{n}+k,2^{n+1}-1}\right|k\] \[\quad\times\left(\int_{G}\left|\int_{G}r_{n}(u)K_{k}(u)F(x,u)d\mu( u)\right|^{p}d\mu(x)\right)^{\frac{1}{p}}\] \[\quad+(2^{n}-1)t_{2^{n+1}-1,2^{n+1}-1}\] \[\quad\times\left(\int_{G}\left|\int_{G}r_{n}(u)K_{2^{n}-1}(u)F(x,u)d\mu(u)\right|^{p}d\mu(x)\right)^{\frac{1}{p}}\] \[\leq\left(\sum_{k=1}^{2^{n}-2}t_{2^{n}+k,2^{n+1}-1}+(2^{n}-1)t_{2 ^{n+1}-1,2^{n+1}-1}-(2^{n}-2)t_{2^{n+1}-1,2^{n+1}-1}\right)\] \[\quad\times\frac{1}{2}\cdot\frac{17}{15}\omega_{p}\left(f,2^{-n}\right)\] \[=\sum_{k=1}^{2^{n}-1}t_{2^{n}+k,2^{n+1}-1}\frac{17}{30}\omega_{p} \left(f,2^{-n}\right)\] \[\leq\frac{17}{30}\omega_{p}\left(f,2^{-n}\right).\] This completes the proof of Theorem 1. _Remark 1_.: We mention that assumption (5.1) is natural: many well-known means satisfy it, and this equality is part of the regularity conditions [25, page 74]. **Corollary 1**.: _Let us suppose that the conditions in Theorem 1 are satisfied. 
If \(f\in\text{Lip}(\alpha,p,G)\), then_ \[\left\|\sigma_{2^{n},2^{n+1}-1}^{T}(f)-f\right\|_{p}=O\left(2^{-n\alpha}\right).\] _Remark 2_.: In case b) we can formulate the statement of Theorem 1 in the following form: \[\left\|\sigma_{2^{n},2^{n+1}-1}^{T}(f)-f\right\|_{p}\leq\frac{47}{30}\omega_{p }\left(f,2^{-n}\right).\]
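The Abel transform invoked in the proof of Lemma 6 is ordinary summation by parts. As a quick numerical sanity check (a hedged Python sketch: the kernels \(D_{k}\) are replaced by arbitrary reals \(d_{k}\), with the partial sums \(S_{k}=\sum_{j\leq k}d_{j}\) playing the role of \(kK_{k}\)), one can verify the identity \(\sum_{k=1}^{m}t_{k}d_{k}=\sum_{k=1}^{m-1}(t_{k}-t_{k+1})S_{k}+t_{m}S_{m}\):

```python
import numpy as np

def summation_by_parts(t, d):
    """Return both sides of the Abel (summation-by-parts) identity
    sum_k t_k d_k = sum_{k<m} (t_k - t_{k+1}) S_k + t_m S_m,
    where S_k = d_1 + ... + d_k."""
    S = np.cumsum(d)                                   # partial sums S_k
    lhs = np.dot(t, d)
    rhs = np.dot(t[:-1] - t[1:], S[:-1]) + t[-1] * S[-1]
    return lhs, rhs

rng = np.random.default_rng(0)
t = rng.random(2**5 - 1)                               # non-negative coefficients
d = rng.standard_normal(2**5 - 1)
lhs, rhs = summation_by_parts(t, d)
assert abs(lhs - rhs) < 1e-10
```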
2310.04953
Robust matrix completion via Novel M-estimator Functions
M-estimators including the Welsch and Cauchy have been widely adopted for robustness against outliers, but they also down-weigh the uncontaminated data. To address this issue, we devise a framework to generate a class of nonconvex functions which only down-weigh outlier-corrupted observations. Our framework is then applied to the Welsch, Cauchy and $\ell_p$-norm functions to produce the corresponding robust loss functions. Targeting the application of robust matrix completion, efficient algorithms based on these functions are developed and their convergence is analyzed. Finally, extensive numerical results demonstrate that the proposed methods are superior to the competitors in terms of recovery accuracy and runtime.
Zhi-Yong Wang, Hing Cheung So
2023-10-08T00:25:34Z
http://arxiv.org/abs/2310.04953v1
# Robust matrix completion via Novel M-estimator Functions ###### Abstract M-estimators including the Welsch and Cauchy have been widely adopted for robustness against outliers, but they also down-weigh the uncontaminated data. To address this issue, we devise a framework to generate a class of nonconvex functions which only down-weigh outlier-corrupted observations. Our framework is then applied to the Welsch, Cauchy and \(\ell_{p}\)-norm functions to produce the corresponding robust loss functions. Targeting the application of robust matrix completion, efficient algorithms based on these functions are developed and their convergence is analyzed. Finally, extensive numerical results demonstrate that the proposed methods are superior to the competitors in terms of recovery accuracy and runtime. Low-rank matrix completion, matrix factorization, outlier-robustness, implicit regularizer. ## I Introduction Matrix completion (MC) [1, 2] refers to recovering the missing entries of a partially-observed matrix. It has numerous applications in signal processing and machine learning, such as hyperspectral imaging [3] and image inpainting [4]. MC can be formulated as a constrained rank minimization problem [5], but it is NP-hard since the rank is discrete. To address this, nuclear norm minimization is exploited [6] to recast MC as a semi-definite program [7]. Although it can be solved by the interior-point method [8], its computational complexity is high. On the other hand, computationally efficient algorithms such as singular value thresholding [9] and accelerated proximal gradient with linesearch [10] have been proposed. Nevertheless, they still require a full singular value decomposition (SVD) per iteration, which is costly, especially for large-scale data. 
To avoid performing SVD, the factorization based MC approach [11, 12] has been developed. Its idea is to approximate the observed matrix by the product of two much smaller matrices, so that the low-rank property is automatically fulfilled. MC algorithms such as low-rank matrix fitting [13] and alternating minimization [14] are then proposed. In addition, Zhu _et al._ [11, 12] have shown that this kind of MC problem has no spurious local minima and obeys the strict saddle property, which requires the cost function to have a directional negative curvature at all critical points but local minima. That is, although the factorization based MC problem is nonconvex, global optimality can be achieved under some conditions. Nevertheless, the above-mentioned methods are vulnerable to outliers. To resist outliers, the \(\ell_{1}\)-norm is suggested, resulting in numerous robust MC algorithms, including robust matrix factorization by majorization minimization (RMF-MM) [15] and practical low-rank matrix approximation under robust \(\ell_{1}\)-norm (RegL\({}_{1}\)) [16]. Nevertheless, the \(\ell_{1}\)-norm is still vulnerable to outliers with large magnitudes because it is not upper bounded [17]. To enhance outlier-robustness, the \(\ell_{p}\)-norm with \(0<p<1\) [17, 18] and nonconvex penalty functions such as the Welsch and Tukey are adopted [19, 20]. Although these functions penalize outliers by assigning small weights to outlier-corrupted data, they also down-weigh the normal data. Here, normal data refer to observations without noise or with only Gaussian noise. Compared with these nonconvex functions, the Huber function only penalizes outlier-contaminated entries but is still sensitive to large outliers because it employs the \(\ell_{1}\)-norm for robustness [21]. In this paper, we devise a framework to produce a class of M-estimator functions, which only down-weigh outlier-contaminated data and can resist even large outliers. 
The framework is then applied to the commonly-used Welsch, Cauchy and \(\ell_{p}\)-norm M-estimators, resulting in the corresponding robust functions. Since these functions are nonconvex, the Legendre-Fenchel (LF) transform is exploited to convert the nonconvex problem into a sum of convex problems with closed-form solutions. Furthermore, we apply the developed functions to factorization based MC and propose robust MC algorithms with convergence guarantees. ## II Problem formulation Let \(\Omega\subset\{1,\cdots,m\}\times\{1,\cdots,n\}\) represent the index set of the known entries of an incomplete matrix \(\boldsymbol{X}_{\Omega}\), and \((\cdot)_{\Omega}\) is a projection operator, defined as: \[\left[\boldsymbol{X}_{\Omega}\right]_{ij}=\begin{cases}X_{ij},&\text{if }(i,j)\in\Omega\\ 0,&\text{otherwise}.\end{cases}\] Given \(\boldsymbol{X}_{\Omega}\), MC seeks a low-rank matrix \(\boldsymbol{M}\) that matches \(\boldsymbol{X}_{\Omega}\) and estimates its missing entries, which can be modeled as a rank minimization problem [5]: \[\min_{\boldsymbol{M}}\ \text{rank}(\boldsymbol{M}),\ \text{s.t.}\ \boldsymbol{M}_{ \Omega}=\boldsymbol{X}_{\Omega} \tag{1}\] However, (1) is an NP-hard problem. Instead, the convex nuclear norm is suggested [6]: \[\min_{\boldsymbol{M}}\ \|\boldsymbol{M}\|_{*},\ \text{s.t.}\ \boldsymbol{M}_{ \Omega}=\boldsymbol{X}_{\Omega} \tag{2}\] where the nuclear norm \(\|\boldsymbol{M}\|_{*}\) is the sum of singular values of \(\boldsymbol{M}\). Although it is convex, full SVD calculation is required per iteration. To address this problem, factorization based MC has been exploited [13]: \[\min_{\mathbf{U},\mathbf{V}}\ \left\|\mathbf{X}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{\Omega} \right\|_{F}^{2} \tag{3}\] where \(\mathbf{U}\in\mathbb{R}^{m\times r}\) and \(\mathbf{V}\in\mathbb{R}^{r\times n}\) are two much smaller matrices, with rank \(r\ll\min(m,n)\). 
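To make the notation concrete, the masking operator \((\cdot)_{\Omega}\) and the factorization objective (3) can be sketched in a few lines of numpy; the function names are ours, not the paper's:

```python
import numpy as np

def project(X, mask):
    """(X)_Omega: keep observed entries (mask == True), zero elsewhere."""
    return np.where(mask, X, 0.0)

def factorization_loss(X, mask, U, V):
    """Objective of (3): || X_Omega - (UV)_Omega ||_F^2."""
    R = project(X - U @ V, mask)
    return np.sum(R**2)

# toy usage
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 5))
mask = rng.random((6, 5)) < 0.5
U = rng.standard_normal((6, 2))
V = rng.standard_normal((2, 5))
assert factorization_loss(X, mask, U, V) >= 0.0
```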
Although the restored matrix \(\mathbf{M}=\mathbf{U}\mathbf{V}\) is low-rank, the recovery performance is sensitive to outliers because the \(\ell_{2}\)-norm is utilized. To resist outliers, loss functions such as the Huber and Welsch functions are suggested [19, 20], resulting in: \[\min_{\mathbf{U},\mathbf{V}}\ l(\mathbf{X}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{\Omega}) \tag{4}\] where \(l(\cdot)\) refers to a robust loss function. Table I tabulates the commonly-used loss functions and the corresponding weight functions. It is known that a good robust loss function should only down-weigh the outlier-corrupted elements. We see that the quadratic function, namely, the \(\ell_{2}\)-norm, has the same weights for all entries, including the outlier-contaminated data; thus it is suitable for handling normal data but is not robust against outliers. Although the Huber function only assigns small weights to noisy observations, it is still vulnerable to large outliers since it employs the \(\ell_{1}\)-norm to combat gross errors. To enhance outlier-robustness, nonconvex functions such as the Welsch and Cauchy are suggested [22]. However, as shown in Table I, these functions also reduce the weights of normal data. Recently, we designed a novel robust function called hybrid ordinary-Welsch (HOW) [21], where 'ordinary' refers to the quadratic function; its expression is shown in Table I. We see that only the Huber and HOW functions down-weigh solely the outlier-corrupted entries because they assign the same weight for \(|x|\leq c\), where \(c\) is a parameter to differentiate whether an entry is contaminated by outliers or not. That is, when \(|x|>c\), the corresponding element is considered corrupted by an outlier, and is assumed to be a normal entry otherwise. 
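The qualitative contrast between the weight functions discussed above can be checked directly. The sketch below uses the standard M-estimation convention \(w(x)=l^{\prime}(x)/x\); the HOW weight for \(|x|>c\) is obtained by differentiating the HOW loss, and all function names are ours:

```python
import numpy as np

def w_huber(x, c):
    """Huber weight: 1 on |x| <= c, c/|x| beyond."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x <= c, 1.0, c / np.maximum(x, 1e-300))

def w_welsch(x, sigma):
    """Welsch weight: exp(-x^2 / sigma^2) -- shrinks even normal data."""
    x = np.asarray(x, dtype=float)
    return np.exp(-x**2 / sigma**2)

def w_how(x, c, sigma):
    """HOW weight: 1 on |x| <= c, exp((c^2 - x^2)/sigma^2) beyond."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x <= c, 1.0, np.exp((c**2 - x**2) / sigma**2))

# normal datum (|x| <= c): only Welsch down-weighs it
assert w_huber(0.5, c=1.0) == 1.0 and w_how(0.5, c=1.0, sigma=1.0) == 1.0
assert w_welsch(0.5, sigma=1.0) < 1.0
# large outlier: Huber decays like 1/|x|, HOW decays exponentially
assert w_how(5.0, c=1.0, sigma=1.0) < w_huber(5.0, c=1.0)
```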
## III Novel M-estimator function and its application to robust matrix completion In this section, motivated by the construction of the Huber and HOW functions, we devise a framework to generate a class of M-estimator functions which only down-weigh the outlier-corrupted data. ### _Framework to Generate M-estimator Functions_ We generalize the expressions of the Huber and HOW functions for \(|x|>c\) and develop a new generic function: \[l_{g,c}(x)=\begin{cases}x^{2}/2,&|x|\leq c\\ a\cdot g(|x|)+b,&|x|>c\end{cases} \tag{5}\] where \(g(x)\) is a continuous function and \(g^{\prime}(x)\geq 0\) for \(x>0\), while \(a\) and \(b\) are constants to ensure that \(l_{g,c}(x)\) is continuous and smooth at \(x=c\). Thus, \(a=c/g^{\prime}(c)>0\) (\(g^{\prime}(c)\neq 0\)), and \(b=c^{2}/2-ag(c)\). We easily see that \(l_{g,c}\) only down-weighs the outlier-corrupted data because when \(|x|\leq c\), \(l_{g,c}\) is the quadratic function and does not reduce the weight of normal data, while when \(|x|>c\), \(l_{g,c}\) employs the robust function \(g\) to handle outliers. Compared with the Huber function, where \(g(x)=|x|\), we will focus on nonconvex \(g(x)\) because the latter can resist large outliers. Hence, the function \(l_{g,c}\) is assumed nonconvex in our study. As in [21], the LF transform [23] is applied to \(l_{g,c}\), resulting in: \[l_{g,c}(x)=\inf_{y}\ \frac{(y-x)^{2}}{2}+\varphi_{g,c}(y) \tag{6}\] where \(\varphi_{g,c}\) is the dual function of \(l_{g,c}\) and is also called the implicit regularizer (IR). The value of \(y\) that solves (6) is: \[P_{\varphi_{g,c}}(x):=\max\left\{0,|x|-a\cdot g^{\prime}(|x|) \right\}\cdot\mathrm{sign}(x) \tag{7}\] The derivation of (7) from (5) can be found in [21]. Next, we will specify \(g\) as several commonly-used functions, including the \(\ell_{p}\)-norm and Cauchy function. Note that we have already specified \(g\) as the Welsch M-estimator and have proposed the HOW function in [21]. 
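The generic construction (5)-(7) is easy to exercise numerically. The sketch below builds the proximity operator (7) from any smooth \(g\) with \(a=c/g^{\prime}(c)\); instantiating \(g\) with the Welsch function recovers the HOW shrinkage of [21] (helper names are ours):

```python
import numpy as np

def make_prox(g_prime, c):
    """Proximity operator (7) for l_{g,c}:
    P(x) = max{0, |x| - a * g'(|x|)} * sign(x), with a = c / g'(c)."""
    a = c / g_prime(c)
    def prox(x):
        x = np.asarray(x, dtype=float)
        mag = np.maximum(0.0, np.abs(x) - a * g_prime(np.abs(x)))
        return mag * np.sign(x)
    return prox

# Welsch g(x) = (sigma^2/2)(1 - exp(-x^2/sigma^2)), so g'(x) = x exp(-x^2/sigma^2)
sigma, c = 1.0, 0.5
prox_how = make_prox(lambda x: x * np.exp(-x**2 / sigma**2), c)

assert prox_how(0.3) == 0.0      # |x| <= c: treated as a normal entry
assert prox_how(5.0) > 0.0       # |x| > c: part of the residual is explained as outlier
```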
When \(g(x)=|x|^{p}\), we have the smooth hybrid ordinary-\(\ell_{p}\) (HOP) function: \[l_{p,c}(x)=\begin{cases}x^{2}/2,&|x|\leq c\\ \frac{1}{p}c^{2-p}|x|^{p}+\frac{c^{2}}{2}-\frac{1}{p}c^{2},&|x|>c\end{cases} \tag{8}\] By the LF transform, we obtain \[l_{p,c}(x)=\min_{y}\ \frac{(x-y)^{2}}{2}+\varphi_{p,c}(y) \tag{9}\] where \(\varphi_{p,c}(y)\) is the IR related to \(l_{p,c}(x)\), and the solution to \(y\) is: \[P_{\varphi_{p,c}}(x)=\max\left\{0,|x|-c^{2-p}|x|^{p-1}\right\}\cdot\mathrm{ sign}(x) \tag{10}\] Note that when \(p=1\), (8) becomes the Huber function. We then replace \(g(x)\) by the Cauchy M-estimator shown in Table I and develop the hybrid ordinary-Cauchy (HOC) function: \[l_{\gamma,c}(x)=\begin{cases}x^{2}/2,&|x|\leq c\\ \frac{\gamma^{2}+c^{2}}{2}\ln\left(1+\left(\frac{x}{\gamma}\right)^{2}\right) +b,&|x|>c\end{cases}\] where \(b=\frac{c^{2}}{2}-\frac{\gamma^{2}+c^{2}}{2}\ln\left(1+\left(c/\gamma\right)^{ 2}\right)\) and \(\gamma\) is the scale parameter. Employing the LF transform results in: \[l_{\gamma,c}(x)=\min_{y}\ \frac{(x-y)^{2}}{2}+\varphi_{\gamma,c}(y)\] with the solution to \(y\) being: \[P_{\varphi_{\gamma,c}}(x):=\max\left\{0,|x|-\frac{\left(\gamma^{2}+c^{2} \right)|x|}{\gamma^{2}+x^{2}}\right\}\cdot\mathrm{sign}(x) \tag{11}\] ### _Algorithms for Robust Matrix Completion_ We replace the Frobenius norm in (3) with our M-estimator functions, leading to: \[\min_{\mathbf{U},\mathbf{V}}\ l_{g,c}\left(\mathbf{X}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{ \Omega}\right) \tag{12}\] where \(l_{g,c}\left(\mathbf{X}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{\Omega}\right)\) is separable, i.e., \(l_{g,c}\left(\mathbf{X}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{\Omega}\right)\) \(=\sum_{i,j\in\Omega}l_{g,c}\left(\mathbf{X}_{i,j}-\left(\mathbf{U}\mathbf{V}\right)_{i,j}\right)\). 
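Both closed-form maps (10) and (11) are cheap to evaluate elementwise. A hedged numpy sketch, including a check that \(p=1\) reduces the HOP map to Huber-style soft-thresholding (function names are ours):

```python
import numpy as np

def prox_hop(x, c, p):
    """Eq. (10): proximity operator for the HOP implicit regularizer."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(0.0, np.abs(x) - c**(2 - p) * np.abs(x)**(p - 1))
    return mag * np.sign(x)

def prox_hoc(x, c, gamma):
    """Eq. (11): proximity operator for the HOC implicit regularizer."""
    x = np.asarray(x, dtype=float)
    mag = np.maximum(0.0, np.abs(x) - (gamma**2 + c**2) * np.abs(x) / (gamma**2 + x**2))
    return mag * np.sign(x)

# p = 1 recovers the Huber case: soft-thresholding at c
x = np.array([-3.0, 0.5, 2.0])
np.testing.assert_allclose(prox_hop(x, c=1.0, p=1.0), [-2.0, 0.0, 1.0])
```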
According to (6), we have: \[\min_{\mathbf{U},\mathbf{V},\mathbf{S}}\mathcal{L}_{g,c}\left(\mathbf{U},\mathbf{V},\mathbf{S}\right):= \frac{1}{2}\left\|\mathbf{X}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{\Omega}-\mathbf{S}_{ \Omega}\right\|_{F}^{2}+\varphi_{g,c}(\mathbf{S}_{\Omega}) \tag{13}\] where \(\varphi(\mathbf{S})=\sum_{i,j}\varphi(\mathbf{S}_{i,j})\), and \(\mathbf{S}_{\Omega^{c}}=\mathbf{0}\). In the \((k+1)\)th iteration, given \(\mathbf{U}^{k}\) and \(\mathbf{V}^{k}\), (13) is equal to: \[\min_{\mathbf{S}}\ \frac{1}{2}\left\|\mathbf{D}_{\Omega}^{k}-\mathbf{S}_{\Omega}\right\|_ {F}^{2}+\varphi_{g,c}(\mathbf{S}_{\Omega}) \tag{14}\] where \(\mathbf{D}^{k}=\mathbf{X}-\mathbf{U}^{k}\mathbf{V}^{k}\). The solution to (14) via (7) is: \[\mathbf{S}_{\Omega}^{k+1}=P_{\varphi_{g,c}}(\mathbf{D}_{\Omega}^{k}) \tag{15}\] Given \(\mathbf{S}^{k+1}\), (13) amounts to: \[\min_{\mathbf{U},\mathbf{V}}\ \ h(\mathbf{U},\mathbf{V}):=\frac{1}{2}\left\|\mathbf{H}_{\Omega}^{k+1}- \left(\mathbf{U}\mathbf{V}\right)_{\Omega}\right\|_{F}^{2} \tag{16}\] where \(\mathbf{H}_{\Omega}^{k+1}=\mathbf{X}_{\Omega}-\mathbf{S}_{\Omega}^{k+1}\), and it can be efficiently solved by the scaled alternating steepest descent (SASD) [24]. 
Then, the scaled gradient descent directions for our case are: \[\widetilde{\nabla}h_{\mathbf{V}}(\mathbf{U}) =\left(\mathbf{H}_{\Omega}-\left(\mathbf{U}\mathbf{V}\right)_{\Omega}\right) \mathbf{V}^{T}(\mathbf{V}\mathbf{V}^{T})^{-1} \tag{17a}\] \[\widetilde{\nabla}h_{\mathbf{U}}(\mathbf{V}) =\left(\mathbf{U}^{T}\mathbf{U}\right)^{-1}\mathbf{U}^{T}\left(\mathbf{H}_{\Omega }-\left(\mathbf{U}\mathbf{V}\right)_{\Omega}\right) \tag{17b}\] with the corresponding step sizes being: \[\widetilde{\mu}_{\mathbf{U}} =\left\langle\nabla h_{\mathbf{V}}(\mathbf{U}),\widetilde{\nabla}h_{\mathbf{ V}}(\mathbf{U})\right\rangle\bigg{/}\left\|\left(\widetilde{\nabla}h_{\mathbf{V}}( \mathbf{U})\mathbf{V}\right)_{\Omega}\right\|_{F}^{2} \tag{18a}\] \[\widetilde{\mu}_{\mathbf{V}} =\left\langle\nabla h_{\mathbf{U}}(\mathbf{V}),\widetilde{\nabla}h_{\mathbf{ U}}(\mathbf{V})\right\rangle\bigg{/}\left\|\left(\mathbf{U}\widetilde{\nabla}h_{\mathbf{ U}}(\mathbf{V})\right)_{\Omega}\right\|_{F}^{2} \tag{18b}\] Thus, the SASD updates for \(\mathbf{U}\) and \(\mathbf{V}\) are: \[\mathbf{U}^{k+1} =\mathbf{U}^{k}-\widetilde{\mu}_{\mathbf{U}}^{k}\widetilde{\nabla}h_{\mathbf{ V}^{k}}(\mathbf{U}^{k}) \tag{19a}\] \[\mathbf{V}^{k+1} =\mathbf{V}^{k}-\widetilde{\mu}_{\mathbf{V}}^{k}\widetilde{\nabla}h_{\mathbf{ U}^{k+1}}(\mathbf{V}^{k}) \tag{19b}\] Recall that the value of \(c\) in (5) is the threshold that determines whether an entry is corrupted or not. Similar to [19, 21], its value is set as: \[c^{k}=\min\left\{\xi d^{k},c^{k-1}\right\} \tag{20}\] where \(\xi>0\) is a user-defined constant, and \(d^{k}\) is the robust normalized interquartile range of the vectorized \(\mathbf{D}_{\Omega}\), defined as: \[d^{k}=\mathrm{IQR}\left(\mathrm{vec}(\mathbf{D}_{\Omega}^{k})\right)/1.349 \tag{21}\] with \(\mathrm{IQR}(\cdot)\) being the sample interquartile range operator [25]. 
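One SASD pass (17)-(19) can be sketched as follows. The signs are written explicitly and the step sizes come from exact line search along each scaled direction, which matches (18) up to the sign bookkeeping of the gradient; the function name is ours:

```python
import numpy as np

def sasd_step(H, mask, U, V):
    """One scaled alternating steepest descent pass on
    h(U,V) = 0.5 * || H_Omega - (UV)_Omega ||_F^2."""
    # U-update: scaled direction and exact line-search step
    R = np.where(mask, H - U @ V, 0.0)
    dU = R @ V.T @ np.linalg.inv(V @ V.T)
    G = np.where(mask, dU @ V, 0.0)
    U = U + (np.vdot(R @ V.T, dU) / np.vdot(G, G)) * dU
    # V-update with the fresh U
    R = np.where(mask, H - U @ V, 0.0)
    dV = np.linalg.inv(U.T @ U) @ (U.T @ R)
    G = np.where(mask, U @ dV, 0.0)
    V = V + (np.vdot(U.T @ R, dV) / np.vdot(G, G)) * dV
    return U, V

# the objective is non-increasing along SASD steps
rng = np.random.default_rng(2)
H = rng.standard_normal((8, 6)); mask = rng.random((8, 6)) < 0.7
U, V = rng.standard_normal((8, 2)), rng.standard_normal((2, 6))
before = np.sum(np.where(mask, H - U @ V, 0.0)**2)
U, V = sasd_step(H, mask, U, V)
after = np.sum(np.where(mask, H - U @ V, 0.0)**2)
assert after <= before + 1e-9
```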
When HOW, HOP and HOC are adopted in (12), the resultant algorithms are referred to as robust MC via HOW (RMC-HOW), HOP (RMC-HOP) and HOC (RMC-HOC), respectively. In addition, SASD dominates the complexity of the proposed approach, which is \(\mathcal{O}\left(8|\Omega|r+4(m+n)r^{2}\right)\) per iteration. Defining \(E_{k}=l_{g,c}\left(\mathbf{X}_{\Omega}-\left(\mathbf{U}^{k}\mathbf{V}^{k}\right)_{\Omega}\right)\), we terminate our algorithms when the relative error \(rel_{E}^{k}=\left|E_{k}-E_{k-1}\right|/E_{k-1}\) drops below \(\zeta\). On the other hand, the convergence analysis results are shown in the following theorems, whose proofs are analogous to those in our previous work [21]; we omit them due to the page limit. **Theorem 1**.: _The generated sequence \(\left\{\mathcal{L}_{g^{k},c^{k}}\left(\mathbf{U}^{k},\mathbf{V}^{k},\mathbf{S}^{k}\right)\right\}\) converges._ **Theorem 2**.: _Let \(\left\{\left(\mathbf{U}^{k},\mathbf{V}^{k},\mathbf{S}^{k}\right)\right\}\) be the generated sequence, and suppose that \(\left(\mathbf{U}^{k},\mathbf{V}^{k}\right)\) are of full rank. Then \(\left\{\left(\mathbf{U}^{k},\mathbf{V}^{k},\mathbf{S}^{k}\right)\right\}\) is bounded. Besides, let \(\left\{\left(\mathbf{U}^{k_{j}},\ \mathbf{V}^{k_{j}},\mathbf{S}^{k_{j}}\right)\right\}\) be a generated subsequence such that \(\lim_{k_{j}\rightarrow\infty}\left(\mathbf{U}^{k_{j}},\mathbf{V}^{k_{j}},\mathbf{S}^{k_{j}}\right)\) \(=\left(\mathbf{U}^{*},\mathbf{V}^{*},\mathbf{S}^{*}\right)\). Then, \(\left(\mathbf{U}^{*},\mathbf{V}^{*},\mathbf{S}^{*}\right)\) is a critical point._ ## IV Experimental Results We compare our algorithms with the competing methods, including HQ-ASD [19], \((\mathbf{S}+\mathbf{L})_{1/2}\)[18], \((\mathbf{S}+\mathbf{L})_{2/3}\)[18], RMF-MM [15] and \(\mathrm{RegL}_{1}\)[16]. All numerical simulations are conducted using a computer with a 3.0 GHz CPU and 16 GB memory. 
We first generate a low-rank matrix \(\mathbf{X}=\mathbf{U}\mathbf{V}\), where the entries of \(\mathbf{U}\in\mathbb{R}^{m\times r}\) and \(\mathbf{V}\in\mathbb{R}^{r\times n}\) follow the standard Gaussian distribution. Impulsive noise is modeled by the Gaussian mixture model (GMM), and the signal-to-noise ratio (SNR) is defined as: \[\mathrm{SNR}=\frac{\|\mathbf{X}_{\Omega}\|_{F}^{2}}{|\Omega|\left((1-\tau)\sigma_{1}^{2 }+\tau\sigma_{2}^{2}\right)} \tag{22}\] where \(\sigma_{1}^{2}\) and \(\sigma_{2}^{2}\) are variances with \(\sigma_{1}^{2}\ll\sigma_{2}^{2}\), and \(\tau\) controls the proportion of outliers. To model outliers, we set \(\sigma_{2}^{2}=100\sigma_{1}^{2}\) and \(\tau=0.1\). Besides, the root mean square error (RMSE), defined as \(\mathrm{RMSE}=\|\mathbf{X}-\mathbf{M}\|_{F}/\sqrt{mn}\), is utilized to measure the performance of all algorithms. Similar to the parameter selection in [21], we set \(\xi=2\) and \(\zeta=10^{-4}\) in our methods. We conduct experiments on data matrices with \(m=300\), \(n=200\) and \(r=5\). Fig. 1 plots the RMSE versus SNR with \(30\%\) observations. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & Quadratic & Huber & Cauchy & Welsch & HOW \\ \hline \(l(x)\) & \(\frac{x^{2}}{2}\) & \(\begin{cases}x^{2}/2,&|x|\leq c\\ c|x|-\frac{c^{2}}{2},&|x|>c\end{cases}\) & \(\frac{\gamma^{2}}{2}\ln\left(1+\left(\frac{x}{\gamma}\right)^{2}\right)\) & \(\frac{\sigma^{2}}{2}\left(1-e^{-\frac{x^{2}}{\sigma^{2}}}\right)\) & \(\begin{cases}x^{2}/2,&|x|\leq c\\ \frac{\sigma^{2}}{2}\left(1-e^{\frac{c^{2}-x^{2}}{\sigma^{2}}}\right)+\frac{c^{2}}{2},&|x|>c\end{cases}\) \\ \hline \(w(x)=\frac{l^{\prime}(x)}{x}\) & \(1\) & \(\begin{cases}1,&|x|\leq c\\ \frac{c}{|x|},&|x|>c\end{cases}\) & \(\frac{\gamma^{2}}{\gamma^{2}+x^{2}}\) & \(e^{-\frac{x^{2}}{\sigma^{2}}}\) & \(\begin{cases}1,&|x|\leq c\\ e^{\frac{c^{2}-x^{2}}{\sigma^{2}}},&|x|>c\end{cases}\) \\ \hline \end{tabular} \caption{Commonly-used loss functions and the corresponding weight functions.} \end{table} It is seen that the RMSE for all methods decreases with SNR, while the proposed algorithms have smaller recovery error than the competing techniques. In addition, RMC-HOW yields the best recovery because HOW is bounded. Moreover, the impact of the percentage of observations is investigated, and the results are shown in Fig. 2. 
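The synthetic-noise setup above can be reproduced as follows; the scaling solves (22) for \(\sigma_{1}^{2}\) given a target SNR (a sketch; parameter names such as `kappa` are ours):

```python
import numpy as np

def add_gmm_noise(X_obs, mask, snr_db, tau=0.1, kappa=100.0, seed=0):
    """Impulsive noise via a two-component GMM with sigma2^2 = kappa * sigma1^2,
    scaled so that the SNR of eq. (22) matches snr_db."""
    rng = np.random.default_rng(seed)
    n_obs = int(mask.sum())
    snr = 10.0 ** (snr_db / 10.0)
    # solve (22) for sigma1^2, using the mixture variance (1-tau)s1 + tau*kappa*s1
    s1 = np.sum(X_obs[mask] ** 2) / (n_obs * snr * ((1 - tau) + tau * kappa))
    is_outlier = rng.random(n_obs) < tau
    std = np.where(is_outlier, np.sqrt(kappa * s1), np.sqrt(s1))
    Y = X_obs.copy()
    Y[mask] += rng.standard_normal(n_obs) * std
    return Y

def rmse(X, M):
    """RMSE = ||X - M||_F / sqrt(mn)."""
    return np.linalg.norm(X - M) / np.sqrt(X.size)

# usage: corrupt a fully observed matrix at 10 dB SNR
X = np.ones((5, 4)); mask = np.ones((5, 4), dtype=bool)
Y = add_gmm_noise(X, mask, snr_db=10.0)
err = rmse(X, Y)   # noise level consistent with the target SNR
```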
Again, our methods outperform the competitors, with HOW attaining the best recovery performance. Finally, we investigate the runtime of all methods, and perform numerical experiments on four cases with different matrix dimensions. Here, Case 1: \(m=300,n=200,r=5\), Case 2: \(m=600,n=400,r=10\), Case 3: \(m=900,n=600,r=15\), and Case 4: \(m=1200,n=800,r=20\). The results under SNR = 10 dB and \(50\%\) observations are tabulated in Table II. We see that the runtime of our algorithms is less than that of the competitors. ## V Conclusion In this paper, we provide a framework to generate a new class of robust loss functions by combining the quadratic function with other robust loss functions. The proposed functions can be used to combat gross errors and only down-weigh the outlier-contaminated observations. Applying our framework to the Welsch, Cauchy and \(\ell_{p}\)-norm functions yields the HOW, HOC and HOP, respectively, which are then adopted for robust MC. Furthermore, although the resultant optimization problem is nonconvex, the LF transform is adopted to transform it into a sum of convex subproblems. Then, efficient robust MC algorithms based on SASD are developed. Finally, experimental results show the superiority of the proposed algorithms over the competing methods in terms of recovery error and runtime.
2310.15686
Assume-Guarantee Verification of Strategic Ability
Model checking of strategic abilities is a notoriously hard problem, even more so in the realistic case of agents with imperfect information. Assume-guarantee reasoning can be of great help here, providing a way to decompose the complex problem into a small set of exponentially easier subproblems. In this paper, we propose two schemes for assume-guarantee verification of alternating-time temporal logic with imperfect information. We prove the soundness of both schemes, and discuss their completeness. We illustrate the method by examples based on known benchmarks, and show experimental results that demonstrate the practical benefits of the approach.
Łukasz Mikulski, Wojciech Jamroga, Damian Kurpiewski
2023-10-24T09:58:43Z
http://arxiv.org/abs/2310.15686v1
# Assume-Guarantee Verification of Strategic Ability ###### Abstract Model checking of strategic abilities is a notoriously hard problem, even more so in the realistic case of agents with imperfect information. Assume-guarantee reasoning can be of great help here, providing a way to decompose the complex problem into a small set of exponentially easier subproblems. In this paper, we propose two schemes for assume-guarantee verification of alternating-time temporal logic with imperfect information. We prove the soundness of both schemes, and discuss their completeness. We illustrate the method by examples based on known benchmarks, and show experimental results that demonstrate the practical benefits of the approach. Keywords: model checking, assume-guarantee reasoning, strategic ability ## 1 Introduction Multi-agent systems involve a complex network of social and technological components. Such components often exhibit self-interested, goal-directed behavior, which makes it harder to predict and analyze the dynamics of the system. In consequence, formal specification and automated verification can be of significant help. **Verification of strategic ability.** Many important properties of multi-agent systems refer to _strategic abilities_ of agents and their groups. _Alternating-time temporal logic_\(\mathbf{ATL}^{*}\)[2, 37] and _Strategy Logic_\(\mathbf{SL}\)[34] provide powerful tools to reason about such aspects of MAS. For example, the \(\mathbf{ATL}^{*}\) formula \(\langle\!\langle taxi\rangle\!\rangle\mathds{G}\neg\)fatality expresses that the autonomous cab can drive in such a way that no one ever gets killed. Similarly, \(\langle\!\langle taxi,passg\rangle\!\rangle\mathds{F}\)destination says that the cab and the passenger have a joint strategy to arrive at the destination, no matter what the other agents do. 
Specifications in agent logics can be used as input to algorithms and tools for _model checking_, which have been in constant development for over 20 years [3, 6, 7, 20, 27, 30]. Model checking of strategic abilities is hard, both theoretically and in practice. First, it suffers from the well-known state/transition-space explosion. Moreover, the space of possible strategies is at least exponential _on top of the state-space explosion_, and incremental synthesis of strategies is not possible in general - especially in the realistic case of agents with partial observability. Even for the more restricted (and computation-friendly) logic \(\mathbf{ATL}\), model checking of its imperfect information variants is \(\mathbf{\Delta_{2}^{P}}\)- to \(\mathbf{PSPACE}\)-complete for agents playing memoryless strategies [5, 37] and **EXPTIME**-complete to undecidable for agents with perfect recall [12, 16]. The theoretical results concur with outcomes of empirical studies on benchmarks [6, 22, 30], as well as recent attempts at verification of real-life multi-agent scenarios [21, 26]. **Contribution.** In this paper, we take the first step towards compositional model checking of strategic properties in asynchronous multi-agent systems with imperfect information. The idea of _assume-guarantee reasoning_[10, 36] is to "factorize" the verification task into subtasks where components are verified against a suitable abstraction of the rest of the system. Thus, instead of searching through the states (and, in our case, strategies) of the huge product of all components, most of the search is performed locally. To achieve this, we adapt and extend the assume-guarantee framework of [31, 32]. We redefine the concepts of modules and their composition, follow the idea of expressing assumptions as Büchi automata, and accordingly redefine their interaction with the computations of the coalition. 
Then, we propose two alternative assume-guarantee schemes for \(\mathbf{ATL}^{*}\) with imperfect information. The first, simpler one is shown to be sound but incomplete. The more complex one turns out to be both sound and complete. We illustrate the properties of the schemes on a variant of the Trains, Gate and Controller scenario [4], and evaluate the practical gains through verification experiments on models of logistic robots, inspired by [26]. Note that our formal treatment of temporal properties, together with strategic properties of curtailment,4 substantially extends the applicability of schemes in [31, 32] from temporal liveness properties to strategic properties with arbitrary \(\mathbf{LTL}\) objectives. We also emphasize that our schemes are sound for the model checking of agents with _imperfect_ as well as _perfect recall_. In consequence, they can be used to facilitate verification problems with a high degree of hardness, including the undecidable variant for coalitions of agents with memory. In that case, the undecidable problem reduces to multiple instances of the **EXPTIME**-complete verification of individual abilities. Footnote 4: Provided in the supplementary material, available at [https://github.com/agrprima22/sup](https://github.com/agrprima22/sup). **Structure of the paper.** In Section 2, we present the model of concurrent MAS that we consider in this paper. In Section 3, we define the syntax and semantics of the logic used in the formulation of agents' strategic properties. In Sections 4 and 5 we introduce the notions of assumption and guarantee, and utilize them to propose two schemes of assume-guarantee reasoning for strategic abilities. Finally, we present preliminary results of experimental verification in Section 6 and conclude the paper in Section 7. 
**Related Work.** Compositional verification (known as _rely-guarantee_ in the program verification community) dates back to the early 1970s and the works of Hoare, Owicki, Gries and Jones [19, 24, 35]. Assume-guarantee reasoning for temporal specifications was introduced a decade later [10, 36], and has been in development since that time [11, 13, 18, 29, 31, 32]. Moreover, automated synthesis of assumptions for temporal reasoning has been studied in [9, 15, 17, 25]. The works that come closest to our proposal are [11, 14, 31, 32]. In [31, 32], models and a reasoning scheme are defined for assume-guarantee verification of liveness properties in distributed systems. We build directly on that approach and extend it to the verification of strategic abilities. [11] studies assume-guarantee reasoning for an early version of \(\mathbf{ATL}\). However, their assume-guarantee rules are designed for perfect infor mation strategies (whereas we tackle the more complex case of imperfect information), and targeted specifically the verification of aspect-oriented programs. Finally, [14] investigates the compositional synthesis of strategies for **LTL** objectives. The difference to our work is that they focus on finite-memory strategies while we consider the semantics of ability based on memoryless and perfect recall strategies. Another difference lies in our use of repertoire functions that define agents' choices in a flexible way, and make it closer to real applications. The advantage of the solution presented in [14] is the use of contracts, thanks to which it is possible to synthesize individual strategies using the knowledge of the coalition partners' strategies. We also mention [8] that studies the synthesis of Nash equilibrium strategies for 2-player coalitions pursuing \(\omega\)-regular objectives. The authors call their approach _assume-guarantee strategy synthesis_, but the connection to assume-guarantee verification is rather loose. 
A preliminary version of the ideas presented here was published in the extended abstract [33]. Our extension of the STV tool [27], used in the experiments, is described in the companion paper [28]. ## 2 Models of Concurrent MAS Asynchronous MAS have been modeled by variants of reactive modules [1, 32] and automata networks [23]. Here, we adapt the variant of reactive modules that was used to define assume-guarantee verification for temporal properties in [32]. ### Modules Let \(D\) be the shared domain of values for all the variables in the system. \(D^{X}\) is the set of all valuations for a set of variables \(X\). The _system_ consists of a number of _agents_, each represented by its _module_ and a _repertoire_ of available choices. Every agent uses _state variables_ and _input variables_. It can read and modify its state variables at any moment, and their valuation is determined by the current state of the agent. The input variables are not part of the state, but their values influence transitions that can be executed. Definition 1 (Module [32]): A _module_ is a tuple \(M=(X,I,Q,T,\lambda,q_{0})\), where: \(X\) is a finite set of state variables; \(I\) is a finite set of input variables with \(X\cap I=\varnothing\); \(Q=\{q_{0},q_{1},\ldots,q_{n}\}\) is a finite set of states; \(q_{0}\in Q\) is an initial state; \(\lambda:Q\to D^{X}\) labels each state with a valuation of the state variables; finally, \(T\subseteq Q\times D^{I}\times Q\) is a transition relation such that (a) for each pair \((q,\alpha)\in Q\times D^{I}\) there exists \(q^{\prime}\in Q\) with \((q,\alpha,q^{\prime})\in T\), and (b) \((q,\alpha,q^{\prime})\in T,q\neq q^{\prime}\) implies \((q,\alpha,q)\notin T\). In what follows, we omit the self-loops from the presentation. Modules \(M,M^{\prime}\) are _asynchronous_ if \(X\cap X^{\prime}=\varnothing\). We extend modules by adding _repertoire functions_ that define the agents' available choices in a way similar to [23]. 
Definition 2 (Repertoire): Let \(M=(X,I,Q,T,\lambda,q_{0})\) be a module of agent \(i\). The _repertoire_ of \(i\) is defined as \(R:Q\rightarrow\mathcal{P}(\mathcal{P}(T))\), i.e., a mapping from local states to sets of sets of transitions. Each \(R(q)=\{T_{1},\ldots,T_{m}\}\) must be nonempty and consist of nonempty sets \(T_{i}\) of transitions starting in \(q\). If the agent chooses \(T_{i}\in R(q)\), then only a transition in \(T_{i}\) can occur at \(q\) within the module.

We adapt the Train-Gate-Controller (TGC) benchmark [3] as our running example.

Example 1: The module \(M^{(i)}\) of a train is presented in Figure 1 (left). Its local states \(Q^{(i)}=\{w^{(i)},t^{(i)},a^{(i)}\}\) refer, respectively, to the train waiting at the entrance, riding in the tunnel, and cruising **a**way from the tunnel. The sole state variable \(x^{(i)}\) labels the state with values \(0\), \(1\), and \(2\), respectively. \(I^{(i)}=\{s\}\) consists of a single input variable that takes values from an external multi-valued semaphore. The train can enter and exit the tunnel only if the semaphore allows for that, i.e., if \(v(s)=i\). To this end, we define \(T^{(i)}=\{(w^{(i)},i,t^{(i)}),(t^{(i)},i,a^{(i)}),(a^{(i)},0,w^{(i)}),(a^{(i)},1,w^{(i)}),\ldots,(a^{(i)},n,w^{(i)})\}\cup\{(w^{(i)},j,w^{(i)}),(t^{(i)},j,t^{(i)})\mid j\neq i\}\). (By a slight abuse of notation, the valuation of a single variable is identified with its value.)

The module \(M^{(C(n))}\) of a controller that coordinates up to \(n\) trains is depicted in Figure 1 (right). Formally, it is defined by:

* \(X=\{s\}\) (the semaphore),
* \(I=\{x_{1},\ldots,x_{n}\}\) (the positions of trains),
* \(Q=\{r,g_{1},\ldots,g_{n}\}\) (red or directed green light), where a state with subscript \(1\) represents a tunnel shared with the other trains, \(\lambda(g_{i})(s)=i\), \(\lambda(r)(s)=0\), and \(r\) is the initial state.
The controller can change the light to green when a train is waiting for the permission to enter the tunnel, and back to red after it passed through the tunnel: \(T=\{(r,v,g_{i})\mid v(x_{i})=0\}\cup\{(g_{i},v,r)\mid v(x_{i})=2\}\).

Each agent can freely choose the local transition intended to execute next. Thus, \(R^{(i)}(q)=\{\{(q,\alpha,q^{\prime})\}\mid(q,\alpha,q^{\prime})\in T^{(i)}\}\), and similarly for \(R^{(C(n))}\). Note that all the modules in TGC are asynchronous.

Figure 1: A variant of TGC: Train synchronizing with a semaphore (left) and the controller (right).

### Composition of Agents

On the level of the temporal structure, the model of a multi-agent system is given by the asynchronous composition \(M=M^{(1)}|\ldots|M^{(n)}\) that combines modules \(M^{(i)}\) into a single module. The definition is almost the same as in [32]; we only extend it to handle the repertoire functions that are needed to characterize strategies and strategic abilities.

We begin with the notion of compatible valuations, which matches the local states of one agent with the labels of the actions performed by the other agent. Note that the local states of different asynchronous agents rely on disjoint sets of variables. Let \(Y,Z\subseteq X\) and \(\rho_{1}\in D^{Y}\) while \(\rho_{2}\in D^{Z}\). We say that \(\rho_{1}\) is compatible with \(\rho_{2}\) (denoted by \(\rho_{1}\sim\rho_{2}\)) if for any \(x\in Y\cap Z\) we have \(\rho_{1}(x)=\rho_{2}(x)\). For compatible \(\rho_{1}\) and \(\rho_{2}\), we can compute their union by setting \((\rho_{1}\cup\rho_{2})(x)=\rho_{1}(x)\) for \(x\in Y\) and \((\rho_{1}\cup\rho_{2})(x)=\rho_{2}(x)\) for \(x\in Z\).
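As an illustration (our own sketch, with valuations represented as Python dicts), compatibility and union can be implemented directly from the definitions above:

```python
# Sketch of valuation compatibility and union; a valuation rho in D^Y is
# represented as a dict mapping each variable in Y to its value.
def compatible(rho1: dict, rho2: dict) -> bool:
    """rho1 ~ rho2: the valuations agree on every shared variable."""
    return all(rho1[x] == rho2[x] for x in rho1.keys() & rho2.keys())

def union(rho1: dict, rho2: dict) -> dict:
    """rho1 U rho2, defined only for compatible valuations."""
    assert compatible(rho1, rho2), "union is only defined for compatible valuations"
    return {**rho2, **rho1}  # on shared variables both agree, so order is irrelevant
```

For example, in the TGC scenario `{"s": 1, "x1": 0}` is compatible with `{"s": 1, "x2": 2}`, and their union is a valuation of all three variables, whereas `{"s": 1}` and `{"s": 2}` are incompatible.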
Definition 3 (Composition of modules [32]): The composition of asynchronous modules \(M^{(1)}=(X^{(1)},I^{(1)},Q^{(1)},T^{(1)},\lambda^{(1)},q_{0}^{(1)})\) and \(M^{(2)}=(X^{(2)},I^{(2)},Q^{(2)},T^{(2)},\lambda^{(2)},q_{0}^{(2)})\) (with \(X^{(1)}\cap X^{(2)}=\varnothing\)) is a composite module \(M=(X=X^{(1)}\uplus X^{(2)},I=(I^{(1)}\cup I^{(2)})\setminus X,Q^{(1)}\times Q^{(2)},T,\lambda,q_{0}=(q_{0}^{(1)},q_{0}^{(2)}))\), where

* \(\lambda:Q^{(1)}\times Q^{(2)}\to D^{X}\), \(\lambda(q^{(1)},q^{(2)})=\lambda^{(1)}(q^{(1)})\cup\lambda^{(2)}(q^{(2)})\),
* \(T\) is the minimal transition relation derived by the set of rules presented below:

\[\mathbf{ASYN_{L}}\quad\frac{q^{(1)}\xrightarrow{\alpha^{(1)}}_{T^{(1)}}{q^{\prime}}^{(1)}\qquad\alpha^{(1)}\sim\alpha^{(2)}\qquad\lambda^{(2)}(q^{(2)})\sim\alpha^{(1)}}{(q^{(1)},q^{(2)})\xrightarrow{(\alpha^{(1)}\cup\alpha^{(2)})\setminus X}_{T}({q^{\prime}}^{(1)},q^{(2)})}\]

\[\mathbf{ASYN_{R}}\quad\frac{q^{(2)}\xrightarrow{\alpha^{(2)}}_{T^{(2)}}{q^{\prime}}^{(2)}\qquad\alpha^{(1)}\sim\alpha^{(2)}\qquad\lambda^{(1)}(q^{(1)})\sim\alpha^{(2)}}{(q^{(1)},q^{(2)})\xrightarrow{(\alpha^{(1)}\cup\alpha^{(2)})\setminus X}_{T}(q^{(1)},{q^{\prime}}^{(2)})}\]

\[\mathbf{SYN}\quad\frac{q^{(1)}\xrightarrow{\alpha^{(1)}}_{T^{(1)}}{q^{\prime}}^{(1)}\qquad q^{(2)}\xrightarrow{\alpha^{(2)}}_{T^{(2)}}{q^{\prime}}^{(2)}\qquad\alpha^{(1)}\sim\alpha^{(2)}\qquad\lambda^{(1)}(q^{(1)})\sim\alpha^{(2)}\qquad\lambda^{(2)}(q^{(2)})\sim\alpha^{(1)}}{(q^{(1)},q^{(2)})\xrightarrow{(\alpha^{(1)}\cup\alpha^{(2)})\setminus X}_{T}({q^{\prime}}^{(1)},{q^{\prime}}^{(2)})}\]

pruned in order to avoid disallowed self-loops. We use the notation \(M=M^{(1)}|M^{(2)}\).

Note that the operation is defined in [32] for a pair of modules only. It can be easily extended to a larger number of pairwise asynchronous modules. Moreover, the order of the composition does not matter. Consider agents \((M^{(1)},R^{(1)}),\ \ldots,\ (M^{(n)},R^{(n)})\).
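The transition rules of Definition 3 can be prototyped as a search over candidate input valuations. The following sketch is our own reading of the rules (the module encoding as dicts and the helper names are assumptions, and the pruning of disallowed self-loops is omitted for brevity):

```python
from itertools import product

# Sketch of computing the composite transition relation T for M1 | M2.
# A module is a dict with keys "X", "I", "Q", "T", "label"; valuations are
# frozensets of (variable, value) pairs.
def compatible(a, b):
    da, db = dict(a), dict(b)
    return all(da[x] == db[x] for x in da.keys() & db.keys())

def valuations(variables, domain):
    vs = sorted(variables)
    return [frozenset(zip(vs, c)) for c in product(domain, repeat=len(vs))]

def out_label(a1, a2, X):
    """(alpha1 u alpha2) \\ X: the composite input label."""
    return frozenset((v, d) for (v, d) in set(a1) | set(a2) if v not in X)

def compose_transitions(M1, M2, domain):
    X = M1["X"] | M2["X"]
    T = set()
    # ASYN rules: one module moves, the other idles under a compatible input
    for (q1, a1, p1), q2 in product(M1["T"], M2["Q"]):
        for a2 in valuations(M2["I"], domain):
            if compatible(a1, a2) and compatible(M2["label"][q2], a1):
                T.add(((q1, q2), out_label(a1, a2, X), (p1, q2)))
    for (q2, a2, p2), q1 in product(M2["T"], M1["Q"]):
        for a1 in valuations(M1["I"], domain):
            if compatible(a1, a2) and compatible(M1["label"][q1], a2):
                T.add(((q1, q2), out_label(a1, a2, X), (q1, p2)))
    # SYN rule: both modules move on compatible inputs
    for (q1, a1, p1), (q2, a2, p2) in product(M1["T"], M2["T"]):
        if (compatible(a1, a2) and compatible(M1["label"][q1], a2)
                and compatible(M2["label"][q2], a1)):
            T.add(((q1, q2), out_label(a1, a2, X), (p1, p2)))
    return T
```

Note that when the two modules read each other's state variables exclusively, the composite input labels become empty, since all variables end up in \(X\).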
The _multi-agent system_ is defined by \(\mathcal{S}=(M^{(1)}|M^{(2)}|\ldots|M^{(n)},\ R^{(1)},\ldots,R^{(n)})\), i.e., the composition of the underlying modules, together with the agents' repertoires of choices.

Example 2: The composition \(M^{(1)}|M^{(2)}|M^{(C(2))}\) of two train modules \(M^{(1)},M^{(2)}\) and controller \(M^{(C(2))}\) is presented in Figure 2. The asynchronous transitions are labelled by the agent performing the transitions. All the synchronous transitions performed by both trains are in red, while the synchronous transitions performed by the controller with one of the trains are in blue. There are two synchronous transitions performed by all the agents, both in green.

**Traces and Words.** A trace of a module \(M\) is an infinite sequence of alternating states and transitions \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\), where \((q_{i},\alpha_{i},q_{i+1})\in T\) for every \(i\in\mathbb{N}\) (note that \(q_{0}\) is the initial state). An infinite word \(w=v_{0}v_{1}\ldots\in(D^{X})^{\omega}\) is _derived_ by \(M\) with trace \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\) if \(v_{i}=\lambda(q_{i})\) for all \(i\in\mathbb{N}\). An infinite word \(u=\alpha_{0}\alpha_{1}\ldots\in(D^{I})^{\omega}\) is _admitted_ by \(M\) with \(\sigma\) if \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\). Finally, \(w\) (resp. \(u\)) is derived (resp. admitted) by \(M\) if there exists a trace of \(M\) that derives (resp. admits) it.

## 3 What Agents Can Achieve

Alternating-time temporal logic \(\mathbf{ATL}^{*}\) [2, 37] introduces _strategic modalities_ \(\langle\!\langle C\rangle\!\rangle\gamma\), expressing that coalition \(C\) can enforce the temporal property \(\gamma\). We use the semantics based on _imperfect information strategies_ with _imperfect recall_ (ir) or _perfect recall_ (iR) [37].
Moreover, we only consider formulas without the next step operator X, due to its questionable interpretation for asynchronous systems, which are based on the notion of local clocks.

**Syntax.** Formally, the syntax of \(\mathbf{ATL}^{*}_{-\mathbf{X}}\) is as follows:

\[\phi::=p(Y)\mid\neg\phi\mid\phi\wedge\phi\mid\langle\!\langle C\rangle\!\rangle\gamma\,,\qquad\qquad\gamma::=\phi\mid\neg\gamma\mid\gamma\wedge\gamma\mid\gamma\operatorname{U}\gamma\]

where \(p:Y\to D\) for some subset of domain variables \(Y\subseteq X\). That is, each atomic statement refers to the valuation of variables used in the system. \(\operatorname{U}\) is the "strong until" operator of \(\mathbf{LTL}_{-\mathbf{X}}\). The "sometime" and "always" operators can be defined as usual by \(\mathrm{F}\,\gamma\equiv\top\,\mathrm{U}\,\gamma\) and \(\mathrm{G}\,\gamma\equiv\neg\mathrm{F}\,\neg\gamma\). The set of variables used by the formula \(\gamma\) is denoted by \(var(\gamma)\).

Figure 2: Composition of modules: two trains \(M^{(1)},M^{(2)}\) and controller \(M^{(C(2))}\).

In most of the paper, we focus on formulas that consist of a single strategic modality followed by an \(\mathbf{LTL}_{-\mathbf{X}}\) formula (i.e., \(\langle\!\langle C\rangle\!\rangle\gamma\), where \(\gamma\in\mathbf{LTL}_{-\mathbf{X}}\)). The corresponding fragment of \(\mathbf{ATL}_{-\mathbf{X}}^{*}\), called \(\mathbf{1ATL}_{-\mathbf{X}}^{*}\), suffices to express many interesting specifications, namely the ones that refer to agents' ability of enforcing trace properties (such as safety or reachability of a winning state). Note that \(\mathbf{1ATL}_{-\mathbf{X}}^{*}\) has strictly higher expressive and distinguishing power than \(\mathbf{LTL}_{-\mathbf{X}}\). In fact, model checking \(\mathbf{1ATL}_{-\mathbf{X}}^{*}\) is equivalent to \(\mathbf{LTL}_{-\mathbf{X}}\) controller synthesis, i.e., a variant of \(\mathbf{LTL}\) realizability.
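To make the grammar concrete, here is a toy representation (entirely our own; the class names are hypothetical) of \(\mathbf{1ATL}^{*}_{-\mathbf{X}}\) formulas, with F and G derived exactly as above:

```python
from dataclasses import dataclass

# Toy AST for 1ATL*_{-X} formulas; names are illustrative, not from the paper.
@dataclass(frozen=True)
class Atom:
    pred: str                 # p(Y): a predicate over state variables

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class Until:                  # the "strong until" of LTL_{-X}
    left: object
    right: object

@dataclass(frozen=True)
class Coal:                   # <<C>> gamma
    agents: tuple
    path: object

TRUE = Atom("true")

def F(gamma):                 # F gamma == TRUE U gamma
    return Until(TRUE, gamma)

def G(gamma):                 # G gamma == not F not gamma
    return Not(F(Not(gamma)))

# <<1,2>> G F p: the coalition of trains 1 and 2 can enforce that p
# holds infinitely often (cf. the running example)
phi = Coal((1, 2), G(F(Atom("p"))))
```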
Nested strategic modalities may sometimes be needed, e.g., to refer to an agent's ability to endow another agent with an ability, or to deprive it of one. We discuss assume-guarantee verification for such specifications in Section 5.4.

**Strategies and Their Outcomes.** Let \(\mathcal{S}\) be a system composed of \(n\) agents with asynchronous modules \(M^{(i)}=(X^{(i)},I^{(i)},Q^{(i)},T^{(i)},\lambda^{(i)},q_{0}^{(i)})\) and repertoires \(R^{(i)}\).

Definition 4 (Strategies): A _memoryless strategy_ for agent \(i\) (ir-strategy for short) is a function \(s_{i}^{\mathrm{ir}}:Q^{(i)}\rightarrow\mathcal{P}(T^{(i)})\) such that \(s_{i}^{\mathrm{ir}}(q^{(i)})\in R^{(i)}(q^{(i)})\) for every \(q^{(i)}\in Q^{(i)}\). That is, a memoryless strategy assigns a legitimate choice to each local state of \(i\). A _perfect recall strategy_ for \(i\) (iR-strategy for short) is a function \(s_{i}^{\mathrm{iR}}:(Q^{(i)})^{+}\to\mathcal{P}(T^{(i)})\) such that \(s_{i}^{\mathrm{iR}}(q_{1}^{(i)},\ldots,q_{k}^{(i)})\in R^{(i)}(q_{k}^{(i)})\), i.e., it assigns choices to finite sequences of local states. We assume that \(s_{i}^{\mathrm{iR}}\) is stuttering-invariant, i.e.,

\[s_{i}^{\mathrm{iR}}(q_{1}^{(i)},\ldots,q_{j}^{(i)},q_{j}^{(i)},\ldots,q_{k}^{(i)})=s_{i}^{\mathrm{iR}}(q_{1}^{(i)},\ldots,q_{j}^{(i)},\ldots,q_{k}^{(i)}).\]

Note that the agent's choices in a strategy depend only on its _local_ states, thus being uniform by construction.

Let \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\) be a trace, where \(q_{j}=(q_{j}^{(1)},q_{j}^{(2)},\ldots,q_{j}^{(n)})\) are global states in \(Q^{(1)}\times\ldots\times Q^{(n)}\). We say that \(\sigma\) _implements_ strategy \(s_{i}^{\mathrm{ir}}\) if, for any \(j\) where \(q_{j}^{(i)}\neq q_{j+1}^{(i)}\), we have \((q_{j}^{(i)},\alpha_{j},q_{j+1}^{(i)})\in s_{i}^{\mathrm{ir}}(q_{j}^{(i)})\), where \(\alpha_{j}:I^{(i)}\to D\) and \(\alpha_{j}(x)=\lambda(q_{j})(x)\).
A word \(w=v_{0}v_{1}\ldots\) _implements_ \(s_{i}^{\mathrm{ir}}\) if it is derived by \(\mathcal{S}\) with some trace \(\sigma\) implementing \(s_{i}^{\mathrm{ir}}\). The definitions for \(s_{i}^{\mathrm{iR}}\) are analogous.

Definition 5 (Coalitional strategies): Let \(C\subseteq\{1,\ldots,n\}\) be a coalition of agents. A _joint memoryless strategy_ \(s_{C}^{\mathrm{ir}}\) for \(C\) is a collection of memoryless strategies \(s_{i}^{\mathrm{ir}}\), one per \(i\in C\). We say that a trace \(\sigma\) (respectively, a word \(w_{\sigma}\)) _implements_ \(s_{C}^{\mathrm{ir}}\) if it implements every strategy \(s_{i}^{\mathrm{ir}},i\in C\). The definitions for joint perfect recall strategies are analogous. Whenever a claim holds for both types of strategies, we will refer to them simply as "strategies."

**Semantics.** Let \(x\in\{\mathrm{ir},\mathrm{iR}\}\) be a strategy type. The semantics of \(\mathbf{ATL}_{-\mathbf{X}}^{*}\) is given below (we omit the standard clauses for Boolean operators etc.). By \(w[i]\), we denote the \(i\)th item of sequence \(w\), starting from \(0\).

* \(\mathcal{S},q\models_{x}p(Y)\) if \(\lambda(q)|_{Y}=p(Y)\);
* \(\mathcal{S},q\models_{x}\langle\!\langle C\rangle\!\rangle\gamma\) if there exists an \(x\)-strategy \(s_{C}\) for \(C\) such that, for any word \(w\) starting in \(q\) that implements \(s_{C}\), we have \(\mathcal{S},w\models\gamma\);
* \(\mathcal{S},w\models\phi\) if \(\mathcal{S},w[0]\models\phi\);
* \(\mathcal{S},w\models\gamma_{1}\operatorname{U}\gamma_{2}\) if there exists \(j\) such that \(\mathcal{S},w[j,\infty]\models\gamma_{2}\), and \(\mathcal{S},w[i,\infty]\models\gamma_{1}\) for each \(0\leq i<j\).

Finally, we say that \(\mathcal{S}\models_{x}\phi\) if \(\mathcal{S},q_{0}\models_{x}\phi\), where \(q_{0}\) is the initial state of \(\mathcal{S}\).
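A memoryless strategy and the implements relation can be prototyped directly (our own sketch; transition labels are plain action symbols standing in for input valuations):

```python
# Sketch of Definitions 4-5: a memoryless (ir) strategy picks, for each local
# state, one set of transitions from the agent's repertoire; a trace
# implements it if every actual move of the agent is drawn from the chosen set.
def is_ir_strategy(strategy: dict, repertoire: dict) -> bool:
    return all(strategy[q] in repertoire[q] for q in repertoire)

def trace_implements(trace, strategy, local=lambda q: q):
    """trace: [(q0, a0), (q1, a1), ...]; `local` projects a global state to
    the agent's local state (identity here, for a single-agent sketch)."""
    for (q, a), (q_next, _) in zip(trace, trace[1:]):
        if local(q) != local(q_next):  # the agent actually moved
            if (local(q), a, local(q_next)) not in strategy[local(q)]:
                return False
    return True

# Train-like example: repertoires as lists of singleton transition sets
repertoire = {"w": [{("w", "go", "t")}, {("w", "stay", "w")}],
              "t": [{("t", "go", "a")}],
              "a": [{("a", "back", "w")}]}
strategy = {"w": {("w", "go", "t")}, "t": {("t", "go", "a")},
            "a": {("a", "back", "w")}}
```

Here `is_ir_strategy(strategy, repertoire)` holds, a trace cycling through w, t, a implements the strategy, while a trace that jumps from w directly to a does not.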
Example 3: Let us consider the system \(\mathcal{S}\) of Example 2 and the \(\mathbf{1ATL}^{*}\) formula \(\phi\equiv\langle\!\langle 1,2\rangle\!\rangle(GFp^{(1)}\wedge GFp^{(2)})\), where \(p^{(i)}(x^{(i)})=1\). That is, \(\phi\) says that trains \(1,2\) have a strategy so that each visits the tunnel infinitely many times. Consider the joint strategy \((\sigma_{1},\sigma_{2})\) with \(\sigma_{i}(w^{(i)})=\{(w^{(i)},i,t^{(i)})\}\), \(\sigma_{i}(t^{(i)})=\{(t^{(i)},i,a^{(i)})\}\), and \(\sigma_{i}(a^{(i)})=\{(a^{(i)},3-i,w^{(i)})\}\). All the traces implementing \((\sigma_{1},\sigma_{2})\) alternate the visits of the trains in the tunnel, making the \(\mathbf{LTL}\) formula \(GFp^{(1)}\wedge GFp^{(2)}\) satisfied. Thus, \(\mathcal{S}\models_{x}\phi\) for \(x\in\{\operatorname{ir},\operatorname{iR}\}\). By the same strategy, we get \(\mathcal{S}\models_{x}\langle\!\langle 1,2\rangle\!\rangle(GFq^{(1)}\wedge GFq^{(2)})\), where \(q^{(i)}(s)=i\).

## 4 Assumptions and Guarantees

Our assume-guarantee scheme reduces the complexity of model checking by "factorizing" the task into verification of strategies of single agents with respect to abstractions of the rest of the system. In this section, we formalize the notions of _assumption_ and _guarantee_, which provide the abstractions in a way that allows for simulating the global behavior of the system.

### Assumptions

Definition 6 (Assumption [32]): An _assumption_ or an _extended module_ \((M,F)=(X,I,Q,T,\lambda,q_{0},F)\) is a module augmented with a set of accepting states \(F\subseteq Q\).

For assumptions, we use Büchi accepting conditions. More precisely, the infinite word \(w=q_{0}q_{1}\ldots\) is _accepted_ by extended module \((M,F)\) with computation \(u=\alpha_{0}\alpha_{1}\ldots\) if it is derived by \(M\) with a trace \(\sigma=q_{0}\alpha_{0}q_{1}\alpha_{1}\ldots\) and \(\mathit{inf}(\sigma)\cap F\neq\varnothing\). Thus, the assumptions have the expressive power of \(\omega\)-regular languages.
In practical applications, it might be convenient to formulate actual assumptions in \(\mathbf{LTL}\) (which covers a proper subclass of \(\omega\)-regular properties). The definitions of Sections 2 and 3 generalize to assumptions in a straightforward way. In particular, we can compose a module \(M\) with an assumption \(A^{\prime}=(M^{\prime},F^{\prime})\), and obtain an extended composite module \(A=(M|M^{\prime},F)\), where \(F=\{(q,q^{\prime})\in Q\times Q^{\prime}\mid q^{\prime}\in F^{\prime}\}\). We use the notation \(A=M|A^{\prime}\). Moreover, let \(\mathcal{A}=(A,R^{(1)},\ldots,R^{(m)})\) be a MAS based on the extended module \(A\) with repertoires related to all components of \(M\). The semantics of \(\mathbf{1ATL}^{*}_{-\mathbf{X}}\) extends naturally: \(\mathcal{A},q\models_{x}\langle\!\langle C\rangle\!\rangle\phi\) iff there exists an \(x\)-strategy \(s_{C}\) for \(C\) such that, for any word \(w=w[1]w[2]\ldots\) that implements \(s_{C}\) and is accepted by \(A\), we have \(\mathcal{A},w\models_{x}\phi\).

Example 4: Recall module \(M^{(C(2))}=(X,I,Q,T,\lambda,q_{0})\) of the controller for 2 trains, with \(Q=\{r,g^{(1)},g^{(2)}\}\). We define four different assumptions about the behavior of the rest of the system, depicted graphically in Figure 3:

* \(A_{0}=(X,I,Q,T,\lambda,q_{0},\{r\})\)
* \(A_{1}=(X,I,Q,T,\lambda,q_{0},\{g^{(1)}\})\)
* \(A_{2}=(X,I,Q,T,\lambda,q_{0},\{g^{(2)}\})\)
* \(A_{012}=(X,I,Q,T,\lambda,q_{0},\{r,g^{(1)},g^{(2)}\})\)

Note that we can identify each valuation with an element of the set \(\{0,1,2\}\), i.e., the value of the only variable \(s\). This way \(A_{0}\) as well as \(A_{012}\) accept all infinite words of the \(\omega\)-regular language \(L=(0(1|2))^{\omega}\), while \(A_{1}\) and \(A_{2}\) accept only proper subsets of this language, namely \(L\setminus(0(1|2))^{*}(02)^{\omega}\) and \(L\setminus(0(1|2))^{*}(01)^{\omega}\).
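For finite-state runs, Büchi acceptance is easy to check on lasso-shaped runs (a stem followed by an infinitely repeated cycle); the following sketch, phrased in terms of the assumptions above, is our own illustration:

```python
# Sketch: Büchi acceptance of a lasso-shaped run against an assumption (M, F).
def accepts_lasso(stem, cycle, accepting) -> bool:
    """The run visits `stem` once and then repeats `cycle` forever; it is
    accepted iff the cycle (the states visited infinitely often) meets F."""
    return any(q in accepting for q in cycle)

# The run r g1 r g2 (g1 r g2 r)^omega visits every controller state infinitely
# often, so all four assumptions accept it; the run r (g2 r)^omega never grants
# green light to train 1 and is rejected by A_1, though A_0 and A_012 accept it.
assert accepts_lasso(["r"], ["g1", "r", "g2", "r"], {"g1"})    # A_1 accepts
assert not accepts_lasso(["r"], ["g2", "r"], {"g1"})           # A_1 rejects
assert accepts_lasso(["r"], ["g2", "r"], {"r"})                # A_0 accepts
```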
### Guarantees

We say that a sequence \(v=v_{1}v_{2}\ldots\) over \(D^{Y}\) is a _curtailment_ of a sequence \(u=u_{1}u_{2}\ldots\) over \(D^{X}\) (where \(Y\subseteq X\)) if there exists an infinite sequence \(c\) of indices \(c_{0}<c_{1}<\ldots\) with \(c_{0}=0\) such that \(\forall_{i}\forall_{c_{i}\leq k<c_{i+1}}\ v_{i}=u_{k}|_{Y}\). We will denote a curtailment of \(u\) to \(D^{Y}\) by \(u|_{Y}\) or \(u|_{Y}^{c}\), and use it to abstract away from irrelevant variables and the stuttering of states.

Definition 7 (Guarantee): Let \(M^{(1)},\ldots,M^{(k)}\) be pairwise asynchronous modules, and \(A=(X^{(A)},I^{(A)},Q^{(A)},T^{(A)},\lambda^{(A)},q_{0}^{(A)},F^{(A)})\) be an assumption with \(X^{(A)}\subseteq X=\bigcup_{i=1}^{k}X^{(i)}\) and \(I^{(A)}\subseteq I=\bigcup_{i=1}^{k}I^{(i)}\). We say that \(M=M^{(1)}|\ldots|M^{(k)}\) guarantees the assumption \(A\) (denoted \(M\models A\)) if, for every infinite trace \(\sigma\) of \(M\) with \(w\in(D^{X})^{\omega}\) derived by \(M\) with \(\sigma\) and \(u\in(D^{I})^{\omega}\) admitted by \(M\) with \(\sigma\), there exists a curtailment \(w|_{X^{(A)}}^{c}\) (\(c=c_{1},c_{2},\ldots\)) accepted by \(A\) with the computation \(u_{c_{1}-1}|_{I^{(A)}}\ u_{c_{2}-1}|_{I^{(A)}}\ \ldots\). That is, every trace of \(M\) must agree on the values of \(X^{(A)}\) with some trace in \(A\), modulo stuttering.

Figure 3: Assumptions for the railway scenario.

Example 5: Consider the system \(M^{(C(2))}|M^{(1)}|M^{(2)}\) presented in Example 2, its subsystem \(M^{(C(2))}|M^{(2)}\) from Figure 4, and the assumption \(A_{012}\) of Example 4. If we focus on the changes of \(s\), the following words can be derived: \((0(1|2))^{\omega}\) for the trains taking turns in the tunnel forever, \((0(1|2))^{*}01^{\omega}\) for the traces where the semaphore is stuck in state \((1,0)\) because it never receives the input \(v(x^{(1)})=2\), and \((0(1|2))^{*}02^{\omega}\) for the ones that cycle forever in the right-hand part of \(M^{(C(2))}|M^{(2)}\).
In consequence, we have that \(M^{(C(2))}|M^{(2)}\models A_{012}\), but not \(M^{(C(2))}|M^{(2)}\models A_{1}\).

It is possible to relate the traces of a subsystem to the traces of the entire system in such a way that locally defined formulas can be verified.

## 5 Assume-Guarantee Reasoning for \(\mathbf{1ATL}^{*}\)

Now we propose our assume-guarantee schemes that decompose abilities of coalition \(C\) into abilities of its subcoalitions, verified in suitable abstractions of their neighbor modules.

### Assume-Guarantee Rule for Strategies

Let \(\mathcal{S}\) be a system composed of asynchronous agents \((M^{(1)},R^{(1)}),\ \ldots,\ (M^{(n)},R^{(n)})\). By \(N_{1}^{(i)}\), we denote the direct "neighborhood" of agent \(i\), i.e., the set of agent indices \(j\) such that \(I_{M^{(j)}}\cap X_{M^{(i)}}\neq\varnothing\) or \(I_{M^{(i)}}\cap X_{M^{(j)}}\neq\varnothing\). By \(N_{k}^{(i)}\), we denote the agents connected to \(i\) in at most \(k\) steps, i.e., \((N_{k-1}^{(i)}\cup\bigcup_{j\in N_{k-1}^{(i)}}N_{1}^{(j)})\setminus\{i\}\). Finally, \(\mathit{Comp}_{k}^{(i)}\) denotes the composition of all modules of \(N_{k}^{(i)}\). That is, if \(N_{k}^{(i)}=\{a_{1},\ldots,a_{m}\}\) then \(\mathit{Comp}_{k}^{(i)}=M^{(a_{1})}|\ldots|M^{(a_{m})}\).

Let \(\psi_{i}\) be an \(\mathbf{LTL}\) formula (without "next"), where atomic propositions are local valuations of variables in \(M^{(i)}\). Also, let \(x\in\{\mathrm{ir},\mathrm{iR}\}\). The scheme is formalized through a sequence of rules \(\mathbf{R_{k}}\), which rely on the behaviour of the neighbourhoods of coalition \(C\), limited by "distance" \(k\):

\[\mathbf{R_{k}}\quad\frac{\forall_{i\in C}\ (M^{(i)}|A_{i},R^{(i)})\models_{x}\langle\!\langle i\rangle\!\rangle\psi_{i}\qquad\forall_{i\in C}\ \mathit{Comp}_{k}^{(i)}\models A_{i}}{(M^{(1)}|\ldots|M^{(n)},R^{(1)},\ldots,R^{(n)})\models_{x}\langle\!\langle C\rangle\!\rangle\bigwedge_{i\in C}\psi_{i}}\]

Figure 4: Module \(M^{(C(2))}|M^{(2)}\) (left).
The edges are labeled only if the value of \(x^{(1)}\) is relevant. Subsystem \(M^{(2)}|A_{012}\) implementing strategy \(\sigma\) (right).

The main challenge in applying the scheme is to define the right assumptions and to decompose the verified formula.

Example 6: Recall the multi-agent system \(\mathcal{S}\) presented in Example 2, based on module \(M^{(C(2))}|M^{(1)}|M^{(2)}\). We already argued that it satisfies \(\phi\equiv\langle\!\langle 1,2\rangle\!\rangle(GFp^{(1)})\wedge(GFp^{(2)})\) as well as \(\phi^{\prime}\equiv\langle\!\langle 1,2\rangle\!\rangle(GFq^{(1)})\wedge(GFq^{(2)})\), for \(p^{(i)}(x^{(i)})=1\) and \(q^{(i)}(s)=i\), cf. Example 3. We will now see if the verification of the formulas can be decomposed using \(\mathbf{R_{k}}\). By Example 5 we know that \(M^{(C(2))}|M^{(i)}\models A_{012}\), where \(A_{012}\) is the assumption defined in Example 4. It is easy to see that \(M^{(C(2))}\models A_{012}\). Consider the extended module \(M^{(2)}|A_{012}\), which is nothing but \(M^{(2)}|M^{(C(2))}\) with all the states marked as accepting. Assume further that agent \(2\) executes strategy \(\sigma_{2}\) of Example 3. The resulting subsystem is presented in Figure 4. Note that, if we focus on the values of variable \(s\), the \(\omega\)-regular language accepted by this automaton is \(((01)|(0222(01)^{*}011))^{\omega}\); hence the valuation \(s=1\) occurs infinitely often. In consequence, \(\sigma_{2}\) can be used to demonstrate that \((M^{(2)}|A_{012},R^{(2)})\models_{\mathrm{ir}}\langle\!\langle 2\rangle\!\rangle GFq^{(1)}\), where \(q^{(1)}(s)=1\). Similarly, \((M^{(1)}|A_{012},R^{(1)})\models_{\mathrm{ir}}\langle\!\langle 1\rangle\!\rangle GFq^{(2)}\), where \(q^{(2)}(s)=2\). As a result, we have decomposed formula \(\phi^{\prime}\) and constructed independent strategies for agents \(1\) and \(2\).
By the use of rule \(\mathbf{R_{1}}\), we conclude that

\[(M,R^{(C(2))},R^{(1)},R^{(2)})\models_{\mathrm{ir}}\langle\!\langle 1,2\rangle\!\rangle(GFq^{(1)})\wedge(GFq^{(2)}).\]

The situation for \(\phi\equiv\langle\!\langle 1,2\rangle\!\rangle(GFp^{(1)})\wedge(GFp^{(2)})\) is drastically different. We cannot use the analogous reasoning, because \(\langle\!\langle i\rangle\!\rangle GFp^{(3-i)}\) is not a local constraint for \(M^{(i)}\). There is a unique decomposition of \(\phi\) into local constraints, but proving that \((M^{(1)}|A_{012},R^{(1)})\models_{\mathrm{ir}}\langle\!\langle 1\rangle\!\rangle GFp^{(1)}\) fails, as the system can get stuck in the state where \(s\) equals \(2\), or loop infinitely between the states where \(s=2\) and \(s=0\). Changing the assumption would not help, since we cannot avoid the infinite exclusion of the considered train. Thus, while the scheme can be used to derive that \(\mathcal{S}\models_{\mathrm{ir}}\phi^{\prime}\), it cannot produce the (equally true) statement \(\mathcal{S}\models_{\mathrm{ir}}\phi\).

### Soundness and Incompleteness

The following theorem says that, if each coalition member together with its assumption satisfies the decomposition of the formula, and its neighborhood satisfies the assumption, then the original verification task must return "true."

Theorem 5.1: _The rule \(\mathbf{R_{k}}\) is sound._

Proof: Assume that \(\forall_{i\in C}\ (M^{(i)}|A_{i},R^{(i)})\models_{x}\langle\!\langle i\rangle\!\rangle\psi_{i}\), with a witnessing (memoryless or perfect recall) imperfect information strategy \(\sigma_{i}\), and that \(\forall_{i\in C}\ \mathit{Comp}_{k}^{(i)}\models A_{i}\). Here and in the rest of the proof, \(x\in\{\mathrm{ir},\mathrm{iR}\}\). Let us consider \(M=M^{(1)}|\ldots|M^{(n)}\) and fix the joint strategy \(\sigma\) for coalition \(C\), where \(\sigma(i)=\sigma_{i}\) for every \(i\in C\). We will prove the soundness by contradiction.
Suppose that for every (memoryless or perfect recall) imperfect information joint strategy there exists an infinite word which implements this joint strategy but does not satisfy \(\bigwedge_{i\in C}\psi_{i}\). Let \(w=q_{0}q_{1}\ldots\) be such a word for the strategy \(\sigma\), and fix \(j\in C\) such that \(w\) does not satisfy \(\psi_{j}\). Let us consider \(M^{(j)}|A_{j}\), where \(X_{M^{(j)}}\) and \(X_{A_{j}}\) are the internal variables of \(M^{(j)}\) and \(A_{j}\), respectively. By the construction and the assumption that \((M^{(j)}|A_{j},R^{(j)})\models_{x}\langle\!\langle j\rangle\!\rangle\psi_{j}\), we get that every infinite word over \(X_{M^{(j)}}\cup X_{A_{j}}\) which implements the (memoryless or perfect recall) imperfect information strategy \(\sigma_{j}\) and is accepted by \(M^{(j)}|A_{j}\) satisfies \(\psi_{j}\). However, the assumption \(A_{j}\) is guaranteed by \(\mathit{Comp}_{k}^{(j)}\), hence for every word derived by \(\mathit{Comp}_{k}^{(j)}\) some curtailment of it is accepted by \(A_{j}\). Moreover, every word accepted by \(M^{(j)}|A_{j}\) is a curtailment of a word derived by \(M\), and, in particular, \(w\) is such a word. Thus, there exists a curtailment \(w|_{X_{M^{(j)}}\cup X_{A_{j}}}\) which implements strategy \(\sigma_{j}\) and is accepted by \(M^{(j)}|A_{j}\), but does not satisfy \(\psi_{j}\), which is an obvious contradiction with \((M^{(j)}|A_{j},R^{(j)})\models_{x}\langle\!\langle j\rangle\!\rangle\psi_{j}\). The obtained contradiction shows that the joint strategy \(\sigma\) witnesses \((M^{(1)}|\ldots|M^{(n)},R^{(1)},\ldots,R^{(n)})\models_{x}\langle\!\langle C\rangle\!\rangle\bigwedge_{i\in C}\psi_{i}\), which concludes the proof.

Unfortunately, there does not always exist \(k<n\) for which the rule \(\mathbf{R_{k}}\) is complete, even in a very weak sense, where we only postulate the _existence_ of appropriate assumptions.
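Procedurally, applying rule \(\mathbf{R_{k}}\) amounts to discharging its two premises for each coalition member. A high-level driver might look as follows (a sketch with hypothetical callback names, not the actual tool implementation); note that a negative answer is inconclusive, reflecting the incompleteness just discussed:

```python
# Sketch of an R_k verification driver; check_local and check_guarantee are
# assumed black-box model checkers for the two premises of the rule.
def assume_guarantee_Rk(coalition, modules, assumptions, goals,
                        neighborhoods, check_local, check_guarantee):
    """Returns True iff both premises of R_k hold for every i in the coalition:
    (1) (M_i | A_i) |= <<i>> psi_i, and (2) Comp_k^(i) |= A_i.
    True is a sound 'yes' for <<C>> /\\ psi_i; False only means 'don't know'."""
    for i in coalition:
        if not check_local(modules[i], assumptions[i], goals[i]):
            return False   # premise 1 fails: no local witness strategy found
        if not check_guarantee(neighborhoods[i], assumptions[i]):
            return False   # premise 2 fails: assumption not guaranteed
    return True
```

With stub checkers one can trace the control flow: the driver answers True only when every local check and every guarantee check succeeds, and gives up at the first failure.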
Theorem 5.2: _The scheme consisting of rules \(\{\mathbf{R_{k}}\mid k\in\mathbb{N}\}\) is in general not complete._

Proof: Follows directly from Example 6.

### Coalitional Assume-Guarantee Verification

In Section 5.2, we showed that achievable coalitional goals may not decompose into achievable individual subgoals. As a result, the scheme proposed in Section 5.1 is incomplete. A possible way out is to allow for assume-guarantee reasoning about joint strategies of subcoalitions of \(C\). We implement the idea by partitioning the system into smaller subsystems, which allows us to explicitly consider the cooperation between coalition members.

Again, let \(\mathcal{S}=(M^{(1)},R^{(1)}),\ \ldots,\ (M^{(n)},R^{(n)})\) be a system composed of asynchronous agents. Moreover, let \(P=\{P_{1},\ldots,P_{k}\}\), with \(P_{i}\subseteq\{1,2,\ldots,n\}\), be a partitioning of coalition \(C\), and let \(\overline{C}=\{i:i\notin C\}=Ag\setminus C\) be the set of opponents of \(C\). By \(\mathcal{S}^{(P_{i})}\) we denote the system composed of all the agents in \(P_{i}=\{i_{1},\ldots,i_{s}\}\), i.e., \((M^{(P_{i})}=M^{(i_{1})}|\ldots|M^{(i_{s})},R^{(i_{1})},\ldots,R^{(i_{s})})\). \(\mathcal{S}^{(\overline{C})}\) is defined analogously. We extend the notion of neighbourhood to sets of agents as follows:

* \(N_{1}^{P_{i}}=(\bigcup_{i\in P_{i}}N_{1}^{(i)})\setminus P_{i}\), and \(N_{k}^{P_{i}}=(N_{k-1}^{P_{i}}\cup\bigcup_{j\in N_{k-1}^{P_{i}}}N_{1}^{(j)})\setminus P_{i}\) for \(k>1\),
* \(\mathit{Comp}_{k}^{P_{i}}=M^{(x_{1})}|\ldots|M^{(x_{s})}\) for \(N_{k}^{P_{i}}=\{x_{1},\ldots,x_{s}\}\).

Let \(x\in\{\mathrm{ir},\mathrm{iR}\}\). The generalized assume-guarantee rule is defined below:

\[\mathbf{Part_{k}^{P}}\quad\frac{\forall_{P_{i}\in P}\ (M^{(P_{i})}|A_{i},R^{(i_{1})},\ldots,R^{(i_{s})})\models_{x}\langle\!\langle P_{i}\rangle\!\rangle\bigwedge_{j\in P_{i}}\psi_{j}\qquad\forall_{P_{i}\in P}\ \mathit{Comp}_{k}^{P_{i}}\models A_{i}}{(M^{(1)}|\ldots|M^{(n)},R^{(1)},\ldots,R^{(n)})\models_{x}\langle\!\langle C\rangle\!\rangle\bigwedge_{i\in C}\psi_{i}}\]

As it turns out, the new scheme is sound, conservative with respect to enlarging the neighborhood, and complete.

Theorem 4.1: _The rule \(\mathbf{Part_{k}^{P}}\) is sound._

Proof: Intuitively, we can proceed similarly to the proof of Theorem 5.1. Note that each component \(P_{i}\) can be seen as a single composed module, with an imperfect information strategy (memoryless or with perfect recall) being a joint strategy for the part of coalition \(C\) contained in \(P_{i}\). Moreover, instead of \(\mathit{Comp}_{k}^{P_{i}}\) we can take the union \(U\) of all the components \(P_{j}\) (and possibly \(\overline{C}\)) whose intersection with \(\mathit{Comp}_{k}^{P_{i}}\) is non-empty. It is easy to see that if \(\mathit{Comp}_{k}^{P_{i}}\models A_{i}\), then also \(U\models A_{i}\). This way we can fix the strategy for coalition \(C\), view every infinite word as arising from the composition of the strategies \(\sigma_{P_{i}}\) of the parts \((C\cap P_{i})_{P_{i}\in P}\), and deduce that for every word \(w\) which does not satisfy \(\bigwedge_{i\in C}\psi_{i}\) there exists a single component \(P_{j}\) whose subgoal is not satisfied by any curtailment of \(w\), while one of these curtailments implements the strategy \(\sigma_{P_{j}}\) and is at the same time accepted by \(M^{(P_{j})}|A_{j}\), yielding a contradiction as before.

Proposition 1: _If \(\mathit{Comp}_{k}^{P_{i}}\models A_{i}\) then \(\mathit{Comp}_{k+1}^{P_{i}}\models A_{i}\)._

Proof: Let \(N_{k+1}^{P_{i}}=\{i_{1},\ldots,i_{t}\}\), \(M=M^{(i_{1})}|\ldots|M^{(i_{t})}\), \(N_{k}^{P_{i}}=\{j_{1},\ldots,j_{t^{\prime}}\}\) and \(M^{\prime}=M^{(j_{1})}|\ldots|M^{(j_{t^{\prime}})}\). Let us consider an infinite trace \(\sigma\) of \(M\), with \(w\in(D^{X})^{\omega}\) and \(u\in(D^{I})^{\omega}\), and \(\sigma^{\prime}=w_{1}|_{X^{\prime}}\left(u_{1}|_{I^{\prime}\cap I}\cup w_{1}|_{I^{\prime}\cap X}\right)w_{2}|_{X^{\prime}}\ldots\).
Note that one of the curtailments of the word \(w_{1}|_{X^{\prime}}w_{2}|_{X^{\prime}}\ldots\) is derived by \(M^{\prime}\), and thus a further curtailment of it is accepted by \(A_{i}\).

Theorem 4.2: _There exist a partition set \(P\) and \(k\leq n\) such that the rule \(\mathbf{Part_{k}^{P}}\) is complete._

Proof: Straightforward, as we can take \(k=n\) and the singleton partition \(P=\{P_{1}\}\), where \(A_{1}\) is an automaton constructed on the basis of the system \(M^{(\overline{C})}\), with all the states being accepting ones (hence \(\mathit{Comp}_{k}^{P_{1}}\models A_{1}\), as every word accepted by \(A_{1}\) is derived with a trace of \(M^{(\overline{C})}\)). Hence \((M^{(P_{1})}|A_{1},R^{(i_{1})},\ldots,R^{(i_{s})})\models_{x}\langle\!\langle P_{1}\rangle\!\rangle\bigwedge_{j\in P_{1}}\psi_{j}\) is just an equivalent formulation of \((M^{(1)}|\ldots|M^{(n)},R^{(1)},\ldots,R^{(n)})\models_{x}\langle\!\langle C\rangle\!\rangle\bigwedge_{i\in C}\psi_{i}\), for \(x\in\{\mathrm{ir},\mathrm{iR}\}\).

Remark 1 (Complexity): The assume-guarantee schemes provide (one-to-many) reductions of the model checking problem. The resulting verification algorithm for \(\mathbf{ATL}_{\mathrm{ir}}^{*}\) is \(\mathbf{PSPACE}\)-complete with respect to the size of the coalition modules, the size of the assumptions, and the length of the formula. In the very worst case (i.e., as the assumptions grow), this becomes \(\mathbf{PSPACE}\)-complete w.r.t. the size of the global model, i.e., no better than ordinary model checking for \(\mathbf{ATL}^{*}\) with memoryless strategies. On the other hand, our method often allows us to decompose the verification of the huge global model of the system into several smaller cases. For many systems one can propose assumptions that are exponentially smaller than the full model, thus providing an exponential gain in complexity.
Note also that the first scheme provides a model checking algorithm for \(\mathbf{ATL}_{\mathrm{iR}}^{*}\) that is \(\mathbf{EXPTIME}\)-complete with respect to the size of the coalition modules, the size of the assumptions, and the length of the formula, i.e., an incomplete but decidable algorithm for the generally undecidable problem. ### Verification of Nested Strategic Operators So far, we have concentrated on assume-guarantee specification of formulas without nested strategic modalities. Here, we briefly point out that the schemes \(\mathbf{R_{k}}\) and \(\mathbf{Part_{k}^{P}}\) extend to the whole language of \(\mathbf{ATL}_{-\mathbf{X}}^{*}\) through the standard recursive model checking algorithm that verifies subformulas bottom-up. Such recursive application of the method to the verification of \(\mathcal{S}\models\phi\) proceeds as follows: * For each strategic subformula \(\phi_{j}\) of \(\phi\), do assume-guarantee verification of \(\phi_{j}\) in \(\mathcal{S}\), and label the states where \(\phi_{j}\) holds by a fresh atomic proposition \(\mathsf{p_{j}}\); * Replace all occurrences of \(\phi_{j}\) in \(\phi\) by \(\mathsf{p_{j}}\), and do assume-guarantee verification of the resulting formula in \(\mathcal{S}\). The resulting algorithm is sound, though there is the usual price to pay in terms of computational complexity. The main challenge lies in providing decompositions of \(\mathbf{LTL}\) objectives for multiple strategic formulas, as well as multiple Buchi assumptions (one for each subformula). A refinement of the schemes for nested strategic abilities is planned for future work. ## 6 Case Study and Experiments In this section, we present an experimental evaluation of the assume-guarantee verification schemes of Section 5. As the benchmark, we use a variant of the factory scenario from [26], where a coalition of logistic robots cooperates to deliver packages from the production line to the storage area. ### Experiments: Monolithic vs. Assume-Guarantee Verification **Decomposition to Individual Strategies.** In the first set of experiments, we verified the formula \[\psi\ \equiv\ \langle\!\langle R\rangle\!\rangle(\bigwedge_{r\in R}\mathsf{ energy}_{r}>0)\,\mathrm{U}\,\mathsf{delivered}\] expressing that the coalition of robots \(R\) can maintain their energy level above zero until at least one package is delivered to the storage area. Guessing that the first robot has enough energy to deliver a package on its own, we can decompose the formula into the conjunction of the following components: \[\psi_{d}\ \equiv\ \langle\!\langle r_{1}\rangle\!\rangle\mathrm{F}\, \mathsf{delivered},\qquad\quad\psi_{e}^{(i)}\ \equiv\ \langle\!\langle r_{i}\rangle\!\rangle\mathrm{G}\,\mathsf{ energy}_{r_{i}}>0,\quad i\in R.\] Note that, if \(\psi_{d}\land\bigwedge_{i>1}\psi_{e}^{(i)}\) is true, then \(\psi\) must be true, too. The experiments used the first (incomplete) scheme of assume-guarantee verification. The results are presented in Table 1. The first column describes the configuration of the model, i.e., the number of robots, locations in the factory, and the initial energy level. Then, we report the performance of model checking algorithms that operate on the explicit model of the whole system. The running times are given in seconds. _DFS_ is a straightforward implementation of depth-first strategy synthesis. _Apprx_ refers to the (sound but incomplete) method of fixpoint-approximation in [22]; besides the time, we also report whether the approximation was conclusive. **Coalitional Assume-Guarantee Verification.** For the second set of experiments, the robots were divided into two halves, initially located in different parts of the factory. 
We verified the following formula: \[\psi\ \equiv\ \langle\!\langle R\rangle\!\rangle\mathrm{F}\,\mathrm{G}\,(\bigwedge_{ \mathrm{i}\in\{1,2,\ldots,n/2\}}(\mathsf{delivered}_{\mathrm{i}}\vee\mathsf{ delivered}_{\mathrm{i+n/2}})),\] expressing that the coalition of robots can deliver at least one package per pair to the storage area. Depending on the initial energy level of the robots, the storage may not be reachable from the production line. That means that the robots must work in pairs to deliver the packages. We use this insight to decompose the verification into the following formulas: \[\psi^{(i)}\ \equiv\ \langle\!\langle r_{i},r_{i+n/2}\rangle\!\rangle\mathrm{F}\, \mathrm{G}\,(\mathsf{delivered}_{\mathrm{i}}\vee\mathsf{delivered}_{\mathrm{ i+n/2}}).\] The results are presented in Table 1. **Discussion of Results.** The experimental results show that the assume-guarantee schemes presented here enable the verification of systems of distinctly higher complexity than model checking of the full model. We have also conducted analogous experiments on the Simple Voting scenario of [22], with very similar results; we do not report them here due to lack of space. Interestingly, Table 1 shows that the application of the incomplete assume-guarantee scheme to fixpoint approximation (in itself an incomplete method of model checking) often turns inconclusive verification into conclusive one. This is because fixpoint approximation works rather well for individual abilities, but poorly for proper coalitions [22]. Rule \(\mathbf{R_{k}}\) decomposes verification of coalitional abilities (very likely to resist successful approximation) into model checking of individual abilities (likely to submit to approximation). This was not the case in the second experiment, as that time we did not reduce the tested coalitions to singletons. ## 7 Conclusions In this paper we propose two schemes for assume-guarantee verification of strategic abilities. 
Importantly, they are both sound for the memoryless as well as the perfect recall semantics of abilities under imperfect information. Moreover, the second scheme is complete (albeit in a rather weak sense). The experiments show that both schemes can provide a noticeable improvement in the verification of large systems consisting of asynchronous agents with independent goals. Note also that the scheme \(\mathbf{R_{k}}\) provides an (incomplete) reduction of the undecidable model checking problem for coalitions with perfect recall to decidable verification of individual abilities. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Conf**} & \multicolumn{3}{c|}{**Monolithic verif.**} & \multicolumn{3}{c|}{**Ass.-guar. verif.**} \\ \cline{2-7} & **\#st** & **DFS** & **Apprx** & **\#st** & **DFS** & **Apprx** \\ \hline 2,2,2 & 8170 & \textless{}0.01 & 0.6/No & 1356 & \textless{}0.01 & \textless{}0.01/Yes \\ \hline 2,3,3 & 1.1e5 & 0.02 & 13/No & 9116 & \textless{}0.01 & 0.5/Yes \\ \hline 3,2,2 & 5.5e5 & \multicolumn{2}{c|}{timeout} & 2.7e4 & \textless{}0.01 & 3/Yes \\ \hline 3,3,3 & \multicolumn{3}{c|}{memout} & 4.4e5 & \textless{}0.01 & 58/Yes \\ \hline 4,2,2 & \multicolumn{3}{c|}{memout} & 5.2e5 & \multicolumn{2}{c|}{timeout} \\ \hline \end{tabular} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Conf**} & \multicolumn{3}{c|}{**Monolithic verif.**} & \multicolumn{3}{c|}{**Ass.-guar. verif.**} \\ \cline{2-7} & **\#st** & **DFS** & **Apprx** & **\#st** & **DFS** & **Apprx** \\ \hline 2,3,1 & 522 & \textless{}0.01 & \textless{}0.01/No & 522 & \textless{}0.01 & \textless{}0.01/No \\ \hline 2,4,2 & 3409 & \textless{}0.01 & \textless{}0.01/No & 3409 & \textless{}0.01 & \textless{}0.01/No \\ \hline 4,3,1 & \multicolumn{3}{c|}{memout} & 4.8e4 & \textless{}0.01 & 4/No \\ \hline 6,3,1 & \multicolumn{3}{c|}{memout} & 5.8e5 & 0.36 & 42/No \\ \hline 8,3,1 & \multicolumn{3}{c|}{memout} & \multicolumn{3}{c|}{timeout} \\ \hline \end{tabular} \end{table} Table 1: Results of assume-guarantee verification, scheme \(\mathbf{R_{k}}\) (left), scheme \(\mathbf{Part_{k}^{P}}\) (right). 
Clearly, the main challenge is to formulate the right assumptions and to decompose the verified formula. In the future, we would like to work on the automated generation of assumptions. The first idea is to obtain a finer granularity of the global model by decomposing agents into even smaller subsystems (and recomposing some of them as assumptions). This can be combined with abstraction refinement of the assumptions in case they are still too complex. We also plan to extend the notion of assumptions to capture the agents' knowledge about the strategic abilities of their coalition partners. Positive results in that direction would significantly increase the applicability of assume-guarantee schemes for model checking of asynchronous MAS. #### Acknowledgement The work was supported by NCBR Poland and FNR Luxembourg under the PolLux/FNR-CORE project STV (POLLUX-VII/1/2019), and the CHIST-ERA grant CHIST-ERA-19-XAI-010 by NCN Poland (2020/02/Y/ST6/00064). The work of Damian Kurpiewski was also supported by the CNRS IEA project MoSART.
2305.16139
Damping of three-dimensional waves on coating films dragged by moving substrates
Paints and coatings often feature interfacial defects due to disturbances during the deposition process which, if they persist until solidification, worsen the product quality. In this article, we investigate the stability of a thin liquid film dragged by a vertical substrate moving against gravity, a flow configuration found in a variety of coating processes. The receptivity of the liquid film to three-dimensional disturbances is discussed with Direct Numerical Simulations (DNS), an in-house non-linear Integral Boundary Layer (IBL) film model, and Linear Stability Analysis (LSA). The thin film model, successfully validated with the DNS computations, implements a pseudo-spectral approach for the capillary terms that allows for investigating non-periodic surface tension dominated flows. The combination of these numerical tools allows for describing the mechanisms of capillary and non-linear damping, and identifying the instability threshold of the coating processes. The results show that transverse modulations can be beneficial for the damping of two-dimensional waves within the range of operational conditions considered in this study, typical of air-knife and slot-die coating.
David Barreiro-Villaverde, Anne Gosset, Marcos Lema, Miguel Alfonso Mendez
2023-05-25T15:13:20Z
http://arxiv.org/abs/2305.16139v3
# Damping of three-dimensional waves on coating films dragged by moving substrates ###### Abstract Paints and coatings often feature interfacial defects due to disturbances during the deposition process which, if they persist until solidification, worsen the product quality. In this article, we investigate the stability of a thin liquid film dragged by a vertical substrate moving against gravity, a fundamental flow configuration in various coating processes. The receptivity of the liquid film to three-dimensional disturbances is analyzed with Direct Numerical Simulations (DNS) and an in-house Integral Boundary Layer (IBL) film model. The latter was used for Linear Stability Analysis (LSA) and nonlinear wave propagation analysis. The numerical implementation of the IBL film model combines a finite volume formulation with a pseudo-spectral approach for the capillary terms that allows for investigating non-periodic surface tension-dominated flows. Both the model and the numerical solver were successfully validated with DNS computations. The combination of these numerical tools allows for describing the mechanisms of capillary and nonlinear damping and identifying the instability threshold of the coating processes. The results show that transverse modulations can be beneficial for damping two-dimensional waves within the range of operational conditions considered in this study, which are relevant to air-knife and slot-die coating. ## I Introduction Liquid films dragged by moving substrates are often found in industrial coating and painting processes. In these processes, thickness inhomogeneities reduce the quality and performance of the final products and are thus considered defects. Therefore, the damping of non-uniformities soon after liquid deposition and before solidification is fundamental. 
Liquid film flows are generally unstable and naturally develop interfacial waves that evolve in time and space, even at very low Reynolds numbers. The analysis of thin film instabilities began in the 1910s with the pioneering works of Nusselt[1], who derived the governing equations for a falling liquid film, and the Kapitza family, who carried out experiments on highly viscous liquids to describe the wave structures in vertically falling liquid films[2]. The analysis of the instability mechanisms leading to three-dimensional (3D) waves from an initially unperturbed flow has been an active research area since then, involving both theoretical and experimental fluid mechanics[3; 4; 5]. Until now, most literature has focused on the stability of liquid films falling along inclined and vertical planes. Direct Numerical Simulations (DNS) provide detailed insights into these flows, but the computational cost is unaffordable for most cases. Consequently, simplified numerical models, with different levels of complexity, have been derived to explain the vast phenomenology documented in the experiments. These models are simplifications of the Navier-Stokes equations. The simplest model is the Benney Equation (BE)[6], which assumes that the velocity field is slaved to the thickness evolution. It correctly predicts the instability threshold in inclined and vertical planes but cannot be applied to moderate Reynolds numbers because it blows up at finite times. On the other hand, Integral Boundary Layer (IBL) models combine the boundary layer approximation with the assumption of self-similarity of the velocity profile to formulate the problem as a function of the flow rates, \(q_{x}\) and \(q_{z}\), and thickness, \(h\). This approach was first proposed by Kapitza[2] and Shkadov[7] for stationary and non-stationary waves, respectively, and extended to three-dimensional problems by Shkadov and Demekhin[8]. 
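For reference, the classical Kapitza-Shkadov closure just described can be written, for a two-dimensional vertically falling film with a self-similar parabolic velocity profile (dimensional form, quoted here as an illustration; the dragged-film model used in this article additionally accounts for the substrate velocity), as

```latex
\begin{aligned}
\partial_t h + \partial_x q_x &= 0, \\
\partial_t q_x + \frac{6}{5}\,\partial_x\!\left(\frac{q_x^2}{h}\right)
  &= g\,h \;-\; \frac{3\nu\, q_x}{h^2} \;+\; \frac{\sigma}{\rho}\, h\, \partial_x^3 h,
\end{aligned}
```

where \(h\) is the film thickness, \(q_x\) the stream-wise flow rate, \(\nu\) the kinematic viscosity, \(\sigma\) the surface tension, and \(\rho\) the density; the first equation expresses mass conservation and the second the depth-averaged momentum balance between inertia, gravity, wall friction, and capillarity.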
Unfortunately, although applicable to moderate Reynolds numbers, these models fail to predict the critical Reynolds number in inclined planes. A major modeling improvement in the past decades is the Weighted Residuals technique by Ruyer-Quil and Manneville[9; 10], which consists of a gradient expansion of the velocity profile to account for high-order non-linearities in the flow. The flow instabilities that develop on liquid films falling down inclined planes drive an initially unperturbed flat film towards two-dimensional (2D) solitary waves and 3D wave trains until, finally, reaching chaotic and unpredictable interfacial dynamics. Many investigations have successfully combined experiments with theoretical models to predict the instability onset and the evolution of the subsequent structures[11; 12; 13; 14]. However, the full spectrum of instability mechanisms is not yet fully understood. The flat film initially undergoes a primary instability, leading to 2D structures in the shape of solitary or periodic waves. These interact with each other after a few wavelengths, coalescing with or repelling neighboring waves, as shown with noise-driven experiments[15]. Due to a secondary instability mechanism, these further evolve into three-dimensional structures such as herringbone patterns or solitary 3D waves. A comprehensive experimental investigation is provided by Liu et al.[16], who conducted experiments on an inclined plane close to the instability threshold to capture the transition at which the span-wise perturbations are naturally amplified, and the film becomes 3D. On the other hand, Nosoko et al.[17] experimentally introduced artificial span-wise
2307.03403
An Improved Compound Gaussian Model for Bivariate Surface EMG Signals Related to Strength Training
Recent literature suggests that the surface electromyography (sEMG) signals have non-stationary statistical characteristics specifically due to random nature of the covariance. Thus suitability of a statistical model for sEMG signals is determined by the choice of an appropriate model for describing the covariance. The purpose of this study is to propose a Compound-Gaussian (CG) model for multivariate sEMG signals in which latent variable of covariance is modeled as a random variable that follows an exponential model. The parameters of the model are estimated using the iterative Expectation Maximization (EM) algorithm. Further, a new dataset, electromyography analysis of human activities database 2 (EMAHA-DB2) is developed. Based on the model fitting analysis on the sEMG signals from EMAHA-DB2, it is found that the proposed CG model fits more closely to the empirical pdf of sEMG signals than the existing models. The proposed model is validated by visual inspection, further validated by matching central moments and better quantitative metrics in comparison with other models. The proposed compound model provides an improved fit to the statistical behavior of sEMG signals. Further, the estimate of rate parameter of the exponential model shows clear relation to the training weights. Finally, the average signal power estimates of the channels shows distinctive dependency on the training weights, the subject's training experience and the type of activity.
Durgesh Kusuru, Anish C. Turlapaty, Mainak Thakur
2023-07-07T06:00:55Z
http://arxiv.org/abs/2307.03403v1
# An Improved Compound Gaussian Model for Bivariate Surface EMG Signals Related to Strength Training ###### Abstract Recent literature suggests that the surface electromyography (sEMG) signals have non-stationary statistical characteristics specifically due to random nature of the covariance. Thus suitability of a statistical model for sEMG signals is determined by the choice of an appropriate model for describing the covariance. The purpose of this study is to propose a Compound-Gaussian (CG) model for multivariate sEMG signals in which latent variable of covariance is modeled as a random variable that follows an exponential model. The parameters of the model are estimated using the iterative Expectation Maximization (EM) algorithm. Further, a new dataset, electromyography analysis of human activities - database \(2\) (EMAHA-DB2) is developed. Based on the model fitting analysis on the sEMG signals from EMAHA-DB2, it is found that the proposed CG model fits more closely to the empirical pdf of sEMG signals than the existing models. The proposed model is validated by visual inspection, further validated by matching central moments and better quantitative metrics in comparison with other models. The proposed compound model provides an improved fit to the statistical behavior of sEMG signals. Further, the estimate of rate parameter of the exponential model shows clear relation to the training weights. Finally, the average signal power estimates of the channels shows distinctive dependency on the training weights, the subject's training experience and the type of activity. Surface electromyography (sEMG), Compound Gaussian models, Expectation Maximization (EM) algorithm, Exponential random variable. 
## I Introduction ### _Background_ Statistical models of the strength of surface electromyography (sEMG) signals have many applications, including a) to develop insights into sEMG signal generation from the constituent motor unit action potentials (MUAPs), which forms a basis for sEMG signal synthesis [1] and simulation studies [2], b) to enhance the interpretation of sEMG signals in clinical studies such as the detection of neuromuscular disorders [3], c) to improve performance in pattern classification of intent to control wearable exoskeletons and prostheses [4], d) to improve system identification models that non-invasively determine muscle force and joint torque [5], e) to understand interrelationships between sEMG signals and muscle groups, for example in sports activities [6, 7, 8], and f) to build visualization tools to support movement sciences [9], muscle physiology examinations, and sports science education. sEMG signals can be modelled as stochastic processes because each constituent motor unit firing can be considered a random event [10]. Many studies [11, 12, 13] have attempted to extract features from EMG signals, which are typically assumed to follow the Gaussian distribution. In one of the earliest experiments [11], the Gaussian distribution was used to explain the statistical nature of EMG signals. In a similar work, Hogan et al. [12] used the Gaussian model to describe the relationship between EMG signals and the muscle force. Moreover, they assumed that EMG signals have a constant variance under constant force conditions. However, even under constant force, sEMG signals may not follow a steady Gaussian distribution [14, 15, 5, 16, 17]. A study by Milner-Brown et al. [14] showed that, under constant-force conditions, the distribution of EMG signals collected from the bicep and first dorsal interosseous muscles exhibited a sharper peak than that of the Gaussian distribution. 
A few simulation studies [18, 19] show that the non-Gaussianity of EMG signals differs according to the level of muscle contraction, so that as the muscle contraction level increases, the distribution of EMG signals shifts towards the Gaussian. It is well known that the Compound-Gaussian model is usually employed for modeling heavy-tailed distributions [20, 21, 22]. Recently, Furui et al. [10] proposed a scale-mixture model to account for the non-stationarity of sEMG signals at different muscle contraction forces. From these studies [23, 10], it is evident that the variance of univariate sEMG signals is random in nature. In this study, we investigate non-stationary models for multi-variate sEMG signals. In this case, the variance in univariate models is replaced by a random variable that represents the latent variable of the covariance matrix. For example, in [22], a multivariate compound model was proposed where the latent variable follows an inverse gamma (IG) distribution. However, the suitability of the IG distribution was not evaluated through comparison with other possible distributions commonly used in compound Gaussian modeling. Some of the other possible models of this latent variable include the Gamma, exponential and inverse Gaussian distributions [24, 10]. Hence, the identification of a suitable distribution for the latent variable of the covariance that best fits the non-stationary sEMG signal characteristics is the focus of this study. The major contributions of this study are as follows. A compound Gaussian model is proposed for non-stationary surface EMG signals, with the latent variable of the covariance following an exponential distribution. A new dataset of sEMG signals corresponding to weight training exercises under isotonic and isometric contractions is developed and named _electromyography analysis of human activities - database 2_ (EMAHA-DB2). 
The proposed model is tested on EMAHA-DB2 and its suitability is compared against the existing models using both qualitative and quantitative approaches. Finally, the rate parameter (\(\lambda\)) of the proposed model and the multichannel signal power are analyzed for their dependencies on different measurement conditions. The rest of the work is organised as follows: Section-II presents the proposed model and its parameter estimation using the Expectation Maximization (EM) algorithm, followed by model validation methods. Section-III describes the dataset, Section-IV presents the model analysis and discussion. Finally, Section-V concludes the work. ## II Statistical Model and Problem Description ### _A Compound Gaussian Model_ A compound probabilistic model is proposed for the strength of multi-variate sEMG signals. Specifically, the multi-channel signal is modelled as the product of two interacting random processes. The first component is a fast changing sEMG signal strength and the second component is a slow varying latent random variable that represents temporal fluctuations in the covariance of the observations. Thus, the proposed model for the multichannel sEMG observations \(\mathbf{y}_{n,k}\) is \[\mathbf{y}_{n,k}=\boldsymbol{\mu}+\sqrt{z_{k}}\mathbf{x}_{n,k} \tag{1}\] Here \(\mathbf{x}_{n,k}\) represents the fast changing multi-variate random process within each \(k\)-th segment and \(z_{k}\) is the slow changing hidden variable. Borrowing from the literature on compound Gaussian models for radar clutter [25], the variable \(z_{k}\) will henceforth be referred to as the texture. The variations in each phase of hand activity can be attributed to the texture \(z_{k}\) of the \(k\)-th segment. Here \(\boldsymbol{\mu}\) denotes the mean vector, \(T=N\times K\) is the total number of observations in each channel, \(K\) denotes the number of segments in each channel and \(N\) denotes the number of observations within each \(k\)-th segment. The model analysis and the parameter estimation are carried out for \((K,N)=(325,40)\) and the justification of this choice is given in Sec. IV-B2. An illustration of a two channel sEMG signal relating to the compound statistical model is shown in Fig. 1. 
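A minimal sketch of sampling from the model (1) with the exponential texture (6) is given below; all parameter values are illustrative. Since \(\mathrm{E}[z_{k}]=\lambda\), the unconditional covariance of \(\mathbf{y}_{n,k}\) equals \(\lambda\boldsymbol{\Sigma}\), which provides a quick consistency check on the sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_cg_e(K, N, mu, Sigma, lam, rng):
    """Draw K segments of N observations each from
    y_{n,k} = mu + sqrt(z_k) x_{n,k},  z_k ~ Exp(mean lam)  (Eqs. (1), (6))."""
    d = len(mu)
    z = rng.exponential(scale=lam, size=K)        # one texture draw per segment
    L = np.linalg.cholesky(Sigma)
    x = rng.standard_normal((K, N, d)) @ L.T      # speckle x ~ N(0, Sigma)
    return mu + np.sqrt(z)[:, None, None] * x, z

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
lam = 2.0
y, z = sample_cg_e(K=2000, N=40, mu=mu, Sigma=Sigma, lam=lam, rng=rng)

# Unconditionally, Cov(y) = E[z_k] * Sigma = lam * Sigma
emp_cov = np.cov(y.reshape(-1, 2).T)
print(np.round(emp_cov, 2))
```

Because the texture is shared within a segment, the samples are heavy-tailed and segment-wise correlated even though each segment is conditionally Gaussian.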
The model analysis and the parameter estimation is carried out for \((K,N)=(325,40)\) and the justification of this choice is given in sec. IV-B2. An illustration of a two channel sEMG signal relating to the compound statistical model is shown in Fig. 1. The probability density function (pdf) of \(\mathbf{y}_{n,k}\) conditioned on the texture \(z_{k}\) is defined as \[p(\mathbf{y}_{n,k}|z_{k})=\frac{1}{(2\pi z_{k})^{d/2}\left|\boldsymbol{\Sigma }\right|^{1/2}}\exp\bigg{(}-\frac{Q(\mathbf{y}_{n,k})}{2z_{k}}\bigg{)} \tag{2}\] here the quadratic function \[Q(\mathbf{y}_{n,k})=(\mathbf{y}_{n,k}-\boldsymbol{\mu}_{k})^{T}\boldsymbol{ \Sigma}^{-1}(\mathbf{y}_{n,k}-\boldsymbol{\mu}_{k}) \tag{3}\] and \(\boldsymbol{\Sigma}\) and \(d\) represent the spatial covariance matrix and the number of channels under consideration respectively. In general, multichannel signals analyzed across \(K\) segments can have spatio-temporal correlations defined by the spatio temporal covariance matrix \[\boldsymbol{\Sigma}_{ST}=\boldsymbol{\Sigma}_{T}\otimes\boldsymbol{\Sigma} \tag{4}\] here \(\boldsymbol{\Sigma}_{T}\) represents the temporal correlations. In this study, it is assumed that the variations are independent across segments and hence the conditional covariance of \(\mathbf{y}_{n,k}\) reduces to \[\boldsymbol{\Sigma}_{ST}=z_{k}\boldsymbol{\Sigma} \tag{5}\] Note that in [10], a single channel sEMG signal was modelled and the covariance further reduced to a scalar variance modelled as inverse Gamma random variable. In this study, the texture \(z_{k}\) is proposed to follow an exponential distribution with the pdf defined as \[p(z_{k})=\frac{1}{\lambda}\exp\left(-\frac{z_{k}}{\lambda}\right) \tag{6}\] where \(\lambda\) is a rate parameter. 
The marginal distribution of \(\mathbf{y}_{n,k}\) can be obtained by integrating out the hidden variable \(z_{k}\) as follows \[p(\mathbf{y}_{n,k}) = \int_{0}^{\infty}p(\mathbf{y}_{n,k}|z_{k})p(z_{k})dz_{k} \tag{7}\] \[= \frac{1}{(2\pi)^{\frac{d}{2}}\lambda\left|\Sigma\right|^{\frac{1 }{2}}}\int_{0}^{\infty}z_{k}^{\frac{-d}{2}}e^{-\Big{(}\frac{Q(\mathbf{y}_{n,k})}{2z_{k}} +\frac{z_{k}}{\lambda}\Big{)}}dz_{k}\] For later use, we also define \[T_{k}^{1}=\frac{1}{2}\sum_{n=1}^{N}Q(\mathbf{y}_{n,k}) \tag{8}\] Using ET II 82(23)a, LET I 146(29) from [26], the following integral is identified \[\int_{0}^{\infty}x^{\vartheta-1}\exp\left(-\frac{A}{x}-Bx\right)dx=2\bigg{(} \frac{A}{B}\bigg{)}^{\frac{\vartheta}{2}}K_{\vartheta}\big{(}2\sqrt{AB}\big{)} \tag{9}\] where \(K_{\vartheta}(\cdot)\) represents the modified Bessel function of the second kind, \(\vartheta\) is its order, and \(A,B\) are its parameters. Using (9), the marginal distribution (7) reduces to [27] \[p(\mathbf{y}_{n,k}) = \frac{2}{(2\pi)^{\frac{d}{2}}\left|\Sigma\right|^{\frac{1}{2}} \lambda}\frac{K_{\frac{d}{2}-1}\bigg{(}\sqrt{\frac{2Q(y_{n,k})}{\lambda}}\bigg{)} }{\left(\sqrt{\frac{\lambda Q(y_{n,k})}{2}}\right)^{\frac{d}{2}-1}} \tag{10}\] ### _Estimation Problem_ The complete data likelihood model can be written as \[p(\mathbf{Y},\mathbf{z};\boldsymbol{\mu},\boldsymbol{\Sigma},\lambda)=\prod_{k=1} ^{K}\prod_{n=1}^{N}p(\mathbf{y}_{n,k}|z_{k};\boldsymbol{\mu},\boldsymbol{\Sigma })p(z_{k};\lambda) \tag{11}\] where the full observation set \(\mathbf{Y}\) is \[\mathbf{Y}=\{\mathbf{y}_{k}\}_{k=1}^{K} \tag{12}\] and \(\mathbf{y}_{k}=\{\mathbf{y}_{n,k}\}_{n=1}^{N}\) is the set of \(N\) observations within the \(k\)-th segment, and \(\mathbf{z}=\{z_{k}\}_{k=1}^{K}\) is the set of texture variables. The parameter set is \(\Theta=\{\lambda,\boldsymbol{\mu},\boldsymbol{\Sigma}\}\) and is assumed to be deterministic and unknown. 
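As a sanity check, the closed form (10) can be compared against direct numerical integration of the mixture (7); the sketch below uses illustrative parameter values:

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

def marginal_pdf(y, mu, Sigma, lam):
    """Closed-form marginal of a single observation, Eq. (10)."""
    d = len(mu)
    r = y - mu
    Q = float(r @ np.linalg.solve(Sigma, r))          # quadratic form, Eq. (3)
    c = 2.0 / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma)) * lam)
    return c * kv(d / 2 - 1, np.sqrt(2 * Q / lam)) / np.sqrt(lam * Q / 2) ** (d / 2 - 1)

def marginal_pdf_numeric(y, mu, Sigma, lam):
    """Brute-force evaluation of the mixture integral (7) for cross-checking."""
    d = len(mu)
    r = y - mu
    Q = float(r @ np.linalg.solve(Sigma, r))
    c = 1.0 / ((2 * np.pi) ** (d / 2) * np.sqrt(np.linalg.det(Sigma)) * lam)
    integrand = lambda z: c * z ** (-d / 2) * np.exp(-Q / (2 * z) - z / lam)
    val, _ = quad(integrand, 0, np.inf)
    return val

mu = np.zeros(2)
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
y0 = np.array([0.5, -0.2])
p_closed = marginal_pdf(y0, mu, Sigma, lam=2.0)
p_num = marginal_pdf_numeric(y0, mu, Sigma, lam=2.0)
```

For \(d=2\) the Bessel order in (10) is zero, so the marginal reduces to a bivariate Laplace-type density; the two evaluations agree to quadrature accuracy.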
The problem of estimation is summarized as follows: given the full data likelihood function (11), a set of measurements \(\mathbf{Y}\) that follows the conditional distribution in (2), and the texture variables \(\mathbf{z}\) assumed to follow the exponential model (6), the objective is to estimate the posterior distribution of \(\mathbf{z}\) and the unknown parameters \(\Theta\), and to assess the estimation performance. ### _Parameter estimation using Expectation Maximization (EM) algorithm_ For statistical models involving hidden variables, such as (11), the model parameters are usually estimated using the iterative Expectation Maximization (EM) algorithm [28]. In this study, the texture's moments and the unknown parameters \(\Theta\) are estimated using the EM algorithm described in [29]; variations of the EM algorithm for the CG-E model are available in [27]. It involves the E-step and the M-step as described below. #### II-C1 E-step In this step, the posterior distribution of the texture \(z_{k}\) is evaluated based on the logarithm of the complete data likelihood model (11). 
This log-likelihood is written as \[L = \sum_{k=1}^{K}\sum_{n=1}^{N}ln\Big{(}p(y_{n,k}|z_{k})\Big{)}+\sum_ {k=1}^{K}ln\Big{(}p(z_{k})\Big{)}\] \[= -\frac{NKd}{2}ln(2\pi)-\frac{Nd}{2}\sum_{k=1}^{K}ln(z_{k})-\frac{ NK}{2}ln(|\boldsymbol{\Sigma}|)\] \[-\frac{1}{2}\sum_{k=1}^{K}\sum_{n=1}^{N}\frac{Q(\mathbf{y}_{n,k}) }{z_{k}}-Kln(\lambda)-\frac{1}{\lambda}\sum_{k=1}^{K}z_{k} \tag{13}\] Assuming the texture variables \(z_{k}\) are independent across segments, at each iteration the posterior pdf of \(\mathbf{z}\) can be approximated as the product of individual posteriors \[q_{j}(\mathbf{z})=\prod_{k=1}^{K}q_{j}(z_{k}) \tag{14}\] Gathering the \(z_{k}\) terms in (13), the log posterior of \(z_{k}\) is obtained as \[ln(q(z_{k}))=-\frac{Nd}{2}ln(z_{k})-\frac{T_{k}^{1}}{z_{k}}-\frac{z_{k}}{ \lambda}+C \tag{15}\] Applying the exponential function on both sides leads to \[q(z_{k})\propto z_{k}^{-\frac{Nd}{2}}e^{-\frac{T_{k}^{1}}{z_{k}}-\frac{z_{k}}{ \lambda}} \tag{16}\] The normalization constant for the above expression is evaluated as \[V=\int_{0}^{\infty}q(z_{k})dz_{k} \tag{17}\] Using (9), \(V\) becomes \[V = \int_{0}^{\infty}z_{k}^{-\frac{Nd}{2}}e^{-\frac{T_{k}^{1}}{z_{k}} -\frac{z_{k}}{\lambda}}dz_{k}, \tag{18}\] \[= 2\Big{(}\frac{T_{k}^{1}}{1/\lambda}\Big{)}^{\frac{\varsigma}{2} }K_{\varsigma}\Big{(}2\sqrt{T_{k}^{1}/\lambda}\Big{)} \tag{19}\] where \(\varsigma\) is the order of the Bessel function, defined as \[\varsigma=\frac{-Nd}{2}+1 \tag{20}\] The posterior distribution of \(z_{k}\) is given by the pdf \[q(z_{k})=\frac{z_{k}^{-\frac{Nd}{2}}e^{-\frac{T_{k}^{1}}{z_{k}}-\frac{z_{k}}{ \lambda}}}{2\Big{(}\frac{T_{k}^{1}}{1/\lambda}\Big{)}^{\frac{\varsigma}{2}}K_ {\varsigma}\Big{(}2\sqrt{T_{k}^{1}/\lambda}\Big{)}} \tag{21}\] The estimated posterior (21) is not one of the common standard distributions (it has the form of a generalized inverse Gaussian). 
However, expectations of functions \(h(z_{k})\) required in the following M-step can be evaluated as follows \[\langle h(z_{k})\rangle = \int_{0}^{\infty}h(z_{k})q(z_{k})dz_{k} \tag{22}\] By substituting for \(q(z_{k})\) from (21) and utilizing (9), the posterior mean [30] is obtained as \[\langle z_{k}\rangle = \sqrt{\Big{(}\frac{T_{k}^{1}}{1/\lambda}\Big{)}}\frac{K_{\varsigma +1}(2\sqrt{T_{k}^{1}/\lambda})}{K_{\varsigma}(2\sqrt{T_{k}^{1}/\lambda})} \tag{23}\] Fig. 1: Illustration of two channel sEMG data in relation to the compound statistical model. Similarly, the other required moments \(\left\langle\ln(z_{k})\right\rangle\) and \(\left\langle\frac{1}{z_{k}}\right\rangle\) are evaluated as \[\left\langle\ln z_{k}\right\rangle = \frac{1}{2}ln\Big{(}\frac{T_{k}^{1}}{1/\lambda}\Big{)}+\frac{ \frac{\partial K_{\xi}(2\sqrt{T_{k}^{1}/\lambda})}{\partial\xi}\Big{|}_{\xi= \varsigma}}{K_{\varsigma}(2\sqrt{T_{k}^{1}/\lambda})} \tag{24}\] \[\left\langle\frac{1}{z_{k}}\right\rangle = \Big{(}\frac{T_{k}^{1}}{1/\lambda}\Big{)}^{-\frac{1}{2}}\frac{K_ {\varsigma-1}(2\sqrt{T_{k}^{1}/\lambda})}{K_{\varsigma}(2\sqrt{T_{k}^{1}/ \lambda})} \tag{25}\] #### II-C2 M-step In this step, the expectation of the log-likelihood function \(L\) with respect to the posterior \(q(z_{k})\) is taken. 
\[\left\langle\ln L\right\rangle_{q(z_{k})}=-\frac{Nd}{2}\sum_{k=1}^{K}\left\langle\ln z_{k}\right\rangle-\sum_{k=1}^{K}\left\langle\frac{1}{z_{k}}\right\rangle T_{k}^{1}-\frac{NK}{2}\ln(|\mathbf{\Sigma}|)\\ -K\ln(\lambda)-\frac{1}{\lambda}\sum_{k=1}^{K}\left\langle z_{k}\right\rangle \tag{26}\] The parameters are estimated by maximizing this expectation, i.e., \[\nabla_{\mu,\Sigma,\lambda}\left\langle\ln(L)\right\rangle_{q(z_{k})}=0 \tag{27}\] Using the moments from (23) to (25) in (27), the parameters are estimated as \[\hat{\lambda} = \frac{1}{K}\sum_{k=1}^{K}\left\langle z_{k}\right\rangle, \tag{28}\] \[\hat{\boldsymbol{\mu}} = \frac{\sum_{k=1}^{K}\bar{y}_{k}\eta_{k}}{\sum_{k=1}^{K}\eta_{k}},\] (29) \[\hat{\boldsymbol{\Sigma}} = \frac{1}{NK}\sum_{k=1}^{K}\sum_{n=1}^{N}\eta_{k}Q(\mathbf{y}_{n,k}) \tag{30}\] where \(\bar{y}_{k}=\frac{1}{N}\sum_{n=1}^{N}y_{n,k}\) and \(\eta_{k}=\left\langle\frac{1}{z_{k}}\right\rangle\). #### Iii-C3 Convergence criteria The E and M steps are repeated until the convergence criterion defined below is satisfied. \(\phi^{(i)}\) denotes the sum of the absolute changes in consecutive parameter estimates at the \(i^{th}\) iteration, defined as \[\phi^{(i)}=\left|\lambda^{i}-\lambda^{i-1}\right|+\left|\mu^{i}-\mu^{i-1}\right|+\left|\Sigma^{i}-\Sigma^{i-1}\right| \tag{31}\] When the change \(\phi^{(i)}\) becomes sufficiently small, i.e., \[\phi^{(i)}\leq\phi_{o} \tag{32}\] the iterations are halted. Here \(\phi_{o}\) is a pre-defined threshold, set to \(10^{-5}\). The EM-algorithm is summarized in Alg. 1. ### _Comparison Models_ In this work, we compare our proposed model (CG-E) with the benchmark compound Gaussian distribution with inverse-gamma texture (CG-IG) [22] and with the compound Gaussian distribution with gamma texture (CG-G) [24].
#### Iii-D1 CG-IG Model The texture \(z_{k}\) follows an inverse gamma distribution \[p(z_{k})=\mathcal{IG}\big{(}z_{k};\alpha_{IG},\beta_{IG}\big{)} \tag{33}\] and the conditional distribution of \(y_{n,k}\) given texture \(z_{k}\) is the same as (2) with the parameters \(\mathbf{\mu}_{IG},\mathbf{\Sigma}_{IG}\). Note that this benchmark model is based on the scale mixture model in [10]. Let \(\Theta_{1}=\{\mathbf{\mu}_{IG},\mathbf{\Sigma}_{IG},\alpha_{IG},\beta_{IG}\}\) be the parameter set for the CG-IG model. The EM-algorithm for the estimation of these parameters is summarized as follows; similar results can be found in [10, 22]. * E-step: The posterior distribution of \(z_{k}\) has a closed-form expression; it follows an inverse gamma distribution, i.e., \[q(z_{k})=\mathcal{IG}\big{(}z_{k};\alpha_{IG}^{*},\beta_{IG}^{*}\big{)}\] (34) where \(\alpha_{IG}^{*}\) and \(\beta_{IG}^{*}\) are written as \[\alpha_{IG}^{*} = \frac{Nd}{2}+\alpha_{IG}\] (35) \[\beta_{IG}^{*} = \beta_{IG}+T_{k}^{1}\] The moments of \(z_{k}\) are given as follows: \[\left\langle\ln z_{k}\right\rangle = \ln(\beta_{IG}^{*})-\psi(\alpha_{IG}^{*})\] (36) \[\left\langle\frac{1}{z_{k}}\right\rangle = \frac{\alpha_{IG}^{*}}{\beta_{IG}^{*}}\] * M-step: The estimates of \(\Theta_{1}\) are obtained as follows: An estimate of \(\alpha_{IG}\) is found by solving the non-linear equation \[K\psi(\alpha_{IG})-K\ln(\beta_{IG})+\sum_{k=1}^{K}\left\langle\ln\left(z_{k}\right)\right\rangle=0\] (37) using the Newton-Raphson method [31], and from the solution of (37) an estimate of \(\beta_{IG}\) is obtained as \[\tilde{\beta}_{IG}=\frac{K\tilde{\alpha}_{IG}}{\sum_{k=1}^{K}\left\langle\frac{1}{z_{k}}\right\rangle}\] (38) The remaining estimates \(\boldsymbol{\mu}_{IG},\boldsymbol{\Sigma}_{IG}\) are similar to (29) and (30) except that the moments of \(z_{k}\) are replaced by those in (36). #### Iii-D2 CG-G Model Here the texture \(z_{k}\) is considered as a gamma random variable.
\[p(z_{k})=\mathcal{G}(z_{k};\alpha_{G},\beta_{G}) \tag{39}\] The conditional distribution of \(y_{n,k}|z_{k}\) is again similar to (2) with the parameters \(\mu_{G},\boldsymbol{\Sigma}_{G}\). \(\alpha_{G},\beta_{G}\) are the parameters of the gamma distribution. Let \(\Theta_{2}=\{\boldsymbol{\mu}_{G},\boldsymbol{\Sigma}_{G},\alpha_{G},\beta_{G}\}\) be the parameter set of the CG-G model. Note that this model is used for modeling non-stationary radar clutter [20]. * E-step: The posterior distribution of \(z_{k}\) with the CG-G model is similar to (21) and is given as \[q(z_{k})=\frac{z_{k}^{\alpha_{G}-\frac{Nd}{2}-1}e^{-\frac{T_{k}^{1}}{z_{k}}-\beta_{G}z_{k}}}{2\left(\frac{T_{k}^{1}}{\beta_{G}}\right)^{\frac{\nu}{2}}K_{\nu}\left(2\sqrt{\beta_{G}T_{k}^{1}}\right)}\] (40) where \(\nu=-\frac{Nd}{2}+\alpha_{G}\) represents the order of the Bessel function. The moments of \(z_{k}\) are similar to those of the CG-E model (23) to (25) with the following replacements: \[\frac{1}{\lambda} \rightarrow \beta_{G}\] \[\varsigma \rightarrow \nu\] (41) * M-step: The estimate of \(\alpha_{G}\) is obtained by solving the non-linear equation given below \[K\psi(\alpha_{G})-K\ln(\beta_{G})-\sum_{k=1}^{K}\left\langle\ln\left(z_{k}\right)\right\rangle=0\] (42) using (42), the estimate of \(\beta_{G}\) is obtained as \[\tilde{\beta}_{G}=\frac{K\tilde{\alpha}_{G}}{\sum_{k=1}^{K}\left\langle z_{k}\right\rangle}\] (43) The estimates of \(\boldsymbol{\mu}_{G},\boldsymbol{\Sigma}_{G}\) are similar to (29) and (30) except for the modified moments of \(z_{k}\). ### _Evaluation methods_ **Visual inspection [32]**: It is a graphical approach to visualise the level of agreement between the histogram-based empirical pdf (empdf) and an estimated pdf. In this study, these estimated compound pdfs are based on the CG-E, CG-G and CG-IG models. **Moment Analysis**: In this analysis, the statistical moments estimated from the three models are compared with those of the empdf.
The required moments are computed from the following \[E(h(\mathbf{Y}))=\int h(\mathbf{Y})p(\mathbf{Y},\mathbf{z})d\mathbf{Y}d \mathbf{z} \tag{44}\] For the three models, the closed-form joint pdfs lead to closed-form moments. The moments from these models are compared with the data-based moments, which are evaluated numerically from (44) by replacing the joint pdf with the numerical empdf. **Kullback-Leibler divergence (KLD) [33]**: It is a statistical metric used to measure the distance between two pdfs. Let \(q_{1}\) and \(q_{2}\) be the empdf and the estimated model respectively; then the KLD between them is evaluated as \[D_{KL}(q_{1}||q_{2})=\sum_{x}\sum_{y}q_{1}(x,y)\ln\left(\frac{q_{1}(x,y)}{q_{2 }(x,y)}\right) \tag{45}\] If the two pdfs match each other, then \(D_{KL}(q_{1}||q_{2})\) equals \(0\). Thus a lower \(D_{KL}(q_{1}||q_{2})\) indicates that an estimated model is closer to the empdf. **Coefficient of determination (COD) R-squared [34]**: It is a statistical measure that determines how well the estimated model fits the empdf. Specifically, it quantifies how much of the overall variance the estimated model can explain. As the value of \(R^{2}\) approaches 1, the agreement between the estimated model and the empdf improves. **Log-Likelihood Values (LLV) [32, 35]**: The LLV is another measure to compare two different statistical models. In order to determine which of the models fits the data better, the likelihood values associated with the models are evaluated separately and compared. We determine the LLV for the three models mentioned above. ## III Data Description In this work, a novel sEMG dataset termed electromyographic analysis of human arm activities - database 2 (EMAHA-DB2) is developed.
Ten healthy participants aged between \(18\) and \(21\) years were selected based on three levels of strength training experience: a) Beginner - with no prior training experience, b) Intermediate - with a few weeks of training experience and c) Trained - with at least one year of training experience [36]. Participants had been free from muscle disorders for at least one month prior to data collection. Prior to participating in the experiment, the purpose of the study was explained and informed consent was obtained from the subjects. The data collection procedure was approved by the institutional ethics committee of the Indian Institute of Information Technology Sri City (No. IIITS/EC/2022/01) dated 19 September 2022 as per the principles of the Declaration of Helsinki. Before each data acquisition session, the surface of the skin at the muscle site under consideration was cleaned with an alcohol-based wipe to reduce the skin impedance. In EMAHA-DB2, sEMG signals are acquired using Noraxon's Ultium sensors. As shown in fig. 2(a), Ultium sensors are placed at two muscle sites: 1) biceps brachii (BB) representing the upper arm activity and 2) flexor carpi ulnaris (FCU) representing the forearm activity. Signal acquisition characteristics of the sensor are: \(16\)-bit A/D; sampling rate: \(2000\) samples/sec; cutoff frequency: \(20-450\) Hz. The weights used during the activity include \(\text{0kg}\), \(\text{1kg}\), \(\text{2.5kg}\), \(\text{5kg}\), \(\text{6kg}\), \(\text{9kg}\) and \(\text{10kg}\). During the measurement, the subject is in a standing position and the weight is placed on a table at a convenient height. Each activity has three phases: 1) rest (\(\text{10s}\)), 2) action (\(\text{5s}\)) and 3) release (\(\text{3s}\)), with a total duration of \(18\)s. Each activity is repeated nine times. In order to avoid muscle fatigue, subjects rest for two minutes between different activities. Further details of the experiments are given below.
The anthropometric details of the participants are shown in the table I and a summary of the dataset is presented in table II. The EMAHA-DB2 dataset is available here. #### Iii-B1 Experiment-I In the first experiment as shown in fig. 2(b), the subjects were asked to perform bicep curls with the right arm using the seven weights mentioned above. Recall that the biceps curl corresponds to isotonic muscle contractions [37]. #### Iii-B2 Experiment-II In this experiment, as shown in fig 2(c), the subjects were asked to hold a dumbbell with the right hand at \(90^{\circ}\) with respect to the upper arm i.e., the dumbbell is held in the transverse plane with its axis parallel to the frontal axis. The same set of weight variations from experiment \(I\) are used. Recall, for holding a weight, the arm flexion corresponds to isometric contractions [38]. ## IV Model Analysis and Discussion In this section, the most suitable model for the EMAHA-DB2 data is determined by comparing the following compound Gaussian models: * CG-E (Proposed model) * CG-IG [10, 22] * CG-G [24] Model validation is carried out for each of the sEMG signals corresponding to the experiments in section-III using the following evaluation methods. * Qualitative analysis based on visual inspection * Quantitative analyses: 1. Moment analysis 2. Analysis of KLD 3. Coefficient of determination (COD) R-squared 4. Log-likelihood values ### _Visual Inspection_ The Figs. 3 and 4 illustrates the empdf (yellow) and the models from CG-E (blue), CG-G (magenta) and CG-IG (red) estimated for the strength of two channel sEMG signals. Specifically, Fig. 3 illustrates the results from analysis on sEMG signals of experiment I (isotonic activity) corresponds to subject-1 while training with 6kg dumbbell. and Fig. 4 corresponds to the experiment II (isometric activity) with 6kg dumbbell. From these figures, it is noticed that the CG-E model fits the empdf better in comparison to other models. 
The models CG-G and CG-IG are weaker fits compared to CG-E. A similar analysis is carried out for the rest of the data \begin{table} \begin{tabular}{c c} \hline Weight & 0kg to 10kg \\ \hline Muscles & BB and FCU \\ \hline Subjects & 10 \\ \hline Rest duration & 10sec \\ \hline Activity duration & 8sec \\ \hline No of repetitions & 09 \\ \hline sEMG sensor & Noraxon \\ \hline Electrode & Agel \\ \hline Sampling frequency(Hz) & 2000 \\ \hline No of channels & 2 \\ \hline \end{tabular} \end{table} TABLE II: Characteristics of EMAHA-DB2 dataset \begin{table} \begin{tabular}{c c c c c c} \hline **Subject** & **COBB*(inches)** & **COFCU*(inches)** & **Experience** & **Weight (kg)** & **Height (cms)** \\ \hline 1 & 10.5 & 10 & No & 58 & 175 \\ \hline 2 & 11.5 & 10.5 & No & 75 & 183 \\ \hline 3 & 13 & 10.5 & No & 70 & 173.7 \\ \hline 4 & 12.8 & 10.5 & 2 months & 81 & 182 \\ \hline 5 & 12.5 & 10 & 3 months & 63 & 175 \\ \hline 6 & 11 & 9.8 & 3 months & 57 & 174 \\ \hline 7 & 12 & 10 & 4 months & 75 & 182 \\ \hline 8 & 13.8 & 11.9 & 1 year & 65 & 173 \\ \hline 9 & 14.3 & 12 & 2 years & 77 & 176 \\ \hline 10 & 13.8 & 12.2 & 1 year & 79 & 182.8 \\ \hline \end{tabular} * COBB and COFCU stand for circumference of BB and FCU respectively \end{table} TABLE I: Anthropometrics of Participants Fig. 2: (a) Placement of electrodes on the BB and FCU during weight training. (b) Isotonic activity: Performing bicep curls (c) Isometric activity: Holding the dumbbell at 90 \({}^{\circ}\). and it is observed that CG-E model has the best agreement with the empdf among the three compound models. ### _Quantitative Analysis_ #### Iv-B1 Moment Analysis The estimated moments such as the mean, covariance and the Mardia's kurtosis[39, 40] corresponding to Fig.3 and 4 are shown in table-III and IV. Among the three models, the moments of CG-E are best match to those of the empdf. 
In addition, the averaged moments across the subjects and trials for the isotonic activity corresponding to 6kg weight lifting are presented in table V. These results indicate agreement between the moments corresponding to the CG-E and the empdf. #### Iv-B2 KL-divergence In this study, the KLD is evaluated between the empdf and the three compound models and shown in Figs. 5 to 6. Fig. 5, illustrates KLD heatmaps as a function of subjects and activities. The KLD in each cell of the heatmap is an average over the trials of the corresponding activity. The KLDs corresponding to experiment-I are shown in Figs. 5 (a) to (c), while the KLDs corresponding to experiment-II are presented in Figs. 5 (d) to (f). Based on these heatmaps, it is observed that the CG-E model has the lowest KLD among the three compound models. The ranges of KLD for the heatmaps Fig. 4: Visual comparisons between (a) empdf (yellow) and estimated pdf’s from models: (b) CG-E (blue), (c) CG-G (magenta) and (d) CG-IG (red) for isometric activity during 6 kg lifting corresponding to the subject-5 and trial-8. \begin{table} \begin{tabular}{c c c c c c|c c c} \hline \multicolumn{2}{c}{**Estimates**} & \multicolumn{2}{c}{**empdf**} & \multicolumn{2}{c}{**CG-E**} & \multicolumn{2}{c}{**CG-IG**} & \multicolumn{2}{c}{**CG-G**} \\ \hline Mean & \(|-8.2495\) & \(0|*10^{-8}\) & \(|-0.0006551\) & \(-0.0003244\) & \(|0.0446\) & \(0.0155\) & \(|-0.001786\) & \(-0.001169\) \\ \hline Covariance & \(\begin{bmatrix}3.1235&0.0101\\ 0.0101&1.1892\end{bmatrix}*10^{4}\) & \(\begin{bmatrix}3.3269&0.0535\\ 0.0535&1.0009\end{bmatrix}*10^{4}\) & \(\begin{bmatrix}93.2162&1.5157\\ 1.5157&27.3331\end{bmatrix}\) & \(\begin{bmatrix}2.2445&0.0272\\ 0.0272&0.7818\end{bmatrix}*10^{4}\) \\ \hline Mardia’s Kurtosis & 8.3029 & & 7.7038 & & 13.6497 & & 7.1717 \\ \hline \end{tabular} \end{table} TABLE III: Estimated moments of isotonic activity during 6 kg lifting corresponding to the subject-1 and trial-8 Fig. 
3: Visual comparisons between (a) empdf (yellow) and estimated pdfs from models: (b) CG-E (blue), (c) CG-G (magenta) and (d) CG-IG (red) for isotonic activity during \(6\) kg lifting corresponding to the subject-1 and trial-8. in Fig. 5 are given in table VI. For experiment II, the maximum KLD from the CG-E does not exceed the minimum KLD from the CG-G and CG-IG. In the case of experiment I, the maximum KLD from the CG-E is less than half the maximum from the other models. Fig. 6(a) shows the averaged KLD across the subjects as a function of the weights and Fig. 6(b) shows the vice versa. The KLD of the CG-E, CG-G and CG-IG are represented in blue, orange and yellow respectively. From Figs. 6(a) and 6(b), it is noted that for both the experiments, the averaged KLD corresponding to either the subjects or the weights is the lowest for the CG-E, when compared to CG-G and CG-IG. \begin{table} \begin{tabular}{c|c|c|c|c} \hline **Estimates** & **empdf** & **CG-E** & **CG-IG** & **CG-G** \\ \hline Mean & \([0\ \ 0]\) & \([0.00091\ \ 0.000035]\) & \([0.00620\ \ 0.00087]\) & \([0.00284\ \ 0.00075]\) \\ \hline \end{tabular} \end{table} TABLE IV: Estimated mean vectors for the isometric activity during 6 kg lifting corresponding to the subject-5 and trial-8
As mentioned earlier, the optimal choice \((K,N)=(200,80)\) is made based on a grid search for the lowest KLD over a region of possible values for \(K\) and \(N\). #### Iv-B3 Log-Likelihood values Fig. 7(a) illustrates the LLV averaged across the subjects and trials as a function of the weights. Fig. 7(b) shows the LLV averaged across the weights and trials as a function of the subjects. The LLV of the CG-E, CG-G and CG-IG are shown again in blue, orange and yellow respectively. From these figures it can be observed that for both the experiments, the averaged LLV, for both the subjects and the weights, is the highest for the CG-E among the three models. #### Iv-B4 Coefficient of determination (COD) \(R^{2}\) The averaged \(R^{2}\) for CG-E, CG-G and CG-IG corresponding to experiments-I and II are illustrated in Fig. 8. Specifically, Fig. 8(a) shows \(R^{2}\) averaged across subjects vs. weights and Fig. 8(b) shows the vice versa. From these figures it is evident that the \(R^{2}\) associated with the CG-E is the highest among the models, followed by that of the CG-G and the CG-IG. It is also evident that the difference in \(R^{2}\) between the CG-E and CG-G models is smaller in experiment-I but much larger in experiment-II. The minimum and maximum values of \(R^{2}\) as functions of subjects and weights for each activity are shown in tables-VII and VIII respectively. Recall that \(\lambda\) originally quantifies the statistical mean of the texture \(z_{k}\). The estimate of \(\lambda\) shown in Fig. 9 is an average over the trials and subjects. Figures on the left and right correspond to experiments I and II respectively. First, it is clear that the value of \(\hat{\lambda}\) increases with the dumbbell weight. Additionally, note that the variation of \(\hat{\lambda}\) is smaller in experiment I (isotonic) and larger in experiment II (isometric).
Furthermore, in experiment II, the estimate \(\hat{\lambda}\) does not increase significantly up to the \(6\) kg lifting weight. However, it rises rapidly at the higher weights of \(9\) and \(10\) kg. Note that the forces corresponding to isometric contractions are stronger than those in isotonic contractions [41]. The amount of muscle force (or muscle recruitment) required to lift a weight is generally proportional to the weight. Thus, from Fig. 9, the muscle force required to lift a weight can be attributed to the rate parameter (\(\lambda\)). Note that motor unit recruitment may increase with the force generated. Thus, for strength training athletes, the muscle force and the rate of muscle force generation can be correlated to the texture variable's estimated mean \(\hat{\lambda}\). ### _Analysis of sum of variances_ The metric \(P_{T}\) denotes the square root of the sum of variances (trace of \(\Sigma\)) from BB and FCU: \[P_{T}=\sqrt{\sigma_{T,BB}^{2}+\sigma_{T,FCU}^{2}} \tag{46}\] where \(\sigma_{T,BB}^{2}\), \(\sigma_{T,FCU}^{2}\) are the variances corresponding to BB and FCU. The quantity \(P_{T}^{2}\), termed _T-power_, is the sum of variances of the sEMG signal from BB and FCU and can be related to the muscle force. Fig. 10 depicts \(P_{T}\) from BB and FCU for the isotonic (left) and isometric (right) activities. From this figure, it is interesting to note that for both isotonic and isometric activities, for any lifting load, the total signal power seems to be directly related to the subject's experience. For any weight, the trained subjects produced the highest T-power while the beginners generated the least T-power. Additionally, \(P_{T}\) can be correlated with a subject's strength. A higher slope of \(P_{T}\) vs. weights indicates higher strength to lift heavier weights. Based on Fig. 10, the trained group has the steepest slope of \(P_{T}\), followed by the intermediate and the beginner groups.
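As an illustration (not from the paper), (46) reduces to a few lines of code; the synthetic two-channel samples below stand in for the BB and FCU recordings:

```python
import math
import random

def t_power_root(bb, fcu):
    # P_T = sqrt(var(BB) + var(FCU)), i.e. the square root of the trace
    # of the two-channel covariance matrix, as in (46).
    def variance(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)
    return math.sqrt(variance(bb) + variance(fcu))

random.seed(0)
bb  = [random.gauss(0.0, 3.0) for _ in range(5000)]  # synthetic BB channel
fcu = [random.gauss(0.0, 1.0) for _ in range(5000)]  # synthetic FCU channel
print(t_power_root(bb, fcu))  # close to sqrt(3.0**2 + 1.0**2), about 3.16
```

Comparing this quantity across the beginner, intermediate and trained groups is what Fig. 10 visualizes.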
## V Conclusion and Future work In this paper, a multivariate compound-Gaussian model is proposed for sEMG signals, with the variance (texture) modeled as an exponential random variable. This model is compared with the existing CG-IG model and, in addition, with a CG model in which the variance is modeled as gamma. The goodness of the model is justified using: (1) a qualitative comparison with the empdf, which reveals the best agreement for the CG-E model; (2) the KLD between the fitted model and the empdf, which again is lowest for the CG-E model; (3) the coefficient of determination (COD) \(R^{2}\), which is closest to unity for the CG-E model; and (4) the log-likelihood values (LLV), which also support the CG-E model. Finally, the estimates of the rate parameter (\(\lambda\)) and the signal covariance of the CG-E model are analyzed under different measurement conditions. In future work, the plans include statistical modeling of sEMG signals corresponding to sports activities, to understand their role in muscle coordination.
2306.14256
A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: \underline{https://github.com/C4AI/gap-text2sql}.
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
2023-06-25T14:28:12Z
http://arxiv.org/abs/2306.14256v1
# A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention ###### Abstract Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [https://github.com/C4AI/gap-text2sql](https://github.com/C4AI/gap-text2sql). **Keywords:** Semantic parsing, SQL generation, deep learning, neural network, natural language process, text-to-SQL, databases, transformers self-attention, transformers, Spider dataset ## 1 Introduction Transformers with the attention mechanism have led to great leaps in natural language processing (NLP) [1]. However, they do have limitations. An example is the 512 tokens input limit, as this can be a drawback when dealing with long text sequences. The number of tokens is not really a limitation as it can be increased; however, expanding this number increases memory consumption quadratically and may disperse attention through many tokens. 
Different proposals, such as Big Bird [10], Longformer [11], Poolingformer [12], ETC [13], Linformer [14], Reformer [15], among others, process long text sequences and address the challenge of memory consumption by letting it grow near linearly, while keeping good performance. In this paper we explore techniques that enhance transformers in the context of _natural language to SQL_ (NL2SQL) translation. Existing NL2SQL parsers encode the combined text composed of the question and database schema information, especially the table names, column names, and their relations. More information about NL2SQL can be found in these surveys: [19][20][21]. NL2SQL parsers based on transformers have greatly evolved in the last few years. The Spider dataset1[2] has had a key role in that progress due to its features, such as the number of databases, query complexity, etc. The current leaderboard (measuring exact set match without values) is presented in Table 1. \begin{table} \begin{tabular}{l l c c c} \hline Rank & Model & Test & Dev & Reference \\ \hline 1 & Graphix-3B + PICARD & 0.740 & 0.771 & Anonymous \\ 1 & CatSQL + GraPPa & 0.739 & 0.786 & Anonymous \\ 3 & SHiP + PICARD & 0.731 & 0.772 & Anonymous \\ 4 & G*R + LGESQL + ELECTRA & 0.726 & 0.772 & Anonymous \\ 6 & RESDSQL+T5-1.1-lm100k-xl & 0.724 & 0.781 & Anonymous \\ 6 & T5-SR & 0.724 & 0.772 & Anonymous \\ 7 & S*SQL + ELECTRA & 0.721 & 0.764 & [6] \\ 8 & LGESQL + ELECTRA & 0.720 & 0.751 & [4] \\ 9 & T5-3B+PICARD & 0.719 & 0.755 & [7] \\ \hline \end{tabular} * * Techniques that currently do not have a paper associated are presented as anonymous. \end{table} Table 1: Spider Leaderboard - Exact Set Match without Values in September 2022 Currently, the works in the first positions that have a paper explaining their approach are: - S\({}^{2}\)SQL [6] is a technique that injects syntactic information of the question in the encoder, rather than just the question text. 
They also introduce a decoupling constraint in order to induce diverse edge embedding learning. - The idea behind LGESQL (Line Graph Enhanced Text-to-SQL) [4] is to use line graphs to include local (1-hop relation) and non-local (extracted from a parameter matrix) features in the computation. The line graph relates question nodes, table nodes, and column nodes. A graph pruning process helps indicate the part of the graph schema relevant to the question. The pretrained language model (PLM) ELECTRA-large-discriminator has achieved the best result. - The PICARD [7] (Parsing Incrementally for Constrained Auto-Regressive Decoding) approach constrains the decoded tokens during the inference process to find valid SQL queries through four levels in the parsing process. The best result within this approach has been achieved with the T5-3b model. A technique that has been a reference for many other techniques with good results on the Spider leaderboard is RAT-SQL (Relation-Aware Transformer SQL) [3], which explored linking the database schema with the words of the natural language question, achieving important results when launched. The RAT-SQL+GAP scheme [5] (0.697 Test and 0.718 Dev) is a variant that is used in this paper as a baseline. GAP means Generation-Augmented Pre-Training; it employs a custom pre-training of the BART model with learning objectives related to the NL2SQL task. Such training increases performance when this model is plugged into the RAT-SQL parser. When using transformers, the limitation on long input text sequences strikes. The natural language question is not a problem as it is usually short; however, the database schema may be large depending on the number of tables and columns. We here use RAT-SQL+GAP when the model is BART-large (the pretrained model version was downloaded from Github2) and our multilingual mRAT-SQL version, without GAP, when the model is mT5-large, which means the model is used in its original form from Hugging Face3.
Footnote 2: RAT-SQL+GAP github:[https://github.com/awslabs/gap-text2sql](https://github.com/awslabs/gap-text2sql). Footnote 3: Google’s mT5:[https://huggingface.co/google/mt5-large](https://huggingface.co/google/mt5-large). The proposal of this paper is to present to the scientific community the improvement obtained with schema pruning in a multilingual approach. The motivation is the benefit of schema pruning: in NL2SQL with transformers, it is an open problem to handle databases whose large schemas produce input sequences that exceed 512 tokens. The improved results from the multilingual approach were a welcome side effect, which we noticed in our previous work [16] using a combination of English and Portuguese, and which is expanded here with Spanish and French. It is important to report and present these findings so that other researchers can evaluate them as a possible choice in their own context. The main contributions of this paper are the schema pruning, which reduces the number of tokens to fit within the 512-token limit while preserving the table and column names actually used in the queries for the corresponding database, and a multilingual data augmentation process with four languages: English, Portuguese, Spanish, and French. Both contributions can be easily incorporated into other NL2SQL approaches, thus making them a viable path to increase benchmark results. ## 2 Multilingual Data Augmentation Natural language processing has made great advances nowadays, but mainly for the English language. Operating with different languages can be problematic due to the limited availability of language models pretrained in those languages. Multilingual language models are a good option [16][8][9].
This was shown in our previous work [16] with the multilingual model mBART-50. Multilingual models allow training in English and Portuguese separately, and also with the two languages together. We produced better results when training the model with multiple languages than with a single one, even when working with the English language. It is possible to deduce that this is an effect of data augmentation, because we double the dataset. In the current work, we chose the multilingual mT5 [17] because it achieves better results than mBART-50. Specifically, we use the mT5-large multilingual model with 1.2 billion parameters, pre-trained with 101 languages, including the languages we are currently working on: English, Portuguese, Spanish, and French. The Spider dataset consists of 3 files: train_spider.json (7,000 questions), train_others.json (1,659 questions) (both train dataset), and dev.json (1,034 questions) (validation dataset). We translated the natural language questions from the Spider dataset into Portuguese, Spanish and French and created versions in the four languages, each with the same corresponding original query. We chose not to translate any information about the database schema to make the results compatible and comparable, which means we can make inferences with any of the four languages, and the resultant query can be evaluated with the Spider test suite [18]4. In Table 2 the question "What are the maximum and minimum budgets of the departments?" is presented in four languages; all are related to the same query: "SELECT max(budget_in_billions), min(budget_in_billions) FROM department". The translations were made using the Google Translate service. Footnote 4: Spider test suite: [https://github.com/taoyds/test-suite-sql-eval](https://github.com/taoyds/test-suite-sql-eval). We also created a dataset version that joins the four languages together. The original Spider has 8659 train and 1034 validation examples. This quad dataset has 34636 train and 4136 validation examples. Ours is a data augmentation approach that works with multilingual models.
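Building the quad dataset amounts to concatenating the per-language Spider files. A minimal sketch follows; the file names are hypothetical, and each translated file is assumed to keep Spider's JSON fields with only "question" changed:

```python
import json

# Hypothetical per-language files in Spider's JSON format; each example
# keeps the original "query" and "db_id", only "question" is translated.
LANG_FILES = ["train_spider_en.json", "train_spider_pt.json",
              "train_spider_es.json", "train_spider_fr.json"]

def build_quad_dataset(paths):
    # Concatenate the per-language example lists into one training set.
    merged = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            merged.extend(json.load(f))
    return merged

# With 8659 training examples per language this yields the
# 4 * 8659 = 34636 examples of the quad training set.
```

Because every translated example keeps the original SQL, the Spider test suite can evaluate predictions regardless of the question language.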
Ours is a data augmentation approach that works with multilingual models.

## 3 Schema pruning

The transformer self-attention size limitation also applies to NL2SQL. Figure 1 presents a graphical representation of the problem. Figure 1a shows the ideal situation, where the question text joined with the serialized text of the schema (table names, column names, and their relations) fits under the 512 tokens. Figure 1b shows a real example where the text that represents the database schema exceeds the limit of 512 tokens. One possible solution for that situation is to expand the limit to 2048 tokens to fit all the necessary text; Figure 1c illustrates this solution. The problem is not the natural language part (the question) but the database schema. It is not usual for a question to have too many words (more than 512 tokens), but databases can have many tables and columns that lead to a schema with more than 512 tokens (when serialized as text). However, one question will typically not require information from the entire database schema to generate the expected SQL query. Considering the training dataset, even a group of questions may not require the entire database schema. It is possible to analyze all the questions in the training dataset related to the same database and see which tables and columns are used. This allows pruning table and column names that are not used by that group of questions, thereby reducing the size of the database schema. With this reduced version of the database schema, it is possible to fit the natural language question and the database schema under 512 tokens, respecting the self-attention limitation. Figure 1d presents the effect of the schema pruning. Currently, the Spider dataset is composed of 166 databases: 146 for training and 20 for validation. The schemas are organized in the tables.json file.
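The serialization-plus-budget check described above can be sketched as follows. This is a minimal sketch: the real pipeline counts subword tokens with BartTokenizer or MT5Tokenizer, so the whitespace-based `count_tokens` default here is only a stand-in, and the separator format is illustrative rather than RAT-SQL+GAP's exact one.

```python
def serialize_schema(schema):
    """Flatten a schema dict {table: [columns]} into the text that is
    concatenated with the question before being fed to the transformer."""
    parts = []
    for table, columns in schema.items():
        parts.append(table + " : " + " , ".join(columns))
    return " | ".join(parts)

def fits_in_budget(question, schema, limit=512, count_tokens=None):
    """True if question + serialized schema fit the self-attention limit.

    `count_tokens` stands in for the tokenizer-specific length; subword
    tokenizers give larger counts than this whitespace split."""
    count = count_tokens or (lambda s: len(s.split()))
    text = question + " | " + serialize_schema(schema)
    return count(text) <= limit

schema = {"department": ["department_id", "name", "budget_in_billions"]}
print(fits_in_budget("What are the maximum and minimum budgets?", schema))  # True
```

A database with hundreds of columns makes `fits_in_budget` return False for every question on it, which is exactly the failure mode schema pruning removes.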
RAT-SQL+GAP has a pre-processing step that prepares the information for the training and inference steps. We noticed that the number of training questions actually used during the RAT-SQL+GAP pre-processing is always smaller than the number of questions in the training dataset. This is due to the RAT-SQL+GAP code dropping examples that have more than 512 tokens. The original English training dataset has 8659 examples, but just 8558 were actually used with BartTokenizer (the choice of tokenizer affects the token count). 101 examples were rejected because the combination of the question \begin{table} \begin{tabular}{l l} \hline Language & Question \\ \hline English & What are the maximum and minimum budgets of the departments? \\ Portuguese & Quais sΓ£o os orΓ§amentos mΓ‘ximo e mΓ­nimo dos departamentos? \\ Spanish & ΒΏCuΓ‘les son los valores mΓ‘ximo y mΓ­nimo presupuesto de los departamentos? \\ French & Quels sont le budget maximum et minimum des dΓ©partements? \\ \hline \end{tabular} \end{table} Table 2: Question sample in English, Portuguese, Spanish and French, related to the same query: β€œSELECT max(budget_in_billions), min(budget_in_billions) FROM department” and the database schema (table names and column names) was greater than 512 tokens. The quad (English, Portuguese, Spanish, and French together) training dataset has 34636 examples, but just 33927 were actually used with MT5Tokenizer; 709 examples were rejected. The number of rejected examples did not grow fourfold, although the quad dataset is four times larger, because different languages produce questions with different numbers of words and a different tokenizer (MT5Tokenizer) is used. We analyzed the rejected examples and organized them by database in Table 3. The questions related to these three databases are all in the training dataset file train_spider.json.
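The bookkeeping behind counts like those in Table 3 can be sketched as follows. This is a hypothetical helper of our own, where `token_len` stands in for the tokenizer-specific length computation (BartTokenizer for the English dataset, MT5Tokenizer for the quad one).

```python
from collections import Counter

def dropped_per_database(examples, token_len, limit=512):
    """Count, per database, the examples whose question + serialized
    schema exceed `limit` tokens and would therefore be dropped by the
    pre-processing step."""
    drops = Counter()
    for ex in examples:
        if token_len(ex) > limit:
            drops[ex["db_id"]] += 1
    return drops

# Toy input with pre-computed token lengths:
examples = [{"db_id": "baseball_1", "n_tokens": 998},
            {"db_id": "baseball_1", "n_tokens": 310},
            {"db_id": "soccer_1", "n_tokens": 640}]
print(dropped_per_database(examples, lambda e: e["n_tokens"]))
# Counter({'baseball_1': 1, 'soccer_1': 1})
```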
In order to understand why only these three databases are affected by the problem of exceeding the limit of 512 tokens, the number of original tables and columns was analyzed; see Table 4. These databases have a large number of tables and columns; for example, the Baseball_1 DB has 353 columns, so if each column name uses two unique words, the 512-token limit is exceeded merely by the names of the columns. We wrote code to analyze all the queries related to these databases and list the tables that are really used, allowing the deletion of the unused tables. To validate the pruning process, we did it manually using the DB Browser Figure 1: Situations and possible solutions; a) Ideal situation; b) Example that exceeds the 512 tokens; c) Possible solution, expand the token limit to 2048 tokens; d) Proposed solution, prune the database schema to fit within the 512-token limit. \begin{table} \begin{tabular}{l l l} \hline Database & Original dataset & Quad Dataset \\ & BartTokenizer & MT5Tokenizer \\ \hline Baseball\_1 & 82 & 328 \\ cre\_Drama\_Workshop\_Groups & 19 & 328 \\ Soccer\_1 & 0 & 53 \\ \hline Total rejected & 101 & 709 \\ \end{tabular} \end{table} Table 3: Number of examples dropped from the training dataset during pre-processing. for SQLite. First, we deleted the tables not used in the queries, as indicated by the code; later, we deleted unused columns when the deletion of tables was not enough. For column deletion, we did not develop specific code; the deletion was based on the column names and on a visual inspection of the queries that used the related table. After the pruning, we updated the tables.json file with a new section that reflects the modified databases. The dataset file train_spider.json has indexes that refer to the original tables and columns. We wrote code to update these indexes for the pruned version. Table 4 shows the new numbers of tables and columns in the pruned version.
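The analysis of which tables and columns the queries actually use, which guided the manual pruning, can be sketched as follows. This is a naive word-level scan of the SQL text (a real SQL parser would be safer), and the function names are our own, not the paper's.

```python
import re

def used_identifiers(queries):
    """Collect candidate table/column identifiers mentioned in a set of
    SQL queries (naive: every identifier-shaped word counts)."""
    words = set()
    for q in queries:
        words.update(w.lower() for w in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", q))
    return words

def prune_schema(schema, queries):
    """Keep only tables referenced by some query; inside a kept table,
    keep only referenced columns. `schema` is {table: [columns]}."""
    used = used_identifiers(queries)
    pruned = {}
    for table, columns in schema.items():
        if table.lower() in used:
            pruned[table] = [c for c in columns if c.lower() in used]
    return pruned

schema = {"player": ["player_id", "name", "weight"],
          "umpire": ["umpire_id", "state"]}
queries = ["SELECT name FROM player WHERE weight > 200"]
print(prune_schema(schema, queries))
# {'player': ['name', 'weight']}
```

Because the scan treats SQL keywords as identifiers too, it can only keep too much, never drop a genuinely used name, which is the safe direction for this task.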
This approach was adopted to evaluate the effect of using the entire Spider dataset (without dropped examples); it can be applied to other training datasets, despite the manual effort, because it is one-time work. Creating an algorithm that performs the schema pruning automatically for each drop candidate (examples with more than 512 tokens) is an option, but it is worth considering that pruning per example would create a new schema per example, not per database. To prune the schema per database, it is necessary to aggregate all the drop candidates (more than 512 tokens), relate them to their databases, and prune considering all the queries (with the tables and columns used).

## 4 Experiments and Analysis

### Multilingual Data Augmentation

The experiments were performed on the following equipment: AMD Ryzen 9 3950X 16-Core Processor, 64GB RAM, 2 GPUs NVidia GeForce RTX 3090 24GB running Ubuntu 20.04.2 LTS. First, we reproduced the results of RAT-SQL+GAP [5] in our environment to use it as a baseline (fine-tuning with 41,000 steps and Batch Size=12); since it is BART-large, the GAP is active (a model pre-trained by the RAT-SQL+GAP group), and the training and validation datasets are in English. Figure 2a shows the "Exact Set Match without Values" accuracy result of 0.718, which is the same as in the RAT-SQL+GAP [5] paper. This metric considers the inferred query, but not the values. To validate the multilingual approach, we fine-tuned the mT5 model first with the original English Spider train dataset, and later with the quad (English, Portuguese, Spanish and French) Spider train dataset.
Both \begin{table} \begin{tabular}{l l l l l} \hline Database & Tables & Columns & Tables & Columns \\ & Original & Original & after pruning & after pruning \\ \hline baseball\_1 & 26 & 353 & 13 & 87 \\ cre\_Drama\_Workshop\_Groups & 18 & 100 & 15 & 80 \\ soccer\_1 & 7 & 87 & 5 & 57 \\ \hline \end{tabular} \end{table} Table 4: Tables and columns sizes for rejected databases before and after schema pruning. with 51,000 steps and Batch Size=4 (this value was chosen to fit the model in our GPU memory). The validation dataset is in English for the three cases. A diagram with the three combinations is presented in Figure 2 on the left side. Figure 2b shows an inference result of 0.684 for the model trained just in English, and Figure 2c shows a result of 0.715 for the model trained with the quad dataset. The three tests presented in Figure 2 were made with the standard self-attention of 512 tokens. It is possible to conclude that the multilingual model mT5 produces better results when trained with more languages. The results in Figures 2b and 2c show the increase from 0.684 to 0.715 for the same mT5-large model, first trained in English and then with the quad train dataset (English, Portuguese, Spanish and French). This increase can be credited to a data augmentation effect, which was enough to make mRAT-SQL (without GAP) achieve the value of 0.715, close to the BART-large baseline of 0.718 with RAT-SQL+GAP. This makes the training process simpler, because pre-training the model before the final NL2SQL training is not necessary.

### Schema pruning

To understand the influence of the schema pruning, we fine-tuned the mT5 model with the same quad dataset (without pruning), hereinafter called the "standard quad" (English, Portuguese, Spanish, and French) Spider train dataset, and then with the quad dataset with schema pruning, hereinafter called the "FIT quad" (English, Portuguese, Spanish, and French) Spider train dataset.
Both with 120,000 steps, Batch Size=4 and the standard self-attention of 512 tokens. We increased the number of steps to analyze whether the model would converge with more steps than the 51,000 used in the prior tests, mainly because training with mT5-large and the quad dataset achieved its best checkpoint at the last step (on an average rising slope). The validation dataset is in English for both cases. Figure 3c shows the inference result of 0.718 for the model trained with Figure 2: Exact Set Match without Values, the diagram on the left, and the results on the right; a) BART-large trained in English, infer in English (baseline); b) mT5-large model trained in English, infer in English; c) mT5-large model trained in English, Portuguese, Spanish and French, infer in English. the standard quad dataset, and Figure 3d shows a result of 0.736 for the model trained with the FIT quad dataset, in which the schema was pruned. The increase in fine-tuning steps proved adequate; the best checkpoints were 77,500 for the standard quad dataset and 105,100 for the FIT quad train dataset. Another possible approach to include all text sequences during the fine-tuning process is to increase the maximum number of tokens in the transformer self-attention mechanism. In our case, to use the whole standard quad train dataset, it was necessary to increase it from 512 to 2048 tokens. Due to the memory consumption, we had to reduce the Batch Size to just 1, and it was necessary to increase the number of steps to 480,000 to get good convergence in the model training. Figure 3b shows the inference result of 0.697. The use of the FIT quad Spider train dataset had a strong influence on the results, raising them from 0.718 (Figure 3c) to 0.736 (Figure 3d). It can be deduced that the integral use of the training dataset, without the exclusions caused by exceeding 512 tokens, provided the best training samples.
The attempt to increase the limit from 512 to 2048 tokens did not produce good results: 0.697 (Figure 3b). In fact, it was worse than the 0.718 (Figure 3c) achieved by mT5-large fine-tuned with the standard quad train dataset. A possible cause is that the attention mechanism became too sparse. A diagram with the four combinations is presented in Figure 3 on the left side. Table 5 shows the exact set match without values broken down by question/query difficulty level (easy, medium, hard, and extra hard) for the four cases. The improvement of mT5-large fine-tuned with the FIT quad train dataset can be noticed at all levels when compared with mT5-large fine-tuned with the standard quad train dataset. The specific value of mT5-large fine-tuned with the FIT quad train dataset for extra hard examples, 0.530, is the best of all the fine-tuning runs we performed, including some not reported here. The schema pruning that produced the FIT datasets shows important results, but it was only used on the training dataset, because the validation dataset does not have examples requiring more than 512 tokens. It is possible to apply the same schema pruning approach to the validation dataset, because we have the query related to each question to select unused tables and columns. In future real cases of NL2SQL, where only the question and the database schema are available, it will be difficult to perform the schema pruning at inference time.
One option is to analyze the need for the complete schema in \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Model & Max Token Limit & Train & Eval & Easy & Medium & Hard & Extra & All \\ \hline Bart-large & 512 & en & en & 0.899 & 0.744 & 0.667 & 0.428 & 0.718 \\ mT5-large & 2048 & en-pt-es-fr & en & 0.855 & 0.753 & 0.540 & 0.476 & 0.697 \\ mT5-large & 512 & en-pt-es-fr & en & 0.879 & 0.756 & 0.580 & 0.518 & 0.718 \\ mT5-large & 512 & FIT-en-pt-es-fr & en & 0.895 & 0.776 & 0.603 & 0.530 & 0.736 \\ \hline \hline \end{tabular} \end{table} Table 5: Difficulty levels for the exact set match without values. an inference endpoint and create a short schema version compatible with the limit of 512 tokens to get good inferences.

### Multilingual inference

The mT5-large fine-tuned with the quad dataset can infer questions in each of the four languages trained. Figure 4 shows the results for the exact set match without values for inferences with the validation dataset in English (Figures 4b and 4c), translated into Portuguese (Figures 4d and 4e), Spanish (Figures 4f and 4g) and French (Figures 4h and 4i). These results were produced with mT5-large fine-tuned with the standard quad (English, Portuguese, Spanish and French) Spider train dataset (Figures 4b, 4d, 4f and 4h) and with mT5-large fine-tuned with the FIT quad (English, Portuguese, Spanish and French) Spider train dataset (Figures 4c, 4e, 4g and 4i). For all the results, the checkpoint that produced the best result for each language was selected. The pre-processing, which uses the same code from RAT-SQL+GAP, drops examples that exceed 512 tokens.
To test the hypothesis that pruning the schema table and column names helps the training process by keeping more examples, we manually performed this pruning and created a FIT version of the Spider dataset that does not have any examples excluded; this allowed the transformer self-attention mechanism to process the entire training dataset. This Spider FIT dataset version can easily be plugged into other techniques that use the Spider dataset. The next step is to plug the Spider FIT dataset into another technique to evaluate the results.

## Abbreviations

\begin{tabular}{l l} DEV & Validation dataset \\ ETC & Extended transformer construction \\ GAP & Generation-augmented pre-training \\ mT5 & Multilingual text-to-text transfer transformer model \\ NLP & Natural language processing \\ NL2SQL & Natural language to SQL \\ LGESQL & Line graph enhanced text-to-SQL \\ PICARD & Parsing incrementally for constrained auto-regressive decoding \\ RAT-SQL & Relation-aware transformer SQL \\ \end{tabular} Figure 4: Exact Set Match without Values for multilingual inferences: a) BART-large trained in English dataset standard, inferred in English (baseline); b) mT5-large model trained in English, Portuguese, Spanish and French standard quad dataset, inferred in English; c) mT5-large model trained in English, Portuguese, Spanish and French **FIT quad dataset**, inferred in English; d) mT5-large model trained in English, Portuguese, Spanish and French standard quad dataset, inferred in Portuguese; e) mT5-large model trained in English, Portuguese, Spanish and French **FIT quad dataset**, inferred in Portuguese; f) mT5-large model trained in English, Portuguese, Spanish and French standard quad dataset, inferred in Spanish; g) mT5-large model trained in English, Portuguese, Spanish and French **FIT quad dataset**, inferred in Spanish; h) mT5-large model trained in English, Portuguese, Spanish and French standard quad dataset, inferred in French; i) mT5-large model trained in English,
Portuguese, Spanish and French **FIT quad dataset**, inferred in French. SQL Structured query language S\({}^{2}\)SQL Syntax to question-schema graph encoder for text-to-SQL T5 Text-to-text transfer transformer model

## Declarations

* **Funding** This work was carried out at the Center for Artificial Intelligence (C4AI-USP), with support by the SΓ£o Paulo Research Foundation (FAPESP grant #2019/07665-4) and by the IBM Corporation. The second author is partially supported by Conselho Nacional de Desenvolvimento CientΓ­fico e TecnolΓ³gico (CNPq), grant 312180/2018-7. * **Conflict of interest/Competing interests** The authors have no relevant financial or non-financial interests to disclose. * **Ethics approval and consent to participate** Not applicable. * **Consent for publication** Not applicable. * **Availability of data and materials** [https://github.com/C4AI/gap-text2sql](https://github.com/C4AI/gap-text2sql) * **Code availability** [https://github.com/C4AI/gap-text2sql](https://github.com/C4AI/gap-text2sql) * **Authors' contributions** The two authors made an equivalent contribution to the writing of the paper.
2302.09096
Using the Sun and the Moon as Source masses and the Earth's Rotation as a Modulation to Search for Exotic Spin-Dependent Interactions at Astronomical Distances
Exotic spin-dependent interactions mediated by new light particles led to solutions to several important questions in modern physics. Such interactions involving a scalar coupling $g_S^N$ at one vertex and a pseudo-scalar coupling $g_P^n$ at the polarized neutron vertex can be induced by the exchange of spin-0 bosons, or a vector/axial-vector coupling $g_V^N$/$g_A^N$ at one vertex and an axial-vector coupling $g_A^n$ at the polarized neutron vertex can be induced by the exchange of spin-1 bosons. If such new interactions exist, the Sun and the Moon can induce sidereal variations of effective fields along the direction perpendicular to the Earth's rotation axis. We derived new experimental upper limits on such exotic spin-dependent interactions at astronomical interaction ranges by analyzing existing data from laboratory measurements on the Lorentz and CPT violation. We set the most stringent experimental limits on $g_S^Ng_P^n$ ranging from $\sim 2\times 10^{10}$m to $\sim 10^{14}$m. Previously, the best limit on $g_S^Ng_P^n$ at this range is from astrophysics. The result is the first time laboratory limits surpass the astrophysical ones on the scalar-pseudoscalar type interaction, to our best knowledge. We report new constraints on vector-axial-vector and axial-axial-vector type interaction at the range of astronomical scales. The new limits on vector-axial-vector are improved by as much as $\sim$12 orders of magnitude. We also apply the analysis to the Hari-Dass interactions and obtain corresponding new constraints on the interactions. We discuss the possibilities of using the beam method to further search the interaction involving other particles, such as electrons, muons, etc., based on the same idea.
L. Y. Wu, K. Y. Zhang, M. Peng, J. Gong, H. Yan
2023-02-15T08:42:22Z
http://arxiv.org/abs/2302.09096v3
Using the Sun and the Moon as Source masses and the Earth's Rotation as a Modulation to Search for Exotic Spin-Dependent Interactions at Astronomical Distances ###### Abstract Exotic spin-dependent interactions mediated by new light particles led to solutions to several important questions in modern physics. Interactions between polarized neutrons and unpolarized nucleons proportional to \(g_{S}^{N}g_{P}^{n}\,\mathbf{\sigma}\cdot\hat{r}\), \(g_{V}^{N}g_{A}^{n}\,\mathbf{\sigma}\cdot\mathbf{v}\) and \(g_{A}^{N}g_{A}^{n}\,\mathbf{\sigma}\cdot(\mathbf{v}\times\hat{r})\) are three such examples, where \(\mathbf{\sigma}\) is the spin of the neutron, \(\mathbf{r}\) and \(\mathbf{v}\) are the relative position and relative velocity between the interacting particles, \(g_{S}^{N}/g_{V}^{N}/g_{A}^{N}\) is the scalar/vector/axial-vector coupling constant of the nucleon, and \(g_{P}^{n}/g_{A}^{n}\) is the pseudo-scalar/axial-vector coupling constant of the neutron. Such interactions involving a scalar coupling \(g_{S}^{N}\) at one vertex and a pseudo-scalar coupling \(g_{P}^{n}\) at the polarized neutron vertex can be induced by the exchange of spin-0 bosons, or a vector/axial-vector coupling \(g_{V}^{N}/g_{A}^{N}\) at one vertex and an axial-vector coupling \(g_{A}^{n}\) at the polarized neutron vertex can be induced by the exchange of spin-1 bosons. If such new interactions exist, the Sun and the Moon can both induce sidereal variations of effective fields along the direction perpendicular to the Earth's rotation axis. We derived new experimental upper limits on such exotic spin-dependent interactions at astronomical interaction ranges by analyzing existing data from laboratory measurements on Lorentz and CPT violation. We set the most stringent experimental limits on \(g_{S}^{N}g_{P}^{n}\) ranging from \(\sim 2\times 10^{10}\) m to \(\sim 10^{14}\) m.
Previously, the best limit on \(g_{S}^{N}g_{P}^{n}\) at this range is from astrophysics. The result is the first time laboratory limits surpass the astrophysical ones on the scalar-pseudoscalar type interaction, to our best knowledge. We report new constraints on vector-axial-vector and axial-axial-vector type interaction at the range of astronomical scales. The new limits on vector-axial-vector are improved by as much as \(\sim\)12 orders of magnitude. We also apply the analysis to the Hari-Dass interactions and obtain corresponding new constraints on the interactions. We discuss the possibilities of using the beam method to further search the interaction involving other particles, such as electrons, muons, etc., based on the same idea. ## I Introduction Axions, predicted by the PQ (Peccei-Quinn) mechanism [1; 2; 3], can induce spin-dependent interactions [4]. Axions can have arbitrarily small mass and weak couplings to ordinary matter because the scale at which the PQ symmetry is broken can be arbitrarily large [5]. Thus, axions might mediate interactions in ranges from nanometers to astronomical distance scales. Though the PQ mechanism was originally proposed to solve the strong CP problem, the axions, which are light, weakly interacting, and pseudo-scalar, are also considered possible candidates for cold dark matter. New interactions might also be mediated by vector particles such as the para-photon (dark, hidden, heavy or secluded photon) [6; 7], \(Z^{\prime}\) boson [8], graviphoton [9], etc., or even unparticles [10]. Reference [11] proposed 16 different types of new interactions, 15 of which are spin-dependent. Non-Yukawa-type exotic interactions due to the dark or hidden sector were also proposed[12; 13] recently. As early as 1980, Fayet [14; 15] pointed out that the new U(1) vector bosons with small masses and weak couplings to ordinary matter can be produced by spontaneously breaking of the supersymmetric theories. 
Searching for the new interactions mediated by the new particles is related to the strong CP problem, dark matter, dark energy [16] and finding evidence of supersymmetry, which are among the most important unsolved problems in modern physics. The ALPs (Axion Like Particles), if they exist, can generate a new interaction of the form \(\mathcal{L}_{\phi}=\bar{\psi}(g_{S}+ig_{P}\gamma_{5})\psi\phi\) through a light scalar boson \(\phi\) coupling to a fermion \(\psi\), where \(g_{S}/g_{P}\) is the scalar/pseudo-scalar coupling constant [4]. The interaction between the polarized-neutron probe particle and an unpolarized nucleon can be expressed as \[V_{\text{SP}}=\frac{\hbar^{2}g_{S}^{N}g_{P}^{n}}{8\pi m_{n}}(\frac{1}{\lambda r }+\frac{1}{r^{2}})\text{exp}(-r/\lambda)\mathbf{\sigma}\cdot\hat{r}, \tag{1}\] where \(\lambda=\hbar/m_{\phi}c\) is the interaction range, \(m_{\phi}\) is the mass of the new scalar boson, \(\mathbf{\sigma}\) is the spin operator of the polarized neutron, \(m_{n}\) is the neutron mass, the superscript \(N/n\) denotes nucleon/neutron, and \(r\) is the distance between the two interacting particles. The SP (scalar-pseudoscalar), or monopole-dipole, interaction has begun to attract more attention recently [17; 18; 19; 20; 21; 22; 23]. Recently, Wei et al. [24] proposed a laboratory experiment scheme that could surpass the astrophysical limit for the SP-type interaction. Laboratory limits are getting closer and closer to the limits derived by combining \(g_{S}\) from the torsion balance experiment and \(g_{P}^{n}\) from SN1987A; however, all the present laboratory limits are less stringent than the astrophysical ones, to the best of our knowledge.
VA (vector-axial-vector) and AA (axial-axial-vector) type interaction can be derived from a general Lagrangian \(\mathcal{L}_{X}=X_{\mu}\bar{\psi}(g_{V}\gamma^{\mu}+g_{A}\gamma_{5}\gamma^{\mu} )\psi\) in the non-relativistic limit, where \(X\) is the new vector particles, and \(g_{V}/g_{A}\) is the vector/axial-vector coupling constant. The derived potentials are \[\begin{split} V_{\text{VA}}&=\frac{\hbar g_{V}^{N }g_{A}^{n}}{2\pi}\frac{\exp(-r/\lambda)}{r}\mathbf{\sigma}\cdot\mathbf{v},\\ V_{\text{AA}}&=\frac{\hbar^{2}g_{A}^{N}g_{A}^{n}}{16 \pi m_{n}c}(\frac{1}{\lambda r}+\frac{1}{r^{2}})\text{exp}(-r/\lambda)\mathbf{ \sigma}\cdot(\mathbf{v}\times\hat{r}),\end{split} \tag{2}\] where \(\mathbf{v}\) is relative velocity between the two interacting particles. Many studies have been carried out to look for the new interactions, to detect either the macroscopic forces or the torques exerted on the polarized probe spins. For example, Leslie _et al_. [25] proposed experimental schemes to detect the new spin-dependent interaction between a spin-polarized source and a mechanical oscillator. For another example, Ding _et al_. [26] used a microfabricated magnetic structure as the polarized source, then tried to detect the AA type interaction in a range of \(\sim\mu\)m sensed by a gold-sphere-cantilever. Many groups have been searching for the new interaction through its rotating effects as a pseudo-magnetic field on the polarized spin. The VA and AA interactions between different combinations of fermions have been investigated already, such as electron-nucleon [26; 27; 28; 29], neutron-nucleon [30; 31; 32], electron-electron [33], and electron-antiproton [34]. Studies on these exotic interactions involving muons were performed very recently [35]. However, laboratory searches for the interaction at ranges of astronomical distances are scarce, if not non-existent. 
Since the signal induced by the new interactions is tiny, using a large source and modulating the signal to a higher frequency is crucial for detection. It is easy to understand why we need a large source. By modulating the signal, on the one hand, we can increase the SNR (signal-to-noise ratio) by decreasing the noise bandwidth; on the other hand, we can significantly reduce the \(1/f\) noise by shifting the signal to a higher frequency. In this Letter, by treating the Sun and the Moon as mass sources and the Earth's rotation as a modulation of the spin-dependent interactions, we obtain the most stringent limits on the interactions in Eq. (2) at astronomical ranges.

## II The basic idea

All three types of spin-dependent interactions are of the form \(\mathbf{s}\cdot\mathbf{B}^{\prime}\), where \(\mathbf{B}^{\prime}\) can be viewed as a kind of effective magnetic field. Searching for these spin-dependent interactions thus becomes the problem of probing the effective magnetic field acting on polarized spins. We first illustrate the basic idea using the Sun as the source mass. In the Sun-centered frame shown in Fig.
1 (a), the aforementioned new interactions can generate effective magnetic fields at the Earth's center as [36] \[\begin{split}\mathbf{B}^{\prime}_{\text{SP}}&=\frac{ \hbar g_{S}^{N}g_{P}^{n}N_{\odot}}{4\pi m_{n}\gamma_{n}}(\frac{1}{ \lambda R}+\frac{1}{R^{2}})\text{exp}(-R/\lambda)[\cos{(\Omega_{\oplus}t)} \hat{X}+\sin{(\Omega_{\oplus}t)}\hat{Y}],\\ \mathbf{B}^{\prime}_{\text{VA}}&=\frac{g_{V}^{N}g_{A}^{n }N_{\odot}}{\pi\gamma_{n}}\frac{\text{exp}(-R/\lambda)}{R}[-\Omega_{\oplus}R \sin{(\Omega_{\oplus}t)}\hat{X}+\Omega_{\oplus}R\cos{(\Omega_{\oplus}t)} \hat{Y}],\\ \mathbf{B}^{\prime}_{\text{AA}}&=-\frac{\hbar g_{A}^{N }g_{A}^{n}N_{\odot}}{8\pi m_{n}c\gamma_{n}}(\frac{1}{\lambda R}+\frac{1}{R^{2 }})\text{exp}(-R/\lambda)\Omega_{\oplus}R\hat{Z},\end{split} \tag{3}\] where \(R\) is the distance from the Earth to the Sun, \(\gamma_{n}\) is the gyromagnetic ratio of the neutron, \(\Omega_{\oplus}\) is the Earth's orbital angular frequency, and \(N_{\odot}\) is the total nucleon number of the Sun. We now take into account the Earth's rotation effects. For a laboratory frame on the Earth as shown in Fig. 1 (b), we can realize it by the Euler rotations: we first rotate the frame by the angle \(\omega_{\oplus}t\) about the \(\hat{Z}\) axis, where \(\omega_{\oplus}\) is the Earth's rotation frequency; we then rotate about the \(\hat{Y}\) direction by an angle \(\eta\), which is the Earth's obliquity.
In the laboratory frame, we will observe effective time-varying fields as \[\mathbf{b}_{\rm SP} = \frac{\hbar g_{S}^{N}g_{P}^{n}N_{\odot}}{4\pi m_{n}\gamma_{n}}(\frac {1}{\lambda R}+\frac{1}{R^{2}}){\rm exp}(-R/\lambda)\Bigg{[}\begin{array}{c} \cos\eta\cos\left(\Omega_{\oplus}t\right)\cos\omega_{\oplus}t+\sin\left(\Omega _{\oplus}t\right)\sin\omega_{\oplus}t\\ -\cos\eta\cos\left(\Omega_{\oplus}t\right)\sin\omega_{\oplus}t+\sin\left( \Omega_{\oplus}t\right)\cos\omega_{\oplus}t\\ \sin\eta\cos\left(\Omega_{\oplus}t\right)\end{array}\Bigg{]}, \tag{4}\] \[\mathbf{b}_{\rm VA} = \frac{g_{V}^{N}g_{A}^{n}N_{\odot}}{\pi\gamma_{n}}\frac{\exp(-R/ \lambda)}{R}v_{\oplus}\Bigg{[}\begin{array}{c}-\cos\eta\cos\omega_{\oplus}t \sin\left(\Omega_{\oplus}t\right)+\sin\omega_{\oplus}t\cos\left(\Omega_{\oplus }t\right)\\ \cos\eta\sin\omega_{\oplus}t\sin\left(\Omega_{\oplus}t\right)+\cos\omega_{ \oplus}t\cos\left(\Omega_{\oplus}t\right)\\ -\sin\eta\sin\left(\Omega_{\oplus}t\right)\end{array}\Bigg{]},\] (5) \[\mathbf{b}_{\rm AA} = \frac{\hbar g_{A}^{N}g_{A}^{n}N_{\odot}}{8\pi m_{n}c\gamma_{n}}( \frac{1}{\lambda R}+\frac{1}{R^{2}}){\rm exp}(-R/\lambda)v_{\oplus}\Bigg{[} \begin{array}{c}\sin\eta\cos\omega_{\oplus}t\\ -\sin\eta\sin\omega_{\oplus}t\\ -\cos\eta\end{array}\Bigg{]}, \tag{6}\] where \(v_{\oplus}=\Omega_{\oplus}R\) is the orbital speed of the Earth. As the most straightforward case, \(\mathbf{b}_{\rm AA}\) clearly shows effective magnetic fields rotating in the laboratory frame at the Earth's rotation frequency. Although \(\mathbf{b}_{\rm SP}\) and \(\mathbf{b}_{\rm VA}\) appear more complicated due to their mixture with the Earth's orbital rotation, the situation can be greatly simplified when taking into account the fact that \(\omega_{\oplus}\gg\Omega_{\oplus}\). 
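The simplification can be checked numerically. Freezing the slow orbital phase at \(\Omega_{\oplus}t=\pi/2\) (appropriate near the vernal equinox, since \(\omega_{\oplus}\gg\Omega_{\oplus}\)), the bracketed direction vector of \(\mathbf{b}_{\rm SP}\) in Eq. (4) reduces to a unit vector rotating at the sidereal frequency. This is a verification sketch of that algebraic step, not code from the paper.

```python
import numpy as np

eta = np.deg2rad(23.4)             # Earth's obliquity
Ot = np.pi / 2                     # Omega_Earth * t, frozen near the vernal equinox
wt = np.linspace(0, 2 * np.pi, 9)  # omega_Earth * t over one sidereal day

# Components of the bracketed vector in Eq. (4):
x = np.cos(eta) * np.cos(Ot) * np.cos(wt) + np.sin(Ot) * np.sin(wt)
y = -np.cos(eta) * np.cos(Ot) * np.sin(wt) + np.sin(Ot) * np.cos(wt)
z = np.sin(eta) * np.cos(Ot) * np.ones_like(wt)

# The transverse part reduces to (sin wt, cos wt) with vanishing z:
# a unit vector rotating at the sidereal frequency, as used later for b_SP_perp.
print(np.allclose(x, np.sin(wt)), np.allclose(y, np.cos(wt)), np.allclose(z, 0.0))
```

The same substitution applied to Eqs. (5) and (6) yields the other two transverse fields quoted in the next section.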
In summary, if the exotic spin-dependent interactions exist, the perpendicular components of the effective fields induced by the Sun are modulated by the Earth's rotation; thus, we could observe their signal in the laboratory. Although the modulating frequency is not high, it makes precision measurements to detect these new interactions possible.

## III Constraining the exotic spin-dependent interactions at astronomical distances using existing experimental data

Dual-species co-magnetometers are convenient for detecting the tiny signals caused by these spin-dependent interactions, since the two components occupy the same space and common-mode background field noise can be mostly canceled. In principle, we can separate the sidereal modulated signal from the noisy background during precise measurements. The ultrahigh sensitivity of the co-magnetometer to magnetic field changes has been used extensively in new physics searches, including electric dipole moments (EDMs), CPT and Lorentz violation, spin-gravity interaction, and so on [37; 38; 39; 40]. This co-magnetometer method has been widely used to search for the constant cosmic background field due to Lorentz violation. Similarly, the experiments detect the sidereal variations of the field observed in the laboratory Figure 1: (a) The Sun-centered frame. The relative size of the Sun and the Earth is not to scale. (b) The Earth-based frame. We take \(\hat{z}\) along the Earth's rotation axis. The angle between the ecliptic plane and the Earth's equatorial plane is \(\eta=23.4^{\circ}\). The red arrows represent the directions of the effective fields of the three types of spin-dependent interactions generated by the Sun's nucleons. frame on Earth. Constraints on the components of the constant field perpendicular to the Earth's rotation axis were obtained. Reference [41] used the \({}^{129}\)Xe+\({}^{3}\)He co-magnetometer, while Ref. [42] is based on a K+\({}^{3}\)He one.
Stringent constraints at a similar level were obtained. We found that limits on exotic spin-dependent interactions induced by the Sun can be obtained by using existing Lorentz-violation search results. For example, for the experiment described in Ref. [41], the \(\Omega_{\oplus}t\) in Eqs. (4), (5) and (6) approximately equals \(\pi/2\), taking into account that the experiment had been performed for \(\sim\)10 days at a time when the Earth was around the vernal equinox. The sidereal oscillating effective field \(\mathbf{b}_{\perp}\) perpendicular to the Earth's rotation axis can be detected [36]: \[\mathbf{b}_{\mathrm{SP}\perp}= \frac{\hbar g_{S}^{N}g_{P}^{n}N_{\odot}}{4\pi m_{n}\gamma_{n}}( \frac{1}{\lambda R}+\frac{1}{R^{2}})\exp{(-R/\lambda)}\] \[[\sin{(\omega_{\oplus}t)}\hat{x}+\cos{(\omega_{\oplus}t)}\hat{y}],\] \[\mathbf{b}_{\mathrm{VA}\perp}= -\frac{g_{V}^{N}g_{A}^{n}N_{\odot}}{\pi\gamma_{n}}\frac{\exp(-R/ \lambda)}{R}v_{\oplus}\cos\eta\] \[[-\cos{(\omega_{\oplus}t)}\hat{x}+\sin{(\omega_{\oplus}t)}\hat{y}],\] \[\mathbf{b}_{\mathrm{AA}\perp}= \frac{\hbar g_{A}^{N}g_{A}^{n}N_{\odot}}{8\pi m_{n}c\gamma_{n}}( \frac{1}{\lambda R}+\frac{1}{R^{2}})\exp(-R/\lambda)v_{\oplus}\sin\eta\] \[[\cos{(\omega_{\oplus}t)}\hat{x}-\sin{(\omega_{\oplus}t)}\hat{y}].\] Using the result of Ref. [41], we can derive: \[|\mathbf{b}_{\perp}|<0.023\;\mathrm{fT}\;\;\text{ (95\% C.L.)}. \tag{7}\] Plugging in all the known parameters, such as \(\eta=23.4^{\circ}\), the angle between the Earth's equatorial plane and the ecliptic plane, \(N_{\odot}\approx 1.2\times 10^{57}\), the nucleon number of the Sun, \(R=1.5\times 10^{11}\) m, the orbital radius of the Earth, and \(v_{\oplus}\approx 3.0\times 10^{4}\) m/s, the Earth's orbital speed, we obtain the constraints on the SP-, VA- and AA-type spin-dependent interactions between nucleons and the neutron. The derived constraint on \(|g_{S}^{N}g_{P}^{n}|\) is shown in Fig. 2.
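The quoted limit can be cross-checked with a short back-of-the-envelope script (ours, not the authors' code): inverting the \(\mathbf{b}_{\mathrm{SP}\perp}\) amplitude turns the field limit of Eq. (7) into a bound on \(|g_{S}^{N}g_{P}^{n}|\). With rounded CODATA-level constants it reproduces the \(1.6\times 10^{-35}\) bound at \(\lambda=10^{12}\) m to within about a percent.

```python
import numpy as np

# Invert |b_SP_perp| = (hbar gS gP N_sun / (4 pi m_n gamma_n))
#                      * (1/(lam*R) + 1/R**2) * exp(-R/lam)
# to bound |gS gP| given the field limit of Eq. (7).  SI units throughout.
hbar = 1.0546e-34      # J s
m_n = 1.675e-27        # neutron mass, kg
gamma_n = 1.832e8      # neutron gyromagnetic ratio, rad s^-1 T^-1
N_sun = 1.2e57         # nucleon number of the Sun (quoted in the text)
R = 1.5e11             # Earth orbital radius, m
b_lim = 0.023e-15      # |b_perp| limit, T

def gSgP_bound(lam):
    """Upper bound on |g_S^N g_P^n| for interaction range lam (in m)."""
    radial = (1.0 / (lam * R) + 1.0 / R**2) * np.exp(-R / lam)
    prefactor = hbar * N_sun / (4.0 * np.pi * m_n * gamma_n)
    return b_lim / (prefactor * radial)

bound_1e12 = gSgP_bound(1e12)   # lands near the quoted 1.6e-35
```

The bound weakens quickly for \(\lambda \ll R\) because of the exponential suppression of the radial factor.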
For \(\lambda\gtrsim 2\times 10^{10}\) m, it gives the most stringent limit on \(|g_{S}^{N}g_{P}^{n}|\). For \(\lambda>1\times 10^{12}\) m, our bound is \(|g_{S}^{N}g_{P}^{n}|<1.6\times 10^{-35}\) (95% C.L.). Previously, the most stringent constraints on \(|g_{S}^{N}g_{P}^{n}|\) were astrophysical limits that combined laboratory constraints on \(g_{S}^{N}\) with \(g_{P}^{n}\) from SN1987A. To the best of our knowledge, no laboratory limits have ever surpassed the astrophysical ones. Suppose the constraint derived from the combination of weak equivalence and SN1987A is extended to the long range \(\sim 10^{12}\) m; in that case, our work improves the existing upper bound by as much as \(\sim\)70 times, which would be the first time a laboratory constraint surpasses the astrophysical limits.

Our constraint on \(|g_{V}^{N}g_{A}^{n}|\) is shown in Fig. 3. For \(\lambda\gtrsim 3\times 10^{7}\) m, it gives the most stringent limit on \(|g_{V}^{N}g_{A}^{n}|\). Previously, the most stringent constraint on \(|g_{V}^{N}g_{A}^{n}|\) near the interaction range under consideration was given in Ref. [31]. If the previous result can be extended to the range of \(\sim 10^{12}\) m, the present work improves the existing upper bound by as much as 12 orders of magnitude.

Figure 2: Constraint on the coupling constant product \(|g_{S}^{N}g_{P}^{n}|\) as a function of the interaction range \(\lambda\) (new scalar boson mass). The solid lines are the result of this work; the left line uses the Moon as the source, and the right line uses the Sun. The dashed line is the result of Refs. [43; 44], which was derived by combining \(g_{S}^{N}\) from weak equivalence with \(g_{P}^{n}\) from SN1987A. The dark gray area is excluded by the result of this work and the light gray area is excluded by the result of Refs. [43; 44].

Figure 3: Constraint on the coupling constant product \(|g_{V}^{N}g_{A}^{n}|\) as a function of the interaction range \(\lambda\) (new vector boson mass). The solid lines are the result of this work; the left line uses the Moon as the source, and the right line uses the Sun. The dashed line is the result of Ref. [31]. The dark gray area is excluded by the result of this work and the light gray area is excluded by the result of Ref. [31].

The obtained constraint on \(|g_{A}^{N}g_{A}^{n}|\) is shown in Fig. 4. For \(\lambda\gtrsim 1\times 10^{7}\) m, it gives the most stringent limit on \(|g_{A}^{N}g_{A}^{n}|\). For \(\lambda>1\times 10^{12}\) m, our bound is \(|g_{A}^{N}g_{A}^{n}|<8.1\times 10^{-31}\) (95% C.L.). To the best of our knowledge, it is the only known constraint on \(|g_{A}^{N}g_{A}^{n}|\) at astronomical ranges. We can apply the same analysis method using the Moon as a source. In this case, we shall consider errors due to several systematic effects. We show details of the analysis in the Supplementary Material [36]. We also plot the results using the Moon in Figs. 2, 3, and 4. Furthermore, the spin-gravity interaction proposed by Leitner and Okubo and later generalized by Hari Dass [45, 46] can also be strictly constrained. Assuming CPT invariance, two types of discrete-symmetry-violating spin-gravity interactions are constructed as \[V(r)=\frac{G_{N}M\hbar}{2}(\alpha_{1}\frac{\mathbf{\sigma}\cdot\hat{r}}{cr^{2}}+ \alpha_{2}\frac{\mathbf{\sigma}\cdot\mathbf{v}}{c^{2}r^{2}}), \tag{8}\] where \(G_{N}\) is Newton's constant of gravitation, \(M\) is the mass of the gravity source, \(\alpha_{1}\) is the degree of P and T violation, and \(\alpha_{2}\) is the degree of P and C violation. The spin-gravity interaction (8) emerges from a simple thought about the correlation between a lack of symmetry and weakness of strength. However, it provides a direct way to test symmetry violation and the equivalence principle in General Relativity [46, 47].
These potentials are the starting point of many low-energy experiments [48, 49]. The Sun is an excellent object for symmetry-violating gravity research because of its vast mass, and we can derive limits on \(\alpha_{1}\) and \(\alpha_{2}\): \[\begin{split}|\alpha_{1}|&<2.2\times 10^{2}\ (95\% \ \text{C.L.}),\\ |\alpha_{2}|&<2.4\times 10^{6}\ (95\%\ \text{C.L.}). \end{split} \tag{9}\] When comparing with the results of Ref. [50], our limit on \(\alpha_{1}\) improves the existing result by \(\sim\)11 times, and on \(\alpha_{2}\) we get an improvement of \(\sim\)4 orders of magnitude.

## IV Conclusion and discussion

By using the Sun and the Moon as sources, the Earth's rotation as modulation, and the existing laboratory limits on Lorentz and CPT violation of the neutron, we have constrained three types of possible new neutron-spin-dependent interactions at distances of astronomical scales. We derived new laboratory limits on possible SP-type interactions with ranges from \(\sim 2\times 10^{10}\) m to \(\sim 10^{14}\) m. At the distance of \(\sim 10^{12}\) m, the limit is improved by \(\sim\)70 times compared to the previous astrophysical limit. This result is the first time laboratory limits exceed the astrophysical ones for SP-type interactions. We obtained new experimental limits on the VA-type interaction with ranges from \(\sim 3\times 10^{7}\) m to \(\sim 10^{14}\) m. At the distance of \(\sim 10^{12}\) m, the limit is improved by \(\sim\)12 orders of magnitude in comparison with the previous result of the \({}^{3}\)He spin-relaxation experiment. We derived the first experimental limits on the AA-type new interaction with ranges from \(\sim 1\times 10^{7}\) m to \(\sim 10^{14}\) m. We also constrained the Hari-Dass-type spin-dependent new interactions and obtained new upper bounds on these types of new interactions. How could we extend the present work to other particles, such as electrons and muons? One such possibility is to apply the beam method proposed in Refs.
[51, 35]: using superconducting magnetic shielding to create a zero-background-field region, then flying spin-polarized particles such as muon beams through it and detecting the sidereal variations of the polarization along the direction perpendicular to the Earth's rotation axis. In this way, we could probe the spin-dependent new interactions caused by the Sun for particles such as muons, electrons, etc. Details of such experimental schemes are in progress.

###### Acknowledgements.

We acknowledge support from the National Natural Science Foundation of China under grants U2230207 and U2030209. This work was also supported by the National Key Program for Research and Development of China under grants 2020YFA0406001 and 2020YFA0406002. We thank Dr. C. Fu and Y. M. Ma for their helpful discussions.

Figure 4: Constraint on the coupling constant product \(|g_{A}^{N}g_{A}^{n}|\) as a function of the interaction range \(\lambda\) (new vector boson mass). The solid lines are the result of this work; the left line uses the Moon as the source, and the right line uses the Sun. The dark gray area is excluded by the result of this work.
2306.04045
Functional renormalization group study of neutral and charged pion under magnetic fields in the quark-meson model
We calculated the masses of neutral and charged pion and pion decay constants under an extra magnetic field at zero temperature. The quantum fluctuations are integrated through the functional renormalization group. We consider the quark and meson propagators in the Landau level representation and weak-field expansion, respectively. The neutral pion mass monotonically decreases with the magnetic field, while the charged pion mass monotonically increases with the magnetic field. The pion decay constant and the quark mass show the magnetic catalysis behavior at vanishing temperature. The neutral pion mass and pion decay constant are quantitatively in agreement with the lattice QCD results in the region of $eB < 1.2 {\rm GeV}^2$, and no non-monotonic mass behavior for charged pion has been observed in this framework.
Rui Wen, Shi Yin, Wei-jie Fu, Mei Huang
2023-06-06T22:21:33Z
http://arxiv.org/abs/2306.04045v1
Functional renormalization group study of neutral and charged pion under magnetic fields in the quark-meson model

###### Abstract

We calculated the masses of the neutral and charged pions and the pion decay constants under an external magnetic field at zero temperature. The quantum fluctuations are integrated through the functional renormalization group. We treat the quark and meson propagators in the Landau-level representation and the weak-field expansion, respectively. The neutral pion mass monotonically decreases with the magnetic field, while the charged pion mass monotonically increases with it. The pion decay constant and the quark mass show magnetic catalysis behavior at vanishing temperature. The neutral pion mass and the pion decay constant are quantitatively in agreement with the lattice QCD results in the region \(eB<1.2\) GeV\({}^{2}\), and no non-monotonic mass behavior for the charged pion has been observed in this framework.

## I Introduction

Studying Quantum Chromodynamics (QCD) matter under strong external magnetic and vortical fields has attracted much attention in recent years. Relativistic heavy-ion collisions provide a platform to study QCD matter under extreme conditions in the laboratory. In non-central heavy-ion collisions, the collision of two high-speed nuclei moving in opposite directions can create strong magnetic fields of order \(\sim 10^{18}\) Gauss [1; 2]. Strong magnetic fields also exist in the early universe and in magnetars [3; 4; 5]. Understanding strongly interacting matter in background magnetic fields requires a combination of the QCD and QED theories, which has brought about plenty of novel phenomena of magnetized quark matter, such as the chiral magnetic effect (CME) [6; 7], magnetic catalysis (MC) [8; 9; 10], inverse magnetic catalysis (IMC) [11; 12; 13], and diamagnetism at low temperature together with paramagnetism at high temperature [14].
These rich phenomena have attracted theoretical investigations in lattice Monte-Carlo simulations [15; 16; 17; 18; 19; 20; 21], as well as model calculations, such as the Nambu-Jona-Lasinio (NJL) model [22; 23; 24; 25; 26; 27; 28], the quark-meson (QM) model [29; 30; 31] and AdS/QCD [32; 33], within the mean-field approximation or functional methods [34; 35; 36; 37; 38]; see e.g., [39; 40; 41; 42] for reviews. It is also valuable to study the meson spectrum of QCD under magnetic fields, which plays an important role in the understanding of the rich phenomena mentioned above. It is believed that the neutral pion is helpful to explain the inverse magnetic catalysis [43; 44], and the charged pions can explain the diamagnetism around the pseudo-critical temperature [38]. The meson spectra have been widely studied in lattice QCD and effective models [45; 46; 47; 48; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51]. A recent lattice calculation in [20] showed that at zero temperature, the mass of the neutral \(\pi\) meson decreases monotonically with the magnetic field, while that of the charged pions shows a non-monotonic behavior. Some efforts have been made to understand the pion mass behavior under magnetic field in low-energy effective models [52; 53; 54; 55; 56]. However, the mass behaviors of the neutral and charged pions under magnetic field have not been explained simultaneously. Besides, the lattice and effective model calculations have also been extended to finite temperatures, see e.g., [57; 58; 59; 60; 21]. In this work, we employ the quark-meson model, also called the linear sigma model coupled to quarks (LSMq) [29; 61], to calculate the meson masses and decay constants under a magnetic field. This model is widely used to study QCD phase diagrams [62; 63], the Equation of State (EoS) [64; 65], as well as the fluctuations of conserved charges [66; 67].
Note that it can be transformed from the NJL model through a Hubbard-Stratonovich transformation [68; 69]. The results of the mean-field approximation of the QM model coincide with those for point-like particles. In this work, we include the quantum fluctuations through the functional renormalization group (FRG) approach [70; 71], a functional continuum field approach. This paper is organized as follows. In Section II, we introduce the low-energy effective theory, i.e. the 2-flavor quark-meson model. In Section III, the choice of the regulator and the propagators under a magnetic field are discussed, and the flow equations are presented. In Section IV, we show the numerical results of our calculation, including the meson masses, quark masses and decay constants as functions of the strength of the magnetic field. In Appendix A, we give the vertices of the 2-flavor quark-meson model. In Appendix B, the threshold functions of the flow equations are given.

## II Low energy effective theory

At a high renormalization group (RG) scale, the first-principles QCD system only includes the degrees of freedom of quarks and gluons. As the RG scale decreases, due to the finite mass gap, the gluons decouple from the system and their dynamics are integrated out, leaving a gluonic background field and its potential. Consequently, composite degrees of freedom, e.g., mesons and baryons, emerge naturally from the dynamics of the elementary degrees of freedom, see, e.g., [72; 73; 74]. The degrees of freedom of the QCD system are thus transformed into those of quarks and hadrons, which can be described by low-energy effective models, such as the QM model and the NJL model.
The effective action of the two-flavor quark-meson model in Euclidean space reads [75] \[\Gamma_{k}= \int_{x}\bar{q}\gamma_{\mu}(\partial_{\mu}-iQA_{\mu})q+\text{Tr}( D_{\mu}\phi\cdot D_{\mu}\phi^{\dagger})\] \[+h\bar{q}(T^{0}\sigma+i\gamma_{5}\vec{T}\cdot\vec{\pi})q+V_{k}( \rho)-c\sigma, \tag{1}\] with \(\int_{x}=\int d^{4}x\), \(Q=\mathrm{diag}(2/3,-1/3)e\) and \(q=(u,d)^{T}\). Here, \(\phi\) denotes the meson fields: \[\phi=T^{0}\sigma+\vec{T}\cdot\vec{\pi}=\frac{1}{2}\begin{pmatrix}\sigma+\pi^ {0}&\sqrt{2}\pi^{+}\\ \sqrt{2}\pi^{-}&\sigma-\pi^{0}\end{pmatrix}\,. \tag{2}\] In Equation (1), the potential \(V(\rho)\) is chiral symmetric with \(\rho\equiv\text{Tr}[\phi^{\dagger}\phi]=\frac{1}{2}(\sigma^{2}+\vec{\pi}^{2})\), and \(c\sigma\) is the linear sigma term, which explicitly breaks the chiral symmetry and accounts for the pion masses. The covariant derivative of the meson fields reads \[D_{\mu}\phi=\partial_{\mu}\phi-iA_{\mu}[Q,\phi]. \tag{3}\] Without loss of generality, a homogeneous magnetic field of strength \(B\) is assumed along the \(z\)-direction and the Landau gauge is adopted, i.e. \(A_{\mu}=(0,0,xB,0)\). For convenience, we define \(p_{\perp}=(p_{1},p_{2})\) and \(p_{\parallel}=(p_{0},p_{3})\). The curvature masses are defined as the two-point correlation functions at vanishing external momentum, \[m^{2}_{\phi,\text{cur}}=\Gamma^{(2)}_{\phi\phi}(p_{0}=0,\vec{p}=0), \tag{4}\] and for the \(\pi\) and \(\sigma\) mesons, they are given as \[m^{2}_{\pi}=V^{\prime}(\rho),\quad m^{2}_{\sigma}=V^{\prime}(\rho)+2\rho V^{ \prime\prime}(\rho). \tag{5}\] The light quark mass is \[m_{q}=\frac{1}{2}h\sigma_{0}. \tag{6}\] Here \(\sigma_{0}\) is the vacuum expectation value of the sigma meson field, located at the minimum of the effective potential. The pion decay constant is also related to the vacuum expectation value via: \[f_{\pi}=\sigma_{0}.
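The curvature-mass relations of Eq. (5) are easy to verify symbolically. The sketch below (ours) uses the quartic ansatz \(V=\lambda_{1}\rho+\tfrac{\lambda_{2}}{2}\rho^{2}\), the same form as the UV initial condition used later in the text, with purely symbolic coefficients.

```python
import sympy as sp

# Symbolic check of Eq. (5): m_pi^2 = V'(rho), m_sigma^2 = V' + 2 rho V''.
# The quartic ansatz below mirrors the UV initial condition of the text;
# lambda_1, lambda_2 are symbols here, not fitted values.
rho, l1, l2 = sp.symbols('rho lambda_1 lambda_2', positive=True)
V = l1 * rho + sp.Rational(1, 2) * l2 * rho**2

m_pi_sq = sp.diff(V, rho)                                    # V'(rho)
m_sigma_sq = sp.diff(V, rho) + 2 * rho * sp.diff(V, rho, 2)  # V' + 2 rho V''

# The sigma-pion splitting comes entirely from V''(rho):
splitting = sp.simplify(m_sigma_sq - m_pi_sq)                # 2*rho*lambda_2
```

At the minimum of the full potential (including the \(-c\sigma\) term) the splitting \(2\rho V''\) is what lifts the sigma above the pion.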
\tag{7}\] In this work, we employ the local potential approximation (LPA), which is the leading order of the derivative expansion. In other words, we neglect the mesonic and quark wave-function renormalizations and the running of the Yukawa coupling. See [30] for a relevant discussion, where magnetic-field-dependent wave-function renormalizations beyond the LPA are investigated in the one-flavor case within the FRG approach.

## III Flow equations and regulators

The evolution of the effective action with the RG scale is described by the Wetterich equation [76], where an infrared (IR) cutoff scale \(k\), i.e., the RG scale, is used to suppress quantum fluctuations of momenta below that scale. Starting from a high ultraviolet (UV) scale, say \(\Lambda_{\text{UV}}\), with the classical action as the initial condition, one is able to integrate in quantum fluctuations of different modes successively by evolving the RG scale \(k\) from the UV to the IR. The Wetterich equation for the effective action in Equation (1) reads: \[\partial_{t}\Gamma_{k}=\frac{1}{2}\text{Tr}[G^{\phi}_{k}(p)\partial_{t}R^{B}_{ k}]-\text{Tr}[G^{q}_{k}(p)\partial_{t}R^{F}_{k}]. \tag{8}\] Here \(R_{k}\) denotes the regulators and \(G^{\phi/q}_{k}(p)\) are the scale-dependent propagators of mesons and quarks. In the vacuum, the effective action satisfies the \(O(4)\) space-time symmetry. When we consider an external magnetic field, the directions perpendicular (transverse) and parallel (longitudinal) to the magnetic field split. Obviously, the action stays invariant in the temporal and \(z\) directions at zero temperature. A commonly used \(3d\) regulator for the spatial momenta breaks the \(O(4)\) symmetry in the vacuum [77], while a regularization of the transverse momenta would give rise to non-physical artifacts [78].
Therefore, in this work we adopt \(2d\) regulators which regularize the temporal and longitudinal momenta, as follows \[R^{B}_{k} =p^{2}_{\parallel}r_{B}(p^{2}_{\parallel}/k^{2}),\] \[R^{F}_{k} =ip_{\parallel}\cdot\gamma_{\parallel}r_{F}(p^{2}_{\parallel}/k^{2}), \tag{9}\] with \(p^{2}_{\parallel}=p^{2}_{0}+p^{2}_{3}\) and the shape functions \[r_{B}(x) =\bigg{(}\frac{1}{x}-1\bigg{)}\Theta(1-x)\] \[r_{F}(x) =\bigg{(}\frac{1}{\sqrt{x}}-1\bigg{)}\Theta(1-x). \tag{10}\] Here \(\Theta(x)\) is the Heaviside step function. Notably, the absence of regularization on the transverse momenta leads to a divergence in the flow equation of the potential \(V_{k}\). Fortunately, the two-point correlation functions stay finite [30]. The summation over Landau levels can be calculated through the Hurwitz \(\zeta\)-function [40] \[\zeta(s,q)=\sum_{n}\frac{1}{(q+n)^{s}}. \tag{11}\] In this work, we use a transverse momentum cutoff \(\Lambda_{\perp}=5\) GeV to calculate the \(u\)-\(d\) quark mixed threshold functions. We have checked that our results show no obvious dependence on this choice.

Figure 1: Feynman diagrams of the flow equations for the effective potential (upper) and the mesonic two-point correlation functions (lower). The solid lines and dashed lines denote the quark and meson propagators, respectively. The crossed circles denote the infrared regulators, as shown in Equation (9).

### Propagators and flow equations

The quark propagator in magnetic fields in the Schwinger scheme reads \[G(x,y)=e^{i\Phi(x_{\perp},y_{\perp})}\int\frac{d^{4}p}{(2\pi)^{4}}e^{-ip(x-y)} \tilde{G}(p), \tag{12}\] where the prefactor \(\Phi(x_{\perp},y_{\perp})=s_{\perp}(x^{1}+y^{1})(x^{2}-y^{2})|q_{f}B|/2\), with \(s_{\perp}\equiv\text{sign}(q_{f}B)\), is the Schwinger phase [79], which breaks translational invariance. In this work, we ignore the Schwinger phase of the propagators under magnetic fields; see, e.g., [25; 80] for more discussions on the Schwinger phase within the Ritus scheme.
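The Landau-level summation of Eq. (11) can be sanity-checked numerically; the sketch below (ours, with purely illustrative values of \(s\) and \(q\)) compares a brute-force truncated sum with SciPy's Hurwitz zeta function.

```python
import numpy as np
from scipy.special import zeta

# Eq. (11): sum_n 1/(q + n)^s is the Hurwitz zeta function zeta(s, q),
# which SciPy evaluates in closed form for s > 1.  In the physical sums,
# q involves (m_f^2 + p_par^2)/(2|q_f B|); the values here are illustrative.
s, q = 2.0, 0.7
truncated = sum(1.0 / (q + n)**s for n in range(200_000))  # brute-force sum
reference = zeta(s, q)                                     # Hurwitz zeta
```

The truncated sum undershoots the closed form by the (positive) tail, which shrinks like \(1/N\) for \(s=2\); the zeta representation avoids this slow convergence entirely.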
Recently, it has been found that the Schwinger phase can be neglected when the meson masses are calculated [50]. The translationally invariant part of the quark propagator in the representation of Landau levels in the Euclidean space with the regulator reads [30; 40]: \[\tilde{G}_{k}^{q}(p)=\exp(-\frac{p_{\perp}^{2}}{|q_{f}B|})\sum_{n=0}^{\infty} \frac{(-1)^{n}D_{n}(p_{\parallel,R_{F}},p_{\perp})}{p_{\parallel,R_{F}}^{2}+2 n|q_{f}B|+m_{f}^{2}}\,, \tag{13}\] with \(p_{\parallel,R_{F}}\equiv p_{\parallel}(1+r_{F})\) and \[D_{n}(p_{\parallel},p_{\perp})\] \[= (-i\gamma_{\parallel}p_{\parallel}+m_{f})\Bigg{[}(1+i\gamma_{1} \gamma_{2}s_{\perp})\mathcal{L}_{n}\bigg{(}\frac{2p_{\perp}^{2}}{|q_{f}B|} \bigg{)}\] \[-(1-i\gamma_{1}\gamma_{2}s_{\perp})\mathcal{L}_{n-1}\bigg{(} \frac{2p_{\perp}^{2}}{|q_{f}B|}\bigg{)}\Bigg{]}\] \[+4i\gamma_{\perp}p_{\perp}\mathcal{L}_{n-1}^{1}\bigg{(}\frac{2p_ {\perp}^{2}}{|q_{f}B|}\bigg{)}. \tag{14}\] Here \(\mathcal{L}_{n}^{a}(x)\) are the generalized Laguerre polynomials with \(\mathcal{L}_{-1}^{a}(x)=0\). Similarly, the translationally invariant part of the scale-dependent meson propagator reads \[\tilde{G}_{k}^{\phi}(p)= 2\exp(-\frac{p_{\perp}^{2}}{|q_{\phi}B|})\] \[\times\sum_{n=0}^{\infty}\frac{(-1)^{n}\mathcal{L}_{n}\big{(}\frac {2p_{\perp}^{2}}{|q_{\phi}B|}\big{)}}{p_{\parallel,R_{B}}^{2}+(2n+1)|q_{\phi}B |+m_{\phi}^{2}}. \tag{15}\] with \(p_{\parallel,R_{B}}\equiv p_{\parallel}(1+r_{B})^{\frac{1}{2}}\). With the aforementioned setup, one is led to the flow equations of the effective potential: \[\partial_{t}V_{k}= \frac{1}{2}\big{[}l_{B}(m_{\sigma})+l_{B}(m_{\pi^{0}})+2\,l_{B}(m _{\pi^{\pm}})\big{]}\] \[-4N_{c}\big{[}l_{F}(m_{f},q_{u})+l_{F}(m_{f},q_{d})\big{]}. \tag{16}\] The relevant Feynman diagrams are presented in the first line of Figure 1. Here \(l_{B},l_{F}\) are threshold functions given in Appendix B. 
By taking the second derivative of Equation (8) with respect to the pion fields, one arrives at the flow equation of the two-point correlation function of the neutral pion as follows \[\partial_{t}\Gamma_{\pi^{0}\pi^{0},k}^{(2)}= \frac{1}{2}\big{[}V_{2\pi^{0}2\sigma}\mathcal{J}_{B}(\sigma)+V_{4 \pi^{0}}\mathcal{J}_{B}(\pi^{0})\] \[+2V_{2\pi^{0}2\pi^{\pm}}\mathcal{J}_{B}(\pi^{\pm})\big{]}-V_{2\pi ^{0}\sigma}^{2}\mathcal{J}_{2B}(\pi^{0},\sigma)\] \[+V_{\bar{u}u\pi^{0}}^{2}\mathcal{J}_{F}(u)+V_{\bar{d}d\pi^{0}}^{2} \mathcal{J}_{F}(d), \tag{17}\] and the flow equation of the two-point correlation function of the charged pions, \[\partial_{t}\Gamma_{\pi^{\pm}\pi^{\pm},k}^{(2)}= \frac{1}{2}\big{[}V_{2\pi^{\pm}2\sigma}\mathcal{J}_{B}(\sigma)+V_ {2\pi^{0}2\pi^{\pm}}\mathcal{J}_{B}(\pi^{0})\] \[+2V_{4\pi^{+}}\mathcal{J}_{B}(\pi^{\pm})\big{]}-V_{2\pi^{\pm} \sigma}^{2}\mathcal{J}_{2B}(\pi^{\pm},\sigma)\] \[+V_{\bar{d}u\pi^{\pm}}^{2}\mathcal{J}_{2F}(u,d). \tag{18}\] Here the \(V_{[\dots]}\) denote different vertices listed in Appendix A, and \(\mathcal{J}_{B},\mathcal{J}_{2B},\mathcal{J}_{F},\mathcal{J}_{2F}\) are threshold functions, which are defined in Appendix B. The corresponding Feynman diagrams are shown in the second line of Figure 1. It can be readily verified that the neutral pion flow equation (17) coincides with the flow equation of the first-order derivative of the potential, i.e., \[\partial_{t}\Gamma_{\pi^{0}\pi^{0},k}^{(2)}=\partial_{t}V_{k}^{\prime}(\rho). \tag{19}\]

### Weak-field expansion

The number of Landau levels to be summed increases significantly in the region of small magnetic fields. We do the computation in this region by utilizing the weak-field expansion method.
The weak-field expansion for the quark propagator in the Euclidean space reads [81; 82] \[\tilde{G}_{k}^{q}(p)\] \[= \frac{-ip_{\mu,R_{F}}\gamma_{\mu}+m_{f}}{p_{\mathrm{R}_{F}}^{2}+m_ {f}^{2}}+i\frac{\gamma_{1}\gamma_{2}(m_{f}-i\gamma_{\parallel}p_{\parallel,R_{F} })}{(p_{R_{F}}^{2}+m_{f}^{2})^{2}}q_{f}B\] \[+2\frac{p_{\perp}^{2}(m_{f}-i\gamma_{\parallel}p_{\parallel,R_{F} })+i\gamma_{\perp}p_{\perp}(m_{f}^{2}+p_{\parallel,R_{F}}^{2})}{(p_{R_{F}}^{2}+m_ {f}^{2})^{4}}(q_{f}B)^{2}\] \[+\mathcal{O}(q_{f}B)^{3}. \tag{20}\] Thus, one arrives at the quark loop function for the two-point correlation function of charged pions, as follows \[\mathcal{J}_{2F}(u,d)=-\frac{k^{4}N_{c}}{2\pi^{2}}\bigg{[}\frac{ \Lambda_{\perp}^{2}}{(k^{2}+m_{f}^{2})(k^{2}+m_{f}^{2}+\Lambda_{\perp}^{2})}\] \[+\Big{(}\frac{1}{4(k^{2}+m_{f}^{2})^{3}}+\frac{5k^{2}+5m_{f}^{2}+ 8\Lambda_{\perp}^{2}}{12(k^{2}+m_{f}^{2}+\Lambda_{\perp}^{2})^{4}}\Big{)}(q_{ u}B)(q_{d}B)\bigg{]}\] \[+\mathcal{O}(B)^{4}. \tag{21}\] In the same way, the quark loops for the two-point correlation function of neutral pions read \[\mathcal{J}_{2F}(q_{f})=-\frac{k^{4}N_{c}}{2\pi^{2}}\bigg{[}\frac {\Lambda_{\perp}^{2}}{(k^{2}+m_{f}^{2})(k^{2}+m_{f}^{2}+\Lambda_{\perp}^{2})}\] \[+\Big{(}\frac{1}{4(k^{2}+m_{f}^{2})^{3}}+\frac{5k^{2}+5m_{f}^{2}+ 8\Lambda_{\perp}^{2}}{12(k^{2}+m_{f}^{2}+\Lambda_{\perp}^{2})^{4}}\Big{)}(q_{ f}B)^{2}\bigg{]}\] \[+\mathcal{O}(B)^{4}. \tag{22}\] The weak-field expansion for the meson propagator reads [29; 83] \[\tilde{G}_{k}^{\phi}(p)= \frac{1}{p_{R_{B}}^{2}+m_{\phi}^{2}}+\frac{p_{\perp}^{2}-p_{\|,R_ {B}}^{2}-m_{\phi}^{2}}{(p_{R_{B}}^{2}+m_{\phi}^{2})^{4}}(q_{\phi}B)^{2}\] \[+\mathcal{O}(q_{\phi}B)^{4}. \tag{23}\] Then the weak-field expansions of the charged pion loop function \(\mathcal{J}_{B}(\pi^{\pm})\) and the pion-sigma loop function \(\mathcal{J}_{2B}(\pi^{\pm},\sigma)\) can be readily obtained, and their explicit expressions are listed in Appendix B. 
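For orientation, Eq. (21) is straightforward to evaluate numerically. The sketch below (ours; illustrative values of \(k\), \(m_{f}\) and \(eB\) in GeV units, with the transverse cutoff \(\Lambda_{\perp}=5\) GeV used in the text) shows that the \(\mathcal{O}(B^{2})\) term, proportional to \((q_{u}B)(q_{d}B)<0\), reduces the magnitude of the (negative) loop.

```python
import numpy as np

# Weak-field u-d quark loop of Eq. (21), truncated at O(B^2).  Units: GeV.
Nc = 3
Lp = 5.0                       # Lambda_perp [GeV], value used in the text

def J2F_ud(k, mf, qu_B, qd_B):
    a = k**2 + mf**2
    b = a + Lp**2
    B0 = Lp**2 / (a * b)                                   # B-independent part
    B2 = (1.0 / (4 * a**3) + (5 * a + 8 * Lp**2) / (12 * b**4)) * qu_B * qd_B
    return -(k**4 * Nc) / (2 * np.pi**2) * (B0 + B2)

# q_u and q_d have opposite signs, so qu_B*qd_B < 0: the O(B^2) piece
# lowers the flow and pushes the charged-pion mass above the point-like value.
eB = 0.1                       # illustrative eB [GeV^2]
with_B = J2F_ud(0.5, 0.3, 2.0 / 3.0 * eB, -1.0 / 3.0 * eB)
without = J2F_ud(0.5, 0.3, 0.0, 0.0)
```

For the neutral pion the analogous factor is \((q_{f}B)^{2}>0\) (Eq. (22)), so the correction has the opposite sign, matching the opposite mass trends discussed below.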
We find that the quark loops, shown as the last diagram in the second line of Figure 1, play the dominant role for the pion two-point correlation functions. When \(\Lambda_{\perp}\to\infty\), the leading term in \(B\) reads \(q_{f_{1}}q_{f_{2}}B^{2}/(4(k^{2}+m_{f}^{2})^{3})\). For the charged pion, the signs of \(q_{u},q_{d}\) are opposite, so this term makes a negative contribution to the flow equation, implying that the contribution of quantum fluctuations to the charged pion mass is positive, which results in a larger mass for charged pions in the FRG than the point-like mass. On the contrary, for the neutral pion, \(q_{u}^{2}\) and \(q_{d}^{2}\) are positive. Consequently, the flow is increased and the mass of the neutral pion is decreased in comparison to that in the vacuum.

## IV Numerical results

In this work, we solved the flow equation of the effective potential by employing the Taylor expansion method around a fixed expansion point, i.e. \[V_{k}(\rho)=\sum_{n}^{N_{v}}\frac{\lambda_{n,k}}{n!}(\rho-\kappa)^{n}. \tag{24}\] Here \(\kappa\) denotes the expansion point, located at the minimum of the effective potential at \(k=0\). We choose the maximal order of the Taylor expansion to be \(N_{v}=5\); for more discussions on the convergence of the Taylor expansion see [77; 84]. We have also checked the physical-point expansion method, in which the expansion point is the minimum of the effective potential at every value of the RG scale \(k\). We find that the two methods coincide with each other and produce consistent results. The UV cutoff is chosen to be \(\Lambda_{\rm UV}=700\) MeV, where the initial condition of the effective potential reads \[V_{\rm UV}(\rho)=\lambda_{1}\rho+\frac{\lambda_{2}}{2}\rho^{2}. \tag{25}\] Here, the parameters of the initial conditions and the corresponding physical observables at \(B=0\) are listed in Table 1. In order to compare with the lattice QCD results, \(m_{\pi}=220\) MeV and \(m_{\pi}=416\) MeV are chosen.
Note that if not mentioned explicitly, most of the results are calculated with \(m_{\pi}=220\) MeV. \begin{table} \begin{tabular}{c c c c|c c c c} \hline \hline \(\lambda_{1}[\)MeV\(]^{2}\) & \(\lambda_{2}\) & \(h\) & \(c\) [MeV\(]^{4}\) & \(m_{\pi}\) [MeV] & \(m_{\sigma}\) [MeV] & \(m_{q}\) [MeV] & \(f_{\pi}\) [MeV] \\ \hline \((740)^{2}\) & -5.0 & 6.4 & \(4.5\times 10^{6}\) & 220 & 475 & 295 & 92 \\ \((775)^{2}\) & 6.0 & 6.4 & \(1.6\times 10^{7}\) & 416 & 675 & 295 & 92 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters for the initial conditions in Equations (1) and (25) and corresponding physical observables at \(B=0\). If not mentioned explicitly, most of the results are calculated with the parameters in the first line with \(m_{\pi}=220\) MeV. Figure 2: Neutral pion mass \(m_{\pi^{0}}\) as a function of the strength of magnetic field. The lattice QCD results are taken from Ref [20]. In Figure 2, we show the neutral pion mass \(m_{\pi^{0}}\) as a function of the strength of magnetic field in comparison to the Lattice QCD results [20]. In the region of small magnetic fields with \(eB<0.05{\rm[GeV^{2}]}\), we utilize the weak-field expansion method, while in other regions calculations are done through summation of the Landau levels. Our results are qualitatively or even quantitatively in agreement with the lattice results. If the neutral pion is regarded as a point particle, its masses will not change under magnetic fields. Due to the inner structure of the neutral pion, i.e. \(\bar{u}u\) or \(\bar{d}d\), the neutral pion mass decreases with the magnetic field, as discussed in Section III.2. The neutral pion mass decreases monotonically with the increase of magnetic fields, and the rate of decrease is gradually reduced. Finally, it tends to saturate in large magnetic fields. The charged pion mass \(m_{\pi^{\pm}}\) is defined as the lowest energy of quantum states for the charged pion [25], i.e. \(m_{\pi\pm}(B)=E_{\pi^{\pm}}|_{p_{x}=0,n=0}\). 
For the point-particle picture of the charged pion, the mass is given by \(m_{\pi^{\pm}}(B)=\sqrt{m_{\pi^{\pm}}^{2}(B=0)+eB}\). According to the definition, we need to calculate the two-point correlation function \(\Gamma_{\pi^{\pm}\pi^{\pm}}^{(2)}(p_{\parallel}=0,p_{\perp}=|eB|)\). Note, however, that it is challenging to integrate the loop functions \(\mathcal{J}_{2F}(u,d)\) and \(\mathcal{J}_{2B}(\pi^{\pm},\sigma)\) at finite external momenta. In our calculation, we use the approximation \[m_{\pi^{\pm}}(B)=\sqrt{\Gamma_{\pi^{\pm}\pi^{\pm}}^{(2)}(p_{\parallel}=0,p_{ \perp}=0)+eB}. \tag{26}\] We have also calculated \(\Gamma_{\pi^{\pm}\pi^{\pm}}^{(2)}(p_{\parallel}=0,p_{\perp}=|eB|)\) at very large magnetic fields and find that both results are consistent with each other. In the left panel of Figure 3, we plot the charged pion masses as functions of the strength of the magnetic field with \(m_{\pi}(B=0)=220\) MeV. In order to compare with the lattice QCD results [20], where the computation is done with \(m_{\pi}(B=0)\sim 220\) MeV, we use the lattice results for \(m_{\pi^{\pm}}^{2}(B)-m_{\pi}^{2}(B=0)\) and construct \[m_{\pi^{\pm}}(B)=\sqrt{m_{\pi^{\pm}}^{2}(B)-m_{\pi}^{2}(B=0)+(220\,{\rm MeV})^{2}}, \tag{27}\] to be compared with the FRG calculations. In the right panel of Figure 3, we use the initial conditions in the second line of Table 1, corresponding to \(m_{\pi}(B=0)=416\) MeV, and compare the normalized charged pion mass \(m_{\pi^{\pm}}(B)/m_{\pi}(0)\) with the lattice results at the same vacuum pion mass [17]. The charged pion masses in our calculation increase monotonically with the magnetic field. Our results are larger than the point-like charged pion masses and in agreement with the lattice QCD results in [17]. Similar results are also reported in NJL calculations [25; 53]. However, in the lattice calculations of [20], the charged pion masses are smaller than the point-like results and exhibit non-monotonic behavior.
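The two mass formulas above can be compared directly; the following sketch (ours) contrasts the point-like lowest-Landau-level mass with the approximation of Eq. (26) for an illustrative \(\Gamma^{(2)}\) value, which in the actual calculation comes from integrating the flow down to \(k\to 0\).

```python
import numpy as np

# Point-like charged-pion mass vs the Eq. (26) approximation.  Units: GeV.
m_pi0 = 0.220                                   # vacuum pion mass

def m_point(eB):
    """Point-like charged pion: lowest-Landau-level energy."""
    return np.sqrt(m_pi0**2 + eB)

def m_frg(gamma2, eB):
    """Eq. (26): m_pi_pm(B) = sqrt(Gamma^(2)(p=0) + eB)."""
    return np.sqrt(gamma2 + eB)

# If fluctuations raise Gamma^(2) above m_pi0^2 (the behavior found in this
# framework for charged pions), the FRG curve lies above the point-like one.
eB = 0.5                 # illustrative eB [GeV^2]
m_pt = m_point(eB)
m_fr = m_frg(0.06, eB)   # 0.06 GeV^2 > m_pi0^2 = 0.0484 GeV^2, illustrative
```

The non-monotonic lattice curve of [20], by contrast, would require \(\Gamma^{(2)}\) to drop below \(m_{\pi^{0}}^{2}\) at intermediate fields.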
This means our results receive an opposite contribution from the quantum fluctuations compared to the lattice QCD result in [20]. The main contribution of the quantum fluctuations comes from the \(u\)-\(d\) quark loop. As discussed in the last paragraph of Section III.2, the leading-order magnetic-field-dependent quantum fluctuation of the charged pion is opposite to that of the neutral pion, which could lead to neutral pion masses smaller than the point-like results and charged pion masses larger than the point-like results in the region of weak magnetic fields, as shown in the inlay in the left panel of Figure 3. The calculation results in the Landau level representation coincide with those of the weak-field expansion. On the one hand, the discrepancy between the charged pion mass obtained in our calculations and that in the lattice simulations of [20] probably arises from the approximations used in our calculations, such as the neglect of the magnetic dependence of the Yukawa couplings and of the wave function renormalizations. Our calculation is based on an effective model, which only contains the scalar and pseudoscalar channels; other tensor-structure channels and gluon dynamics are not taken into account [54]. On the other hand, the opposite quantum contribution could come from the lattice QCD calculation itself. Notably, the lattice cutoff in [20] is \(a\simeq 0.117\) fm and no continuum limit is taken, while in [17] continuum-limit results are obtained but the pion masses are much heavier than the physical value. Therefore, more detailed calculations and studies are required for both lattice QCD and effective theories. In Figure 4, we plot the pion decay constant as a function of the strength of magnetic field and compare it with the lattice QCD results [20]. For the 2-flavor QM model, the pion decay constant is determined by the minimum of the effective potential in Equation (7).
In the QM model, one cannot distinguish the \(u\) and \(d\) pion decay constants, and our results are close to that of \(f_{\pi_{d}^{0}}\) in lattice QCD. In Figure 5, we show the magnetic dependence of the sigma meson mass and the light quark mass. The lattice QCD results are constructed from the quark chiral condensates in Ref. [20]. Similar to the pion decay constant \(f_{\pi}\), the light quark mass is close to the \(d\)-quark results of lattice QCD. Furthermore, due to the internal structure of mesons, the mass of the sigma meson varies with the magnetic field. The sigma meson and light quark masses increase monotonically with the magnetic field. The increase of the decay constant, sigma meson mass, and light quark mass with the magnetic field reflects the enhancement of chiral symmetry breaking, i.e., magnetic catalysis.

## V Conclusion

This work calculates the meson masses and the pion decay constant at vanishing temperature under strong magnetic fields. The quantum fluctuations are successfully included using the FRG approach. The two-point correlation functions of the neutral and charged pions are calculated. The neutral pion mass decreases monotonically with the magnetic field, while the sigma meson mass increases monotonically due to their internal structure. The decay constant and the light quark mass also increase with the magnetic field, reflecting the magnetic catalysis behavior at vanishing temperature. The neutral pion mass and pion decay constant are quantitatively in agreement with the lattice QCD results, especially in the range \(eB<1.2\) GeV\({}^{2}\). However, the charged pion mass is in agreement with the lattice results in [17], but the non-monotonic mass behavior of the charged pion reported in [20] is not observed in this framework. This needs further investigation from both lattice QCD and functional methods.
It is noteworthy that this is our first preliminary attempt to calculate meson masses and the pion decay constant in the QM model under strong magnetic fields within the FRG approach, and there are many things to be done in the future. In upcoming work, we will go beyond the LPA truncation, including the magnetic-field-dependent Yukawa couplings and wave function renormalizations, and calculate the spectral functions of the mesons. After that, we will extend the calculations to finite temperature and chemical potential. The strange quark and vector mesons will also be included in future work.

Figure 3: Left panel: Charged pion mass \(m_{\pi^{\pm}}\) as a function of the strength of magnetic field with \(m_{\pi}(0)=220\) MeV. The lattice QCD results are constructed based on data from Ref. [20]; more details are given in the text. In the inlay, we show the charged pion mass in the weak-field expansion with FRG, subtracted by the point-like result. Right panel: Normalized charged pion mass \(m_{\pi^{\pm}}(B)/m_{\pi}(0)\) as a function of magnetic fields with \(m_{\pi}(0)=416\) MeV, in comparison to the relevant lattice QCD results [17].

Figure 4: Pion decay constant as a function of the strength of magnetic field. The lattice QCD results are taken from Ref. [20].

Figure 5: Quark mass as a function of the strength of magnetic fields. The lattice QCD results are constructed from the quark chiral condensates in Ref. [20]. The \(\sigma\) meson mass is also plotted.

###### Acknowledgements.
We thank Chuang Huang, Jie Mei, Yang-yang Tan and Kun Xu for their valuable discussions. This work is supported in part by the National Natural Science Foundation of China (NSFC) Grant Nos. 12235016, 12221005, 12175030 and the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No. XDB34030000, the start-up funding from the University of Chinese Academy of Sciences (UCAS), and the Fundamental Research Funds for the Central Universities.
## Appendix A Vertices

As mentioned above, we need the \(n\)-point vertices to calculate the neutral and charged pion two-point correlation functions, Equations (17) and (18). The \(n\)-point vertices are defined as \[V_{\phi_{1},\phi_{2}\cdots\phi_{n}}=\frac{\partial^{n}\Gamma_{k}}{\partial\phi_{1}\partial\phi_{2}\cdots\partial\phi_{n}}. \tag{19}\] The quark-meson interaction vertices in the 2-flavor QM model read \[V_{\bar{u}d\pi^{+}} =V_{\bar{d}u\pi^{-}}=\frac{\sqrt{2}}{2}hi\gamma_{5} \tag{20}\] \[V_{\bar{u}u\pi^{0}} =V_{\bar{d}d\pi^{0}}=\frac{1}{2}hi\gamma_{5}\] (21) \[V_{\bar{u}u\sigma} =V_{\bar{d}d\sigma}=\frac{1}{2}h. \tag{22}\] The nonvanishing mesonic three-point and four-point vertices are \[V_{2\pi^{\pm}\sigma} =V_{2\pi^{0}\sigma}=\sigma V^{\prime\prime}(\rho) \tag{23}\] \[V_{3\sigma} =3\sigma V^{\prime\prime}(\rho)+\sigma^{3}V^{\prime\prime\prime}(\rho)\] (24) \[V_{2\pi^{\pm}2\sigma} =V_{2\pi^{0}2\sigma}=V^{\prime\prime}(\rho)+\sigma^{2}V^{\prime\prime\prime}(\rho)\] (25) \[V_{2\pi^{\pm}2\pi^{0}} =V^{\prime\prime}(\rho)\] (26) \[V_{4\pi^{0}} =3V^{\prime\prime}(\rho)\] (27) \[V_{4\pi^{\pm}} =2V^{\prime\prime}(\rho). \tag{28}\]

## Appendix B Loop functions

The threshold function of the effective potential for the neutral mesons \(\pi^{0},\sigma\) in Equation (16) reads \[l_{B}(m_{\phi})=\frac{k^{4}}{16\pi^{2}}(\log(k^{2}+m_{\phi}^{2}+\Lambda_{\perp}^{2})-\log(k^{2}+m_{\phi}^{2})). \tag{29}\] For the charged pion under magnetic fields, the threshold function is \[l_{B}(m_{\phi})=\frac{k^{4}}{8\pi^{2}}|q_{\phi}B|\sum_{n=0}^{\Lambda_{\perp,n}}\frac{1}{k^{2}+m_{\phi}^{2}+(2n+1)|q_{\phi}B|}. \tag{30}\] The quark loop function for the effective potential in the vacuum reads \[l_{F}=\frac{k^{4}}{16\pi^{2}}\Big{(}\log(k^{2}+m_{f}^{2}+\Lambda_{\perp}^{2})-\log(k^{2}+m_{f}^{2})\Big{)}, \tag{31}\] and the quark loop threshold function under magnetic fields reads \[l_{F}=\frac{k^{4}}{16\pi^{2}}|q_{f}B|\sum_{n=0}^{\Lambda_{\perp,n}}\sum_{s=\pm 1}\frac{1}{k^{2}+m_{f}^{2}+|q_{f}B|(2n+1+s)}.
\tag{32}\] The loop function of the tadpole diagram in the weak-field expansion reads \[\mathcal{J}_{B}(\phi) =\frac{k^{4}}{8\pi^{2}}\Bigg{[}-\frac{\Lambda_{\perp}^{2}}{(k^{2}+m_{\phi}^{2})(k^{2}+m_{\phi}^{2}+\Lambda_{\perp}^{2})}+\bigg{(}\frac{1}{3(k^{2}+m_{\phi}^{2})^{3}}-\frac{k^{2}+m_{\phi}^{2}-5\Lambda_{\perp}^{2}}{3(k^{2}+m_{\phi}^{2}+\Lambda_{\perp}^{2})^{4}}\bigg{)}(q_{\phi}B)^{2}\Bigg{]}+\mathcal{O}(q_{\phi}B)^{4}. \tag{33}\] For the loop functions of the neutral mesons, we just need to set \(q_{\phi}=0\). The loop function of the charged pion in the Landau level representation reads \[\mathcal{J}_{B}(\phi)=-\frac{k^{4}|q_{\phi}B|}{4\pi^{2}}\sum_{n=0}^{\Lambda_{\perp,n}}\frac{1}{k^{2}+(2n+1)|q_{\phi}B|+m_{\phi}^{2}}. \tag{34}\] The \(\sigma\)-\(\pi\) loop functions in the weak-field expansion read \[\mathcal{J}_{2B}(\pi,\sigma)= \frac{k^{4}}{8\pi^{2}}\Bigg{[}\bigg{(}\frac{1}{(k^{2}+m_{\sigma}^{2}+\Lambda_{\perp}^{2})(k^{2}+m_{\pi}^{2}+\Lambda_{\perp}^{2})}-\frac{1}{(k^{2}+m_{\sigma}^{2})(k^{2}+m_{\pi}^{2})}\bigg{)}-\int_{0}^{\Lambda_{\perp}^{2}}\bigg{(}\frac{5p_{\perp}^{2}-3m_{\pi}^{2}-3k^{2}}{(k^{2}+m_{\pi}^{2}+p_{\perp}^{2})^{5}(k^{2}+m_{\sigma}^{2}+p_{\perp}^{2})}+\frac{p_{\perp}^{2}-k^{2}-m_{\pi}^{2}}{(k^{2}+m_{\pi}^{2}+p_{\perp}^{2})^{4}(k^{2}+m_{\sigma}^{2}+p_{\perp}^{2})^{2}}\bigg{)}dp_{\perp}^{2}\,(q_{\pi}B)^{2}\Bigg{]}+\mathcal{O}(q_{\pi}B)^{4}, \tag{35}\] with neutral pion \(q_{\pi^{0}}=0\) and charged pion \(q_{\pi^{\pm}}=\pm e\).
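The Landau-level sum in Eq. (34) is straightforward to evaluate numerically. A sketch with illustrative values (momenta and masses in GeV, \(qB\) in GeV\({}^{2}\)) and a finite cutoff `n_max` standing in for \(\Lambda_{\perp,n}\):

```python
import math

# Numerical sketch of the charged-pion tadpole J_B in the Landau-level
# representation, Eq. (34); all numerical values are illustrative, with a
# finite cutoff n_max standing in for the cutoff Lambda_{perp,n}.
def J_B_charged(k, m_phi, qB, n_max=2000):
    """J_B = -(k^4 |qB| / 4 pi^2) * sum_{n=0}^{n_max} 1/(k^2 + (2n+1)|qB| + m_phi^2)."""
    s = sum(1.0 / (k ** 2 + (2 * n + 1) * abs(qB) + m_phi ** 2)
            for n in range(n_max + 1))
    return -k ** 4 * abs(qB) / (4 * math.pi ** 2) * s

J_small = J_B_charged(0.5, 0.14, 0.05)  # weaker field
J_large = J_B_charged(0.5, 0.14, 0.5)   # stronger field
```

With a fixed level cutoff, the sum is negative and its magnitude shrinks as the meson mass grows, as expected for a massive tadpole.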
The \(\sigma\)-\(\pi^{\pm}\) loop function in the Landau level representation reads \[\mathcal{J}_{2B}(\pi^{\pm},\sigma)= -\frac{k^{4}}{4\pi^{2}}\sum_{n=0}^{\Lambda_{\perp,n}}\bigg{(}\frac{1}{(k^{2}+(2n+1)|eB|+m_{\pi}^{2})^{2}}\int_{0}^{\infty}\frac{e^{-y}\mathcal{L}_{n}(2y)dy}{y^{2}+(k^{2}+m_{\sigma}^{2})/|eB|}+\frac{1}{(k^{2}+(2n+1)|eB|+m_{\pi}^{2})|eB|}\int_{0}^{\infty}\frac{e^{-y}\mathcal{L}_{n}(2y)dy}{(y^{2}+(k^{2}+m_{\sigma}^{2})/|eB|)^{2}}\bigg{)}. \tag{10}\] The weak-field expansions of the quark loop in the pion two-point correlation function have been shown in Equations (21) and (22). If we set \(B=0\), they reduce to the vacuum expressions. In the Landau level representation, the quark loop threshold function of the neutral pion becomes \[\mathcal{J}_{2F}(q_{f})=\frac{k^{4}N_{c}}{2\pi^{2}}\sum_{n=0}^{\Lambda_{\perp,n}}\sum_{s=\pm 1}\frac{1}{(k^{2}+m_{f}^{2}+|q_{f}B|(2n+1+s))^{2}}. \tag{11}\] The charged pion two-point correlation function contains a \(u\)-\(d\) quark loop.
The threshold function of this diagram reads \[\mathcal{J}_{2F}(u,d)= \frac{N_{c}k^{4}B}{\pi^{2}}\sum_{n_{1},n_{2}}^{\Lambda_{\perp,n}}(-1)^{(n_{1}+n_{2})}\Big{[}((\bar{G}_{n_{1}}^{u})^{2}\bar{G}_{n_{2}}^{d}+\bar{G}_{n_{1}}^{u}(\bar{G}_{n_{2}}^{d})^{2})((k^{2}+m_{f}^{2})(LL(n_{1},n_{2}-1)+LL(n_{1}-1,n_{2}))-8BL^{1}L^{1}(n_{1}-1,n_{2}-1))-\bar{G}_{n_{1}}^{u}\bar{G}_{n_{2}}^{d}(LL(n_{1},n_{2}-1)+LL(n_{1}-1,n_{2}))\Big{]}, \tag{12}\] where \[\bar{G}_{n}^{q_{f}}\equiv\frac{1}{k^{2}+m_{f}^{2}+2n|q_{f}B|}, \tag{13}\] and where \(LL(n_{1},n_{2})\) and \(L^{1}L^{1}(n_{1},n_{2})\) are defined as the integrations over the perpendicular direction \[LL(n_{1},n_{2})\equiv\int_{0}^{\infty}\!dx\exp\Big{(}-x\Big{(}\frac{1}{|q_{u}|}+\frac{1}{|q_{d}|}\Big{)}\Big{)}\mathcal{L}_{n_{1}}\Big{(}\frac{2x}{|q_{u}|}\Big{)}\mathcal{L}_{n_{2}}\Big{(}\frac{2x}{|q_{d}|}\Big{)}, \tag{14}\] \[L^{1}L^{1}(n_{1},n_{2})\equiv\int_{0}^{\infty}\!dx\,x\exp\Big{(}-x\Big{(}\frac{1}{|q_{u}|}+\frac{1}{|q_{d}|}\Big{)}\Big{)}\mathcal{L}_{n_{1}}^{1}\Big{(}\frac{2x}{|q_{u}|}\Big{)}\mathcal{L}_{n_{2}}^{1}\Big{(}\frac{2x}{|q_{d}|}\Big{)}, \tag{15}\] where \(\mathcal{L}_{n}^{a}(x)\) are the generalized Laguerre polynomials.
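The perpendicular integrals of Eqs. (14) and (15) can be evaluated with standard Laguerre-polynomial routines; in the sketch below, `qu` and `qd` stand for the \(|q_{u}|\), \(|q_{d}|\) factors in the integrands and carry assumed, illustrative values.

```python
import math
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, eval_laguerre

# Sketch of the perpendicular integrals LL(n1, n2) and L^1L^1(n1, n2) of
# Eqs. (14)-(15); qu and qd below are assumed, illustrative values for the
# |q_u| and |q_d| factors appearing in the integrands.
qu, qd = 2.0 / 3.0 * 0.1, 1.0 / 3.0 * 0.1

def LL(n1, n2):
    f = lambda x: (math.exp(-x * (1 / qu + 1 / qd))
                   * eval_laguerre(n1, 2 * x / qu)
                   * eval_laguerre(n2, 2 * x / qd))
    return quad(f, 0, math.inf)[0]

def L1L1(n1, n2):
    # generalized Laguerre polynomials L^1_n enter with an extra factor of x
    f = lambda x: (x * math.exp(-x * (1 / qu + 1 / qd))
                   * eval_genlaguerre(n1, 1, 2 * x / qu)
                   * eval_genlaguerre(n2, 1, 2 * x / qd))
    return quad(f, 0, math.inf)[0]
```

For \(n_{1}=n_{2}=0\) the polynomials reduce to 1, so the integrals collapse to elementary exponential moments, which provides a direct check of the implementation.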
2305.04515
Novel Anisotropy of Upper Critical Fields in Fe$_{1+y}$Te$_{0.6}$Se$_{0.4}$
Studying the upper critical field ($\mu_0$$H$$_{\rm{c2}}$) and its anisotropy of superconductors is of great importance because it can provide an unusual insight into the pair-breaking mechanism. Since Fe$_{1+y}$Te$_{1-x}$Se$_x$ exhibits the high $\mu_0$$H$$_{\rm{c2}}$ and small anisotropic superconductivity, it has attracted considerable attention. However, some issues related to $\mu_0$$H$$_{\rm{c2}}$ are still unknown, including the effect of excess Fe content on $\mu_0$$H$$_{\rm{c2}}$ behavior and the origin of the crossover of the $\mu_0H_{\rm{c2}}^c $ -- $ T$ and $\mu_0H_{\rm{c2}}^{ab}$ -- $T$ curves. In this work, the value of $\mu_0$$H$$_{\rm{c2}}$ of Fe$_{1+y}$Te$_{0.6}$Se$_{0.4}$ single crystals with controlled amounts of excess Fe was obtained by resistivity measurements over a wide range of temperatures down to $\sim$ 1.5 K, and magnetic fields up to $\sim$ 60 T. The crossover of the $\mu_0H_{\rm{c2}}^c $ -- $ T$ and $\mu_0H_{\rm{c2}}^{ab}$ -- $T$ curves was found to be independent of the excess Fe content. The angle dependence of $\mu_0H_{\rm{c2}}$ was also checked. The $\mu_0H_{\rm{c2}}(\theta)$ symmetry at higher temperature near $T_c$ could be fitted by anisotropic G-L model, and novel fourfold symmetry of $\mu_0H_{\rm{c2}}$ at lower temperature was found. Based on our spin-locking pairing model, the crossover behavior originates from the anisotropic spin-paramagnetic effect, and the novel fourfold symmetry of $\mu_0H_{\rm{c2}}$ could be understood by our extended anisotropic G-L model.
Yongqiang Pan, Yue Sun, Nan Zhou, Xiaolei Yi, Jinhua Wang, Zengwei Zhu, Hiroyuki Mitamura, Masashi Tokunaga, Zhixiang Shi
2023-05-08T07:17:32Z
http://arxiv.org/abs/2305.04515v1
# Novel Anisotropy of Upper Critical Fields in Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\)

###### Abstract

Studying the upper critical field (\(\mu_{0}H_{c2}\)) and its anisotropy of superconductors is of great importance because it can provide an unusual insight into the pair-breaking mechanism. Since Fe\({}_{1+y}\)Te\({}_{1-x}\)Se\({}_{x}\) exhibits the high \(\mu_{0}H_{c2}\) and small anisotropic superconductivity, it has attracted considerable attention. However, some issues related to \(\mu_{0}H_{c2}\) are still unknown, including the effect of excess Fe content on \(\mu_{0}H_{c2}\) behavior and the origin of the crossover of the \(\mu_{0}H_{c2}^{c}\) - \(T\) and \(\mu_{0}H_{c2}^{ab}\) - \(T\) curves. In this work, the value of \(\mu_{0}H_{c2}\) of Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) single crystals with controlled amounts of excess Fe was obtained by resistivity measurements over a wide range of temperatures down to \(\sim\) 1.5 K, and magnetic fields up to \(\sim\) 60 T. The crossover of the \(\mu_{0}H_{c2}^{c}\) - \(T\) and \(\mu_{0}H_{c2}^{ab}\) - \(T\) curves was found to be independent of the excess Fe content. The angle dependence of \(\mu_{0}H_{c2}\) was also checked. The \(\mu_{0}H_{c2}(\theta)\) symmetry at higher temperature near \(T_{c}\) could be fitted by the anisotropic G-L model, and a novel fourfold symmetry of \(\mu_{0}H_{c2}\) at lower temperature was found. Based on our spin-locking pairing model, the crossover behavior originates from the anisotropic spin-paramagnetic effect, and the novel fourfold symmetry of \(\mu_{0}H_{c2}\) could be understood by our extended anisotropic G-L model.
## I Introduction

The upper critical field \(\mu_{0}H_{c2}\) is sensitive to microscopic superconducting (SC) parameters (e.g., the SC energy gap \(\Delta_{\rm SC}\) and the mean free path \(\ell\)) [1; 2; 3] and is beneficial for understanding unconventional superconductivity, providing access to the coherence length \(\xi\), the electronic structure, and the pair-breaking mechanism [4; 5]. According to the Bardeen-Cooper-Schrieffer (BCS) theory, superconductivity is based on Cooper pairs, each made up of two electrons with opposite spins and momenta, which can be depaired by an external magnetic field via two primary mechanisms [6]. One is the orbital depairing mechanism involving the Lorentz force (orbital depairing effect, ODE), which is dominant in the high-temperature region. The other is the Pauli spin-paramagnetic depairing mechanism involving the Zeeman effect (spin-paramagnetic depairing effect, SPDE) [7; 8], which is dominant in the low-temperature region. For single-band superconductors, according to the Werthamer, Helfand, and Hohenberg (WHH) theory [9], the two primary mechanisms are expressed using two dimensionless parameters, the Maki parameter \(\alpha\) and the spin-orbit scattering constant \(\lambda_{\rm so}\) [1; 10]. For multiband superconductors, \(\mu_{0}H_{c2}\) can be described successfully using the two-band BCS model [4; 11; 12; 13; 14; 15; 16; 17] and the two-band Ginzburg-Landau (G-L) theory [18; 19]. The two-band G-L theory yields a nonlinear temperature dependence of \(\mu_{0}H_{c2}(T)\) when the temperature is close to the critical temperature \(T_{\rm c}\), which is different from the linear behavior expected in the single-band theory.
Additionally, the anisotropy \(\gamma\) of \(\mu_{0}H_{c2}\) (\(\gamma_{H}=H_{c2}^{ab}/H_{c2}^{c}\), where \(H_{c2}^{ab}\) and \(H_{c2}^{c}\) are the values of \(H_{c2}\) for \(H\|ab\) and \(H\|c\), respectively) is related to the dimensionality and topology of the electronic structure [20], which is crucial for understanding multiband effects. Iron-based superconductors (IBSs) exhibit rich distinctive features, such as the two-band effect, the ODE, and the SPDE, which lead to a peculiar temperature dependence of \(\mu_{0}H_{c2}(T)\) [8; 21; 22]. Among IBSs, the 11-system is unique in its structural simplicity, which is favorable for probing the SC pairing mechanism. Recent reports have shown that Fe\({}_{1+y}\)Te\({}_{1-x}\)Se\({}_{x}\) exhibits strong-coupling superconductivity, strong electron correlations, and a crossover from BCS coupling to Bose-Einstein-condensation (BEC) coupling [23; 24; 25; 26]. Concerning the upper critical fields, \(\mu_{0}H_{c2}^{c}\) of Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) shows a multiband behavior without the SPDE, which can be fitted by the two-band model [12]. By contrast, \(\mu_{0}H_{c2}^{ab}\) shows a single-band behavior due to the SPDE. Besides, the impurity density, especially the excess Fe stoichiometry \(y\) [27], greatly affects the value of \(\mu_{0}H_{c2}(0\) K) and the behavior of \(\mu_{0}H_{c2}(T)\). In the work by Matsuura et al., \(\mu_{0}H_{c2}\) (0 K) depends on the impurity concentration, as does the initial slope of \(\mu_{0}H_{c2}(T)\), which increases with increasing impurity content [3; 28; 29; 30; 31; 32; 33; 34]. However, Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) samples with fewer impurities present a higher \(\mu_{0}H_{c2}(T)\), which is abnormal and needs to be studied. Interestingly, a crossover happens in \(\mu_{0}H_{c2}\) at a low temperature (\(T_{\rm cr}\)), i.e., \(H_{c2}^{ab}>H_{c2}^{c}\) in the high-temperature region, whereas \(H_{c2}^{ab}<H_{c2}^{c}\) in the low-temperature region.
This crossover of \(\mu_{0}H_{c2}\) is unusual and has also been found in Ba\({}_{1-x}\)K\({}_{x}\)Fe\({}_{2}\)As\({}_{2}\) [21], \(A\)Cr\({}_{3}\)As\({}_{3}\) [35], and \(A_{2}\)Cr\({}_{3}\)As\({}_{3}\) [20; 36] (\(A\) represents an alkali metal). It has been interpreted as an orbital limiting effect persisting at all field angles, or as Ising-like spin-singlet superconductivity. Until now, the origin of this crossover has not been understood; it might be due to a peculiar SC pairing mechanism. In addition, the angle dependence \(\mu_{0}H_{\rm c2}(\theta)\) near the crossover point is an interesting issue, since the crossover in \(\mu_{0}H_{\rm c2}(T)\) implies a change of the symmetry of \(\mu_{0}H_{\rm c2}(\theta)\) [37]. In this work, a series of Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) single crystals were synthesized with different amounts of excess Fe \(y\) ranging from 0 to 0.14. The values of \(\mu_{0}H_{\rm c2}\) were obtained under an external magnetic field up to \(\sim\)60 T. The \(\mu_{0}H_{\rm c2}\) anisotropy was investigated at temperatures above and below \(T_{\rm cr}\) on the clean crystal free from excess Fe. A spin-locking pairing model is proposed to explain the novel \(\mu_{0}H_{\rm c2}\) anisotropy.

## III Experimental details

Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) single crystals were synthesized via the standard self-flux method, as described in our previous reports [38; 39; 40]. The crystals with different amounts of excess Fe were prepared by annealing [27]. The determination of the amount of excess Fe is described in our previous reports [41; 42]; the samples are labeled S1 (as-grown sample, Fe\({}_{1.14}\)Te\({}_{0.6}\)Se\({}_{0.4}\)), S2 (half-annealed sample, Fe\({}_{1.07}\)Te\({}_{0.6}\)Se\({}_{0.4}\)), and S3 (fully-annealed sample, Fe\({}_{1.0}\)Te\({}_{0.6}\)Se\({}_{0.4}\)) [41].
The electrical transport measurements were performed using a commercial PPMS-9 (Quantum Design, \(\sim\)9 T) and high magnetic fields generated by a 60-T magnet with a pulse width of \(\sim\)70 ms at the Wuhan National High Magnetic Field Center (WHHMF) and by one with a pulse width of \(\sim\)36 ms at the Institute for Solid State Physics, The University of Tokyo [43].

## IV Results and discussions

Figure 1(a) shows the reduced zero-field resistivity \(\rho/\rho_{\rm 300K}\) (where \(\rho_{\rm 300K}\) is the resistivity at \(T=300\) K) for the synthesized single crystals. The values of \(T_{\rm c}\) determined by the 50% normal-state resistivity criterion are \(\sim\)13.3 K (sample S1), \(\sim\)14.6 K (sample S2), and \(\sim\)15.2 K (sample S3), respectively. The residual resistivity ratio \(RRR\), defined as \(\rho_{\rm 300K}/\rho(T_{\rm c}^{\rm onset})\), is estimated as \(\sim\)0.74, \(\sim\)0.92, and \(\sim\)2 for S1, S2, and S3, respectively. The increase of \(T_{\rm c}\) and the decrease of the residual resistivity manifest the improvement of sample quality after removing excess Fe [41; 44]. The magnetic field dependence of the resistivity of S1, S2, and S3 is plotted in Figs. 1(b)-(g). The temperature dependence of \(\rho/\rho_{\rm 300K}\) (\(\rho\)-\(T\) curves) under different magnetic fields is shown in Fig. S1 (Supplement Materials) [42]. The temperature dependence of \(\mu_{0}H_{\rm c2}\) can be obtained from the \(\rho\)-\(T\) and \(\rho\)-\(H\) curves using the criterion of 50% of the normal-state resistivity. With this criterion, the effects of the vortex motion expected from the 10% criterion and of the SC fluctuations expected from the 90% criterion can be minimized [17]. Additionally, the values of \(\mu_{0}H_{\rm c2}\) obtained from field sweeps (performed at the WHHMF) and temperature sweeps (using the PPMS) overlap with each other, demonstrating the consistency of the obtained \(\mu_{0}H_{\rm c2}\).
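The 50% criterion described above amounts to locating the field at which the resistance crosses half of its normal-state value. A minimal sketch on synthetic data (the logistic transition shape and the 35 T midpoint are assumptions for illustration, not measurements):

```python
import numpy as np

# Illustration of the 50% normal-state-resistivity criterion used to extract
# mu0*Hc2 from an R(H) sweep. The data are synthetic (an assumed broadened
# transition centered at 35 T), not measurements.
H = np.linspace(0.0, 60.0, 601)                   # field in tesla
R_NORMAL = 1.0
R = R_NORMAL / (1 + np.exp(-(H - 35.0) / 2.0))    # broadened SC transition

def hc2_from_sweep(H, R, R_n, level=0.5):
    """Field at which R first crosses level*R_n, by linear interpolation."""
    target = level * R_n
    i = int(np.argmax(R >= target))               # first index above criterion
    return H[i - 1] + (target - R[i - 1]) * (H[i] - H[i - 1]) / (R[i] - R[i - 1])

mu0_Hc2 = hc2_from_sweep(H, R, R_NORMAL)          # ~35 T for this synthetic curve
```

Changing `level` to 0.1 or 0.9 reproduces the alternative criteria mentioned in the text, which shift the extracted field toward the vortex-motion or fluctuation regimes.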
The reduced temperature dependence of \(\mu_{0}H_{\rm c2}^{c}(T)\) and \(\mu_{0}H_{\rm c2}^{ab}(T)\) for the three samples is shown by the points in Figs. 2(a)-(c). The red curves and dark cyan dashed curves present the WHH fit of \(\mu_{0}H_{\rm c2}^{ab}(T)\) and the two-band fit of \(\mu_{0}H_{\rm c2}^{c}(T)\), respectively. For \(\mu_{0}H_{\rm c2}^{c}(T)\), the fitting using the single-band WHH model was not successful because of the linear \(\mu_{0}H_{\rm c2}^{c}\) behavior at low temperatures [42], which is related to the multigap nature of Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\). Therefore, the two-band model was adopted, in the following form in the dirty limit [12]: \[0=a_{0}(\ln t+U(h))(\ln t+U(\eta h))+a_{1}(\ln t+U(h))+a_{2}(\ln t+U(\eta h)), \tag{1}\] where \(a_{0}=2(\lambda_{11}\lambda_{22}-\lambda_{12}\lambda_{21})/\lambda_{0}\), \(a_{1}=1+(\lambda_{11}-\lambda_{22})/\lambda_{0}\), \(a_{2}=1-(\lambda_{11}-\lambda_{22})/\lambda_{0}\), \(\lambda_{0}=((\lambda_{11}-\lambda_{22})^{2}+4\lambda_{12}\lambda_{21})^{1/2}\), \(t=T/T_{c}\), \(h=\mu_{0}H_{c2}^{c}D_{1}/(2\Phi_{0}T)\), \(\eta=D_{2}/D_{1}\), and \(U(x)=\Psi(1/2+x)-\Psi(1/2)\). \(\Psi(x)\) is the digamma function. \(D_{1}\) and \(D_{2}\) are the diffusivities of the two bands. \(\lambda_{11}\) and \(\lambda_{22}\) denote the intraband coupling constants, which can be derived from \(\mu\)SR experiments and adjusted [45]. \(\lambda_{12}\) and \(\lambda_{21}\) are the interband coupling constants.

Figure 1: (a) Temperature dependence of the resistivity \(\rho\) reduced by \(\rho_{\rm 300K}\) under zero field for the three samples. The inset shows the enlarged region near \(T_{\rm c}\). (b)–(g) Magnetic field dependence of the resistivity of samples S1, S2, and S3, respectively. (h)–(i) Schematics of the applied magnetic field directions for the cases of \(H\|ab\) and \(H\|c\), respectively.
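Fitting Eq. (1) amounts to solving it for the reduced field \(h\) at each reduced temperature \(t\). A sketch of such a solver, in the standard Gurevich-type product form, is below; the coupling constants and the diffusivity ratio \(\eta=D_{2}/D_{1}\) are illustrative placeholders, not the fitted values of Table 1.

```python
import math
from scipy.optimize import brentq
from scipy.special import digamma

# Sketch of solving the two-band equation, Eq. (1), for the reduced field h
# at reduced temperature t = T/Tc. The couplings and eta = D2/D1 below are
# illustrative placeholders, not the fitted values of Table 1.
lam11, lam22, lam12, lam21 = 0.8, 0.4, 0.1, 0.1
eta = 0.5

lam0 = math.sqrt((lam11 - lam22) ** 2 + 4 * lam12 * lam21)
a0 = 2 * (lam11 * lam22 - lam12 * lam21) / lam0
a1 = 1 + (lam11 - lam22) / lam0
a2 = 1 - (lam11 - lam22) / lam0

def U(x):
    return digamma(0.5 + x) - digamma(0.5)

def two_band_eq(h, t):
    return (a0 * (math.log(t) + U(h)) * (math.log(t) + U(eta * h))
            + a1 * (math.log(t) + U(h))
            + a2 * (math.log(t) + U(eta * h)))

def reduced_field(t):
    """Root of Eq. (1) in h, proportional to mu0*Hc2^c at temperature t."""
    return brentq(lambda h: two_band_eq(h, t), 1e-9, 1e3)

h_half = reduced_field(0.5)
```

Scanning `reduced_field(t)` over \(t\in(0,1)\) traces out the two-band \(\mu_{0}H_{c2}^{c}(T)\) curve up to the scale factor \(2\Phi_{0}T_{c}/D_{1}\).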
Here, it is assumed that the intraband coupling dominates \(\mu_{0}H_{c2}^{c}(T)\), and the interband coupling takes the value \(\lambda_{12}=\lambda_{21}\) to reduce the number of free parameters. The fitting results for the \(\mu_{0}H_{c2}^{c}(T)\) data obtained using the two-band model are shown in Figs. 2(a)-(c) by the blue dashed curves. The fitting parameters can be seen in Table 1. We find \(\lambda_{12}\lambda_{21}\ll\lambda_{11}\lambda_{22}\), which is similar to MgB\({}_{2}\) [16], indicating that the interband coupling is weak. The upper critical fields obtained from the two-band model (\(\mu_{0}H_{c2}^{c,\rm TB}(0\) K)) are 45.50 (S1), 50.88 (S2), and 52.20 T (S3). The corresponding values of \(\xi_{ab}(0\) K), defined by the G-L relation \(\xi_{ab}=(\Phi_{0}/(2\pi\mu_{0}H_{c2}^{c}(0~{\rm K})))^{1/2}\), are 2.69 (S1), 2.54 (S2), and 2.51 nm (S3), respectively. The slight increase of \(\mu_{0}H_{c2}^{c,\rm TB}(0\) K) and decrease of \(\xi_{ab}(0\) K) upon reducing the excess Fe are related to the increase of \(T_{c}\). Meanwhile, the values of \(-\mu_{0}{\rm d}H_{c2}^{c}/{\rm d}T\) at \(T_{c}\) are 3.64 (S1), 4.76 (S2), and 6.74 T/K (S3). The increase of \(-\mu_{0}{\rm d}H_{c2}^{c}/{\rm d}T\) at \(T_{c}\) may be related to the change of coupling strength from BCS coupling to BEC coupling [42; 46; 47; 12]. The \(\mu_{0}H_{c2}^{ab}(T)\) curves of the three samples show a similar convex shape, and the values of \(\mu_{0}H_{c2}^{ab}(0\) K) of all samples exceed 40 T. The WHH model considering both the Maki parameter \(\alpha\) and the spin-orbital effect parameter \(\lambda_{\rm so}\) is used to fit \(\mu_{0}H_{c2}^{ab}(T)\) [48]. As shown in Figs. 2(a)-(c) and Table 1, the orbital field \(\mu_{0}H_{c2}^{ab,\rm orb}(0\) K) of the three samples, defined by \(-0.693T_{c}\mu_{0}({\rm d}H_{c2}^{ab}/{\rm d}T)|_{T=T_{c}}\), is 56.5 (S1), 109.4 (S2), and 122.0 T (S3); these values are larger than that of FeSe [49] but smaller than the typical values of the 122 system [50; 51] and the 112 system [17].
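The G-L relation quoted above can be checked directly: plugging the two-band extrapolated fields into \(\xi_{ab}(0~{\rm K})=(\Phi_{0}/(2\pi\mu_{0}H_{c2}^{c}(0~{\rm K})))^{1/2}\) reproduces the quoted coherence lengths.

```python
import math

# Check of the G-L relation xi_ab(0 K) = sqrt(Phi_0 / (2 pi mu0 Hc2^c(0 K))),
# applied to the two-band extrapolated fields quoted in the text.
PHI0 = 2.067833848e-15  # magnetic flux quantum in Wb

def xi_ab_nm(mu0_Hc2_tesla):
    return math.sqrt(PHI0 / (2 * math.pi * mu0_Hc2_tesla)) * 1e9  # in nm

xi_S1 = xi_ab_nm(45.50)  # ~2.69 nm
xi_S2 = xi_ab_nm(50.88)  # ~2.54 nm
xi_S3 = xi_ab_nm(52.20)  # ~2.51 nm
```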
In order to investigate the influence of impurities on \(\mu_{0}H_{c2}^{ab}\), we examine several important parameters and their variation tendencies. For sample S1, a small \(\alpha\) (\(\sim\)1.3) results in its \(\mu_{0}H_{c2}^{ab}(0\) K) (\(\sim\)41.7 T) being not particularly small, meaning that the SPDE is comparatively weak. In contrast, for sample S3, a large \(\alpha\) (\(\sim\)3.9) results in a limited \(\mu_{0}H_{c2}^{ab}(0\) K) (\(\sim\)44.5 T), which indicates that the SPDE is enhanced by removing excess Fe. The \(RRR\) dependence of \(\mu_{0}H_{c2}^{ab}(0\) K) and of the Maki parameter \(\alpha\) is plotted in Fig. 3 (orange and green areas). These parameters exhibit a similar increasing behavior with increasing \(RRR\). We speculate that the disorder, i.e., excess Fe, not only suppresses \(T_{c}\), but also weakens the SPDE and reduces \(\mu_{0}H_{c2}^{ab}\). Meanwhile, the coherence length along the \(c\)-axis of S3 is calculated to be 9.5 Å by the two-band BCS model (\(\xi_{c}^{\rm BCS}(0\) K)) [42]. This value is larger than the previously reported \(\sim\)4.4 Å [47]. Compared with the lattice parameter \(c\sim 6\) Å, this short \(\xi_{c}^{\rm BCS}(0\) K) indicates that Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) may show some quasi-two-dimensional behavior. Furthermore, \(\mu_{0}H_{c2}^{ab}\) is smaller than \(\mu_{0}H_{c2}^{ab,{\rm orb}}\), and the difference becomes larger at low temperatures, indicating that the SPDE is dominant in the low-temperature region.

Figure 2: Reduced temperature dependence of \(\mu_{0}H_{c2}^{c}\) and \(\mu_{0}H_{c2}^{ab}\) of samples (a) S1, (b) S2, and (c) S3. Hollow diamonds and circles represent the \(\mu_{0}H_{c2}\) obtained from HMF. Hollow triangles represent the \(\mu_{0}H_{c2}\) obtained from PPMS. (d) Anisotropy \(\gamma_{H}\) of samples S1, S2, and S3. The red arrows indicate the crossover points of \(\mu_{0}H_{c2}\). As the black dashed line shows, all crossovers are located at the same \(T/T_{c}\).
The crossover can be observed in Figs. 2(a)-(d). Figure 2(d) shows the temperature dependence of the \(\mu_{0}H_{c2}\) anisotropy \(\gamma_{H}(T)\), defined as \(\mu_{0}H_{c2}^{ab}/\mu_{0}H_{c2}^{c}\). As the temperature decreases, \(\gamma_{H}(T)\) finally drops below 1 after the crossover temperature \(T_{\rm cr}\) (the red arrow in Fig. 2(d)). Interestingly, \(T_{\rm cr}/T_{c}\) is unchanged among the different Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) samples (shown as the black dashed line, \(T_{\rm cr}=0.22T_{\rm c}\sim\)3.2 K). The value of \(T_{\rm cr}\) is plotted in Fig. 3 (blue area); it indicates that the crossover is a disorder-robust, intrinsic property. The crossover of \(\mu_{0}H_{c2}\) between the two directions is a novel phenomenon observed in Fe(Te,Se) and (Ba,K)Fe\({}_{2}\)As\({}_{2}\) [8; 11; 21], and its origin is still unclear. Previous works have tried to explain this crossover in terms of a strong SPDE that suppresses \(\mu_{0}H_{c2}^{ab}\) at low temperatures [8]. However, the origin of such a strongly anisotropic SPDE remains unknown. The appearance of the crossover implies a dramatic change of the \(\mu_{0}H_{c2}\) anisotropy. In order to investigate the symmetry of \(\mu_{0}H_{c2}\) above and below \(T_{\rm cr}\), we further measured the dependence of the magneto-resistance of the S3 sample, free from excess Fe, on the angle \(\theta\) (the angle between \(H\) and the \(c\)-axis) at 14 K (near \(T_{\rm c}\)) and 2.3 K (below \(T_{\rm cr}\sim\)3.2 K), respectively. The angle dependence \(\mu_{0}H_{c2}(\theta)\) was obtained from the magneto-resistance data (the raw \(R(H)\) data in different field directions are shown in Fig. S4 in the Supplement Materials). \(\mu_{0}H_{c2}(\theta)\) at 14 K is shown in Fig. 4(a), exhibiting an elliptical shape with twofold symmetry. The maximum value of \(\mu_{0}H_{c2}(\theta)\) appears at \(H||ab\).
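Locating \(T_{\rm cr}\) from \(\gamma_{H}(T)\), as in Fig. 2(d), is an interpolation of the \(\gamma_{H}=1\) crossing. A sketch on synthetic curves (the functional forms below are assumptions that merely mimic a Pauli-limited, saturating \(H_{c2}^{ab}\) and a roughly linear \(H_{c2}^{c}\)):

```python
import numpy as np

# Sketch of locating the crossover temperature T_cr where the anisotropy
# gamma_H = Hc2_ab / Hc2_c crosses 1. The curves are synthetic assumptions
# mimicking a Pauli-limited Hc2_ab and an orbitally limited, linear Hc2_c.
t = np.linspace(0.05, 0.95, 200)          # reduced temperature T/Tc
Hc2_c = 52.0 * (1 - t)                    # roughly linear in temperature
Hc2_ab = 44.0 * np.tanh(2.2 * (1 - t))    # flattens at low t (SPDE)

gamma_H = Hc2_ab / Hc2_c
i = int(np.argmax(gamma_H > 1.0))         # first index where gamma_H exceeds 1
# linear interpolation for the gamma_H = 1 crossing
t_cr = t[i - 1] + (1.0 - gamma_H[i - 1]) * (t[i] - t[i - 1]) / (gamma_H[i] - gamma_H[i - 1])
```

For these assumed curves the crossing lands near \(T/T_{c}\approx 0.2\), of the same order as the measured \(T_{\rm cr}=0.22T_{\rm c}\).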
The anisotropic G-L model [52; 53; 54; 55] was used to fit \(\mu_{0}H_{c2}(\theta)\) at 14 K, \[H_{c2}^{2}(\theta)=\frac{(H_{c2}^{ab})^{2}}{\sin^{2}\theta+\gamma_{\rm GL}^{2}\cos^{2}\theta}. \tag{2}\] The anisotropy parameter \(\gamma_{\rm GL}\) estimated from the anisotropic G-L model fitting is 2.24, consistent with the \(\gamma_{H}\sim\)2.4 estimated from the \(R\)-\(T\) curves under fields (\(\mu_{0}H_{c2}^{ab}(14\) K)/\(\mu_{0}H_{c2}^{c}(14\) K)). \(\mu_{0}H_{c2}(\theta)\) at 2.3 K is shown in Fig. 4(b). At this temperature, \(\mu_{0}H_{c2}^{c}\) is slightly larger than \(\mu_{0}H_{c2}^{ab}\), and \(\mu_{0}H_{c2}(\theta)\) shows a butterfly pattern with fourfold symmetry. Two minima appear in the directions \(H||ab\) and \(H\|c\), suggesting that the SPDE reaches its largest value when \(H\|ab\), while the ODE reaches its largest value when \(H\|c\) (shown by the arrows in Fig. 4(b)). In theory, the SPDE is interpreted as a depairing mechanism due to the Zeeman effect aligning the spins of the two electrons when the applied field is parallel to the \(ab\)-plane, while the ODE is interpreted as a depairing mechanism due to the Lorentz force acting via the charge on the momenta of the paired electrons when the applied field is parallel to the \(c\)-axis [6; 55]. Besides, two maxima appear near 22\({}^{\circ}\) (338\({}^{\circ}\)) and 158\({}^{\circ}\) (202\({}^{\circ}\)), suggesting that both the ODE and the SPDE are angle dependent, and that the attenuation rate of the ODE exceeds the enhancement rate of the in-plane SPDE when \(H\) rotates from 0\({}^{\circ}\) (\(H\|c\)) to 90\({}^{\circ}\) (\(H\|ab\)). To explain the observed strongly anisotropic SPDE and the novel \(\mu_{0}H_{c2}(\theta)\) anisotropy, a spin-locking pairing model for quasi-two-dimensional superconductors is proposed [17], in which half-itinerant carriers are introduced.
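Fitting Eq. (2) to angular data is a two-parameter least-squares problem. The sketch below uses synthetic data generated with the quoted \(\gamma_{\rm GL}=2.24\); the 9 T field scale and the 1% noise level are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of fitting the anisotropic G-L form, Eq. (2):
# Hc2(theta) = Hc2_ab / sqrt(sin^2 theta + gamma^2 cos^2 theta).
# Synthetic data: gamma = 2.24 (the quoted value), an assumed 9 T scale,
# and 1% multiplicative noise with a fixed seed.
def hc2_gl(theta_deg, hc2_ab, gamma):
    th = np.radians(theta_deg)
    return hc2_ab / np.sqrt(np.sin(th) ** 2 + gamma ** 2 * np.cos(th) ** 2)

theta = np.linspace(0.0, 180.0, 19)
rng = np.random.default_rng(0)
data = hc2_gl(theta, 9.0, 2.24) * (1 + 0.01 * rng.standard_normal(theta.size))

popt, _ = curve_fit(hc2_gl, theta, data, p0=[8.0, 2.0])
hc2_ab_fit, gamma_fit = popt
```

The recovered anisotropy parameter agrees with the input within the noise level, mirroring the consistency check against \(\gamma_{H}\) in the text.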
In a normal state with \(T\) slightly above the pre-pairing critical temperature \(T_{\rm pf}\), the majority of carriers are half-itinerant, i.e., their spin orientation is locked in the \(ab\)-plane but their charge and spin are itinerant. These half-itinerant carriers mainly come from the Fe ions in the Fe\({}_{1+y}\)Te\({}_{1-x}\)Se\({}_{x}\) lattice, as shown in the schematic in Fig. 4(c). When \(T\leq T_{\rm c}\), these half-itinerant carriers constitute Cooper pairs, and the long-range phase coherence of these Cooper pairs and the supercurrent are formed. The spin-locked Cooper pairs display a peculiar anisotropy under an applied magnetic field. For \(H\parallel ab\)-plane, an angle exists between the spin direction of the two carriers in a Cooper pair and \(H\); their magnetization energies are \(+|\vec{M}\cdot\vec{H}|\) and \(-|\vec{M}\cdot\vec{H}|\), and the depairing energy reaches a maximum value, which causes the paramagnetic effect to be dominant for \(\mu_{0}H_{c2}^{ab}\). When \(H\parallel c\)-axis (\(H\) perpendicular to the spins of all carriers in the Cooper pairs), the magnetization energy is always zero, and no SPDE occurs. In this case, the ODE is dominant for \(\mu_{0}H_{c2}^{c}\), and the two-band effect can be uncovered in Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\). This model provides a good explanation of the origin of the anisotropic SPDE. Considering the spin-locking pairing model, an extended anisotropic G-L model was used to fit \(\mu_{0}H_{c2}(\theta)\) at 2.3 K. Due to the spin-locked Cooper pairs, the angle-dependent Zeeman splitting energy is assumed as a separate term, \(\Delta E_{Z}=2|\vec{M}\cdot\vec{H}|=|2g_{ab}\mu_{B}H\sin\theta|\) [56; 57; 58; 59].

Figure 3: \(RRR\) dependencies of \(\mu_{0}H_{c2}^{ab}(0\) K), the Maki parameter \(\alpha\), and \(T_{\rm cr}\) for samples S1, S2, and S3. The colored areas represent the variation tendencies of the parameters and correspond to the like-colored vertical axes.
The ODE (\(\Delta E_{orb}\)) can be expressed in terms of the Lorentz force \(F_{L}\), \(\Delta E_{orb}=|2F_{L}\cdot\xi_{GL}(\theta)/2|=|eHv_{ab}|\xi_{\rm GL}(\theta)\) [6]. Here, \(g_{ab}\) and \(v_{ab}\) are the Lande factor and the Fermi velocity in the \(ab\)-plane, respectively (the derivation of the formula can be found in the supplementary materials). Taking both the Zeeman splitting effect and the ODE into the anisotropic G-L model to obtain the extended anisotropic G-L model [42], the \(\mu_{0}H_{c2}(\theta)\) can be written as: \[H_{c2}(\theta)=\frac{\hbar^{2}}{2m^{*}\frac{(\xi_{GL}^{ab})^{2}}{\sin^{2}\theta+\gamma_{GL}^{2}\cos^{2}\theta}\left[\frac{\hbar e}{m^{*}c}+|2g_{ab}\mu_{B}\sin\theta|+|ev_{ab}|\left(\frac{(\xi_{GL}^{ab})^{2}}{\sin^{2}\theta+\gamma_{GL}^{2}\cos^{2}\theta}\right)^{0.5}\right]}, \tag{3}\] where \(\mu_{\rm B}\), \(m^{*}\), and \(\hbar\) are the Bohr magneton, the electron effective mass, and the reduced Planck constant, respectively. The fitting result is shown in Fig. 4(b) by the orange curve. Minima appear at angles \(\theta=0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\), indicating that both the SPDE and the ODE have strong directionality, which is consistent with the spin-locking pairing model. \(\xi_{\rm GL}^{ab}\)(2.3 K) and the anisotropy parameter \(\gamma_{\rm GL}\) of \(\mu_{0}H_{c2}\)(2.3 K) obtained from the extended anisotropic G-L model are 2.73 nm and 1.24, respectively, consistent with those shown in Table I. Theoretically, the SPDE in the \(ab\)-plane is enhanced quickly with decreasing temperature, resulting in an obvious suppression of \(\mu_{0}H_{c2}^{ab}\) at low temperatures. On the other hand, the ODE along the \(c\)-axis is enhanced slowly with decreasing temperature, leading the value of \(\mu_{0}H_{c2}^{c}\) to increase linearly and overshoot \(\mu_{0}H_{c2}^{ab}\) below \(T_{\rm cr}\).
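The angular structure of the extended anisotropic G-L model of Eq. (3) can be sketched by bundling the physical constants (\(\hbar\), \(e\), \(m^{*}\), \(g_{ab}\), \(\mu_{B}\), \(v_{ab}\)) into dimensionless placeholder prefactors; the values below are assumptions for illustration, not the fitted parameters of the paper.

```python
import numpy as np

def hc2_extended_gl(theta_deg, xi_ab, gamma_gl, c_orb, c_zeeman, c_lorentz):
    """Angular structure of Eq. (3): the denominator collects the orbital
    term (c_orb), the angle-dependent Zeeman (spin-paramagnetic) term
    ~ |sin(theta)| (c_zeeman), and the Lorentz-force term ~ xi_GL(theta)
    (c_lorentz). xi2 is the angle-dependent coherence length squared."""
    t = np.deg2rad(theta_deg)
    xi2 = xi_ab**2 / (np.sin(t)**2 + (gamma_gl * np.cos(t))**2)
    return 1.0 / (2.0 * xi2 * (c_orb + c_zeeman * np.abs(np.sin(t))
                               + c_lorentz * np.sqrt(xi2)))

# Placeholder prefactors; the curve is symmetric about theta = 90 deg:
h = hc2_extended_gl(np.linspace(0.0, 180.0, 7), xi_ab=1.0, gamma_gl=1.24,
                    c_orb=1.0, c_zeeman=2.0, c_lorentz=1.0)
```

Because both the \(|\sin\theta|\) term and \(\xi_{\rm GL}(\theta)\) repeat every \(180^{\circ}\), the model has the mirror symmetry \(H_{c2}(\theta)=H_{c2}(180^{\circ}-\theta)\) seen in the fourfold pattern of Fig. 4(b).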
When \(H\) rotates away from \(90^{\circ}\) (\(0^{\circ}\)), the SPDE (ODE) weakens quickly, resulting in a larger \(\mu_{0}H_{c2}(\theta)\) than that in the \(ab\)-plane (along the \(c\)-axis). Therefore, the \(\mu_{0}H_{c2}(\theta)\) changes from a twofold symmetry near \(T_{\rm c}\) to a fourfold symmetry at low temperatures. ## Conclusion \(\mu_{0}H_{c2}\) of Fe\({}_{1+y}\)Te\({}_{0.6}\)Se\({}_{0.4}\) single crystals with selected amounts of excess Fe were investigated by conducting resistivity measurements over a wide range of temperatures and magnetic fields. \(\mu_{0}H_{c2}^{c}\) and \(\mu_{0}H_{c2}^{ab}\) were fitted by the two-band model and the WHH model, respectively. The crossover observed in \(\mu_{0}H_{c2}\) is disorder-robust and indicates the presence of a strongly anisotropic SPDE. Furthermore, the angle-dependent \(\mu_{0}H_{c2}(\theta)\) exhibits a novel anisotropy with a twofold symmetry near \(T_{c}\), but a fourfold symmetry at low temperatures.

Figure 4: (a) Angle \(\theta\) dependent \(\mu_{0}H_{c2}(\theta)\) at \(T=14\) K, shown by blue solid points. The orange curve represents the anisotropic G-L model fitting result. Inset is a schematic of the applied magnetic field directions. (b) Angle \(\theta\) dependent \(\mu_{0}H_{c2}(\theta)\) in the range of \(-15^{\circ}\sim 200^{\circ}\) at \(T=2.3\) K, shown by blue solid points. The hollow points represent the \(180^{\circ}\) rotated points of \(\mu_{0}H_{c2}(\theta)\) in the range of \(-15^{\circ}\sim 200^{\circ}\). The orange curve represents the extended anisotropic G-L model fitting result. (c) Schematic diagram of the spin-locking phenomenon of Cooper pairs. The spheres represent carriers and the arrows represent their spin directions. The pairs of arrows in different layers are either parallel or antiparallel. The green ellipse signifies that a Cooper pair is constituted of two carriers.
To understand the strongly anisotropic SPDE and the novel anisotropy of \(\mu_{0}H_{c2}\), a spin-locking model was proposed, and the novel fourfold symmetry of \(\mu_{0}H_{c2}(\theta)\) was successfully fitted by our extended anisotropic G-L model. ###### Acknowledgements. The present work was partly supported by the National Key R&D Program of China (Grant No. 2018YFA0704300), the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB25000000), and the National Natural Science Foundation of China (Grant No. U1932217, No. 12204487). Yongqiang Pan, Yue Sun, and Nan Zhou contributed equally to this paper.
2308.02198
Unexpected fault activation in underground gas storage. Part I: Mathematical model and mechanisms
Underground gas storage (UGS) is a worldwide well-established technology that is becoming even more important to cope with seasonal peaks of gas consumption due to the growing uncertainties of the energy market. Safety issues concerning the reactivation of pre-existing faults might arise if the target reservoir is located in a faulted basin, where human activities can trigger (micro-)seismicity events. In the Netherlands, it has been observed that fault activation can occur somehow "unexpectedly" after the primary production (PP), i.e., during cushion gas injection (CGI) and UGS cycles, when the stress regime should be in the unloading/reloading path. To understand the physical mechanisms responsible for such occurrences, a 3D mathematical model coupling frictional contact mechanics in faulted porous rocks with fluid flow is developed, implemented and tested. The final aim of this two-part work is to define a safe operational bandwidth for the pore pressure range for UGS activities in the faulted reservoirs of the Rotliegend formation. Part I of this work concerns the development of the mathematical and numerical model of frictional contact mechanics and flow in faulted porous rocks. A mixed discretization of the governing PDEs under frictional contact constraints along the faults is used. A slip-weakening constitutive law governing the fault macroscopic behavior is also presented. The model is tested in the setting of an ideal reservoir located in the Rotliegend formation. The analyses point out how fault reactivation during PP can lead to a stress redistribution, giving rise to a new equilibrium configuration. When the fault is reloaded in the opposite direction during the CGI and/or UGS stages, further activation events can occur even if the stress range does not exceed either the undisturbed initial value or the maximum strength ever experienced by the formation.
Andrea Franceschini, Claudia Zoccarato, Selena Baldan, Matteo Frigo, Massimiliano Ferronato, Carlo Janna, Giovanni Isotton, Pietro Teatini
2023-08-04T08:31:46Z
http://arxiv.org/abs/2308.02198v1
# Unexpected fault activation in underground gas storage. Part I: Mathematical model and mechanisms ###### Abstract Underground gas storage (UGS) is a worldwide well-established technology that is becoming even more important to cope with seasonal peaks of gas consumption due to the growing uncertainties of the energy market. Safety issues concerning the reactivation of pre-existing faults might arise if the target reservoir is located in a faulted basin, where human activities can trigger (micro-)seismicity events. From a mechanical viewpoint, a fault is activated when the shear stress exceeds the limiting frictional value. In the Netherlands, it has been observed that this occurrence can develop somehow "unexpectedly" after the primary production (PP), i.e., during cushion gas injection (CGI) and UGS cycles, when the stress regime should be in the unloading/reloading path and the fault state far from failure. In order to understand the physical mechanisms responsible for such occurrences and build reliable simulation tools for predictive purposes, a 3D mathematical model coupling frictional contact mechanics in faulted porous rocks with fluid flow is developed, implemented and tested. In particular, the mechanisms and the critical factors responsible for the fault reactivation during the various UGS stages are investigated in the real-world setting of the Rotliegend formation in the Netherlands. The effect of the storage of different fluids for various purposes, such as the long-term sequestration of CO\({}_{2}\), the regular injection and extraction cycles of CH\({}_{4}\), and the highly irregular cycles of H\({}_{2}\), is investigated with respect to fault activation risk. The final aim of this two-part work is to define a safe operational bandwidth for the pore pressure range for UGS activities in the faulted reservoirs of the Rotliegend formation. 
Part I of this work concerns the development of the mathematical and numerical model of frictional contact mechanics and flow in faulted porous rocks. A mixed discretization of the governing PDEs under frictional contact constraints along the faults is used, where displacement and pressure in the porous medium, and traction on the fault surfaces are the main variables. A slip-weakening constitutive law governing the fault macroscopic behavior is also presented. The model is tested in the setting of an ideal reservoir located in the Rotliegend formation. The analyses point out how fault reactivation during PP can lead to a stress redistribution, giving rise to a new (deformed) equilibrium configuration. When the fault is reloaded in the opposite direction during the CGI and/or UGS stages, further activation events can occur even if the stress range does not exceed either the undisturbed initial value or the maximum strength ever experienced by the formation. keywords: Frictional contact, Mixed discretization, Underground gas storage, Slip-weakening law, Fault reactivation + Footnote †: journal: J. Comput. Phys. ## 1 Introduction Seismicity associated with fluid withdrawal from and injection into deep reservoirs is a geomechanical hazard that is receiving a growing attention in the scientific literature [1; 2; 3]. Fault reactivation, both aseismic and seismic, is caused by the change of the natural stress regime on the discontinuity surface due to the pore pressure \(p\) changes in the reservoir where mining activities are operated. More specifically, the onset and amount of slip, and the size of the reactivated fault zone depend on how the stress changes caused by the human operations at depth can interfere with the natural stress regime [4; 5; 6]. 
The current state-of-the-art research on this topic focuses on two main processes: i) seismicity induced by production of (conventional) hydrocarbon reservoirs, where pore pressure depletion \(\Delta p\) and differential reservoir compaction are the main factors yielding fault reactivation [4; 7] (Fig. 1a); ii) fluid injection at depth (CO\({}_{2}\) sequestration, production from unconventional reservoirs, enhanced geothermal systems) where, independently of the possible thermal processes, the increase of the fluid pressure (largely) above the natural undisturbed value \(p_{i}\) within the faulted zone crossing or bounding the targeted formation drives the reactivation of rock discontinuities [8; 9; 10; 11] (Fig. 1b). Over the last decade, induced seismicity has been observed in some parts of the world also in reservoirs used for underground gas storage (UGS). Somehow unexpectedly, fault reactivation occurred not only during primary production (PP) or gas storage at pressure larger than \(p_{i}\) [12; 13; 14], i.e., at a stress regime that had never been experienced before by the reservoir and the nearby faults, but also during cushion gas injection (CGI) or producing and storing phases with a pore pressure smaller than \(p_{i}\) and larger than \(p_{\text{min}}\), i.e., the minimum pressure experienced by the field usually at the end of primary production before its conversion to UGS (Fig. 1c) [15; 16; 17; 18]. The present work aims to shed light on these "unexpected" events. Because of the current importance of CH\({}_{4}\) for energy production purposes and the international turbulence in this market, the interest in developing UGS projects is increasing worldwide. Multiple elements presently characterize UGS: seasonal and short-term balancing, strategic reserves in case of interruption of deliveries, optimisation of gas production and gas system distribution, overcoming of local restrictions of gas grids [19; 20]. 
More recently, UGS has also been investigated as a possible method to store green energy in terms of compressed air and H\({}_{2}\) [21; 22; 23]. Sources of green energy, such as wind, waves, and sun, are characterized by a natural high-frequency fluctuation (from hours to day/night and to weeks). Excess electricity can be used to synthesize hydrogen or to compress air, store the gas in deep aquifers or depleted reservoirs, and use it at a later stage as fuel to generate electricity. The same technology can also be applied for long-term geological sequestration activities, such as CO\({}_{2}\) capture and sequestration to reduce carbon dioxide emissions in the atmosphere [24]. In this case, the targeted reservoir does not undergo cyclic loading/unloading, but the pressure can increase up to a steady-state value usually smaller than or equal to the initial value \(p_{i}\). Analysis of the social and environmental hazards and risks associated with subsurface gas storage is a recurrent issue whenever a new UGS site is planned. Many different aspects are involved, such as formation integrity, health and safety as related to public perception, economic risk, and environmental impact. Among the latter, the geomechanical effects induced by seasonal gas injection and withdrawal, such as movements of the land surface, may play an important role [25]. UGS has rarely been associated with induced seismicity. According to data provided by the European Commission [26], recent works [27], and the _HiQuake_ database [28], only a few sites have reported human-induced earthquakes out of 160 UGS facilities in Europe and more than 380 in the USA [2]. Three of these cases, i.e., the Bergermeer, Norg, and Grijpskerk fields, are located in the Netherlands. 
These reservoirs are located in the Carboniferous-Rotliegend formation, northern Europe, which is one of the most intensively explored petroleum systems in the world [29], and where a relatively high number of induced seismicity events has been recorded over the last decades [30; 31]. Several studies have addressed the topic of fault reactivation in Rotliegend reservoirs, the most famous of which concerns the Groningen field [32]. Most studies focus on a specific reservoir in the Netherlands and northern Germany, or, more generally, try to investigate the relationship between the typical geological features of these reservoirs, their usual production life, and the possible induced seismicity [7; 33; 34; 35; 6].

Figure 1: Sketches of two (\(a\) and \(b\)) "expected" induced seismicity scenarios and one (\(c\)) "unexpected" one. \(a\)) Primary production with large pressure drop, \(b\)) fluid injection (CO\({}_{2}\) sequestration, waste water disposal, fracking) with significant pressure increase, \(c\)) pressure in the range already experienced (UGS with \(p<p_{i}\)).

The recent literature, however, is mainly concerned with primary production only and does not investigate the reasons why fault reactivation can occur during UGS phases. Moreover, a very simplified geological structure is assumed in such analyses, with a single fault in a two dimensional (2D) vertical plane, and most likely this can only partially capture the complex response expected from many intersecting faults in a fully three dimensional (3D) environment [34]. Only a few relatively old publications addressed the topic in UGS reservoirs [36; 37]. Nagelhout and Roest [36] developed a 2D geomechanical model by means of the FLAC simulator for a typical faulted vertical section and concluded that _while the gas field is depleted, fault slip occurs due to compaction of the reservoir and due to the upward movement of strata underlying the reservoir. 
Negligible amounts of additional slip are induced when the reservoir is subjected to alternating injection/extraction periods_. Orlic et al. [37] simulated the geomechanical behavior of a specific UGS reservoir using the finite-element package DIANA. Their results highlighted that _the critically stressed section of the central fault affected by the fault slipped... during gas production. Additional fault slip could be expected during the subsequent phase of cushion gas injection... During annual cycles of gas injection and production, the central fault is not critically stressed anymore_. The aim of this work is multifold: i) to develop a robust computational framework allowing for the simulation of the inception of fault activation in 3D real-world geological settings; ii) to improve the understanding of the physical mechanisms underlying induced seismicity during UGS activities, with specific reference to the typical configurations of Dutch UGS reservoirs; iii) to investigate the factors that can increase the chance of fault reactivation during UGS activities, identifying the settings, conditions and material properties that could most likely cause "unexpected" fault reactivation in the reservoirs located in the Rotliegend formations; iv) to define a set of practical guidelines allowing for a safe operational bandwidth in such UGS fields, in consideration also of the different potential storage activities (CH\({}_{4}\), H\({}_{2}\), CO\({}_{2}\)). A few preliminary outcomes were already reported in [38] and [39]. In order to accomplish such a complex multi-disciplinary task, the overall work is subdivided into two parts. The present paper (Part I) is mainly concerned with objectives i) and ii), focusing on the mathematical and computational aspects of the modeling approach, on its application in a representative 3D test case of the problem of interest, and on the mechanisms that can cause "unexpected" fault activation during UGS activities. 
The application of the model developed herein to the specific real-world cases of the Rotliegend formation, with a detailed sensitivity analysis for the different storage activities and the definition of preliminary guidelines (aforementioned objectives iii) and iv)), is the target of Part II [40]. The paper is organized as follows. The mathematical model of frictional contact mechanics and flow in a 3D visco-elasto-plastic porous medium, built on top of the works [41; 42; 43], is introduced along with its numerical discretization and solution algorithms. Faults are explicitly simulated within the porous rock as inner contact boundaries, whose activation is macroscopically governed by Coulomb's criterion. Pressure change within the faults, variation of Coulomb's parameters due to slip-weakening, and the rheology of the caprock are properly accounted for. The model is applied to a synthetic reservoir and fault system that realistically represents the main geological features of the Rotliegend reservoirs. Two scenarios are simulated to deepen the understanding of the geomechanical behavior of a faulted UGS system. Computational results are presented and the mechanisms responsible for fault reactivation during UGS phases are pointed out. A few conclusive remarks close the presentation. ## 2 Mathematical and numerical model In this section, we discuss the development of the mathematical and numerical model used to investigate the fault activation in the context of UGS reservoirs. The aim is to solve the frictional contact mechanics problem for a faulted porous medium, where the constraints are imposed exactly by Lagrange multipliers. The friction behavior of the fracture is governed by Coulomb's criterion, with a slip-weakening constitutive law. The variational formulation, its numerical discretization and the possible related instability phenomena are discussed. 
The pore pressure, both in the continuous matrix and inside the fracture network, is computed by a flow simulator with a one-way coupled approach [44; 45], which turns out to be fully warranted at the space and time scale of interest. We use the quasi-static assumption, i.e., no acceleration contribution is accounted for, under the hypothesis of likely negligible inertia of the system when small (e.g., centimetric) slip and small areal extent characterize the fault reactivation [46]. ### Strong formulation for the contact problem A fault can be modeled at the macroscale as a lower dimensional internal boundary \(\Gamma_{f}\) embedded in a 3D domain \(\Omega\subset\mathbb{R}^{3}\). The fracture is represented as a pair of surfaces in contact, conventionally denoted as _top_ and _bottom_ and represented by \(\Gamma_{f}^{+}\) and \(\Gamma_{f}^{-}\), respectively. On such surfaces, normal and frictional contact conditions have to be enforced, such as the impenetrability of solid bodies and the fulfillment of a friction criterion. In this work, we use Coulomb's frictional criterion to provide the limiting modulus for the shear component of traction on the fault surface. To complete the problem setting definition, we introduce the external domain boundary \(\Gamma\equiv\partial\Omega\), with its outer unit normal vector \(\mathbf{n}\), while \(\mathbf{n}_{f}=\mathbf{n}_{f}^{-}=-\mathbf{n}_{f}^{+}\) denotes the normal direction to the fracture surface \(\Gamma_{f}\). Fig. 2 shows a sketch of the domain \(\Omega\), the fault \(\Gamma_{f}\) and the related quantities. 
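The contact conditions on \(\Gamma_{f}\) are expressed through components normal and tangential to \(\mathbf{n}_{f}\), i.e., \(\mathbf{v}=v_{N}\mathbf{n}_{f}+\mathbf{v}_{T}\). A minimal numerical sketch of this splitting (placeholder vectors, not data from the paper):

```python
import numpy as np

def split_normal_tangential(v, n_f):
    """Decompose a vector v relative to the fault normal n_f:
    v = v_N * n_f + v_T, with v_N = n_f . v and v_T = (I - n_f x n_f) v."""
    n = np.asarray(n_f, dtype=float)
    n = n / np.linalg.norm(n)            # ensure a unit normal
    v_N = float(np.dot(n, v))
    v_T = np.asarray(v, dtype=float) - v_N * n
    return v_N, v_T

# Example: fault normal along z, arbitrary test vector:
v_N, v_T = split_normal_tangential([1.0, 2.0, 3.0], [0.0, 0.0, 1.0])
```

By construction, the tangential part is orthogonal to the normal, which is the property the contact and friction conditions below rely on.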
Any vector field can be decomposed along the normal and tangential direction to the fracture, i.e., \(\mathbf{v}=v_{N}\mathbf{n}_{f}+\mathbf{v}_{T}\), with \(v_{N}=\mathbf{n}_{f}^{T}\mathbf{v}\) and \(\mathbf{v}_{T}=\mathbf{v}-v_{N}\mathbf{n}_{f}=\left(\mathbf{1}-\mathbf{n}_{f}\otimes\mathbf{n}_{f}\right)\mathbf{v}\), where the subscripts \(N\) and \(T\) are used to denote the normal and tangential components, respectively, and \(\mathbf{1}\) is the identity tensor of order \(2\). Assuming quasi-static conditions and infinitesimal strains, the strong form of the linear momentum balance at every instant \(t\) in the time interval \([0,t_{\text{max}}]\) can be stated as follows [47; 48; 49]: find the displacement vector \(\mathbf{u}:\overline{\Omega}\times[0,t_{\text{max}}]\rightarrow\mathbb{R}^{3}\) such that: \[\mathbf{\nabla}\cdot\hat{\mathbf{\sigma}}(\mathbf{u})+\mathbf{b}=0\quad\text{in }\Omega\times[0,t_{\text{max}}], \tag{1a}\] \[\mathbf{u}=\bar{\mathbf{u}}\quad\text{on }\Gamma_{u}\times[0,t_{\text{max}}], \tag{1b}\] \[\hat{\mathbf{\sigma}}(\mathbf{u})\cdot\mathbf{n}=\bar{\mathbf{t}}\quad\text{on }\Gamma_{\sigma}\times[0,t_{\text{max}}], \tag{1c}\] where \(\hat{\mathbf{\sigma}}\) is the total stress tensor, \(\mathbf{b}\) collects the external body loads and \(\Gamma_{u}\cup\Gamma_{\sigma}=\Gamma\), \(\Gamma_{u}\cap\Gamma_{\sigma}=\varnothing\), are the portions of the boundary where Dirichlet and Neumann conditions are imposed, respectively. On the fracture \(\Gamma_{f}\), normal and friction compatibility conditions need to be enforced [48; 49]. The normal contact conditions on the fracture read: \[t_{N}=\mathbf{t}\cdot\mathbf{n}_{f}\leq 0\quad\text{only compressive traction is allowed}, \tag{2a}\] \[g_{N}=\llbracket\mathbf{u}\rrbracket\cdot\mathbf{n}_{f}\geq 0\quad\text{impenetrability condition}, \tag{2b}\] \[t_{N}g_{N}=0\quad\text{either the fracture is compressed or it is open}. 
\tag{2c}\] The conditions for the frictional component are: \[\|\mathbf{t}_{T}\|_{2}\leq\tau_{\text{max}}\left(t_{N},\|\mathbf{g}_{T}\|_{2}\right)\quad\text{Coulomb's criterion}, \tag{3a}\] \[\dot{\mathbf{g}}_{T}\cdot\mathbf{t}_{T}-\tau_{\text{max}}\left(t_{N},\|\mathbf{g}_{T}\|_{2}\right)\|\dot{\mathbf{g}}_{T}\|_{2}=0\quad\text{frictional traction is aligned with sliding rate}. \tag{3b}\] In Eqs. (2)-(3), we split the traction \(\mathbf{t}\) on the fracture and the displacement jump across it into normal and tangential components, i.e., \(\mathbf{t}=t_{N}\mathbf{n}_{f}+\mathbf{t}_{T}\) and \(\llbracket\mathbf{u}\rrbracket=g_{N}\mathbf{n}_{f}+\mathbf{g}_{T}\), respectively. The jump is defined as \(\llbracket\mathbf{u}\rrbracket=\mathbf{u}|_{\text{top}}-\mathbf{u}|_{\text{bottom}}\). To characterize the standard Coulomb frictional criterion, \(c\) and \(\varphi\) are introduced, i.e., the cohesion and the friction angle, respectively, obtaining: \[\tau_{\text{max}}(t_{N},\|\mathbf{g}_{T}\|_{2})=c-t_{N}\tan\left(\varphi\left(\|\mathbf{g}_{T}\|_{2}\right)\right). \tag{4}\] In Eq. (4), the friction angle generally depends on the modulus of the tangential component of the displacement jump, i.e., the slippage, so as to simulate a slip-weakening frictional behavior. Since a quasi-static approach is used, we can replace the tangential displacement rate \(\dot{\mathbf{g}}_{T}\) in Eq. (3b) with the incremental tangential displacement \(\Delta\mathbf{g}_{T}\) with respect to the previous time-step value [50].

Figure 2: Sketch of the 3D domain \(\Omega\) with its boundary, outer normal and inner fracture \(\Gamma_{f}\) (left), made of the top and bottom contact surfaces and the normal direction \(\mathbf{n}_{f}\) (right).

The fault surface \(\Gamma_{f}\) can be split into three non-intersecting portions. Each portion is characterized by a different operating mode allowed for by the possible combinations of the previous conditions (2)-(3): 
* _Stick_: the surface is compressed (\(t_{N}<0\)) and the shear traction modulus does not exceed the limiting value provided by the criterion in Eq. (4). The displacement field is continuous across \(\Gamma_{f}\); * _Slip_: the normal traction is still negative, but the surface is free to slip. In this case, Coulomb's equality holds [51]: \[\mathbf{t}_{T}=\tau_{\max}(t_{N},\|\mathbf{g}_{T}\|_{2})\frac{\Delta\mathbf{g}_{T}}{\|\Delta\mathbf{g}_{T}\|_{2}}. \tag{5}\] Only the normal component of the displacement field is continuous across \(\Gamma_{f}\); * _Open_: the normal traction is non-negative and the two contact surfaces \(\Gamma_{f}^{+}\) and \(\Gamma_{f}^{-}\) are free to move, provided that interpenetration is avoided. Hence, the displacement field across \(\Gamma_{f}\) is discontinuous and the traction on the fault vanishes, i.e., \(\mathbf{t}=0\). For additional details on the mathematical formulation, see [47; 48; 49] and more recently [41; 52; 53]. According to Terzaghi's principle, the total stress tensor in a saturated porous medium can be decomposed as the sum of two contributions: the effective stress tensor acting on the solid skeleton and a volumetric term depending on the averaged fluid pressure \(p\): \[\hat{\mathbf{\sigma}}=\begin{cases}\mathbf{\sigma}-\mathbf{1}p&\text{on }\Gamma_{\sigma}\cup\Gamma_{f},\\ \mathbf{\sigma}-\alpha\mathbf{1}p=\mathbb{C}:\mathbf{\varepsilon}-\alpha\mathbf{1}p&\text{in }\Omega,\end{cases} \tag{6}\] where the fluid pressure is averaged by the saturation indices of the different phases, \(\alpha\) is the Biot coefficient, accounting for the ratio between the grain and the porous matrix compressibility, and \(\mathbf{1}\) the identity tensor of order 2 [54]. The effective stress tensor \(\mathbf{\sigma}\) is called effective Terzaghi stress tensor on the domain boundaries, while it is the effective Biot stress tensor inside the domain itself. In Eq. 
(6), the constitutive relationship defining the effective Biot stress tensor is introduced, where \(\mathbb{C}\) is a fourth order elasticity tensor, generally non-linear, and \(\mathbf{\varepsilon}=\nabla^{s}\mathbf{u}\) is the strain tensor, with \(\nabla^{s}=(\nabla+\nabla^{T})/2\) the symmetric gradient operator. The mechanical constitutive law relates a strain variation in the porous medium to an effective stress variation. Such a law can be described by a simple linear elastic model (Hooke's law), with constant or variable parameters, but also more complex elasto-plastic rules with time-dependent contributions can be introduced, e.g., a visco-elasto-plastic law. For more details on the appropriate constitutive laws and their implementations, see [51; 55]. ### Mass balance equation The mass conservation of the fluid species \(\kappa\) reads [56; 57; 58; 59]: \[\frac{\partial}{\partial t}(\rho^{\kappa})+\nabla\cdot\mathbf{F}^{\kappa}=q_{s}^{\kappa}, \tag{7}\] where \(\rho^{\kappa}\) and \(\mathbf{F}^{\kappa}\) are the density and the flux, respectively, of the fluid species \(\kappa\). The density \(\rho^{\kappa}\) represents the mass of \(\kappa\) per unit of rock volume and can be written as: \[\rho^{\kappa}=\phi\sum_{\beta}S_{\beta}\rho_{\beta}\chi_{\beta}^{\kappa}, \tag{8}\] with \(\phi\) the porosity, \(S_{\beta}\) the saturation of phase \(\beta\), that can be either liquid or gas, \(\rho_{\beta}\) the density of phase \(\beta\), and \(\chi_{\beta}^{\kappa}\) the mass fraction of component \(\kappa\) in phase \(\beta\). Usually, under isothermal conditions, the fluid density is a function of pressure, but it can also depend on other quantities, such as the mass fraction, according to some equation of state. Saturations and mass fractions are constrained by the well-known conditions: \[\sum_{\beta}S_{\beta}=1\quad\text{and}\quad\sum_{\kappa}\chi_{\beta}^{\kappa}=1. 
\tag{9}\] The fluid flux of component \(\kappa\) is the sum of the fluxes for each phase: \[\mathbf{F}^{\kappa}=\sum_{\beta}\chi_{\beta}^{\kappa}\mathbf{F}_{\beta}, \tag{10}\] and each phase flux is described by Darcy's law as: \[\mathbf{F}_{\beta}=\rho_{\beta}\mathbf{v}_{\beta}=-\rho_{\beta}\frac{\mathbf{k}\ k_{r\beta}}{\mu_{\beta}}\left(\nabla p_{\beta}-\rho_{\beta}\mathbf{g}\right), \tag{11}\] with \(\mu_{\beta}\) and \(k_{r\beta}\) the viscosity and the relative permeability of the phase, \(\mathbf{k}\) the permeability tensor, and \(p_{\beta}\) the pressure in phase \(\beta\). According to [54; 60], the porosity update accounting for poro-elastic effects can be expressed as: \[\phi=\phi_{0}+\alpha\varepsilon_{v}+\frac{\left(\alpha-\phi_{0}\right)\left(1-\alpha\right)}{K_{d}}\left(p-p_{0}\right), \tag{12}\] where \(K_{d}\) is the drained bulk modulus, \(\varepsilon_{v}=\text{trace}\left(\mathbf{\varepsilon}\right)\) is the volumetric strain, and \(\phi_{0}\) and \(p_{0}\) are the reference porosity and fluid pressure, respectively. In Eq. (12), \(p\) is the averaged fluid pressure, computed as: \[p=\sum_{\beta}S_{\beta}p_{\beta}. \tag{13}\] Under oedometric conditions and a constant total stress state, from Terzaghi's principle (Eq. (6)) the effective vertical stress varies as \(d\sigma_{z}=\alpha\,dp\) and \[\frac{\partial}{\partial t}\varepsilon_{v}=C_{m}\alpha\frac{\partial}{\partial t}p, \tag{14}\] with \(C_{m}\) the vertical uniaxial compressibility. In this case, the mass balance is decoupled from the linear momentum balance and can be solved in advance, providing a pressure field acting as an external body load for the structural problem. Even though this assumption is not guaranteed for the reservoir application considered in the present work, at the (large) space and time scale of interest coupling is weak and a one-way coupled approach, where Eqs. (7) are solved first for all the fluid species and the averaged pressure of Eq. (13) is then introduced into Eq. 
(1), is fully warranted, see for instance [44; 61; 62; 63; 64; 65]. The set of Eqs. (7) for obtaining the pressure field in the porous medium is usually solved applying a finite volume method because it preserves the mass conservation at the elemental level [66; 67; 68]. Nevertheless, a finite element or mixed finite element approach can also be successfully used, e.g., [69; 70; 71; 72; 73]. In our analysis, the numerical simulation of the multiphase flow in the porous matrix has been carried out by using _Open Porous Media_, an open-source reservoir simulator based on a classical finite volume discretization [74; 75]. As for the computation of the pressure field within the network of faults, two strategies can be employed: either the domain explicitly contains the faults as _thin_ 3D cells, or the pressure is extended to the faults from the surrounding 3D cells, according to some physical treatment of the contact surfaces as inner boundaries. We elect to use the latter approach, thus allowing us to represent the fault at the macroscale as zero-thickness lower dimensional elements. Generally speaking, the two limiting cases that can be met in reality are _sealing_ and _non-sealing_ faults. In the former case, the fault acts as an impermeable barrier and the pressure change does not propagate from one side to the other of the contact surfaces. In this situation, we can assume that the pressure variation in the fault is null. On the contrary, in the latter case, the fault is fully permeable and does not exhibit any resistance to the fluid flow. In this situation, we assume the pressure variation in the fault to be equal to the arithmetic average of the pressure computed on the two side cells. ### Variational formulation and discretization In this section, the variational formulation for the strong form of the linear momentum balance in Eq. (1), equipped with the constraints of Eqs. (2)-(3), is presented. 
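Before proceeding, two ingredients of the flow model of Section 2.2 lend themselves to a compact sketch: the poro-elastic porosity update of Eq. (12) and the pressure extension to the zero-thickness fault elements (sealing vs. fully permeable). The numerical values below are placeholders for illustration, not reservoir data from the paper.

```python
def porosity_update(phi0, alpha, eps_v, K_d, p, p0):
    """Poro-elastic porosity update, Eq. (12):
    phi = phi0 + alpha*eps_v + (alpha - phi0)*(1 - alpha)/K_d * (p - p0)."""
    return phi0 + alpha * eps_v + (alpha - phi0) * (1.0 - alpha) / K_d * (p - p0)

def fault_pressure_change(dp_plus, dp_minus, sealing):
    """Pressure variation assigned to a zero-thickness fault element from the
    two neighboring 3D cells: zero for a sealing fault, the arithmetic
    average of the two side values for a fully permeable one."""
    if sealing:
        return 0.0
    return 0.5 * (dp_plus + dp_minus)

# Depletion (p < p0, Pa) plus compaction (eps_v < 0) both reduce porosity:
phi_new = porosity_update(phi0=0.2, alpha=0.8, eps_v=-1e-3,
                          K_d=5e9, p=20e6, p0=30e6)

# Pressure change seen by the fault for the two limiting cases (Pa):
dp_sealed = fault_pressure_change(2.0e6, 1.0e6, sealing=True)
dp_open = fault_pressure_change(2.0e6, 1.0e6, sealing=False)
```

A real fault would lie between these two limits; the paper adopts exactly these two bounding treatments.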
The weak form of the governing equations naturally produces a variational inequality because of the frictional contact constraints [47]. To avoid this difficulty, the original inequality can be reduced to a standard variational formulation by an active-set strategy combined with either a penalty regularization or the introduction of Lagrange multipliers. We elect to use the Lagrange multiplier technique: it can be computationally more expensive, since new primary unknowns are introduced and the resulting algebraic problem gains a saddle-point nature, but it is generally much more accurate, robust, and stable. Moreover, though generating saddle-point systems, this formulation produces a sequence of linear problems less sensitive to ill-conditioning issues [76]. From a physical viewpoint, the Lagrange multipliers represent the traction field on the fault surfaces, so that the evaluation of the stress, and hence of the related reactivation risk, becomes straightforward. We emphasize that deriving the complete variational formulation for the problem at hand, with the appropriate functional spaces and conditions, is far beyond the purpose of this work, and we refer the interested reader to more specific works, such as Kikuchi and Oden [47] and Wohlmuth [50]. Here, we briefly report the selected function spaces and the residual equations. The notations \((\cdot,\cdot)_{\Omega}\) and \(\langle\cdot,\cdot\rangle_{\Gamma}\) denote the \(L^{2}\)-inner product of functions in \(\Omega\) (3D domain) and on \(\Gamma\) (lower dimensional domain), respectively.
Let \(\mathcal{V}=[H^{1}(\Omega)]^{3}\) be the Sobolev space of vector functions whose first derivatives belong to \(L^{2}(\Omega)\); let \(\mathcal{M}\) be the dual space of the trace space \(\mathcal{W}=[H^{1/2}(\Gamma_{f})]^{3}\); and let \(\mathcal{M}(t_{N},\|\mathbf{g}_{T}\|)\) be its subspace such that \[\mathcal{M}(t_{N},\|\mathbf{g}_{T}\|)=\left\{\mathbf{\mu}\in\mathcal{M}:\langle\mathbf{\mu},\mathbf{v}\rangle_{\Gamma_{f}}\leq\langle\tau_{\max}(t_{N},\|\mathbf{g}_{T}\|_{2}),\|\mathbf{v}_{T}\|\rangle_{\Gamma_{f}},\mathbf{v}\in\mathcal{W}\text{ with }v_{N}\leq 0\right\}. \tag{15}\] Given the finite-dimensional subspaces \(\mathcal{V}^{h}\subset\mathcal{V}\) and \(\mathcal{M}^{h}(t_{N}^{h},\|\mathbf{g}_{T}^{h}\|)\subset\mathcal{M}(t_{N},\|\mathbf{g}_{T}\|)\), the finite-dimensional weak form of the problem in Eq. (1), with the Terzaghi relation of Eq. (6) and the conditions of Eqs. (2)-(3), can be stated as follows: at every instant \(t\in[0,t_{\max}]\), find \([\mathbf{u}^{h},\mathbf{t}^{h}]\in\mathcal{V}^{h}\times\mathcal{M}^{h}(t_{N}^{h},\|\mathbf{g}_{T}^{h}\|)\) such that: \[\mathcal{R}_{u} =(\nabla^{s}\mathbf{\eta},\hat{\mathbf{\sigma}})_{\Omega}-\langle\mathbf{\eta},\hat{\mathbf{\sigma}}\cdot\mathbf{n}\rangle_{\Gamma}-(\mathbf{\eta},\mathbf{b})_{\Omega}\] \[=(\nabla^{s}\mathbf{\eta},\hat{\mathbf{\sigma}})_{\Omega}-\langle\mathbf{\eta},\hat{\mathbf{\sigma}}\cdot\mathbf{n}^{+}_{\Gamma_{f}}\rangle_{\Gamma_{f}}-\langle\mathbf{\eta},\hat{\mathbf{\sigma}}\cdot\mathbf{n}^{-}_{\Gamma_{f}}\rangle_{\Gamma_{f}}-\langle\mathbf{\eta},\hat{\mathbf{\sigma}}\cdot\mathbf{n}\rangle_{\Gamma_{\sigma}}-(\mathbf{\eta},\mathbf{b})_{\Omega}\] \[=\left(\nabla^{s}\mathbf{\eta},\mathbf{\sigma}(\mathbf{u}^{h})-\alpha p\mathbf{1}\right)_{\Omega}-\langle\llbracket\mathbf{\eta}\rrbracket,\mathbf{t}^{h}-p\mathbf{n}_{f}\rangle_{\Gamma_{f}}-\langle\mathbf{\eta},\bar{\mathbf{t}}\rangle_{\Gamma_{\sigma}}-(\mathbf{\eta},\mathbf{b})_{\Omega}=0 \forall\mathbf{\eta}\in\mathcal{V}^{h}, \tag{16a}\]
\[\mathcal{R}_{t} =\langle t_{N}^{h}-\mu_{N},g_{N}\rangle_{\Gamma_{f}}+\langle\mathbf{t}_{T}^{h}-\mathbf{\mu}_{T},\Delta\mathbf{g}_{T}\rangle_{\Gamma_{f}}\geq 0 \forall\mathbf{\mu}\in\mathcal{M}^{h}(t_{N}^{h},\|\mathbf{g}_{T}^{h}\|). \tag{16b}\] We use a Galerkin approach, hence the test functions \(\mathbf{\eta}\) and \(\mathbf{\mu}\) belong to the same function spaces used to define the trial functions for the displacement and traction fields, respectively. To transform the variational inequality of Eq. (16b) into a variational equality, an iterative active-set algorithm [77; 78] is applied. According to this approach, the fault surface \(\Gamma_{f}\) is subdivided into regions where the traction components are either active or inactive, i.e., the _Stick_ (\(\Gamma_{f}^{\text{stick}}\), active for both the normal and tangential components), _Slip_ (\(\Gamma_{f}^{\text{slip}}\), active for the normal component only), and _Open_ (\(\Gamma_{f}^{\text{open}}\), inactive) portions of \(\Gamma_{f}\). With this subdivision, the variational inequality of Eq. (16b) becomes: \[\mathcal{R}_{t}=\langle\mathbf{\mu},\llbracket\mathbf{u}^{h}\rrbracket\rangle_{\Gamma_{f}^{\text{stick}}}+\langle\mu_{N},g_{N}\rangle_{\Gamma_{f}^{\text{slip}}}+\frac{1}{k}\langle\mathbf{\mu}_{T},\mathbf{t}_{T}^{h}-\mathbf{t}_{T}^{*}\rangle_{\Gamma_{f}^{\text{slip}}}+\frac{1}{k}\langle\mathbf{\mu},\mathbf{t}^{h}\rangle_{\Gamma_{f}^{\text{open}}}=0 \forall\mathbf{\mu}\in\mathcal{M}^{h}(t_{N}^{h},\|\mathbf{g}_{T}^{h}\|), \tag{17}\] where \(k\) is a unitary coefficient introduced to ensure dimensional consistency. The non-linear system of Eqs. (16a)-(17) is solved by a Newton linearization and, at convergence, we check the consistency of the traction state on the faults with the initial subdivision of \(\Gamma_{f}\) into \(\Gamma_{f}^{\text{stick}}\), \(\Gamma_{f}^{\text{slip}}\), and \(\Gamma_{f}^{\text{open}}\).
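One sweep of this stick/slip/open status update can be sketched as follows. This is a simplified illustration with hypothetical names, assuming a Coulomb-type shear limit \(\tau_{\max}=c-t_{N}\tan\varphi\) (cf. Eq. (21)) and tractions negative in compression; it is not the actual implementation.

```python
import math

def tau_max(t_n, c, phi):
    """Coulomb-type shear strength: tau_max = c - t_n * tan(phi).

    t_n is the normal traction, negative in compression."""
    return c - t_n * math.tan(phi)

def classify(elements, c, phi, tol=0.0):
    """Assign every interface element to the Stick, Slip or Open set.

    elements maps an element id to a dict with the normal traction
    't_n' and the tangential traction norm 't_t' (both in Pa)."""
    status = {}
    for eid, e in elements.items():
        if e["t_n"] > tol:                           # tensile state: the fault opens
            status[eid] = "open"
        elif e["t_t"] >= tau_max(e["t_n"], c, phi):  # shear limit reached: sliding
            status[eid] = "slip"
        else:                                        # fully constrained element
            status[eid] = "stick"
    return status

elems = {1: {"t_n": -5.0e6, "t_t": 1.0e6},   # compressed, low shear
         2: {"t_n": -5.0e6, "t_t": 6.0e6},   # shear above the Coulomb limit
         3: {"t_n":  1.0e5, "t_t": 0.0}}     # tensile normal traction
print(classify(elems, c=2.0e6, phi=math.radians(30.0)))
# -> {1: 'stick', 2: 'slip', 3: 'open'}
```

After such a sweep, the new partition is compared with the one used to assemble Eq. (17), and the nonlinear solve is repeated if they differ.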
If the consistency check is satisfied, the active-set algorithm is stopped and we can move to the following time instant; otherwise, a new subdivision of \(\Gamma_{f}\) is defined, Eq. (17) is re-computed, and the resulting non-linear system is solved again. Introducing \(\mathbf{u}^{h}=\sum_{i}u_{i}\mathbf{\eta}_{i}\) and \(\mathbf{t}^{h}=\sum_{j}t_{j}\mathbf{\mu}_{j}\), i.e., the discrete representations of the displacement and traction fields, where \(\{\mathbf{\eta}_{i}\}\) and \(\{\mathbf{\mu}_{j}\}\) are bases for \(\mathcal{V}^{h}\) and \(\mathcal{M}^{h}(t_{N}^{h},\|\mathbf{g}_{T}^{h}\|)\), respectively, the set of variational equalities of Eqs. (16a)-(17) becomes an algebraic nonlinear system. The bases for the finite-dimensional spaces \(\mathcal{V}^{h}\) and \(\mathcal{M}^{h}\) are selected with the aid of the finite element method. Given the regularity requirements defined above, we use low-order discretization spaces for both displacement and traction. The computational domain is subdivided into non-overlapping hexahedral elements, \(\Omega=\bigcup_{i=1}^{n_{\text{e}}}\Omega_{i}\) and \(\Omega_{i}\cap\Omega_{j}=\varnothing\) for any \(i\neq j\). This choice is made for consistency with the domain discretization used for the multiphase flow model, which is based on a standard finite volume approach. We use a conformal representation of the faults, i.e., \(\Gamma_{f}=\bigcup_{j}\partial\Omega_{j}\), where \(\Omega_{j}\) are elements sharing a face with \(\Gamma_{f}\). In such a way, the fault contact surfaces are composed of pairs of quadrilateral elements. Each pair of quadrilateral elements is also denoted as a zero-thickness _interface finite element_ [79]. According to the value of the traction, every interface element can change its status, i.e., it can belong to either the stick, slip or open portion of \(\Gamma_{f}\). The mixed finite element discretization adopted in this work is described in Franceschini et al. [52; 53].
It consists of a \(\mathbb{Q}_{1}\) first-order interpolation for the nodal-based displacement field and a \(\mathbb{P}_{0}\) piecewise constant interpolation for the element-based traction field. The collections of coefficients \(u_{i}\) and \(t_{j}\) are the components of the unknown algebraic arrays, named \(\mathbf{u}\) and \(\mathbf{t}\), of sizes \(3n_{n}\) and \(3n_{f}\), with \(n_{n}\) the number of nodes in the hexahedral mesh and \(n_{f}\) the number of interface elements. This approach has the main advantage of being naturally coupled with a finite volume pressure solution computed on the same domain discretization at no additional cost, since both the traction and the pressure are represented using the same space on the same grid. On the other hand, in order to ensure the LBB-stability of the proposed mixed finite element spaces, a tailored jump stabilization has been proposed in [53]. For a complete analysis of the LBB-stability of a general pair of mixed finite element spaces, see Elman et al. [80]. An implementation of the presented algorithm can be found in [81]. We emphasize that, even if a linear elastic constitutive relation is used for the porous medium, the set of equations reported in Eqs. (16a)-(17) represents a nonlinear problem, because the consistent partitioning of the fracture surface is not known a priori and has to be computed as a function of the solution vectors. To be more specific, the constraints in Eqs. (2)-(3) are the Karush-Kuhn-Tucker (KKT) conditions and we are dealing with a non-linear optimization problem [77]. As already mentioned, the solution strategy is based on an active-set approach, with each non-linear problem addressed by a classical exact Newton algorithm.
### Linearization and linear system solution
The use of a mixed finite element approximation produces a linearized step with a generalized saddle-point Jacobian matrix [82].
The Jacobian is generally non-symmetric because of the contribution related to the friction component of the traction when the fracture slides. In particular, at each Newton iteration, the linear system that has to be solved is: \[J\delta\mathbf{x}=-\mathbf{r}, \tag{18}\] with the \(2\times 2\) block matrix \(J\), the residual vector \(\mathbf{r}\), and the solution vector \(\delta\mathbf{x}\) given by: \[J=\begin{bmatrix}\frac{\partial\mathcal{R}_{u}}{\partial\mathbf{u}}&\frac{\partial\mathcal{R}_{u}}{\partial\mathbf{t}}\\ \frac{\partial\mathcal{R}_{t}}{\partial\mathbf{u}}&\frac{\partial\mathcal{R}_{t}}{\partial\mathbf{t}}\end{bmatrix},\quad\mathbf{r}=\begin{bmatrix}\mathcal{R}_{u}\\ \mathcal{R}_{t}\end{bmatrix},\quad\text{and}\quad\delta\mathbf{x}=\begin{bmatrix}\delta\mathbf{u}\\ \delta\mathbf{t}\end{bmatrix}, \tag{19}\] where \(\mathcal{R}_{u}\) and \(\mathcal{R}_{t}\) are evaluated at the current iteration \(l\). As usual, the updated solution vector at iteration \(l+1\) is: \[\begin{bmatrix}\mathbf{u}\\ \mathbf{t}\end{bmatrix}^{l+1}=\begin{bmatrix}\mathbf{u}\\ \mathbf{t}\end{bmatrix}^{l}+\begin{bmatrix}\delta\mathbf{u}\\ \delta\mathbf{t}\end{bmatrix}. \tag{20}\] For the detailed expression of the Jacobian, see [52]. The resulting linear system is characterized by a large and sparse matrix, so a preconditioned iterative method with a suitable preconditioner is necessary for its efficient solution. Since the properties of the linear system may change significantly as the simulation proceeds and the fault elements change status, the preconditioner must evolve as well. Among other options, one idea is to exploit the scalability intrinsically present in the multigrid approach and combine it with the known physics-based partitioning of the blocks of the saddle-point matrix.
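Putting Eqs. (18)-(20) together, the outer Newton loop follows the standard pattern; the sketch below applies it to a toy two-unknown residual (our own illustration, not the fault mechanics problem).

```python
import numpy as np

def newton(residual, jacobian, x0, tol=1e-10, max_it=20):
    """Solve residual(x) = 0 with Newton's method: J dx = -r, x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_it):
        r = residual(x)
        if np.linalg.norm(r) < tol:             # convergence check on the residual
            break
        dx = np.linalg.solve(jacobian(x), -r)   # linearized step (cf. Eq. (18))
        x = x + dx                              # solution update (cf. Eq. (20))
    return x

# toy two-unknown system with root (1, 2)
res = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
x = newton(res, jac, [1.0, 1.0])
print(x)  # close to [1. 2.]
```

In the fault problem, `residual` and `jacobian` would assemble the blocks of Eq. (19) and the dense solve would be replaced by a preconditioned iterative method.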
For details on robust and efficient techniques for the solution of this peculiar linear system, the reader may refer to [42; 53; 83].
### Constitutive model for fracture: slip weakening
In this work we use both the classical Coulomb criterion with a constant friction coefficient and a slip-weakening friction law with a variable friction coefficient. Originally used by Andrews [84] to take into account the change from static to dynamic friction, slip-weakening friction laws [85; 86] are based on the concept that the shear stiffness of the fracture decreases as sliding occurs. From a mathematical viewpoint, the standard Coulomb criterion reads: \[\|\mathbf{t}_{T}\|_{2}\leq c-t_{N}\mu,\quad\text{with }\mu=\tan\varphi, \tag{21}\] while a more general slip-weakening friction law reads: \[\|\mathbf{t}_{T}\|_{2}\leq c-t_{N}\mu(\|\mathbf{g}_{T}\|_{2}). \tag{22}\] A simple expression to account for the friction reduction with fault motion is provided by a piecewise linear function, as shown in Fig. 3, where the friction coefficient linearly decreases from the static value \(\mu_{s}\) down to the dynamic value \(\mu_{d}\) at a sliding value equal to \(D_{c}\). For larger sliding values, the friction coefficient remains constantly equal to \(\mu_{d}\). Other analytical expressions can also be used to simulate the friction coefficient reduction with sliding, such as an exponential law (see Fig. 3), which has the advantage of allowing a _smooth_ variation that is differentiable everywhere: \[\mu=\mu_{d}+(\mu_{s}-\mu_{d})\exp\left(-\frac{\left\|\mathbf{g}_{T}\right\|_{2}}{D_{c}}\right). \tag{23}\] A similar smooth behavior can be formulated based on inverse trigonometric functions (see Fig. 3): \[\mu=\mu_{d}+(\mu_{s}-\mu_{d})\left(1-\frac{2}{\pi}\arctan\frac{\left\|\mathbf{g}_{T}\right\|_{2}}{D_{c}}\right). \tag{24}\] In order to compare the three different expressions, we use the simple 1D problem sketched in Fig. 4.
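For reference, the three friction laws (piecewise linear, exponential Eq. (23), inverse trigonometric Eq. (24)) can be evaluated as follows; this is a minimal Python sketch with our own function and parameter names.

```python
import math

def mu_linear(g, mu_s, mu_d, d_c):
    """Piecewise linear law: mu_s at g = 0, mu_d for g >= d_c."""
    return mu_s - (mu_s - mu_d) * min(g / d_c, 1.0)

def mu_exp(g, mu_s, mu_d, d_c):
    """Exponential law, Eq. (23)."""
    return mu_d + (mu_s - mu_d) * math.exp(-g / d_c)

def mu_atan(g, mu_s, mu_d, d_c):
    """Inverse trigonometric law, Eq. (24)."""
    return mu_d + (mu_s - mu_d) * (1.0 - 2.0 / math.pi * math.atan(g / d_c))

# static/dynamic friction coefficients and slip-weakening distance (1D example values)
mu_s, mu_d, d_c = math.tan(math.radians(30.0)), math.tan(math.radians(10.0)), 2.0e-3
for law in (mu_linear, mu_exp, mu_atan):
    # every law starts at mu_s and decays toward mu_d as sliding g grows
    print(law.__name__, law(0.0, mu_s, mu_d, d_c), law(0.1, mu_s, mu_d, d_c))
```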
The selected physical parameter set is: \(\mu_{s}=\tan(30^{\circ})\), \(\mu_{d}=\tan(10^{\circ})\), \(D_{c}=2\) mm, with the spring stiffness \(K=11\times 10^{9}\) N/m and a compression load \(N=3\times 10^{7}\) N. The first three values are representative of the conditions typically found in the seismogenic gas fields within the Rotliegend stratigraphic units in the Netherlands [45; 87]. The physical quantities of interest are shown in Fig. 4. The primary variable is always the displacement of the point where the external load \(N\) is applied, while the outcomes are: (i) the friction strength \(F\), (ii) the relative displacement \(u_{r}\) between the body connected to the spring and the fixed basement, (iii) the global system stiffness \(\overline{K}\), and (iv) the internal energy \(U\). Though the responses in terms of friction strength are different, both the relative displacement and the global energy are comparable. By contrast, the global stiffness behaves differently: in two cases out of three it reaches negative values that are larger in absolute value than the original spring stiffness \(K\). The finite element approach used in the present modeling analysis is based on the global equilibrium of the system and not on a local (elemental) balance. This is the reason why a comparison of the global energy associated with the different laws is meaningful. At the elemental level, it is desirable to avoid negative stiffness, which could potentially lead to friction instabilities. Hence, we chose to work with the law based on the inverse trigonometric function, i.e., the only one providing a minimum stiffness smaller in absolute value than the original one. In such a way, we can ensure, at least for conditions similar to the ones used in this example, a positive global stiffness.
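A quantitative way to see why the inverse trigonometric law is the gentlest: its steepest weakening slope \(|d\mu/d\|\mathbf{g}_{T}\|_{2}|\), attained at zero sliding, is a factor \(2/\pi\) smaller than the initial slope \((\mu_{s}-\mu_{d})/D_{c}\) shared by the linear and exponential laws. A quick check (our own illustration, not from the reference simulations):

```python
import math

mu_s, mu_d, d_c = math.tan(math.radians(30.0)), math.tan(math.radians(10.0)), 2.0e-3
delta = mu_s - mu_d

# weakening slopes |d mu / d g| at g = 0, the steepest point of all three laws
slope_linear = delta / d_c                   # constant on [0, D_c]
slope_exp = delta / d_c                      # d/dg exp(-g/D_c) at g = 0 is -1/D_c
slope_atan = (2.0 / math.pi) * delta / d_c   # d/dg (2/pi)atan(g/D_c) at g = 0 is (2/pi)/D_c

print(slope_linear, slope_exp, slope_atan)
print(slope_atan / slope_linear)  # = 2/pi ~ 0.6366
```

A smaller weakening slope translates into a smaller negative contribution to the tangent stiffness of the interface, consistent with the choice made above.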
## 3 Model set-up
### Conceptual model
In order to test the mathematical model and identify the main mechanisms governing fault reactivation in UGS fields, we use a simplified geological model representative of the typical features of the Rotliegend UGS reservoirs, such as Norg and Grijpskerk [18], e.g., see Fig. 5. These reservoirs are bounded by normal faults with a significant throw (up to 250 m) and consist of a few compartments separated by internal faults. The gas fields are located between 2000 and 3000 m of depth, with the Rotliegend reservoir rock characterized by an average net thickness of 150-200 m. Detailed information about the geological setting and typical geometric features of UGS reservoirs in the Netherlands can be found, among others, in [16; 37; 88; 89] and published reports [17; 18].
Figure 3: Slip weakening constitutive laws. From left to right: piecewise linear friction law, exponential friction law and inverse trigonometric friction law.
Figure 4: Mono-dimensional friction system with 1 degree of freedom. On the left: sketch of the model used in this example. On the right, from the top left: (i) friction strength, (ii) relative displacement, (iii) global system stiffness and (iv) internal energy. Continuous, dotted and dashed lines represent the linear, exponential and inverse trigonometric slip-weakening formulations, respectively.
Figure 5: On the left: base Zechstein semblance map of the Norg UGS (in blue) and surrounding area with traces of the bounding faults and localization of the recorded seismic events. On the right: conceptual map of the Norg field with major and minor faults highlighted in blue and red, respectively.
Based on those features, we define a conceptual model composed of two adjacent compartments, 2000\(\times\)2000 m wide, 200 m thick, and 2000 m deep, where UGS activities are carried out.
The reservoir compartments are laterally confined by two families of orthogonal faults, denoted as F1-F2 (parallel to the \(y\)-axis) and F4-F5 (parallel to the \(x\)-axis). Another fault, denoted as F3, separates the two reservoir blocks (Fig. 6). The two compartments have only a partial hydraulic connection depending on the sealing properties of fault F3, so the pore pressure distribution in space and time may be different. Faults F1 and F2 are inclined with respect to the vertical \(z\)-axis by a dip angle equal to \(\pm\)10\({}^{\circ}\), while F3, F4 and F5 are vertical faults, as shown in Fig. 6. The faults extend from -3000 m to -1600 m depth, i.e., they terminate within the caprock sealing the reservoir, the Zechstein formation. Notice that the blocks have a 200-m offset along the vertical direction, corresponding to the entire thickness of the reservoir, relative to the Rotliegend formation located in the sideburden.
### Finite element-interface element discretization
The reservoir is embedded in a 30-km wide square domain. The overall model size is much larger than the reservoir dimension to minimize the effects of the (arbitrary) boundary conditions on the solution in the area of interest (Fig. 6). The bottom of the model is 5000-m deep and the land surface is located at the elevation of 0 m. Standard conditions with zero displacement and zero pore pressure variation on the outer and bottom boundaries are prescribed, whereas the land surface is a traction-free boundary. A 3D finite element mesh of the selected domain is built by using hexahedral elements, which are particularly suitable for the symmetric configuration with the faults parallel to the Cartesian axes. Fig. 7 shows an axonometric view of the full computational grid used in the geomechanical model. The mesh consists of 253,165 nodes and 236,208 hexahedral elements, with a finer discretization in the reservoir layers, i.e., at depths between 2000 and 2200 m.
The element size within the reservoir is 100\(\times\)100\(\times\)20 m. Fig. 8 shows the fault system embedded in the continuous 3D grid as discretized by 5,215 interface elements. The state of each element of the faults is synthetically evaluated with the aid of the _criticality index_ defined as: \[\chi=\frac{\|\mathbf{t}_{T}\|_{2}}{\tau_{\max}}=\frac{\|\mathbf{t}_{T}\|_{2}}{c-t_{N}\tan\left(\varphi\left(\|\mathbf{g}_{T}\|_{2}\right)\right)}. \tag{25}\] From Eq. (25), it is easy to see that \(\chi\in[0,1]\), where 0 is associated with the safest condition and 1 with plastic sliding.
Figure 6: On the left: plan view of the model. On the right: vertical sections of the conceptual model along the traces A-A and B-B shown on the left.
Figure 7: Axonometric view of the computational domain used for the geomechanical simulations: full 3D finite element grid (left) and interface element grid (blue) embedded in a portion of the full 3D grid (right).
Figure 8: Interface element discretization of the fault discontinuities. The planar trace of F1, F2 and F3 is parallel to the \(y\)-axis, whereas that of F4 and F5 is parallel to the \(x\)-axis. F3 is the central fault separating the two reservoir compartments.
### Simulated scenarios
To evaluate the capabilities of the presented numerical model and understand the possible mechanisms causing fault reactivation during CGI and UGS, a few scenarios are simulated in the typical setting of the Rotliegend reservoirs in the Netherlands. The main geological and geomechanical parameters are reported in Tab. 1. For the sake of simplicity, a linear elastic behavior is assumed in the reservoir during the UGS activities. The pressure history prescribed in an active well located in each compartment is sketched in the leftmost frame of Fig. 9. We assume a 10-y duration for the PP phase, where the pressure drops linearly by up to 20 MPa. After this
period, a 2-year CGI phase follows, where the pressure recovers to the initial (undisturbed) value \(p_{i}\), and then the UGS cycles start. They are characterized by a 6-month extraction period, during which the pressure drops by 10 MPa, and a 6-month injection period, during which the pressure returns to \(p_{i}\). To initialize the simulation, the undisturbed stress regime must be prescribed. We assume it has the principal effective stress tensor directions aligned with the Cartesian axes, in particular, \(\sigma_{1}=\sigma_{v}=\sigma_{z}\), \(\sigma_{2}=\sigma_{H}=\sigma_{y}\), and \(\sigma_{3}=\sigma_{h}=\sigma_{x}\), where \(\sigma_{v}\) denotes the vertical compressive stress, and \(\sigma_{H}\) and \(\sigma_{h}\) the largest and smallest compressive horizontal principal stresses, respectively. At the reservoir average depth, i.e., \(z=-2100\) m, we have \(\sigma_{v}=-25.4\) MPa, \(\sigma_{h}=M_{1}\sigma_{v}=-18.8\) MPa, \(\sigma_{H}=M_{2}\sigma_{v}=-21.1\) MPa, with \(M_{1}=0.40\) and \(M_{2}=0.47\). The initial normal stress acting on the faults is shown in Fig. 9. Two scenarios have been simulated based on the parameters describing the Coulomb frictional criterion. In the reference scenario (scenario 1), \(\varphi_{s}=30^{\circ}\) and fault weakening is not accounted for. The effect of the slip-weakening behavior is investigated in scenario 2, where the friction angle reduces from \(\varphi_{s}=30^{\circ}\) to \(\varphi_{d}=10^{\circ}\) over a slip distance of \(D_{c}=2\) mm. The cohesion is \(c=2\) MPa in both scenarios. Finally, the time step is 1 year during the PP phase and is reduced to 2 months during the CGI and UGS phases.
## 4 Numerical results
The objective of the representative simulations reported herein is to evaluate the fault reactivation risk during the different stages of the UGS activities in the conceptual reservoir. For this reason, we mainly focus on the criticality index \(\chi\) defined in Eq. (25).
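As an illustration of this post-processing, the element-wise evaluation of \(\chi\) and a per-depth average over stripes of interface elements can be sketched as follows (a hypothetical data layout, assuming a constant friction angle as in scenario 1).

```python
import math
from collections import defaultdict

def chi(t_t, t_n, c, phi):
    """Criticality index of Eq. (25), clipped to [0, 1]."""
    return min(t_t / (c - t_n * math.tan(phi)), 1.0)

def chi_vs_depth(elements, c, phi):
    """Average chi over the stripe of interface elements at each depth."""
    stripes = defaultdict(list)
    for z, t_n, t_t in elements:     # (depth [m], t_N [Pa], ||t_T||_2 [Pa])
        stripes[z].append(chi(t_t, t_n, c, phi))
    return {z: sum(v) / len(v) for z, v in sorted(stripes.items())}

elems = [(-2000.0, -5.0e6, 4.9e6),   # at (slightly above) the shear limit
         (-2000.0, -5.0e6, 2.0e6),
         (-2100.0, -8.0e6, 1.0e6)]
print(chi_vs_depth(elems, c=2.0e6, phi=math.radians(30.0)))
```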
For the sake of clarity and ease of readability, \(\chi\) is represented for each fault as a function of depth only, i.e., for each \(z\)-value we compute the average of \(\chi\) over the stripe of interface elements located at the same depth. Another significant quantity is the maximum sliding, i.e., the maximum value of \(\|\mathbf{g}_{T}\|_{2}\) simulated along each fault. These two quantities are closely related to each other, since a single element can slide only when \(\chi=1\). However, we prefer to propose an averaged version of \(\chi\), so as to obtain information on the criticality state of the entire fracture at a given depth. The last quantity used to interpret the results and analyze the fault behavior is the tangential component of the traction. In particular, we use \(t_{T,z}\), i.e., the vertical component of \(\mathbf{t}_{T}\).
\begin{table} \begin{tabular}{l|c|c|c} layer & density [kg/m\({}^{3}\)] & Young modulus [GPa] & Poisson ratio \\ \hline Overburden & 2200 & 10.0 & 0.25 \\ Upper Zechstein Salt (-1500 to -1800 m) & 2100 & 35.0 & 0.30 \\ Lower Zechstein Salt (below -1800 m) & 2100 & 20.0 & 0.30 \\ Reservoir (Upper Rotliegend) & 2400 & 11.0 & 0.15 \\ Underburden & 2600 & 30.0 & 0.20 \\ \end{tabular} \end{table} Table 1: Formation-dependent geomechanical parameters. See Fig. 6 for a detail on the depths.
Figure 9: On the left: sketch of the pore pressure variation in time prescribed in an active well of the reservoir compartments. On the right: initial normal stress with respect to the fault orientation. The principal stresses \(\sigma_{h}\), \(\sigma_{H}\) and \(\sigma_{v}\) are parallel to the Cartesian axes. Faults F4 and F5 are more loaded because of their orthogonality to \(\sigma_{H}\).
Usually, the 2-norm of the tangential traction is analyzed, i.e., \(\|\mathbf{t}_{T}\|_{2}\), but this does not provide information on the shear direction.
However, thanks to the symmetric geometry of the conceptual model, in some locations there is no horizontal component of \(\mathbf{t}_{T}\), thus \(\|\mathbf{t}_{T}\|_{2}=|t_{T,z}|\). The two quantities share the same modulus, but the vertical component carries additional information on the sliding direction.
### Pore pressure variation
As previously mentioned, in this work we adopt a one-way coupled approach, thus the multiphase flow prediction is computed first. The simulation is performed through the open-source reservoir simulator Open Porous Media [74; 75]. As a reference scenario, a typical year-long cycle of UGS activity has been considered, with the injection-production history represented in Fig. 10. Fig. 11 shows the location of the injection/production wells with respect to the fault system. Note that, to avoid any interpolation among computational grids, the OPM finite volume mesh exactly corresponds to the finite element grid of a single block within the 3D geomechanical model. The characteristic horizontal and vertical permeabilities of a reservoir in the study area are \(k_{h}=600\) mD and \(k_{v}=300\) mD, respectively. The "working gas" volume amounts to 6.5\(\times 10^{9}\) Sm\({}^{3}\) per compartment. The numerical results in terms of pressure variation are summarized in Fig. 12. The figure shows the depth-averaged pressure behavior along a section passing through the production/injection wells every 3 months. After 3 months the maximum production rate is achieved, after 6 months the production phase ends, after 9 months the maximum injection rate is met, and, finally, after 12 months the simulation ends. Notice that the pressure perturbation during the entire production (or injection) phase is almost uniform in space and varies approximately within the interval between 0 and -10 MPa with respect to the initial value \(p_{i}\).
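For reference, the well-pressure variation prescribed over the whole program (10-year PP, 2-year CGI, then yearly UGS cycles) can be encoded as a simple function of time; this is our own sketch of the schedule, not code from the flow simulator.

```python
def delta_p(t):
    """Pressure variation [MPa] relative to p_i at time t [years]."""
    if t <= 10.0:                    # primary production: linear drop of 20 MPa
        return -20.0 * t / 10.0
    if t <= 12.0:                    # cushion gas injection: linear recovery to p_i
        return -20.0 + 20.0 * (t - 10.0) / 2.0
    s = (t - 12.0) % 1.0             # position within the current 1-year UGS cycle
    if s <= 0.5:                     # 6-month extraction: drop of 10 MPa
        return -10.0 * s / 0.5
    return -10.0 + 10.0 * (s - 0.5) / 0.5   # 6-month injection: back to p_i

for t in (0.0, 5.0, 10.0, 12.0, 12.5, 13.0):
    print(t, delta_p(t))
```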
These flow results show that, for the setting defined in these representative simulations, the spatial gradient of the pore pressure variation within each compartment is expected to be quite limited. Hence, considering a constant pressure variation value for each reservoir block appears to be a reasonable assumption.
Figure 10: Time behavior of the production/injection rate used in the OPM simulation.
Figure 11: Location of the injection/production wells in the two reservoir blocks (left) and axonometric view of the 3D computational grid used in OPM to simulate the injection/production phase in each reservoir compartment (right). The OPM mesh exactly corresponds to the finite element grid of a single block within the 3D geomechanical model.
### Analysis of the fault reactivation risk
The mechanisms for the possible fault reactivation have been investigated in scenario 1. The value \(\chi_{\max}=1\) is reached on faults F1 and F2 at loading step 9, with \(\chi_{\max}\) up to 0.8 at the end of the CGI and UGS injection phases (Fig. 13). Conversely, \(\chi=0\) on fault F3 irrespective of the loading step, due to the symmetry of the geometry and loading configurations. A comparison between the behavior versus depth of the criticality index along faults F1 and F4 and the distribution of \(\chi\) on the whole fault system at the end of PP are shown in Fig. 14. Notice that the most critical condition develops along the top and bottom of the reservoir, in agreement with previous modeling studies [35]. Moreover, faults F4 and F5 exhibit smaller values of \(\chi\) with respect to F1 and F2, showing that a sub-vertical orientation is usually more likely to reactivate. Fig. 15 shows the stress path in the \(t_{N}-\|t_{T}\|_{2}\) plane experienced by a representative element located on fault F1 at the top of the reservoir. The actual stress state touches the yield bound at loading step number 9 and remains on the yield surface till the end of PP (loading step 10).
During CGI, the stress state initially departs from the yield condition but returns close to it during the last part of the injection, when the pressure recovers to the initial value. During UGS the element behaves elastically, along a new path with respect to the one experienced during the last part of the CGI phase, with an almost constant \(t_{N}\) value. Again, the stress state approaches the critical condition at the end of the UGS injection phase, when \(p\) rises back to \(p_{i}\).
Figure 12: Depth-averaged value of the pore pressure variation during a production/injection cycle as obtained by the OPM flow simulator.
Figure 13: Behavior of \(\chi_{\max}\) at all the loading steps for each fault. Note that, due to symmetry, F1 and F2 behave identically, as well as F4 and F5.
Figure 14: On the left: behavior of the criticality index \(\chi\) vs depth at loading step 10 (end of PP) for faults F1 and F4. On the right: \(\chi\) factor on all the fracture surfaces at loading step 10.
Figure 15: On the left: location of the selected element. On the right: stress path \(\|t_{T}\|_{2}\) vs \(-t_{N}\) for the element highlighted on the left sketch by a red dot. The red line is the yield bound. Numbers along the path denote the loading steps. One can easily recognize the primary production (loading steps 1 to 10), the cushion gas injection (loading steps 10 to 12) and the underground gas storage (loading steps 12 to 12.5 – production – and 12.5 to 13 – injection).
Figure 16: On the top left: distribution of the vertical component of the shear stress \(t_{T,z}\) for the loading steps (l.s.) 0 (initial condition), 10, 11, 12, 12.5, 13 on fault F1 (dip = 10\({}^{\circ}\)). On the top right: time behavior of \(\|t_{T}\|_{2}\) for the points denoted by the thick black dots in the previous frame, located at the top, bottom, and center of the reservoir. Positive values mean that the shear stress is directed upward. On the bottom: sketches representing the reservoir deformation, shear stress direction, and inactive/active portions of the fault at the same loading steps.
A deeper explanation for this behavior can be found by analyzing the actual direction of the shear stress. Fig. 16 shows the vertical component of the tangential traction, \(t_{T,z}\), on fault F1 at loading steps 0 (initial condition), 10, 11, 12, 12.5, and 13. This component is meaningful because of the symmetry of the model: indeed, we have \(\|t_{T}\|_{2}=|t_{T,z}|\). Sketches of the reservoir-fault-sideburden conditions are provided for the same loading steps. The initial shear stress differs from the null value because of the fault dip. The largest values of \(t_{T,z}\) are observed at the end of PP (loading step 10). Reservoir compaction induced by the pressure depletion is accompanied by fault reactivation. Note that a positive and a negative shear stress characterize the reservoir bottom and top, respectively. As physically expected because of the compaction mechanism, the direction of the shear stress is oriented toward the center of the reservoir. When CGI starts, the shear stress orientation changes and the reactivated part of the fault returns to a stick state. At loading step 11, half of the pore pressure change has been recovered. As the reservoir expands due to the pressure recovery, \(t_{T,z}\) decreases at the reservoir top and bottom (the orientation remains the same but the absolute value decreases) and an almost null \(t_{T,z}\) is obtained at this step on the previously sliding IEs. Differently, \(t_{T,z}\) does not change significantly for the elements surrounding the activated stripes of the fault. The reservoir continues to recover pressure and re-expand until loading step 12. During this second part of CGI, the shear stress increases with a sign opposite to that experienced during PP (Fig. 16). A mirror behavior occurs for the IEs at the reservoir bottom.
Therefore, expansion during CGI increases the criticality condition of the fault (mainly at the reservoir top and bottom) due to the stress redistribution after the sliding developed during PP. Fig. 13 shows that faults F1 and F2 approach the criticality state (\(\chi_{\rm max}>0.8\)) when the pressure recovers the initial value at the end of the CGI and UGS injection phases, i.e., in a pressure state close to the initial undisturbed one, which is not generally expected to be associated with fault reactivation.

### Slip-weakening effect

The adopted Coulomb frictional criterion can handle slip-weakening effects. Here, the outcome of a slip-weakening constitutive law for the fault behavior is compared to that previously obtained using a static friction angle equal to \(\varphi_{s}=30^{\circ}\). The two parameters defining the new constitutive law are \(\varphi_{d}\) and \(D_{c}\), i.e., the dynamic friction angle and the slip-weakening distance, respectively. In the simulated scenario, the friction angle reduces from \(\varphi_{s}=30^{\circ}\) to \(\varphi_{d}=10^{\circ}\) over a slip distance of \(D_{c}=2\) mm. Fig. 17 provides the time behavior of the maximum fault sliding for the proposed scenario. It can be seen that the current sliding is more than twice that obtained using a static friction angle. Fig. 18 shows a comparison of the criticality index during the entire simulation for scenarios 1 and 2. It can be noticed that the new constitutive law causes F1, F2, F4, and F5 to slip also at the end of the cushion gas and UGS injection phases, but not at loading step 12.5, i.e., at the end of the 6-month UGS production phase (see the zoom in Fig. 19). Finally, Fig. 20 shows the stress path for the same location as in Fig. 15. Because of the reduced friction angle, the yield surface is reached more easily during PP, at the end of CGI, and at the end of the UGS phases. As observed for the reference scenario, the elastic phases develop with an almost constant normal stress because of the selected ratios between the reservoir and overburden stiffness and between the pressure change in the reservoir and within the fault. The stress path and the yield bound are quite complex due to weakening. Moreover, due to the very small friction angle (\(\varphi=10^{\circ}\)), a large part of UGS is characterized by a stress state that develops either on the yield surface or very close to it.

Figure 17: Maximum sliding versus time for the investigated scenarios. On the left: reference case (scenario 1). On the right: using the slip-weakening constitutive law (scenario 2).

Figure 18: Effect of the Coulomb parameters on \(\chi_{\rm max}\) at increasing loading steps for each fault. As usual, the pairs F1-F2 and F4-F5 behave identically due to symmetry. The proposed scenario corresponds to \(\varphi_{d}=10^{\circ}\) and \(D_{c}=2\) mm.

## 5 Conclusions

The first underground gas storage site became operational as early as 1915 [90]. Since then, this technology has spread to all continents, reaching nowadays more than 600 facilities worldwide. Despite this widespread use, there are risks related to the possible reactivation of existing geological fractures. Although it is a "rare" event from a statistical viewpoint [28], it deserves proper attention due to its strong social and economic effects. Most recorded human-induced seismic events can be explained by a pressure increase that exceeds the initial value, causing the shear stress on the fault surface to reach the limit strength. However, there are recorded events that cannot be explained by this mechanism. They are the so-called "unexpected" seismic events, which occur when the pressure is in the range already experienced during the primary production.
The main scope of this work is to identify these phenomena, explain their basic processes, and define some safe operational bandwidth for UGS reservoir management with reference to the gas fields located in the Rotliegend formation, the Netherlands. To accomplish these aims, we use a computational modeling approach for the accurate and robust simulation of the mechanics of faulted porous rocks. The overall work is split into two parts. This paper deals with Part I, which concerns the development, implementation, and testing of the mathematical and numerical model used for the computational simulations. A one-way coupled strategy is adopted to deal with the poro-mechanical interaction. First, the set of governing relationships for frictional contact mechanics is introduced; then, the weak variational formulation is derived and discretized. We use Lagrange multipliers to prescribe the normal and frictional constraints on faults, a mixed-dimensional approach, and a mixed finite element discretization in which the displacement in the 3D porous body and the traction on the fault surfaces are the main unknowns.

Figure 19: Zoom of Fig. 18 over the cushion gas injection and UGS phases for faults F1 = F2 and F4 = F5.

Figure 20: Stress path \(\|\mathbf{t}_{T}\|_{2}\) vs \(-t_{N}\) for the F1 element highlighted in Fig. 15. The dashed and the continuous red lines are the yield bounds corresponding to the static condition (\(\varphi_{s}\)) and after the slip distance \(D_{c}\) is overcome, respectively. The numbers along the path denote the loading steps. The primary production (loading steps 1 to 10), the cushion gas injection (loading steps 11 to 12) and the underground gas storage (loading steps 12 to 12.5 – production – and 12.5 to 13 – injection) can be easily recognized.
In order to be consistent with classical finite volume discretizations for the multiphase flow, we focus on low-order hexahedral elements for the 3D continuum and a piecewise constant representation of the traction on the contact surfaces, thus requiring a proper stabilization to ensure the regularity of the resulting generalized saddle-point problem. An active-set algorithm and an exact Newton method are implemented for the solution of the overall nonlinear problem, while ad hoc preconditioning strategies are used to enable and accelerate the convergence of the inner linear Krylov solver. A discussion on the slip-weakening constitutive law for the fault frictional behavior is also provided. Finally, the model is applied to two realistic scenarios, carried out on a conceptual model built from an idealization of real UGS fields located in the formation of interest. The simulations make it possible to identify the main mechanisms potentially inducing a fault reactivation during UGS activities, even in "unexpected" situations where the current stress state appears to be less demanding than what the porous medium had already experienced in the past. The use of a slip-weakening rheological model for the frictional behavior can increase the chance of producing a fault reactivation during CGI and UGS activities. Part II of this work will focus on the model application in a real-world scenario, with an extensive sensitivity analysis of the factors that can most impact the reactivation chances. Further developments concern widening the feasible parameter ranges, e.g., testing different constitutive laws for the continuous medium, using different parameter values, and changing the fault positions and orientations. The analysis will be extended to other kinds of storage activities, such as CO\({}_{2}\) geological sequestration or underground H\({}_{2}\) and N\({}_{2}\) storage.
The final objective is to draw some guidelines to define a safe operational bandwidth for the management of storage reservoirs in the Netherlands, and, at the same time, build a methodological example that can be successfully extended to other real-world experiences. ## CRediT authorship contribution statement **Andrea Franceschini**: Methodology, Software, Writing - Original Draft, Visualization. **Claudia Zoccarato**: Conceptualization, Methodology, Formal analysis, Investigation. **Selena Baldan**: Investigation, Visualization. **Matteo Frigo**: Software, Investigation. **Massimiliano Ferronato**: Conceptualization, Methodology, Writing - Review and Editing, Supervision. **Carlo Janna**: Software. **Giovanni Isotton**: Software. **Pietro Teatini**: Conceptualization, Methodology, Formal analysis, Writing - Review and Editing, Supervision, Funding acquisition. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements This research was supported by the State Supervision of Mines (SodM), Ministry of Economic Affairs (The Netherlands), Project KEM01 "Safe Operational Bandwidth of Gas Storage Reservoirs" grant. Portions of this work were performed within the 2020 INdAM-GNCS project "Optimization and advanced linear algebra for PDE-governed problems". Computational resources were provided by University of Padova Strategic Research Infrastructure Grant 2017: "CAPRI: Calcolo ad Alte Prestazioni per la Ricerca e l'Innovazione".
2307.05072
Aggregating Credences into Beliefs: Agenda Conditions for Impossibility Results
Binarizing belief aggregation addresses how to rationally aggregate individual probabilistic beliefs into collective binary beliefs. Similar to the development of judgment aggregation theory, formulating axiomatic requirements, proving impossibility theorems, and identifying exact agenda conditions of impossibility theorems are natural and important research topics in binarizing belief aggregation. Building on our previous research on impossibility theorems, we use an agenda-theoretic approach to generalize the results and to determine the necessary and sufficient level of logical interconnection between the issues in an agenda for the impossibility theorems to arise. We demonstrate that (1) path-connectedness and even-negatability constitute the exact agenda condition for the oligarchy result stating that binarizing belief aggregation satisfying proposition-wise independence and deductive closure of collective beliefs yields the oligarchies under minor conditions; (2) negation-connectedness is the condition for the triviality result obtained by adding anonymity to the oligarchy result; and (3) blockedness is the condition for the impossibility result, which follows by adding completeness and consistency of collective beliefs. Moreover, we compare these novel findings with existing agenda-theoretic characterization theorems in judgment aggregation and belief binarization.
Minkyung Wang, Chisu Kim
2023-07-11T07:15:11Z
http://arxiv.org/abs/2307.05072v1
# Aggregating Credences into Beliefs: Agenda Conditions for Impossibility Results

###### Abstract

Binarizing belief aggregation addresses how to rationally aggregate individual probabilistic beliefs into collective binary beliefs. Similar to the development of judgment aggregation theory, formulating axiomatic requirements, proving impossibility theorems, and identifying exact agenda conditions of impossibility theorems are natural and important research topics in binarizing belief aggregation. Building on our previous research on impossibility theorems, we use an agenda-theoretic approach to generalize the results and to determine the necessary and sufficient level of logical interconnection between the issues in an agenda for the impossibility theorems to arise. We demonstrate that (1) path-connectedness and even-negatability constitute the exact agenda condition for the oligarchy result, which states that binarizing belief aggregation satisfying proposition-wise independence and deductive closure of collective beliefs yields the oligarchies under certain conditions; (2) negation-connectedness is the condition for the triviality result obtained by adding anonymity to the oligarchy result; and (3) blockedness is the condition for the impossibility result, which follows by adding completeness and consistency of collective beliefs. Moreover, we compare these novel findings with existing agenda-theoretic characterization theorems in judgment aggregation and belief binarization.

## 1 Introduction

The question of how to rationally aggregate individual beliefs into collective beliefs is important and ubiquitous in our society. In this regard, there has been abundant literature on collective decision theory, judgment aggregation, and probabilistic opinion pooling. One of the essential features of belief is that there are different types of beliefs.
For example, some beliefs may be represented by traditional "logical" languages--she believes that it is raining outside--while other types of beliefs might be modeled by "probability functions"--she believes with 90 percent certainty that it is raining outside. Logical languages are similar to our natural languages and are therefore efficient for communicating with human agents, despite the fact that they sometimes suffer from significant information reduction, as in the case of the Lottery paradox. In contrast, probabilistic beliefs hold a fair amount of information to deal with uncertain environments, although people usually do not reach that level of precision. Considering these pros and cons of different types of beliefs, it is not surprising that different types of beliefs may be required at different stages of belief aggregation procedures depending on situations. If objective chances of issues in question can be given, it is epistemically preferable to report individual opinions in terms of degrees of belief. If the conclusion of an epistemic collective decision guides action (e.g., a jury verdict), it is practically better to report the collective opinion by means of plain logic. Therefore, rational belief aggregation should be able to deal with different types of beliefs. One important topic in aggregating one type of belief into a different type of belief is aggregating probabilistic beliefs into collective binary beliefs (e.g., [12][17]). We call this subject matter "binarizing belief aggregation" [17]. We can observe these belief aggregation problems in expert panels, the scientific community, and political parties, whenever individuals' opinions can be encoded probabilistically, and the group's beliefs should be more decisive. 
Similar to the development of judgment aggregation theory (e.g., [7][16]), formulating axiomatic requirements, proving impossibility theorems, and identifying exact agenda conditions of impossibility theorems are natural and important research topics in binarizing belief aggregation. Building on our previous research on impossibility theorems, this paper uses an agenda-theoretic approach to determine which level of logical interconnection between the issues in an agenda is necessary and sufficient for the impossibility theorems to arise. Indeed, our previous paper assumed the agenda to be an algebra, which is the most typical when dealing with probabilistic beliefs. However, in practice, the agenda being an algebra might be quite demanding because we might not be interested in, for example, the conjunction of two propositions when making a collective decision on the two propositions. Besides the literature on judgment aggregation, agenda-theoretic approaches can be found in other fields as well. In probabilistic opinion pooling, general agendas were investigated to characterize linear pooling (e.g., [3][4]). In the belief binarization problem, general agendas were studied to characterize impossibility theorems (e.g., [5][6]). In this study, we demonstrate that (1) path-connectedness and even-negatability constitute the exact agenda condition for the oligarchy result, which states that binarizing belief aggregation satisfying proposition-wise independence and deductive closure of collective beliefs yields the oligarchies under certain conditions; (2) negation-connectedness is the condition for the triviality result obtained by adding anonymity to the oligarchy result; and (3) blockedness is the condition for the impossibility result, which follows by adding completeness and consistency of collective beliefs. Moreover, we compare these novel findings with existing agenda-theoretic characterization theorems in judgment aggregation and belief binarization. 
All proofs of lemmas and theorems are provided in the full paper.

## 2 Binarizing Belief Aggregation and the Impossibility Results

We begin by introducing notations and definitions we will use throughout this paper. Let \(W\) be a finite non-empty set of possible worlds. An _agenda_ \(\mathcal{A}\) is a non-empty set of subsets of \(W\) that is closed under complement. Let \(N:=\{1,...,n\}\) (\(n\geq 2\)) be the set of individuals. For each \(i\in N\), an individual \(i\)'s _probabilistic belief_ \(P_{i}\) is a function extendable to a probability function on the smallest algebra that includes \(\mathcal{A}\). We denote by \(\vec{P}:=(P_{1},...,P_{n})=(P_{i})_{i\in N}\) a profile of \(n\) individuals' probabilistic beliefs. Binarizing belief aggregation deals with individuals' probabilistic beliefs and the group's binary beliefs. Binary beliefs are represented by a function \(Bel:\mathcal{A}\rightarrow\{0,1\}\). Sometimes, we abuse the notation and denote by \(Bel\) the _belief set_ \(\{A\in\mathcal{A}|\;Bel(A)=1\}\), and \(BelA\) is a shorthand for \(A\in Bel\) or \(Bel(A)=1\). A binarizing aggregator (BA) \(F\) is a function that takes a profile \(\vec{P}\) of \(n\) probabilistic beliefs in a given domain and returns a binary belief \(F(\vec{P})\). Now, let us define the axiomatic requirements on BAs that are needed to formulate our impossibility results. First, we need the following rationality requirements on the domain and codomain of a BA.

\(\bullet\) Universal Domain (UD): the domain of \(F\) is the set of all profiles \(\vec{P}\) of \(n\) probabilistic beliefs

\(\bullet\) Collective Deductive Closure (CDC)/Consistency (CCS)/Completeness (CCP): for all \(\vec{P}\) in the domain, the resulting collective belief \(F(\vec{P})\) is deductively closed/consistent/complete, respectively

Note that a binary belief \(Bel\) is deductively closed iff it holds that, if \(Bel\vDash A\) (i.e., \(\bigcap Bel\subseteq A\)), then \(BelA\) for all \(A\in\mathcal{A}\).
Moreover, \(Bel\) is consistent if \(Bel\not\models\emptyset\), and \(Bel\) is complete if \(BelA\) or \(Bel\overline{A}\) for all \(A\in\mathcal{A}\), where \(\overline{A}\) is the complement of \(A\). Second, we list different rationality requirements on BAs themselves.

\(\bullet\) Certainty Preservation (CP)/Zero Preservation (ZP): for all \(A\in\mathcal{A}\), if \(\vec{P}(A)(:=(P_{1}(A),\cdots,P_{n}(A)))=(1,...,1)/\vec{P}(A)=(0,...,0)\), then \(F(\vec{P})(A)=1/F(\vec{P})(A)=0\), respectively, for all \(\vec{P}\) in the domain of \(F\).

\(\bullet\) Anonymity (AN): \(F((P_{\pi(i)})_{i\in N})=F\big{(}(P_{i})_{i\in N}\big{)}\) for all \(\vec{P}\) in the domain of \(F\) and all permutations \(\pi\) on \(N\).

\(\bullet\) Independence (IND): for all \(A\in\mathcal{A}\), there exists a function \(G_{A}\) such that \(F(\vec{P})(A)=G_{A}(\vec{P}(A))\) for all \(\vec{P}\) in the domain of \(F\).

\(\bullet\) Systematicity (SYS): there exists a function \(G\) such that \(F(\vec{P})(A)=G(\vec{P}(A))\) for all \(A\in\mathcal{A}\) and for all \(\vec{P}\) in the domain of \(F\).

Our previous paper [17] proved the following theorems under the assumption that \(\mathcal{A}\) is an algebra with at least three elements besides the empty set and \(W\), which we call a non-trivial algebra. We aim to relax this assumption in this study.

1. (The Oligarchy Result) The only BAs satisfying UD, CP, ZP, IND, and CDC are the following oligarchies: there is a non-empty subset \(M\) of \(N\) such that \[F(\vec{P})(A)=\left\{\begin{array}{ll}1&\mbox{if $P_{i}(A)=1$ for all $i\in M$}\\ 0&\mbox{otherwise}\end{array}\right.\] for all \(A\in\mathcal{A}\).

2. (The Triviality Result) The only BA satisfying UD, CP, ZP, IND, CDC, and AN is the oligarchy with \(M=N\), which we call the trivial rule.

3. (The Impossibility Result) There is no BA satisfying UD, CP, IND, CCP, and CCS.
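Stated operationally, the oligarchy rule in result 1 believes exactly the issues to which every oligarch assigns probability one. A minimal Python sketch (with a hypothetical two-issue agenda and made-up probability values, purely for illustration):

```python
def oligarchy(profile, M):
    """Oligarchic binarizing aggregator.

    profile : list of probabilistic beliefs, one {issue: probability}
              dict per individual in N = {0, ..., n-1}
    M       : non-empty set of oligarchs, a subset of range(len(profile))
    Returns the collective binary belief: issue A is believed (1) iff
    every oligarch assigns probability 1 to A, and 0 otherwise.
    """
    issues = profile[0].keys()
    return {A: int(all(profile[i][A] == 1.0 for i in M)) for A in issues}

# Three individuals, two issues "A" and "B" (hypothetical values)
P = [{"A": 1.0, "B": 0.9},
     {"A": 1.0, "B": 1.0},
     {"A": 0.7, "B": 1.0}]

Bel = oligarchy(P, M={0, 1})
# "A" is believed (both oligarchs are certain of it); "B" is not,
# since oligarch 0 only assigns it probability 0.9.
```

With \(M=N\) this reduces to the trivial rule of result 2, which believes only the unanimously certain issues.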
## 3 The Agenda Condition for the Oligarchy Result

This section presents and proves our first main result: the agenda condition for the oligarchy result. The following two agenda conditions have been extensively studied, as they characterize the most famous impossibility agendas in judgment aggregation.

**Definition 1** (Path-connected and Even-negatable Agenda).: _(1) For any \(A,B\in\mathcal{A}\), we say that \(A\) conditionally entails \(B\) (\(A\vDash^{*}B\)) if there is a subset \(\mathcal{Y}\subseteq\mathcal{A}\) that is consistent with \(A\) and \(\overline{B}\)1 such that \(\{A\}\cup\mathcal{Y}\vDash B\) (i.e., \(\bigcap(\{A\}\cup\mathcal{Y})\subseteq B\), and we write this as \(A\vDash^{*}_{\mathcal{Y}}B\)). An agenda \(\mathcal{A}\) is path-connected (PC) if \(A\vDash^{**}B\) for all contingent issues \(A,B\in\mathcal{A}\), where \(\vDash^{**}\) is the transitive closure of \(\vDash^{*}\). (2) An agenda \(\mathcal{A}\) is even-negatable (EN) iff there is a minimally inconsistent set \(\mathcal{Y}\subseteq\mathcal{A}\) such that \(\mathcal{Y}_{\neg\mathcal{Z}}:=(\mathcal{Y}\setminus\mathcal{Z})\cup\{\overline{A}\,|\,A\in\mathcal{Z}\}\) is consistent for some subset \(\mathcal{Z}\subseteq\mathcal{Y}\) of even size._

Footnote 1: That is, \(\mathcal{Y}\cup\{A\}\not\models\emptyset\) and \(\mathcal{Y}\cup\{\overline{B}\}\not\models\emptyset\)

Path-connectedness means that every two issues are connected by a path, i.e., a chain of conditional entailment relations. Regarding the conditional entailment relation, let us mention a useful fact: if \(A\vDash^{*}_{\mathcal{Y}}B\), it also holds that \(\overline{B}\vDash^{*}_{\mathcal{Y}}\overline{A}\), and thus if \(A\vDash^{**}B\), then \(\overline{B}\vDash^{**}\overline{A}\). And even-negatability says that a minimally inconsistent subset of the agenda can be made consistent by negating some even number of its elements.
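Since \(W\) is finite, the notions behind Definition 1(2) can be checked by plain enumeration. The sketch below (illustrative only, with issues encoded as frozensets of possible worlds) tests minimal inconsistency and even-negatability on small agendas:

```python
from itertools import combinations

def consistent(family, W):
    """A family of issues is consistent iff some world lies in all of them."""
    worlds = set(W)
    for A in family:
        worlds &= A
    return bool(worlds)

def minimally_inconsistent(Y, W):
    """Y is inconsistent, and dropping any single issue restores
    consistency (which already implies all proper subsets are consistent)."""
    return (not consistent(Y, W)
            and all(consistent([B for B in Y if B is not A], W) for A in Y))

def even_negatable(agenda, W):
    """Definition 1(2): some minimally inconsistent Y becomes consistent
    after replacing an even-sized subset Z of Y by complements."""
    for r in range(2, len(agenda) + 1):
        for Y in combinations(agenda, r):
            if not minimally_inconsistent(list(Y), W):
                continue
            for k in range(2, len(Y) + 1, 2):
                for Z in combinations(Y, k):
                    negated = [frozenset(W) - A if A in Z else A for A in Y]
                    if consistent(negated, W):
                        return True
    return False

W = {1, 2, 3}
# All contingent subsets of W: even-negatable
algebra = [frozenset(s) for s in ({1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3})]
# A single complement pair {A, not-A}: negating both just swaps them, not EN
pair = [frozenset({1}), frozenset({2, 3})]
```

Brute force is exponential in the agenda size, but that is harmless for the toy agendas one uses to probe these conditions.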
It is well-known that an agenda is even-negatable unless the propositions in the agenda are composed only with negation and biconditional from some logically independent propositions. Note that these two conditions are weaker than the agenda being a non-trivial algebra, which is the assumption on the agenda in [17]. **Lemma 1**.: _Every non-trivial algebra is path-connected and even-negatable._ From now on, we add one more assumption on \(\mathcal{A}\) that \(\emptyset\notin\mathcal{A}\)(and thereby \(W\notin\mathcal{A}\)).2 Thus, our agenda \(\mathcal{A}\) is a complement-closed finite non-empty set of some contingent subsets of the underlying set \(W\). The following lemma shows that path-connectedness is sufficient to obtain what is called the contagion lemma. Footnote 2: In the following, especially in Theorem 2 and Theorem 3, we will use some results of Nehring & Puppe (2010), where the agenda consists of contingent issues. To describe our proof more simply, we adopt that assumption. **Lemma 2** (Agenda Condition for the Contagion Lemma).: _Let \(\mathcal{A}\) be path-connected. If a BA \(F\) with UD satisfies CDC, CP, and IND, then it satisfies SYS._ This lemma parallels the one in generalized opinion pooling of Dietrich & List (2017a): path-connectedness characterizes that if generalized OP satisfies CP and IND, then it satisfies SYS. In our lemma as well, its converse--if \(\mathcal{A}\) is not path-connected, then there is a BA F on \(\mathcal{A}\) satisfying CDC, CP, and IND but not SYS--also holds. The counterexample will be indicated in the proof of Theorem 1. The following definition and lemma will be needed to prove our succeeding main theorem. 
**Definition 2** (Non-simple Agenda and Pair-negatable Agenda).: _(1) An agenda \(\mathcal{A}\) is non-simple(NS) iff there is a minimally inconsistent subset \(\mathcal{Y}\subseteq\mathcal{A}\) with \(|\mathcal{Y}|\geq 3\)._ _(2) An agenda \(\mathcal{A}\) is pair-negatable iff there is a minimally inconsistent set \(\mathcal{Y}\subseteq\mathcal{A}\) such that \(\mathcal{Y}_{\neg\mathcal{Z}}\) is consistent for some subset \(\mathcal{Z}\subseteq\mathcal{Y}\) with \(|\mathcal{Z}|=2\)._ Non-simple agendas can be used as a criterion for determining whether a given agenda has minimal complexity. Pair-negatable agendas are a special case of even-negatable agendas. The following lemma shows that a pair-negatable agenda is sufficient to be an even-negatable agenda, and a path-connected agenda already has a fairly complex structure. **Lemma 3**.: _(1) An agenda \(\mathcal{A}\) is even-negatable iff \(\mathcal{A}\) is pair-negatable._ _(2) If an agenda \(\mathcal{A}\) is path-connected, then it is non-simple._ Now we prove that the agenda being path-connected and even-negatable is the sufficient and necessary condition for the oligarchy result. **Theorem 1** (Agenda Condition for the Oligarchy Result).: _Let \(|N|\geq 3\). An agenda \(\mathcal{A}\) is path-connected and even-negatable iff the only BAs on \(\mathcal{A}\) satisfying UD, ZP, CP, IND, and CDC are the oligarchies._ The only-if direction of the theorem generalizes the oligarchy result and shows that even if an agenda satisfies a weaker condition--path-connectedness and even-negatability--than a non-trivial algebra, the oligarchy result holds. If we examine the proof of the oligarchy result in [16] in detail, we can observe that the agenda condition was used solely to establish the following two facts: (Fact 1) if \(\vec{a}\leq\vec{b}\) and if \(G(\vec{a})=1\), then \(G(\vec{b})=1\) where \(G\) is a function satisfying \(F(\vec{P})(A)=G(\vec{P}(A))\). 
(Fact 2) if \(\vec{a}+\vec{b}-\vec{1}\geq\vec{0}\) and if \(G(\vec{a})=1\) and \(G(\vec{b})=1\), then \(G(\vec{a}+\vec{b}-\vec{1})=1\). Therefore, to prove the only-if direction, it is enough to derive (Fact 1) from even-negatability and (Fact 2) from path-connectedness. The agenda conditions are only relevant to (Fact 1) and (Fact 2), and once we see that they hold, we can apply the proof of the oligarchy result in [16]. Our proof also reveals that if we assume the stronger property of SYS instead of IND, then Lemma 1 is not needed, and non-simplicity (NS) is sufficient to obtain the oligarchy result. This observation indicates that stronger properties of a BA lead to weaker agenda conditions for achieving the oligarchy result. To provide additional agenda conditions for the oligarchy result, let us introduce the concept of monotonicity (MON) for a BA as follows:

(MON) If \(\vec{P}(A)\leq\vec{P}^{\prime}(A)\) and \(F(\vec{P})(A)=1\), then \(F(\vec{P}^{\prime})(A)=1\), where \(\leq\) is applied component-wise.

If we assume MON, we can bypass the need to prove (Fact 1), thereby eliminating the requirement for the agenda to be even-negatable (EN). This is because (Fact 1) is already implied by SYS and MON. The following table illustrates the agenda conditions that are sufficient to achieve the oligarchy result based on different properties of a BA:

|             | IND    | SYS    |
|-------------|--------|--------|
| without MON | PC, EN | NS, EN |
| with MON    | PC     | NS     |

It is noteworthy that the agenda condition required for our oligarchy result is the same as the one for the dictatorship and oligarchy results in judgment aggregation (e.g., [7][2]).
In our proof of the if-direction, we extend their counterexamples to our domain in a manner that satisfies UD, ZP, CP, IND, CDC, and CCS: the counterexample for a non-path-connected agenda is a minimal extension satisfying MON, as we do not exclude even-negatability; the one for a non-even-negatable agenda is an extension satisfying not MON but SYS, as we do not exclude path-connectedness. So the proof follows a similar structure to those in judgment aggregation, but the ways of extending them to construct counterexamples are not trivial--particularly the counterexample for not even-negatable agendas--and so our proof includes novel ideas that are needed due to the difference between binary and probabilistic beliefs.

## 4 The Agenda Condition for the Triviality Result

This section presents and proves our second main result: the agenda condition for the triviality result. Stronger properties of a BA yield weaker agenda conditions. Thus, one might ask whether the agenda condition for the oligarchy result can be weakened if we add AN. We will demonstrate that the agendas that yield the triviality result can be characterized by negation-connectedness, which is also the agenda condition for an impossibility result of belief binarization methods, as shown in [6].

**Definition 3** (Negation-connected Agenda).: _An agenda \(\mathcal{A}\) is negation-connected (NC) iff for every contingent issue \(A\in\mathcal{A}\) it holds that \(A\models^{**}\overline{A}\)._

So negation-connectedness means that every issue has a path to its complement. According to Proposition 1 in Dietrich & List (2021), the agenda being negation-connected is equivalent to the agenda being partitioned into subagendas, each of which is path-connected, where a subagenda is a non-empty subset of the agenda that is closed under complementation. The following lemma will be needed for the proof of the first part of the succeeding theorem.
Part (1) allows us to consider the stronger condition of path-connectedness, rather than negation-connectedness, when proving the triviality result. Part (2) will be used when the agenda is path-connected and not even-negatable.

**Lemma 4**.: _(1) If the triviality result holds--i.e., the only BA on \(\mathcal{A}\) satisfying UD, CDC, ZP, CP, IND, and AN is the trivial one--for any path-connected agenda \(\mathcal{A}\), then the same holds for any negation-connected agenda._ _(2) If an agenda \(\mathcal{A}\) is not even-negatable, then for any minimally inconsistent subset \(\mathcal{Y}\subseteq\mathcal{A}\) and any even-sized subset \(\mathcal{Z}\subseteq\mathcal{Y}\) it holds that \(\mathcal{Y}_{\neg\mathcal{Z}}\) is also minimally inconsistent._

The following lemma will be needed for the proof of the second part of the succeeding theorem. This lemma looks technical, but it is closely related to the notion of median point in the next section. Indeed, if \(\mathcal{H}_{0}\) is the empty set, then \(\bigcap\mathcal{M}\) is the set of all median points, where \(\mathcal{H}_{0}\) and \(\mathcal{M}\) are defined in the following lemma.

**Lemma 5**.: _Let \(\mathcal{H}_{0}\) be the set \(\{A\in\mathcal{A}|\ A\vDash^{**}\overline{A}\text{ and }\overline{A}\vDash^{**}A\}\). If \(\mathcal{A}\) is not negation-connected, then there is a non-empty subset \(\mathcal{M}\subseteq\mathcal{A}\setminus\mathcal{H}_{0}\) such that for any minimally inconsistent set \(\mathcal{Y}\subseteq\mathcal{A}\) it holds that \(|\mathcal{Y}\cap\mathcal{M}|\leq 1\). Furthermore, for any minimally inconsistent set \(\mathcal{Y}\subseteq\mathcal{A}\) intersecting \(\mathcal{H}_{0}\) it holds that \(|\mathcal{Y}\cap\mathcal{M}|=0\).
In addition, for \(B\in\mathcal{A}\setminus\mathcal{H}_{0}\), it holds that \(B\in\mathcal{M}\) iff \(\overline{B}\notin\mathcal{M}\)._

Now let us prove the theorem that negation-connectedness is the sufficient and necessary condition for the triviality result.

**Theorem 2** (Agenda Condition for the Triviality Result).: _An agenda \(\mathcal{A}\) is negation-connected iff the only BA on \(\mathcal{A}\) satisfying UD, ZP, CP, CDC, IND, and AN is the trivial one._

The only-if direction of the theorem shows that the triviality result holds if the agenda is negation-connected, which is a generalization of the triviality result. The proof suggests further that, if we assume SYS, then non-simplicity (NS) becomes the sufficient condition to obtain the triviality result. In this case, neither EN nor MON is needed, unlike in the case of Theorem 1, as illustrated in the following table:

|                     | IND | SYS |
|---------------------|-----|-----|
| with or without MON | NC  | NS  |

Compared to the case of the oligarchy result, when we add AN, we obtain the triviality result even under a weaker agenda condition: (i) instead of requiring path-connectedness (PC), negation-connectedness (NC) is sufficient, and (ii) the triviality result holds even when the agenda is not even-negatable (EN). The difference mentioned in (i) does not play a role in finding the sufficient condition according to Lemma 4. However, the necessary condition is not path-connectedness but negation-connectedness. In cases where the agenda is PC and EN, we can apply Theorem 1 since the oligarchy satisfying AN is the trivial one (i.e., the oligarchy with \(M=N\)). Thus, we only need to focus on the cases where the agenda is PC and not EN. When the agenda is assumed to be not EN, we encounter the following difficulty: to show the triviality result, we used (Fact 1), which could be proved if the agenda was assumed to be EN.
Our strategy here is to prove a weaker claim than (Fact 1): (Fact \(1^{\prime\prime}\)) If \(G(\vec{a})=1\), then \(G(\vec{c})=1\) for all \(\vec{c}\geq|2\vec{a}-\overline{1}|\). The new claim (Fact \(1^{\prime\prime}\)) is weaker than (Fact 1), as it only guarantees that vectors greater than \(|2\vec{a}-\overline{1}|\) are mapped to 1, rather than all vectors greater than \(\vec{a}\). One might ask whether we can apply the proof presented in Dietrich & List (2021) to our theorem, or vice versa. However, there are differences between the two proofs. On the one hand, we cannot use their proof because, while they deal with probabilistic beliefs, we deal with profiles of probabilistic beliefs: in particular, for negation-connected agendas in our framework, we can only show (Fact \(1^{\prime\prime}\)) instead of (Fact 1). On the other hand, since we have not relied on the assumption that \(|N|\geq 2\), our proof can be applied to the context of belief binarization, where \(|N|=1\), and so we can recover their results. The if-direction gives a counterexample to the triviality result when an agenda is not negation-connected, which implies that the agenda is not path-connected. The counterexample presented in Theorem 1 is not applicable in this case because it does not satisfy AN. Moreover, there would be no counterexample if we only assumed the agenda to be not path-connected. This is why we need to weaken path-connectedness to negation-connectedness, even though the two conditions play the same role as the sufficient agenda condition for the triviality result. Our counterexample for a non-negation-connected agenda is an extension of the belief binarization rule proposed in Dietrich & List (2021). We extend the rule while maintaining MON, but not minimally, which differs from the way the rule is extended in Theorem 1. 
## 5 The Agenda Condition for the Impossibility Result Now we show that the agendas for the impossibility result can be characterized as blocked agendas. **Definition 4** (Blocked Agenda).: _An agenda \(\mathcal{A}\) is blocked iff there is an issue \(A\in\mathcal{A}\) such that \(A\vDash^{**}\overline{A}\) and \(\overline{A}\vDash^{**}A\)._ So a blocked agenda contains an issue that has a path to its complement. Recall that \(\mathcal{H}_{0}\) is defined as the set \(\{A\in\mathcal{A}|\ A\vDash^{**}\overline{A}\text{ and }\overline{A}\vDash^{**}A\}\). Then \(\mathcal{A}\) is negation-connected iff \(\mathcal{H}_{0}=\mathcal{A}\), and \(\mathcal{A}\) is blocked iff \(\mathcal{H}_{0}\neq\emptyset\). Hence, if \(\mathcal{A}\) is negation-connected, then it is blocked. The following definition and lemma will be needed for the succeeding theorem. **Definition 5** (Median Point).: _Let \(\mathcal{A}\) be an agenda on the set \(W\) of possible worlds. A possible world \(m\in W\) is a median point iff for any minimally inconsistent subset \(\mathcal{Y}\subseteq\mathcal{A}\), it holds that \(|\{A\in\mathcal{Y}|\ m\in A\}|\leq 1\)._ So a median point is a possible world that is contained in at most one issue of every minimally inconsistent set. It is well known in judgment aggregation that if a median point is guaranteed to exist, then we can easily construct an anonymous, complete, and consistent judgment aggregator: the median point serves as a default collective judgment on each issue, overridden only when everybody judges the opposite of what holds at the median point [15]. The following lemma states that the agenda not being blocked is the necessary and sufficient condition for the existence of a median point. **Lemma 6**.: _An agenda \(\mathcal{A}\) is not blocked iff there is a median point._ Now let us formulate and prove our last theorem. 
**Theorem 3** (Agenda Condition for the Impossibility Result).: _An agenda \(\mathcal{A}\) is blocked iff there is no BA on \(\mathcal{A}\) satisfying UD, CP, IND, CCP, and CCS._ Indeed, CCS and CCP together are stronger assumptions than CDC. As a result, we obtain the impossibility result more easily, without assuming AN and non-dictatorship, and with a more relaxed agenda condition. The proof demonstrates that by adding SYS, the impossibility result still holds even without CP and even when no agenda condition is assumed--e.g., even when \(\mathcal{A}=\{A,\overline{A}\}\). The blocked agenda is also the agenda condition for the impossibility results on judgment aggregation with AN in [16] and belief binarization in [2]. Our counterexample for a non-blocked agenda is an extension of the counterexample in Dietrich & List (2018). It is an extension that satisfies MON, but not minimally so. This is the same as the extension in Theorem 2, but different from the one in Theorem 1. Note that the median point \(m\) in the proof of this theorem plays the same role as \(\mathcal{M}\) in the proof of Theorem 2. The only difference is that \(m\) is a possible world whereas \(\mathcal{M}\) is a set of issues. This difference arises from assuming CDC versus assuming CCS and CCP. ## 6 Discussion All the results in this paper are stated in Table 1: (1) path-connectedness and even-negatability constitute the exact agenda condition for the oligarchy result; (2) negation-connectedness is the condition for the triviality result; and (3) blockedness is the condition for the impossibility result. These new findings can be compared to the existing characterization theorems in judgment aggregation and belief binarization. Regarding (1), it has the same agenda condition as (1\({}^{\prime}\)) [2] and (3\({}^{\prime}\)) [7] in judgment aggregation. As for (2), it is similar to (2\({}^{\prime\prime}\)) [6] in belief binarization, the difference being the use of ZP instead of CCS for (2\({}^{\prime\prime}\)). 
Since applying our proofs can weaken CCS to ZP, the agenda condition for (2\({}^{\prime}\)), which has not been discussed in the literature, is also negation-connectedness, because an anonymous and independent judgment aggregator can be viewed as a belief binarization function. As for (3), it is similar to (4\({}^{\prime}\)) [15] in judgment aggregation and (4\({}^{\prime\prime}\)) [4] in belief binarization. Let us mention some further research topics. One might think that the rationality norms for collective binary beliefs could be weakened, since adhering to deductive closure might be too demanding for group agents. Instead, we could focus on requiring group beliefs to respect consistency or pairwise consistency. By exploring these weaker norms, we can investigate stronger impossibility results. Furthermore, let us discuss how to obtain possibility results. For this purpose, it is advantageous that binarizing belief aggregation provides a framework that generalizes both judgment aggregation and belief binarization. As in judgment aggregation, we can employ and study premise-based binarizing belief aggregation methods. Alternatively, we can combine an individual belief binarization procedure with judgment aggregation. If we regard linear or geometric pooling methods as natural given individual credences, we can apply belief binarization methods to the pooled group credence. Of course, we can also come up with new procedures that cannot be reduced to existing methods. Ultimately, we should keep in mind that binarizing belief aggregation is an _epistemic_ collective decision problem. Therefore, we should be concerned with which methods accurately track the truth. One natural approach would be to investigate belief binarization methods that minimize the expected distance from the truth in light of the group's pooled credence. 
In conclusion, binarizing belief aggregation opens a new research area in which various procedures of belief aggregation, different studies on the relation between credences and beliefs, and epistemic decision theory can be combined and explored. Acknowledgements We would like to express our sincere gratitude to Hannes Leitgeb and Christian List for their invaluable feedback and profound insights. The research conducted by the first author benefited from the generous support of the German Academic Scholarship Foundation and the Alexander von Humboldt Foundation. \begin{table} \begin{tabular}{|l|l|} \hline There is no BA satisfying... & Agenda Condition \\ \hline (1) UD, ZP, CP and IND + CDC + Non-oligarchy & path-connected, even-negatable \\ (2) UD, ZP, CP and IND + CDC + AN + Non-triviality & negation-connected \\ (3) UD, CP and IND + CCS and CCP & blocked \\ \hline \hline There is no judgment aggregator satisfying... & Agenda Condition \\ \hline (1\({}^{\prime}\)) UD, ZP, CP and IND + CDC + Non-oligarchy & path-connected, even-negatable \\ (2\({}^{\prime}\)) UD, ZP, CP and IND + CDC + AN + Non-triviality & negation-connected \\ (3\({}^{\prime}\)) UD, CP and IND + CCS and CCP + non-dictatorship & path-connected, even-negatable \\ (4\({}^{\prime}\)) UD, CP and IND + CCS and CCP + AN & blocked \\ \hline \hline There is no belief binarization rule satisfying... & Agenda Condition \\ \hline (2\({}^{\prime\prime}\)) UD, CCS, CP and IND + CDC + Non-triviality & negation-connected \\ (4\({}^{\prime\prime}\)) UD, CCS, CP and IND + CCP & blocked \\ \hline \end{tabular} \end{table} Table 1: Classification of Agenda Conditions for Impossibility Results
2301.08800
In-situ Water quality monitoring in Oil and Gas operations
From agriculture to mining, to energy, surface water quality monitoring is an essential task. As oil and gas operators work to reduce the consumption of freshwater, it is increasingly important to actively manage fresh and non-fresh water resources over the long term. For large-scale monitoring, manual sampling at many sites has become too time-consuming and unsustainable, given the sheer number of dispersed ponds, small lakes, playas, and wetlands over a large area. Therefore, satellite-based environmental monitoring presents great potential. Many existing satellite-based monitoring studies utilize index-based methods to monitor large water bodies such as rivers and oceans. However, these existing methods fail when monitoring small ponds: the reflectance signal received from small water bodies is too weak to detect. To address this challenge, we propose a new Water Quality Enhanced Index (WQEI) Model, which is designed to enable users to determine contamination levels in water bodies with weak reflectance patterns. Our results show that 1) WQEI is a good indicator of water turbidity, validated with 1200 water samples measured in the laboratory, and 2) by applying our method to commonly available satellite data (e.g. LandSat8), one can achieve high accuracy water quality monitoring efficiently in large regions. This provides a tool for operators to optimize the quality of water stored within surface storage ponds and increases the readiness and availability of non-fresh water.
Satish Kumar, Rui Kou, Henry Hill, Jake Lempges, Eric Qian, Vikram Jayaram
2023-01-20T20:56:52Z
http://arxiv.org/abs/2301.08800v2
# In-situ Water quality monitoring in Oil and Gas operations ###### Abstract From agriculture to mining, to energy, surface water quality monitoring is an essential task. As oil and gas operators work to reduce the consumption of freshwater, it is increasingly important to actively manage fresh and non-fresh water resources over the long term. For large-scale monitoring, manual sampling at many sites has become too time-consuming and unsustainable, given the sheer number of dispersed ponds, small lakes, playas, and wetlands over a large area. Therefore, satellite-based environmental monitoring presents great potential. Many existing satellite-based monitoring studies utilize index-based methods to monitor large water bodies such as rivers and oceans. However, these existing methods fail when monitoring small ponds: the reflectance signal received from small water bodies is too weak to detect. To address this challenge, we propose a new Water Quality Enhanced Index (WQEI) Model, which is designed to enable users to determine contamination levels in water bodies with weak reflectance patterns. Our results show that 1) WQEI is a good indicator of water turbidity, validated with 1200 water samples measured in the laboratory, and 2) by applying our method to commonly available satellite data (e.g. LandSat8), one can achieve high accuracy water quality monitoring efficiently in large regions. This provides a tool for operators to optimize the quality of water stored within surface storage ponds and increases the readiness and availability of non-fresh water. The code-base is publicly available at Github: [https://github.com/satish1901/In-situ-Water-quality-monitoring-in-Oil-and-Gas-operations](https://github.com/satish1901/In-situ-Water-quality-monitoring-in-Oil-and-Gas-operations) ## 1 Introduction Water is one of the most abundant and most essential resources on earth. 
Not only does water sustain all life on Earth, but it is also requisite to industrial processes such as fabrication, washing, cooling, and fuel generation [1, 2]. Particularly in the oilfield, the average frac job uses 4 million gallons of water, and availability becomes a key logistical issue in high-activity basins such as the Permian [3]. The industry addresses this necessity with efficient storage of water, particularly in networks of open-body ponds down to the frac site [4]. However, even this raises new challenges, such as monitoring, sourcing, transportation, and treatment. The monitoring and maintenance of these water bodies is crucial. Monitoring is usually done manually: water samples are collected from different points in the pond (as impurities can vary spatially) and sent off to a lab to be tested. Tests include checking for types of impurities, sediment concentration, algae growth, turbidity of the water, and chloride concentration changes over time. The amount of manual field work needed to collect samples from multiple points and multiple water bodies across a large distance makes this approach almost infeasible. Due to the logistical challenges testing presents, sampling and testing are conducted at extremely infrequent intervals. To make matters worse, tests done on water samples do not produce in-situ results. To overcome these major limitations, we propose our novel Water Quality Enhanced Index (\(WQEI\)) to detect the turbidity and salinity in water ponds, open tanks, water storage ponds for irrigation, playas, etc., using multispectral satellite images [5, 6]. Remote sensing has been used in multiple domains [7, 8, 9], often to create spectral indexes. Spectral indexes are combinations of spectral reflectance from two or more wavelengths that indicate the relative abundance of features of interest [10, 11, 12]. Vegetation indices are the most popular of these, derived using the reflectance properties of vegetation [13, 14, 12]. 
The most popular vegetation index is NDVI (Normalized Difference Vegetation Index) [13, 14]. The question of water quality detection using satellite imagery has already seen much scientific inquiry [17, 18, 19, 20, 21, 22, 23, 24, 25]. However, most of this work detects water quality in large water bodies with substantial depth (\(\geq\sim 100\) ft) and size (\(\geq 100\) acres). The reflectance signals recorded in such cases have a high Signal-to-Noise Ratio (SNR). There is minimal interference from confounding elements such as the bottom of the water body, and few pixels have low SNR. Small water bodies present unique challenges and prevent the direct translation of existing methods to monitoring small ponds, shallow lakes, storage tanks, playas, etc. The small size (\(\sim 2\) acres) causes the pixels representing surface reflectance in the satellite image to be very noisy due to interference from the surroundings (soil, rocks, bushes, and other potentially confusing elements on the ground). Shallowness (\(\sim 30\) ft) of the water body also adds noise to the reflectance values: the earth below has a different surface reflectance than the water above. In this study, we focus on the following areas: * We propose a novel Water Quality Enhanced Index (\(WQEI\)) for detecting the turbidity and salinity in very small and shallow water tanks, ponds, and playas using satellite images. * The proposed index improves the SNR of the weak signal received from the ground. It is robust to changing conditions in the surroundings, e.g., changes in vegetation type, soil salinity, etc. * We verified the output of our estimation index against lab tests done on 49 water ponds used for hydraulic fracturing purposes. This data was collected over a 2-year time period for each pond at variable frequency, totaling \(\sim 1200\) samples. * We tested our index on multi-spectral data from two different sources of satellite imagery: a commercial satellite (private) and LandSat8 (public). 
* We developed an end-to-end pipeline that pulls data from LandSat8 periodically every week. This allows us to monitor any water body anywhere on earth. * We also developed and trained a neural network to check the effectiveness of machine learning models for such problems. Overall, we developed a cost-effective and efficient technique for monitoring the turbidity and salinity of small water bodies. These small sources of water can thus become usable, preventing wildlife from consuming polluted water and aiding the restoration of natural playas and a healthier ecosystem. ## 2 Related Works Our method draws from numerous areas of remote sensing, the utility of spectral signatures, and information capture from the landscape. In this section, we discuss the key relevant areas of existing works that motivated our work and contributions. **Deterministic and Semi-empirical methods:** There have been many studies on detecting different types of impurities in water [17, 18, 19, 20, 21, 22, 23, 24, 25, 26]; these studies are often designed for a specific water body in a specific region. For example, [17] took the data from a 1978 Malaysian water monitoring program and developed estimation methods for suspended matter [27], phytoplankton growth [28], and turbidity [29], using multi-spectral data from the LandSat5 satellite. That monitoring program had created data for only two similar rivers within Malaysia, which greatly limits its generalizability. [24] used multi-spectral data from LandSat5 (TM) to analyze the presence of suspended sediments and chlorophyll in Lake Chicot, Arkansas. [23] took a different approach and proposed the importance of detecting landscape features while assessing the impurities in a water body: [23] analysed the relationship between landscape patterns and ecological processes. For example, environmental attributes and processes like water quality, nutrient flow, and population dynamics are correlated with landscape spatial patterns using different indexes. [26] 
studied the attenuation coefficient of water by mixing different types of impurities in it and analyzed the relationship between spectral reflectance and different water depths. This study was conducted on the 432 acres of the Loosdrecht lakes in the Netherlands. **Regression and Neural Networks based methods:** Regression analysis is the most common data-driven approach in remote sensing. [30] mentioned the need to compare multiple approaches, e.g., deterministic, semi-empirical, and empirical data-driven regression analysis methods, when computing the water quality index. [30, 31, 32, 33, 34] discussed different data-driven approaches. [30, 31] for the first time mentioned that water quality measurement methods depend on the water body classification (i.e., lake, pond, playa, tank, etc.) and the water depth, which requires information from multiple bands. [30, 32] calibrated empirical relations between different bands in LandSat8-OLI/TIRS imagery to detect chlorophyll present in the water body. [24] evaluated the possibility of a nonlinear relationship model by checking the water bodies in Arkansas and Mississippi, USA. The claim of [24] of using regression analysis was supported by [34]. [34] evaluated the combination of multiple bands and proved that no single band combination is unique, thereby implying the use of information from all the bands to make any kind of decision. Neural network approaches have been proven to be better than traditional methods [35, 36]. The usefulness of neural networks for computing the water quality index was tested by [37]. The use case of [37] proved that a neural network captures all the vital relations that regression methods cannot compute, like atmospheric disturbance, non-ideal contextual uncertainties, etc. The McCulloch and Pitts model [33] is the most widely used network. It consists of a simple multi-layer neural network with convolution layers and non-linear activation functions in between, with batch normalization. 
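For illustration only, a forward pass of the kind of model just described (stacked convolution layers with non-linear activations; batch normalization omitted for brevity) can be sketched in plain NumPy. All function names, kernel sizes, and shapes below are our own choices, not those of the cited works:

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (no padding, stride 1) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product between the kernel and the image patch under it.
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linear activation between layers."""
    return np.maximum(x, 0.0)

def forward(x, k1, k2):
    """Two convolution layers with ReLU non-linearities in between."""
    return relu(conv2d(relu(conv2d(x, k1)), k2))
```

Each 3x3 valid convolution shrinks an 8x8 input by 2 pixels per side, so two layers map it to a 4x4 output.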
With all the benefits and flexibility of neural networks, their major limitation is that convergence is very difficult with small amounts of data. The curse of dimensionality is a huge issue in neural networks: multispectral data is very high-dimensional with very small numbers of pixels of interest, so training a neural network needs a large amount of clean training data and large compute power (GPUs). This was one of the motivations for selecting simple nonlinear regression methods for creating the water quality index, explained in detail in the next section. ## 3 Approach In this section, we describe the technical approach for our Water Quality Enhanced Index (WQEI) in detail; we also describe the study area, details about the dataset used, and satellite information. ### Study Area The area selected for the development, analysis, and evaluation of \(WQEI\) is in Midland, Texas, USA. This region uniquely contains a large number of frac ponds close to the oil drilling sites, along with a large number of playas in the same region. Right next to the Midland region, there are many ponds for irrigation purposes in Stanton, Texas, USA. Along with these, there are many small natural water bodies in the Permian basin region of Texas that appear during the monsoon season and disappear as the summer progresses. This region uniquely contains water bodies of different sizes, shapes, and depths. Another major benefit of these sites is that they have different use cases (i.e., frac jobs in oil well drilling, irrigation, industrial use, drinking purposes, etc.). These add different types of sediments and impurities, which made our \(WQEI\) more robust and generic for use on different types of water bodies. Figure 3 shows a few samples of the different types of water bodies tested in our case. 
Frac ponds are built to store fresh water or flowback water from the well, or a mixture of both, during the course of well-site development. It is most important to monitor frac ponds, as the water which flows back from wells may contain dangerous substances and should be monitored according to federal, state, and local laws [38]. Usually this water becomes unusable over time, which adds a huge economic burden on the oil & gas companies and damages the environment. Some of the irrigation ponds are retention ponds that capture stormwater. Texas alone has more than one million ponds and small farming lakes [39]. This is an extremely large number to be monitored manually, which is where remote sensing is the most effective. Figure 1: Image "a" shows a data sample from the private source at 2 m per pixel resolution; the marked regions are frac ponds. Image "b" shows data from the LandSat8 satellite at 30 m per pixel resolution, showing different types of water storage ponds. It can be seen that at 30 m resolution the ponds contribute only a few pixels in the image, which makes the problem of detecting \(WQEI\) more challenging. The variety of sizes, shapes, depths, uses, and seasonalities makes this region an ideal environment for a monitoring study. ### Dataset curation We used satellite data from 2 different sources, i.e., a commercial satellite dataset (private data repository) and LandSat8 satellite data (public data). Both datasets have multispectral bands. The initial development and analysis were done using the data from the commercial Pleiades-1B satellite sensor (AIRBUS Defence & Space). It covers a few sections of the Midland, Texas region. The spatial resolution of this data is \(\sim 2m\) per pixel with 4 spectral bands. 3 of the spectral bands are in the visible spectrum (Blue: \(0.43-0.55\mu m\), Green: \(0.49-0.61\mu m\), Red: \(0.60-0.72\mu m\)) and 1 in the near-infrared spectrum (\(0.75-0.95\mu m\)). All the data files are geo-tagged for each pixel. 
This dataset covered 17 frac pond sites with less than \(10\%\) cloud cover. A sample from this dataset is shown in Figure 2(a). To make our model more effective and useful, we used the LandSat8-OLI satellite [5] to have a continuous stream of multispectral images from anywhere on the earth at a frequency of 2 weeks. LandSat8 orbits the earth every \(99~{}minutes\) in a sun-synchronous orbit at an altitude of \(705~{}km\). LandSat8 acquires 740 scenes a day, where each scene is \(185\times 180~{}km\). LandSat8 provides information in 11 spectral bands. The 3 bands in the visible spectrum lie in the range \(0.43-0.67~{}\mu m\) with \(30m\) resolution. The 4 bands in the infrared region lie in the range \(0.85-1.38~{}\mu m\) with \(30m\) resolution. LandSat8 also has 2 thermal bands in the wavelength range \(10.6-12.51~{}\mu m\) at \(100m\) resolution. Along with these, LandSat8 has a panchromatic (PAN) band for the visible spectrum (\(0.50-0.68~{}\mu m\)) at a resolution of \(15m\). The data is captured by the sensor with a 12-bit dynamic range. We downloaded data for 2 years (2018-2020) to do the initial development, analysis, and evaluation of our \(WQEI\). We manually downloaded the initial data from the Midland and Permian basin regions in Texas. For the later stage, we developed an end-to-end pipeline that pulls data from LandSat8, runs \(WQEI\), and creates an analysis report along with visualizations. ### Pre-processing **LandSat8 satellite data** is at \(30m\) resolution, so small ponds constitute very few pixels in the image. This way we do not have enough information to make a pixel-level prediction. To overcome this issue, we used the panchromatic band data, which is at a higher resolution of \(15m\), to improve the resolution of all visible and short-wave infrared bands. First we normalize each band using the mean (\(\mu\)) and standard deviation (\(\sigma\)) computed from the whole downloaded dataset. Figure 2: The image shows the improvement in the quality of the \(WQEI\) output from up-sampling the image using panchromatic band information; the sample shown is an irrigation pond. Then a high resolution 
Then a high resolution Figure 2: The image shows the improvement in quality of \(WQEI\) output by up-sampling the image using panchromatic band information. The shown sample is an irrigation pond. image is created by up-sampling the visible bands using bi-linear interpolation method. The distribution ratio is then computed using _Brovey_ transform[40]. The decision for choosing _Brovey_ transform over others is motivated because of its simplicity and analysis by[41]. _Brovey_ transform is based on spectral modelling and increase the visual contrast in the high and low ends of the data's histogram. Each up-sampled band is multiplied by ratio of corresponding panchromatic pixel intensity to weighted sum of all multispectral bands[40]. **Lab reports** of water quality testing from frac ponds and other sources were organized to be in syn with satellite data. To minimize outliers, we selected only those sample which have atleast 20 test samples collected per pond over the span of 2 years. Since water sample collection for lab testing have different frequency, we shortlist only those were collected with 5 days window of LandSat8 satellite passing that location. We also normalized all the reading and unified the measuring scale of all the parameters measured in the lab reports ("pH", "conductivity", "temperature", "algae", "dissolved oxygen", etc). ### Water Quality Enhanced Index (WQEI) With the motivation insight mentioned in previous sections, we propose a novel Water Quality Enhanced Index. It is a new, invariant to spatial extent, robust analytical approach for characterizing the level of impurity spatially. We can map each and every section of the given water body to generate a spatial visual color-coded output representing the water quality. The \(WQEI\) is developed in the following steps: First a generic scan is done in the whole area, that scans for moisture content in the whole image, covers water body as well as soil around. 
Now we remove the potential confusers due to sand/soil/rocks or dust that settles on the water temporarily; this is done by the second term in equation 1. The overall water impurity detector developed so far is as shown below \[imp\_detected=\frac{(R-NIR)}{R+NIR}-(\sqrt{B}+R) \tag{1}\] where \(R\) is the red band (\(0.65\mu m\)), \(G\) is the green band (\(0.55\mu m\)), \(B\) is the expected value of (B\({}_{1}\) (\(0.44\mu m\)), B\({}_{2}\) (\(0.48\mu m\))), i.e., the blue band, and \(NIR\) is the near-infrared band (\(0.86\mu m\)). Next, we observed that algae have a spectral signature similar to the index computed in equation 1, and most of the algae growth is in or around the water body. We compute a similarity with an algae detection index; the intuition behind this is that it will further remove the confuser elements from the site. This increases the strength of the spectral signal representing water impurity. This step also ensures that algae growth on general vegetation around the water body, or somewhere else in the area, is removed. This is shown below: \[amp\_sig=imp\_detected\ \circ\ \frac{G-R}{G+R} \tag{2}\] where \(imp\_detected\) is from equation 1. Now we normalize this \(amp\_sig\) with the sum of the red and near-infrared signals. This is a standard practice in the literature, as it scales the values into the range \(-1\) to \(+1\)[42]. To obtain a better fit to the settings of the environment, we made the numerator and denominator differentiable functions and used regression-based approaches to estimate the values of the variable parameters. The final estimated function is: \[WQEI_{1}=\frac{\{\frac{(R-NIR)}{R+NIR}-(\sqrt{B}+R)\}\ \circ\ \frac{G-R}{G+R}}{ \sum_{i}^{h\times w}\alpha\times G-\beta\times(R+NIR)}, \tag{3}\] here \(h\) and \(w\) are the spatial dimensions (height and width) of the image (water body), respectively, and \(\alpha=2.74\) and \(\beta=4.89\) are the estimated values. Equation 3 is the final estimation index for water quality detection. 
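As a minimal sketch, equations 1-3 can be computed directly on NumPy band arrays. The function name and the small `eps` guard against division by zero are our assumptions, and the caller is expected to supply \(B\) as the expected value of the two blue bands per the text; \(\alpha=2.74\) and \(\beta=4.89\) are the values estimated above:

```python
import numpy as np

def wqei1(R, G, B, NIR, alpha=2.74, beta=4.89, eps=1e-12):
    """Per-pixel Water Quality Enhanced Index (equations 1-3).

    R, G, B, NIR: 2-D reflectance arrays of equal shape.
    alpha, beta: regression estimates reported in the text.
    """
    R, G, B, NIR = (np.asarray(x, dtype=float) for x in (R, G, B, NIR))
    # Eq. 1: moisture scan minus the sand/soil/rock confuser term.
    imp_detected = (R - NIR) / (R + NIR + eps) - (np.sqrt(B) + R)
    # Eq. 2: elementwise (Hadamard) product with the algae-like signature.
    amp_sig = imp_detected * (G - R) / (G + R + eps)
    # Eq. 3: normalise by the image-wide sum of alpha*G - beta*(R + NIR).
    denom = np.sum(alpha * G - beta * (R + NIR))
    return amp_sig / (denom + eps)
```

The denominator is a single scalar summed over all h x w pixels, so the result keeps the spatial shape of the input bands.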
Next, we analyzed the water quality for the presence of chlorides, fluorides, and other salts. Our lab reports did not have direct tests for the presence of salts in the water. Salt level monitoring is also a very important factor when it comes to water quality, especially for used water pumped back into frac ponds from wells or water discharged into natural playas for wildlife. Maintaining the level of salinity is crucial in such cases. Our engineers performed tests for the conductivity of the water samples collected from the ponds. Our basic assumption here is that conductivity has a high correlation with the salinity of water [43, 44]. [45] derived an approximate relation between the temperature of water and its reflectance pattern. According to [45], in the wavelength range \(0.2\mu m-1.0\mu m\), as the temperature of water increases, the reflectance of water decreases, as shown in Figure 3. This study was done on clean water. A scientific report from NASA [46] further backed this claim by studying the reflectance patterns of different types of water bodies, e.g., salt water bodies and clear water bodies, at different temperatures. We further study the behavior of certain types of salt when they are mixed with water. Most salt-water mixing reactions are endothermic [47], including those of sodium chloride, phosphates, magnesium salts, etc. So we used the above two properties of water to propose a novel index to estimate the salinity in very small and shallow bodies of water. Most of the standard salinity indexes, like the normalized difference salinity index (NDSI), are not effective because they are designed to study soil salinity or the salinity of arid or semi-arid vegetation. The reflectance properties of salts present in the soil are totally different from those of salts mixed in water. 
We start with the prior knowledge that water has strong absorption in the red (\(0.65\mu m\)) and near-infrared (\(0.86\mu m\)) bands and higher reflectance in the green (\(0.55\mu m\)) and blue (\(0.44\mu m\)) bands [45]. As the amount of soluble impurities in water increases, its reflectance increases [48], and there is a shift in the reflectance and absorption pattern of water towards the red bands. The base salinity/impurity level of the water body is detected as: \[sal\_detect=(G+B)-\gamma(R+NIR), \tag{4}\] where \(R\) is the red band (\(0.65\mu m\)), \(G\) is the green band (\(0.55\mu m\)), \(B\) is the expected value of (B\({}_{1}\) (\(0.44\mu m\)), B\({}_{2}\) (\(0.48\mu m\))), i.e. the blue band, \(NIR\) is the near infrared band (\(0.86\mu m\)), and \(\gamma\) is a learnable coefficient. Next we exploit the dependence on temperature, using the prior knowledge from [43, 44, 45, 46] that the salt-water reaction is endothermic and that temperature and reflectance are inversely related. Since temperature has an inverse relation with reflectance, and variations in temperature are directly visible in the thermal band (\(10.8\mu m\)) of LandSat8, we refine equation 4 as: \[sal\_detect=(G+B)-(\gamma(R+NIR))^{(\frac{\theta}{T})} \tag{5}\] where \(T\) is the thermal band (\(10.8\mu m\)) and \(\theta\) is a learnable coefficient. This boosts the weak signal from the small and shallow water pond, as there is a shift towards the \(R\) and \(NIR\) bands with increasing impurity. Next we normalize equation 5. The final Water Quality Enhanced Index is shown below: \[WQEI_{2}=\frac{(G+B)-(\gamma(R+NIR))^{(\frac{\theta}{T})}}{NIR+R+G+B} \tag{6}\] Equation 6 is the other estimated index, used for detecting the salinity of water. The final estimated/learned values are \(\gamma=1.8\) and \(\theta=2\). The output (\(WQEI_{2}\)) of equation 6 is in the range \(-1\) to \(+1\). Figure 3: This plot shows the dependence of the reflectance of light at different wavelengths on temperature. The reflectance of water decreases as the temperature increases.
Also, lower wavelengths have higher reflectance, and wavelengths in the near infrared region have lower reflectance.

## 4 Verification of indexes

In this section, we discuss the experiments that verified our created indexes and validated our design choices. The validation tests are performed against tests done on water samples collected from the 49 frac ponds/irrigation ponds/playas. Here we discuss in detail the types of tests and how we estimated the correlation between the various tests and the two indexes we created. **Satellites**: We used multispectral images (remote sensing data) from two different satellites to ensure the generality of our methods. The multispectral data from the commercial source has \(\sim 2m\) per pixel spatial resolution and 4 spectral bands, covering the visible and near-infrared regions. Since this spatial resolution is good enough for our small water ponds, we directly computed \(WQEI_{1}\) and \(WQEI_{2}\) for all the satellite images. In the \(11\) multispectral images, we have \(17\) pond sites. The output of each index is of the size of the input image (\(H=20kms\times W=18kms\)), and the maximum size of any frac pond is \(70m\times 70m\), so we cropped a window of size \(100m\times 100m\) (expected to cover the whole pond) around the \(lat,\ long\) (approximate center of the pond) from the index outputs. Next, we compute the mean (\(\mu_{i}^{1},\mu_{i}^{2}\)) and standard deviation (\(\sigma_{i}^{1},\sigma_{i}^{2}\)) for each of the cropped index outputs: \[\mu_{i}^{1}=\frac{1}{n}\sum_{k=1}^{h\times w}P_{k}^{1},\qquad\sigma_{i}^{1}=\sqrt{\frac{1}{n}\sum_{k=1}^{h\times w}(P_{k}^{1}-\mu_{i}^{1})^{2}}\ \ \forall\ (WQEI_{1}) \tag{7}\] \[\mu_{i}^{2}=\frac{1}{n}\sum_{k=1}^{h\times w}P_{k}^{2},\qquad\sigma_{i}^{2}=\sqrt{\frac{1}{n}\sum_{k=1}^{h\times w}(P_{k}^{2}-\mu_{i}^{2})^{2}}\ \ \forall\ (WQEI_{2}) \tag{8}\] where \(\mu_{i}\) and \(\sigma_{i}\) are the mean and standard deviation of the \(i\)-th cropped index output, \(n=h\times w\) is the number of pixels in the crop, and \(P_{k}\) is the \(k\)-th pixel in the cropped output.
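As a concrete illustration, the salinity index (equations 4-6) and the per-crop statistics (equations 7-8) can be sketched in numpy as follows. This is a hedged sketch, not the paper's implementation: the `eps` guard, the crop half-width parameter, and the synthetic band/thermal values used for testing are our own assumptions.

```python
import numpy as np

def wqei_2(R, G, B, NIR, T, gamma=1.8, theta=2.0, eps=1e-9):
    """Salinity/conductivity index of equation 6; T is the thermal band."""
    sal_detect = (G + B) - (gamma * (R + NIR)) ** (theta / (T + eps))  # equation 5
    return sal_detect / (NIR + R + G + B + eps)                        # equation 6

def pond_stats(index_map, row, col, half=5):
    """Mean and std (equations 7-8) of an index map over a square crop
    centred on a pond's approximate pixel location (row, col)."""
    crop = index_map[row - half:row + half, col - half:col + half]
    return float(crop.mean()), float(crop.std())
```

In practice `(row, col)` would be obtained by projecting the pond's lat/long into the image grid, and `half` chosen so the crop covers the \(100m\times 100m\) window described above.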
**LandSat8** has a spatial resolution of \(30m\) per pixel; we used the panchromatic band information to up-sample the spatial resolution to \(15m\) per pixel. Details about the up-sampling transform are discussed in section 3.3. On the up-sampled multispectral images, we compute both \(WQEI_{1}\) and \(WQEI_{2}\) for all 49 ponds, where each pond has approximately 18 images over the span of 2 years. Then we compute the mean (\(\mu_{i}^{1},\mu_{i}^{2}\)) and standard deviation (\(\sigma_{i}^{1},\sigma_{i}^{2}\)) of all images. In total (commercial satellite and LandSat8) we have approximately 1200 images taken at different time intervals. **Lab test** report data is cleaned and formatted to be compared directly with the outputs (\(WQEI_{1},WQEI_{2}\)) from satellite data. Each pond water sample is tested for the following parameters: \(pH\), \(Conductivity\), \(Turbidity\), \(Dissolved\ Oxygen\), \(Temperature\), \(H_{2}S\), \(Depth\). To find the relation of our indexes (\(WQEI_{1},WQEI_{2}\)) with the lab test parameters, we match each index with each of the test parameter readings using a matching criterion. Figure 4: Row 1: Plot of \(WQEI_{C}\) over 4 different ponds over the time period of one year; it shows the strong correlation of \(WQEI_{C}\) with the lab reports for \(Conductivity\). Row 2: Plot of \(WQEI_{T}\) over 4 different ponds for 1 year and its strong correlation with the lab reports for \(Turbidity\).

### Matching Criterion:

The first matching criterion used in our case is **Pearson Correlation** (PC). The reason for using it is that Pearson correlation coefficients are used in statistics to measure how strong the relationship between two signals is, irrespective of their amplitude. It is defined as the covariance of the two signals divided by the product of their standard deviations.
\[\rho_{indexes,lab\_test}=\frac{cov(indexes,lab\_test)}{\sigma_{indexes}\ \circ\ \sigma_{lab\_test}}, \tag{9}\] here \(\sigma_{indexes}\) is \((\sigma_{i}^{1},\sigma_{i}^{2})\), \(\sigma_{lab\_test}\) is \((\sigma_{pH},\sigma_{conductivity},\ldots)\), and \(cov\) is the covariance. PC shows how significantly the shape of the curve plotted for the index readings (\(\mu_{i}^{1},\mu_{i}^{2}\)) correlates with each lab test parameter (\(pH,conductivity,\ldots\)) over the time span of 2 years. For the PC, we ignore the magnitude because the two signals (indexes vs. lab tests) come from totally different domain spaces; magnitude optimization may lead to incorrect matching or no matching at all, so we start with curve-shape matching first. First we set all the learnable parameters (\(\alpha,\beta,\gamma,\theta\)) to 1 in equations 3 & 6. Next, for each pond over the time span of 2 years (\(\sim 18\) readings), we compute the PC of each index (\(WQEI_{1},WQEI_{2}\)) with each lab test parameter (\(pH,conductivity,Turbidity,\ldots\)) and pick the best-matching lab test parameter for each of our indexes. We found that \(WQEI_{1}\) has the highest correlation with \(Turbidity\) and \(WQEI_{2}\) with \(Conductivity\). \[\begin{cases}\rho_{1}=argmax(\textbf{PC}\ (WQEI_{1},\ lab\_test))\\ \rho_{2}=argmax(\textbf{PC}\ (WQEI_{2},\ lab\_test))\\ \hskip 14.226378pt\forall\ lab\_test\in(pH,Conductivity,..),\end{cases} \tag{10}\] where **PC** is the Pearson Correlation. We observed that \(\rho_{1}\) points to \(Turbidity\) and \(\rho_{2}\) to \(Conductivity\). Hence our \(WQEI_{1}\) is a measure of the \(Turbidity\) of the water in the pond, and \(WQEI_{2}\) is a measure of its \(Conductivity\), estimated using satellite imagery. For the sake of simplicity, from now on we will use \(WQEI_{T}\) and \(WQEI_{C}\) for \(WQEI_{1}\) and \(WQEI_{2}\). Figure 5: Rows 1 & 2 show qualitative results of computing \(WQEI_{C}\) for an irrigation field and natural playas.
The data is observed over 8 months and compared with the lab tests. Rows 3 & 4 show qualitative results of computing \(WQEI_{T}\) for the same water bodies, observed over the same time interval. Towards the summer months, the natural playa dries up and its water becomes very saline and turbid, while the irrigation pond was refilled from a nearby canal in the month of July, showing lower salinity and higher turbidity, as turbidity increases temporarily when water is discharged in from the canal. Next we find the optimum values for our learnable coefficients (\(\alpha,\beta,\gamma,\theta\)) of the indexes (\(WQEI_{T},WQEI_{C}\)). For this we used a second matching criterion based on simple regression methods. **Mean Square Error** (MSE) is used to estimate the right values of the learnable coefficients. MSE is a model evaluation metric used for regression tasks. The main reason for using MSE as an evaluation/matching criterion is to estimate as precisely as possible the values of \(Turbidity\) and \(Conductivity\) of the water in small ponds using satellite imagery. We used simple regression to estimate those values, optimized by minimizing the MSE for both \(WQEI_{T}\) and \(WQEI_{C}\). The final indexes are as follows: \[WQEI_{T}=\frac{\{\frac{(R-NIR)}{R+NIR}-(\sqrt{B}+R)\}\ \circ\ \frac{G-R}{G+R}}{\sum_{i}^{h\times w}2.74\times G-4.89\times(R+NIR)}, \tag{11}\] \[WQEI_{C}=\frac{(G+B)-(1.8(R+NIR))^{(\frac{2}{T})}}{NIR+R+G+B} \tag{12}\] With the help of our very simple and novel indexes, we can monitor the quality of water in small ponds, tanks, playas, etc. using satellite imagery in the remotest sections of the world. We can reuse wastewater to recharge the natural playas, point out locations where the water is unfit for drinking by wildlife, provide alternate sources, and block off an unfit pond or tank.
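The two matching criteria described above can be sketched in a few lines of numpy: Pearson-correlation matching (equations 9-10) to pick the best lab parameter for each index, and MSE-based selection of the learnable coefficients. The toy series, parameter grid, and index function in the test are illustrative assumptions, not the paper's data; the paper uses regression rather than the grid search shown here.

```python
import numpy as np

def best_match(index_series, lab_tests):
    """Equation 10: pick the lab parameter whose readings have the highest
    Pearson correlation with an index's time series.

    index_series: sequence of index means over time; lab_tests: dict mapping
    parameter name -> sequence of lab readings at the same timestamps.
    """
    scores = {name: float(np.corrcoef(index_series, series)[0, 1])
              for name, series in lab_tests.items()}
    return max(scores, key=scores.get), scores

def fit_coefficients(index_fn, lab_values, candidate_params):
    """Second criterion: choose the coefficient tuple minimising the MSE
    between index outputs and lab measurements (grid search stand-in for
    the simple regression used in the text)."""
    best_params, best_mse = None, np.inf
    for params in candidate_params:
        mse = float(np.mean((index_fn(*params) - lab_values) ** 2))
        if mse < best_mse:
            best_params, best_mse = params, mse
    return best_params, best_mse
```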
### Estimation with Neural Networks:

In this section we discuss the neural network architecture we designed to estimate \(Turbidity\) and \(Conductivity\). We started with a small VGG16 [60] network pre-trained on a natural image dataset (ImageNet) [61]. We used only the convolutional layers of VGG16 as a feature extractor and appended 2 Fully Connected (FC) layers initialized randomly; the final layer has 2 output neurons. The training data is the 1200 satellite images cropped out for each pond at different timestamps. The ground truth is the lab test reports discussed in section 3.3. We created an 80-10-10% train-val-test split of the dataset. The **loss function** used is inspired by [62, 63]. Since we directly predict a value for \(Turbidity\) and \(Conductivity\), the output of the last layer is passed through a softmax function, and simple MSE is used as the loss for optimization. Each input image is of size \(100\times 100\) with 12-bit depth information per pixel. The batch size is 32 images, with a learning rate of 0.005 at the start, reduced by a factor of 0.5 after every 10 epochs. Stochastic Gradient Descent is used as the optimizer for the network, which is trained for 30 epochs. \begin{table} \begin{tabular}{l c c c c c} \multirow{2}{*}{_Methods_} & \multicolumn{5}{c}{_Ponds (MSE) - Turbidity_} \\ & _P1_ & _P2_ & _P3_ & _P4_ & _P5_ \\ \multirow{2}{*}{_TI1[49]_} & 0.955 & 0.880 & 0.861 & 0.797 & 0.862 \\ & 0.768 & 0.724 & 0.736 & 0.777 & 0.779 \\ \multirow{2}{*}{_MLR[51]_} & 0.645 & 0.655 & 0.588 & 0.511 & 0.680 \\ & 0.563 & 0.541 & 0.581 & 0.410 & 0.471 \\ \multirow{2}{*}{_\(Reg\) Model[52]_} & 0.157 & 0.243 & 0.159 & 0.151 & 0.140 \\ & 0.218 & 0.245 & 0.273 & 0.395 & 0.206 \\ \multirow{2}{*}{_Neural Net (Ours)_} & **0.097** & **0.028** & **0.017** & **0.040** & **0.035** \\ \end{tabular} \end{table} Table 1: Comparison of \(WQEI_{T}\) with popular methods of detecting water turbidity.
\(WQEI_{T}\) has a lower MSE than the other methods when tested on 5 randomly selected ponds. \begin{table} \begin{tabular}{c c c c c c} \multirow{2}{*}{_Methods_} & \multicolumn{5}{c}{_Ponds (MSE) - Salinity_} \\ & _P1_ & _P2_ & _P3_ & _P4_ & _P5_ \\ \multirow{2}{*}{_SI4[53]_} & 0.757 & 0.997 & 0.892 & 0.704 & 0.932 \\ & 0.799 & 0.655 & 0.618 & 0.890 & 0.860 \\ & 0.508 & 0.648 & 0.669 & 0.669 & 0.550 \\ \multirow{2}{*}{_SAVI[56]_} & 0.444 & 0.545 & 0.497 & 0.589 & 0.633 \\ & 0.358 & 0.466 & 0.341 & 0.438 & 0.353 \\ \multirow{2}{*}{_\(NSSI\)[58]_} & 0.165 & 0.179 & 0.177 & 0.165 & 0.177 \\ & 0.106 & 0.175 & 0.298 & 0.217 & 0.131 \\ \multirow{2}{*}{_Neural Net (Ours)_} & **0.254** & **0.137** & **0.141** & **0.196** & **0.256** \\ & **0.082** & **0.068** & **0.099** & **0.063** & **0.081** \\ \end{tabular} \end{table} Table 2: Comparison of \(WQEI_{C}\) for detecting salinity with the most popular indexes in the literature. It can be seen that \(WQEI_{C}\) has a lower mean square error compared with the other methods. We performed the comparison on 5 randomly selected ponds, computing the salinity with each listed index from satellite images and comparing against the corresponding lab report.

## 5 Results and Conclusions

In this study, we utilize multi-spectral data (LandSat8) to monitor water bodies that are dispersed over a large region. By creating the Water Quality Enhanced Indexes (\(WQEI_{T},WQEI_{C}\)), we show:

* Comparing our \(WQEI_{T},WQEI_{C}\) model predictions with laboratory water quality measurements, our results show that applying our model can achieve frequent water body monitoring with sufficient accuracy.
* The time series plots (Figure 5) show that \(WQEI\) is a good indicator of water quality. More specifically, \(WQEI_{C}\) shows a strong correlation with water conductivity and \(WQEI_{T}\) with water turbidity.
* Compared with traditional water quality index models, the \(WQEI\) formulation includes a water impurity term (Equation 1), a signal amplification term (Equation 2), and a normalization term (Equation 3). The combination of the multiple terms we introduced leads to better performance, especially for small-sized water bodies.

The proposed methods are evaluated on the criterion of Mean Squared Error (MSE) and compared to standard indexes for detecting \(Turbidity\) and \(Conductivity/Salinity\). As shown in tables 1 & 2, \(WQEI_{T}\) and \(WQEI_{C}\) perform better than most existing methods [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59].
### Pearson Correlation plots

As discussed in section 4.1 in the main paper, the correlation plots for equation 10 are shown in figure 6.
Figure 6(a) shows the Pearson correlation coefficient plot for \(WQEI_{C}\) with each of the lab test parameters (\(pH,Conductivity,Turbidity\), etc.). As shown with the orange oval mark in figure 6(a), \(Conductivity\) has the highest correlation with \(WQEI_{C}\). Similarly, figure 6(b) shows the Pearson correlation coefficient plot for \(WQEI_{T}\) with each of the lab test parameters; the green oval mark shows a strong correlation of \(WQEI_{T}\) with \(Turbidity\). We also observed that it has a significant correlation with the volume of water in the pond. This way, using \(WQEI_{T}\), we can make a relative estimate of the volume of water in the water body using satellite imagery only.

### Qualitative results

Figure 8 shows some qualitative results on very small frac ponds in the Midland, Texas region. The indexes are computed on images from the LandSat8 satellite. We picked a mixture of ponds: ones from which water is used and refilled very frequently, and ones from which water is used only once in 3-4 months. Rows 1 and 2 (Turbidity, \(WQEI_{T}\)) show ponds which are used very frequently (almost every 3 weeks) and refilled very frequently; we can see that most of the time the water stays turbid. Row 3 (Turbidity, \(WQEI_{T}\)) shows a rarely used pond (once in 3-4 months); the water stays calm and idle, but the pond starts to dry up over time. For the months of March, April, and June in row 3 the turbidity is low and the pond starts to dry up; it is refilled in the following months, and based on usage we can see the increase in turbidity. Rows 4, 5, and 6 (Salinity/Conductivity, \(WQEI_{C}\)) show how the salinity of the frac ponds changes over time. We observed that the salinity is very high in the months towards the end of summer, as the fracturing process runs at a higher rate and a lot of water is pumped back from the well to the pond. Salinity is also very high when the water in the pond has almost dried up.
As we can see in row 5 in the month of September (this pond was unused), towards the end of summer almost all the water in the pond dries up, and we can see a high concentration of salts in the pond before it dries up completely.

### Neural Network Architecture

As mentioned in the main paper, we tested a simple VGG16 [60]. Here we add more details about the architecture and our observations from training a neural network on remote sensing data for our problem set. The input image size is \(100\times 100\times 3\). We used a VGG16 pre-trained on the ImageNet dataset [61] to extract input image features into a feature map of size \(7\times 7\times 512\). Fully connected layers with ReLU activation are initialized randomly. The output is passed through a softmax function to predict the turbidity and conductivity/salinity scores. The architecture diagram is shown in Figure 7. Figure 7: Neural Network architecture used for detecting water turbidity and conductivity/salinity. **Limitations:** The training data used is very limited in our case, and the predictions are only scores for the \(Turbidity\) and \(Conductivity/Salinity\) of the whole pond. The limited-data problem arises in almost all fields when it comes to remote sensing, as the terrain covered for any kind of anomaly detection (e.g. water quality, sand quality, GHG emissions, etc.) is extremely large (1000s of miles) and it is very challenging to generate ground truth information in such cases. In our case alone, collecting data from just 49 ponds was a major challenge: our team spent 2 years and millions of dollars to monitor this small number of ponds. Figure 8: Qualitative results on frac ponds from the LandSat8 satellite for turbidity and salinity detection using \(WQEI_{T}\) & \(WQEI_{C}\) respectively. The results are shown from the start of summer in the Midland, Texas region until the start of winter. Unlike natural-image object detection, where simple objects (e.g.
car, truck, animal, person) seen in everyday surroundings can be annotated with a very minimal amount of knowledge, here we need specialized equipment and subject-matter expertise (e.g. chemical engineering, hydrology, geology) to check for the presence of certain types of impurities, and after that the next challenge is understanding the multispectral data. **Index creation** [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59] is one way in which people have analyzed multispectral satellite data using remote sensing. Index creation needs expertise in both the multispectral imagery from the satellite and the chemical properties of the object of interest, and indexes are very prone to changes in environmental conditions and terrain. The simple neural network used for this problem gives a green light to the application of machine learning in remote sensing: we can predict scores for different kinds of impurities with just one neural network, instead of using a specialized index for each one. Our future work in this domain is to use an encoder-decoder network architecture, so that we can generate a spatial map of impurities over the whole water body instead of a single score. But that will also require lab tests on water samples collected from different sections of the same water body.
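For concreteness, the regression head described in the Neural Network Architecture section (flattened \(7\times 7\times 512\) VGG16 features, two FC layers with ReLU, softmax over 2 output neurons, MSE loss) can be sketched in numpy. This is an illustrative forward pass only: the hidden width of 256 and the random weight initialization are our own assumptions, and real training would use the pre-trained VGG16 backbone with SGD as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.01, (7 * 7 * 512, 256))   # FC1: flattened features -> 256 (width assumed)
W2 = rng.normal(0.0, 0.01, (256, 2))             # FC2: 256 -> 2 output neurons

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def head_forward(features):
    """(batch, 7, 7, 512) VGG16 feature maps -> (batch, 2) softmax scores,
    interpreted as turbidity and conductivity/salinity."""
    x = features.reshape(features.shape[0], -1)
    h = np.maximum(x @ W1, 0.0)   # FC + ReLU
    return softmax(h @ W2)

def mse_loss(pred, target):
    """Simple MSE loss used for optimisation."""
    return float(np.mean((pred - target) ** 2))
```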
2304.03153
Zero-Shot Next-Item Recommendation using Large Pretrained Language Models
Large language models (LLMs) have achieved impressive zero-shot performance in various natural language processing (NLP) tasks, demonstrating their capabilities for inference without training examples. Despite their success, no research has yet explored the potential of LLMs to perform next-item recommendations in the zero-shot setting. We have identified two major challenges that must be addressed to enable LLMs to act effectively as recommenders. First, the recommendation space can be extremely large for LLMs, and LLMs do not know about the target user's past interacted items and preferences. To address this gap, we propose a prompting strategy called Zero-Shot Next-Item Recommendation (NIR) prompting that directs LLMs to make next-item recommendations. Specifically, the NIR-based strategy involves using an external module to generate candidate items based on user-filtering or item-filtering. Our strategy incorporates a 3-step prompting that guides GPT-3 to carry subtasks that capture the user's preferences, select representative previously watched movies, and recommend a ranked list of 10 movies. We evaluate the proposed approach using GPT-3 on MovieLens 100K dataset and show that it achieves strong zero-shot performance, even outperforming some strong sequential recommendation models trained on the entire training dataset. These promising results highlight the ample research opportunities to use LLMs as recommenders. The code can be found at https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec.
Lei Wang, Ee-Peng Lim
2023-04-06T15:35:11Z
http://arxiv.org/abs/2304.03153v1
# Zero-Shot Next-Item Recommendation using Large Pretrained Language Models

###### Abstract.

Large language models (LLMs) have achieved impressive zero-shot performance in various natural language processing (NLP) tasks, demonstrating their capabilities for inference without training examples. Despite their success, no research has yet explored the potential of LLMs to perform next-item recommendations in the zero-shot setting. We have identified two major challenges that must be addressed to enable LLMs to act effectively as recommenders. First, the recommendation space can be extremely large for LLMs, and LLMs do not know about the target user's past interacted items and preferences. To address this gap, we propose a prompting strategy called **Zero-Shot Next-Item Recommendation (NIR)** prompting that directs LLMs to make next-item recommendations. Specifically, the NIR-based strategy involves using an external module to generate candidate items based on user-filtering or item-filtering. Our strategy incorporates a 3-step prompting that guides GPT-3 to carry out subtasks that capture the user's preferences, select representative previously watched movies, and recommend a ranked list of 10 movies. We evaluate the proposed approach using GPT-3 on the MovieLens 100K dataset and show that it achieves strong zero-shot performance, even outperforming some strong sequential recommendation models trained on the entire training dataset. These promising results highlight the ample research opportunities to use LLMs as recommenders. The code can be found at [https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec](https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec).

Next-Item Recommendation, Large Language Models, Zero-Shot Learning, Prompting

## 1. Introduction

Large language models (LLMs) (Beng et al., 2017; Chen et al., 2018; Wang et al., 2021), such as GPT-3 (Beng et al., 2017), have achieved impressive results in various natural language processing (NLP) tasks.
Nevertheless, LLMs are also very large and often accessible only via some API service. Hence, they cannot be fine-tuned like the earlier pre-trained language models (PTMs) (Beng et al., 2017; Wang et al., 2021). Many works have demonstrated that LLMs are capable of solving many known NLP problems through task-specific prompts under the zero-shot setting, i.e., without any demonstration examples or further training (Beng et al., 2017; Chen et al., 2018). Nevertheless, using LLMs to perform next-item recommendations in the zero-shot setting is still a research topic in its nascent stage. We use Figure 1 to illustrate the differences between an NLP reasoning task and a recommendation task. The NLP reasoning task provides GPT-3 (Kang et al., 2021) (also the default LLM in this work) a question in a prompt, and the latter generates the answer text (e.g., "9"). Unlike this NLP task, which can directly rely on the built-in textual knowledge of LLMs, the recommendation task requires LLMs to know the target user's previous item interactions, the universe of items to be recommended, and the appropriate approach to select the recommended items. Given that LLMs are not naturally trained to perform recommendation, poor results are expected when directly using them to perform recommendation (Wang et al., 2021). Moreover, LLMs can only contribute to recommendation when they have some background knowledge about the items to be recommended. For the recommendation of items in proprietary domains, it is unclear how LLMs can be of much use. In our research, we therefore assume that items for recommendation should appear in the training data of LLMs. Examples of such items include movies, songs, novels, online games, etc. For illustration and evaluation purposes, we focus on the next-movie recommendation task. We also choose GPT-3 as the LLM due to its popularity and accessibility.
As depicted in Figure 1(b), we show a simple prompting strategy that directly incorporates the user's previously watched movies into a text prompt, i.e., "Based on the movies I have watched, including..", followed by the question "can you recommend 10 movies to me?". While this prompting strategy allows GPT-3 to act as a movie recommender, its recommendation accuracy is likely to be poor due to an _extremely large recommendation space_ and _inadequate user preference modeling_. In this paper, we therefore propose a principled approach to next-item recommendation called **Zero-Shot Next-Item Recommendation (NIR) prompting**, which involves a 3-step prompting strategy that significantly outperforms simple prompting in the zero-shot setting. Zero-Shot NIR adopts a three-pronged approach to enhance recommendation accuracy. First, it restricts the recommendation space to the scope of the MovieLens (Dong et al., 2017) dataset by deriving a candidate movie set for the target user using the user-filtering and item-filtering techniques well known in previous next-item recommendation research. Second, the Zero-Shot NIR prompting strategy performs multi-step prompting of GPT-3 to capture the user's preferences (Step 1), select representative movies from the user's previously watched movies (Step 2), and recommend a ranked list of 10 movies (Step 3). Finally, Zero-Shot NIR introduces a formatting technique in Step 3 to facilitate the extraction of movie items from the answer text generated by GPT-3 (i.e., "a watched movie: <- a candidate movie ->"). We evaluate our approach using MovieLens 100K and the GPT-3 engine text-davinci-003. The experimental results show that Zero-Shot NIR prompting achieves good recommendation accuracy in the zero-shot setting and is comparable to other next-item recommendation methods trained using a large dataset.

## 2. Related Works

Next-item recommendation is an important and well studied research problem.
Early research works proposed Markov Chains to model low-order relationships between items for next-item recommendation (Garfinkel et al., 2015; Gershtein et al., 2016). With the advancement of neural models, deep neural networks (Garfinkel et al., 2015; Gershtein et al., 2016; Gershtein et al., 2016; Gershtein et al., 2017; Gershtein et al., 2018; Gershtein et al., 2019) have been applied to the modeling of sequential patterns, which leads to improved recommendation accuracy. Recent research has also explored the use of data augmentation and contrastive learning to enhance the representations of users and items, thereby making further improvements to recommendation performance (Gershtein et al., 2016; Gershtein et al., 2017; Gershtein et al., 2018; Gershtein et al., 2019). Nevertheless, all the above methods require model training using users' historical item interactions. In other words, they are not capable of making recommendations in the zero-shot setting. To the best of our knowledge, there has been very little research on zero-shot recommendation, and it remains unclear whether LLMs can be good recommenders. Among the earlier efforts in LLM-based recommendation (Zhang et al., 2017; Zhang et al., 2017; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018; Zhang et al., 2018), Zhang et al. (Zhang et al., 2018) proposed to use GPT-2 (Zhang et al., 2018) or BERT (Zhang et al., 2018) as the backbone recommender, making the next-movie prediction based on five movies previously watched by the target user. However, the huge recommendation space and inadequate user preference modeling make it perform poorly. With newer LLMs such as GPT-3 (Garfinkel et al., 2015), OPT (Zhang et al., 2018), and PaLM (Zhang et al., 2018), which have shown significantly improved results in various NLP tasks, our work chooses GPT-3 to be the LLM for developing more effective zero-shot recommendation methods.
Instead of designing the prompting strategy from scratch, our proposed Zero-Shot NIR prompting strategy incorporates a user and item filtering approach to derive a candidate movie set and devises a 3-step prompting approach. This way, it mimics well-known recommendation techniques to achieve more accurate zero-shot recommendations.

## 3. Zero-Shot NIR prompting strategy

### Overview

Zero-Shot NIR prompting is a multi-step prompting strategy that enables GPT-3 to act as a next-item recommender in the zero-shot setting. Figure 2 illustrates the entire process of our Zero-Shot NIR prompting strategy. It consists of three components:

* **Candidate set construction:** This component uses user filtering or item filtering to create a candidate set for each target user, so as to narrow down the recommendation space. These candidate movies are then used to build the three-step GPT-3 prompts.
* **Three-step GPT-3 prompting:** This component involves three instruction prompts corresponding to three subtasks. In the first subtask (_user preference subtask_), we design a user preference prompt to probe GPT-3 to summarize the user's preferences based on the previously interacted movies of the target user. In the second subtask (_representative movies subtask_), we create a prompt that combines the user preference prompt with the prompt answer as well as a trigger instruction to request GPT-3 to select representative movies in descending order of preference. In the third subtask, we integrate the representative movies prompt, its answers, and a question to create the recommendation prompt to guide GPT-3 to recommend 10 movies from the candidate movie set that are similar to the representative movies. The result is expected to be in the following format: "a watched movie: <- a candidate movie ->".
* **Answer extraction:** This component extracts the recommended items from the textual results of three-step GPT-3 prompting using a simple rule-based extraction method.
The extracted recommended movie results can be used in downstream applications or for performance evaluation.

Figure 1. Example inputs and outputs of GPT-3 with zero-shot prompting for a NLP task and a recommendation task.

### Candidate Set Construction

As mentioned in Section 1, an extremely large recommendation space poses a major challenge to LLM-based recommendation. An unconstrained recommendation space can complicate both practical use and evaluation of recommendation results. On the other hand, it is infeasible to feed the LLM with all items expecting it to be aware of the item universe. Even for the purpose of performance evaluation, the MovieLens 100K dataset contains a set of 1,683 movies, which is still too large to fit into a prompt. In our Zero-Shot NIR prompting, we therefore propose to first construct the candidate movie set for each target user using a principled approach. The candidate movies should satisfy two criteria: (a) they should be more relevant to the user than randomly selected movies, and (b) the size of the candidate movie set should be small so as to fit into a prompt. To achieve these criteria, we use two well-known principles for determining candidate movies, i.e., _user filtering_ and _item filtering_.

**User Filtering.** This principle assumes that the candidate movies should also be liked by other users similar to the target user. Hence, we first represent every user by a multi-hot vector of their watched movies. Users similar to the target user are then derived by cosine similarity between the target user's vector and the vectors of other users. Next, we select the \(m\) most similar users, and the candidate movie set of size \(s\) is constructed by selecting the most popular movies among the interacted movies of these similar users.

**Item Filtering.** Similar to user filtering, we represent each movie by a multi-hot vector based on its interacted users.
Using cosine similarity between two movies, we select the \(n\) most similar movies for each movie in the target user's interaction history. We then generate a candidate set of size \(s\) based on the "popularity" of these similar movies among the movies in the target user's interaction history. The candidate set construction can be pre-computed by an external module. We then incorporate the candidate movies into the subsequent prompts for recommendation using the sentence: "Candidate Set (candidate movies):" as shown in Figure 2. Following the candidate set, the prompts also include the list of the target user's previously interacted movies, starting with: "The movies I have watched (watched movies):".

### Three-Step Prompting

Figure 1(b) shows that the simple prompting method overlooks candidate movies, user preferences, and previously interacted movies that best reflect the user's tastes. In contrast, our proposed three-step prompting approach uses three rounds of prompting to perform three subtasks: capturing the target user's preferences, ranking previously interacted movies by the user's preferences, and recommending 10 similar movies from the candidate set.

**Step 1: User Preference Prompting.** To capture the user's preferences, we include the sentence "Step 1: What features are most important to me when selecting movies (summarize my preferences briefly)?" in the first prompt. As shown in Figure 2, the answer returned by GPT-3 summarizes the target user's preferences (highlighted in yellow).

**Step 2: Representative Movie Selection Prompting.** As the second step, this prompt includes the previous prompt text appended with the answer of Step 1. It then includes the instruction: "Step 2: You will select the movies... that appeal to me the most... presented in descending order of preference (...)" to determine the previously interacted movies that best reflect the target user's tastes. Figure 2 shows GPT-3's answers highlighted in purple.
**Step 3: Recommendation Prompting.** Again, this prompt includes the previous text appended with the answers of Step 2. It then includes the instruction "Step 3: Can you recommend 10 movies from the Candidate Set similar to...". This prompt explicitly instructs GPT-3 to generate 10 recommended movies from the candidate set, as highlighted in blue.

### Answer Extraction

We add the hint "(Format:... -- a candidate movie ->)" to the third prompt to generate answers in the desired format for easy extraction. Our study has shown that such a hint works very well.

## 4. Experiments

### Experimental Setup

**Dataset.** The proposed prompting approach is evaluated on a widely-used movie recommendation dataset, MovieLens 100K (Kang et al., 2016), which contains 944 users and 1,683 movies.

Figure 2. Zero-Shot NIR prompts. The ground truth movie (i.e., The Rock) has been highlighted in red.

**Baselines.** We compare our proposed NIR prompting strategy with two types of baselines: _strong next-item recommendation baselines_ and _zero-shot baselines_. The former includes **POP** (a popularity-based model), **FPMC** (Kang et al., 2016) (an approach combining matrix factorization and Markov chains), **GRU4Rec** (Kang et al., 2016) (a GRU-based sequential recommendation model), **SASRec** (Krizhevsky et al., 2017) (a sequential recommendation model with self-attention), and **CL4SRec** (Krizhevsky et al., 2017) (a contrastive learning based sequential recommendation model). As these strong baselines have the advantage of full model training, they are expected to outperform zero-shot methods. The zero-shot baselines include **Zero-Shot Simple Prompting**, **CS-Random-IF** (which randomly selects 10 movies from the item filtering-based candidate set), and **CS-Random-UF** (which randomly selects 10 movies from the UF-based candidate set). We implement our NIR prompting strategy with four variants.
**NIR-Single-IF/NIR-Single-UF** combines the 3 steps into a single prompt, leaving out the intermediate answers. We only prompt GPT-3 once to generate 10 recommended movies from the IF/UF-based candidate set. **NIR-Multi-IF/NIR-Multi-UF** uses three separate prompts to guide GPT-3 step-by-step and to incorporate intermediate answers into the subsequent prompts (as shown in Figure 2) with the IF/UF-based candidate set.

**Implementations.** In our experiments, we employ the public GPT-3 text-davinci-003 (175B) as the backbone language model, one of the most popular LLMs with public APIs1. To ensure consistent output, we set the temperature parameter to 0. For evaluation, we adopt the same evaluation metrics as in CL4SRec and report HR@10 and NDCG@10 for all methods.

Footnote 1: [https://beta.openai.com/docs/models/gpt-3](https://beta.openai.com/docs/models/gpt-3)

### Main Results

As shown in Table 1, our NIR-based methods (NIR-Single-IF, NIR-Single-UF, NIR-Multi-IF, NIR-Multi-UF) outperform the POP baseline by a large margin. Interestingly, NIR-Single-UF, NIR-Multi-IF, and NIR-Multi-UF consistently outperform FPMC, a fully trained method. Compared with very strong sequential recommendation models (i.e., GRU4Rec, SASRec, and CL4SRec), the three NIR-based methods still deliver slightly worse but competitive performance, suggesting that LLMs with a proper prompting strategy can be reasonably good zero-shot recommenders. In the zero-shot setting, CS-Random-UF(IF)'s superior performance over Simple Prompting shows that the candidate set not only reduces the recommendation space, but also improves performance. Our proposed NIR-based prompting methods consistently outperform Simple Prompting and CS-Random-IF/UF, suggesting that incorporating user preference, representative movie selection, and formatting techniques in the prompting process allows GPT-3 to make better recommendations.
As Multi-IF(UF) outperforms Single-IF(UF), we know that using separate prompts and incorporating the intermediate answers into subsequent prompts leads to more accurate recommendations. Finally, UF-based NIR prompting consistently outperforms IF-based prompting, indicating that UF yields better candidate sets than IF.

### Detailed Analysis

**Effects of Components of Prompting.** We now conduct an ablation study on NIR-Multi-UF to evaluate the contribution of different components of our prompting strategy. Table 2 shows that all prompting components contribute to recommendation performance. The Simple Prompting method with a candidate set (HR@10=0.1071) outperforms that without a candidate set (HR@10=0.0297). Incorporating user preferences or representative movie selection into the prompting improves performance, indicating that task-specific instructions can guide GPT-3 to perform better.

**Impact of Candidate Set Size.** We investigate the impact of the number of candidate movies by varying the set size from 15 to 22 while keeping other parameters unchanged. The results in Figure 3 indicate that recommendation performance is sensitive to the candidate set size. The best results occur when there are 19 candidate movies, and a smaller or larger set size causes performance degradation. One possible explanation is that a small candidate set restricts the performance limit, while a large set increases the difficulty of making accurate recommendations using GPT-3.
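To make the candidate set construction concrete, the user filtering step from Section 3.2 can be sketched in plain Python. The multi-hot vectors and the values of \(m\) and \(s\) used below are illustrative assumptions, and item filtering follows the same pattern with movie vectors instead of user vectors:

```python
from collections import Counter

def cosine(u, v):
    # Cosine similarity between two multi-hot interaction vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def user_filtering_candidates(target, others, m, s):
    """Return the s most popular movies among the m users most similar
    to the target user, excluding movies the target already watched."""
    similar = sorted(others, key=lambda u: cosine(target, u), reverse=True)[:m]
    votes = Counter()
    for u in similar:
        for movie, watched in enumerate(u):
            if watched and not target[movie]:
                votes[movie] += 1
    return [movie for movie, _ in votes.most_common(s)]
```

With a toy target user `[1, 1, 0, 0, 0]`, the \(m\) most similar users vote for the unseen movies they watched, and the top-voted movies form the candidate set that is later inserted into the prompt.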
\begin{table} \begin{tabular}{l l c c} \hline \hline \multirow{2}{*}{Setting} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{MovieLens 100K} \\ & & HR@10 & NDCG@10 \\ \hline \multirow{4}{*}{Full Training} & POP & 0.0519 & 0.0216 \\ & FPMC & 0.1018 & 0.0463 \\ & GRU4Rec & 0.1230 & 0.0559 \\ & SASRec & 0.1241 & 0.0573 \\ & CL4SRec & 0.1273 & 0.0617 \\ \hline \multirow{4}{*}{Zero-Shot} & Simple Prompting & 0.0297 & 0.0097 \\ & CS-Random-IF & 0.0805 & 0.0352 \\ & CS-Random-UF & 0.0954 & 0.0457 \\ & NIR-Single-IF & 0.0975 & 0.0501 \\ & NIR-Single-UF & 0.1135 & 0.0529 \\ & NIR-Multi-IF & 0.1028 & 0.0505 \\ & NIR-Multi-UF & 0.1187 & 0.0546 \\ \hline \hline \end{tabular} \end{table}
Table 1. Main result comparison on MovieLens 100K.

Figure 3. Results for different candidate set sizes.

\begin{table} \begin{tabular}{c c c|c} \hline \hline Candidate Set & User Preference & Representative Movies & HR@10 \\ \hline – & – & – & 0.0297 \\ ✓ & – & – & 0.1071 \\ ✓ & ✓ & – & 0.1136 \\ ✓ & – & ✓ & 0.1082 \\ ✓ & ✓ & ✓ & 0.1187 \\ \hline \hline \end{tabular} \end{table}
Table 2. Ablation study of the impact of different components in the proposed prompting on MovieLens 100K.

## 5. Conclusion

In this paper, we propose a three-step prompting strategy called Zero-Shot Next-Item Recommendation (NIR) for GPT-3 to make next-movie recommendations without further training. We evaluate our approach on a movie recommendation dataset and demonstrate its strong zero-shot performance. Our results highlight the potential of using LLMs in zero-shot recommendation and call for further exploration of using LLMs in recommendation tasks. This work can be extended in several directions, including recommendation in other domains and the few-shot setting.
2305.14706
PruMUX: Augmenting Data Multiplexing with Model Compression
As language models increase in size by the day, methods for efficient inference are critical to leveraging their capabilities for various applications. Prior work has investigated techniques like model pruning, knowledge distillation, and data multiplexing to increase model throughput without sacrificing accuracy. In this paper, we combine two such methods -- structured pruning and data multiplexing -- to compound the speedup gains obtained by either method. Our approach, PruMUX, obtains up to 7.5-29.5X throughput improvement over BERT-base model with accuracy threshold from 80% to 74%. We further study various combinations of parameters (such as sparsity and multiplexing factor) in the two techniques to provide a comprehensive analysis of the tradeoff between accuracy and throughput in the resulting models. We then propose Auto-PruMUX, a meta-level model that can predict the high-performance parameters for pruning and multiplexing given a desired accuracy loss budget, providing a practical method to leverage the combination effectively.
Yushan Su, Vishvak Murahari, Karthik Narasimhan, Kai Li
2023-05-24T04:22:38Z
http://arxiv.org/abs/2305.14706v2
# PruMUX: Augmenting Data Multiplexing with Model Compression

###### Abstract

As language models increase in size by the day, methods for efficient inference are critical to leveraging their capabilities for various applications. Prior work has investigated techniques like model pruning, knowledge distillation, and data multiplexing to increase model throughput without sacrificing accuracy. In this paper, we combine two such methods - structured pruning and data multiplexing - to compound the speedup gains obtained by either method. Our approach, PruMUX, obtains up to 7.5-29.5X throughput improvement over BERT-base model with accuracy threshold from 80% to 74%. We further study various combinations of parameters (such as sparsity and multiplexing factor) in the two techniques to provide a comprehensive analysis of the tradeoff between accuracy and throughput in the resulting models. We then propose Auto-PruMUX, a meta-level model that can predict the high-performance parameters for pruning and multiplexing given a desired accuracy loss budget, providing a practical method to leverage the combination effectively.1

Footnote 1: Our code is available at [https://github.com/yushansu/PruMUX](https://github.com/yushansu/PruMUX)

## 1 Introduction

Large language models (LLMs) have achieved state-of-the-art performance across various NLP tasks and resulted in impressive user-facing demonstrations such as ChatGPT.2 However, their large size necessitates the use of enormous amounts of compute and memory at inference time, which limits their widespread use.

Footnote 2: [https://chat.openai.com/](https://chat.openai.com/)

Two types of techniques have been explored to reduce the cost of model inference. The first is model compression, including network pruning (LeCun et al., 1989; Han et al., 2015; Frankle and Carbin, 2019), quantization (Han et al., 2016), knowledge distillation (Hinton et al., 2015), and combinations of multiple methods (Xia et al., 2022).
The second is the recently proposed data multiplexing (Murahari et al., 2023), which multiplexes multiple inputs into a single input for model inference. While both types of methods leverage the over-parameterization effect (Allen-Zhu et al., 2019; Radhakrishnan et al., 2020) in modern deep neural networks to improve the throughput-to-compute cost ratio, the manner in which they do so is different. Model compression aims at reducing the number of parameters in the model, hence reducing the overall compute cost (denominator) to improve the ratio. Data multiplexing, on the other hand, compresses multiple inputs into one to improve throughput (numerator) while keeping the model size fixed. This observation naturally leads us to hypothesize that the two types of methods could be complementary and can be combined for maximal gain in the throughput-to-compute cost ratio. There are two challenges to this hypothesis. The first is that both model compression and data multiplexing aim at trading a small accuracy loss for large throughput improvement. Intuitively, the combination may incur an accuracy loss larger than either method, and it is not clear how they interact with each other when combined. A research question is how to combine the two methods such that the combination achieves better throughput than each type of method individually, given any accuracy loss budget or accuracy threshold.

Figure 1: Throughput improvements (\(\times\)) of CoFi, DataMUX, and PruMUX over the BERT-base model (Devlin et al., 2018) on the MNLI task (Williams et al., 2017). The sparsity for a CoFi data point is labeled as \(s\). The width of multiplexing for a DataMUX data point is labeled as \(N\). The parameter pair for a PruMUX data point is labeled as (\(N\), \(s\)).
Training and testing with each parameter combination is costly and time-consuming. A research question is how to automatically predict and find top parameters based on the model's performance on one set of parameters. To address the first research question, we present PruMUX, a combination of model compression and data multiplexing. Our method is simple and consists of three phases - multiplexed model pre-training, task-specific fine-tuning and task-specific model compression. In our implementation, we make use of CoFi (Xia et al., 2022), a state-of-the-art model compression method that includes intermediate knowledge distillation steps that help minimize accuracy hits, and DataMUX (Murahari et al., 2023), which performs vector-based input multiplexing over instances. Our results over four datasets (MNLI, QNLI, QQP and SST-2) demonstrate that PruMUX achieves significantly higher throughput over CoFi and DataMUX individually for a large range of accuracy thresholds. As an example, Figure 1 shows the throughput improvements over the BERT-base model on task MNLI, providing a more optimal Pareto frontier in the tradeoff between accuracy and throughput. To address the second research question, we propose Auto-PruMUX, a meta-model to automatically predict and find the high-performance parameter combinations for a desired accuracy loss budget on a task based on the model's performance on one set of parameters without running additional experiments. We use interpolation and estimation models over a set of data points to predict the accuracy and throughput of a PruMUX model based on sparsity and multiplexing factor. We show promise in modeling the tradeoffs accurately and Auto-PruMUX can find high-performance combinations of known parameters as well as unknown parameters, providing a practical method for choosing a high-performance PruMUX model for a downstream task. 
Our key insight for why PruMUX can achieve better throughput than model compression and data multiplexing individually is that they improve the throughput of a model in two different dimensions: reducing the latency of an inference and compressing multiple inferences. In addition, both methods lead to non-linear drops in model accuracy at some points. PruMUX can achieve high throughput while avoiding each method's limitations.

## 2 Background

### CoFi Pruning

CoFi is a state-of-the-art model compression method (Xia et al., 2022) that uses distillation and structured pruning to jointly prune a Transformer network (Devlin et al., 2018). Its key idea is to distill the knowledge from the base model into the pruned model during training. A layer-wise distillation approach is used to guide the pruning from the teacher model, i.e., the dense model, to the student model, i.e., the pruned model, with a loss defined as:

\[L_{layer}=\sum_{i\in\tau}MSE(W_{layer}\mathbf{H}_{s}^{m(i)},\mathbf{H}_{t}^{i})\]

where \(\mathbf{H}_{s}^{m(i)}\) and \(\mathbf{H}_{t}^{i}\) are hidden representations of the \(m(i)\)th feed-forward layer of the student model and the \(i\)th feed-forward layer of the teacher model. \(i\) is the teacher model's closest layer to the layer \(m(i)\) of the student model. \(W_{layer}\) is a linear transformation matrix, initialized as an identity matrix. CoFi prunes both coarse-grained and fine-grained units of the distilled network. The coarse-grained units include multi-head attention layers, fully-connected layers, and attention heads. The fine-grained units include hidden dimensions and intermediate dimensions of the Transformer model. Different masks are used for different pruning units and are learned via \(l_{0}\) regularization during training. The units with mask variables smaller than a threshold are pruned away before inference.
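A minimal numpy sketch of the layer-wise distillation loss above; the dictionary-based layer mapping \(m(i)\) and the tensor shapes are illustrative assumptions rather than CoFi's actual implementation:

```python
import numpy as np

def layer_distill_loss(student_h, teacher_h, layer_map, W):
    """Sum over teacher layers i of MSE(W_layer @ H_s^{m(i)}, H_t^i),
    where layer_map[i] = m(i) matches a student layer to teacher layer i."""
    loss = 0.0
    for i, m_i in layer_map.items():
        projected = student_h[m_i] @ W.T  # W_layer, initialized as identity
        loss += np.mean((projected - teacher_h[i]) ** 2)
    return loss
```

With `W` initialized as the identity and identical hidden states, the loss starts at zero; pruning then perturbs the student representations and the loss pulls them back toward the teacher's.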
### DataMUX

Data multiplexing (DataMUX) is a recently proposed method (Murahari et al., 2022, 2023) to compress multiple inputs into a single "mixed" representation of the same size as a single input to a network, in order to improve inference throughput. DataMUX introduces multiplexing layers, which multiplex different sequences into a single sequence of representations, i.e., multiplexed representations, and demultiplexing layers, which demultiplex/decompress the multiplexed representations. The multiplexing layer first compresses multiple input sequences into a single sequence of representations. These representations are then processed by a Transformer model, and the resulting representations are then disentangled into independent representations by the demultiplexer layer. These representations are then used to make predictions. DataMUX, therefore, leads to a many-fold increase in inference throughput, as just a single pass through the large Transformer model is needed. The multiplexing layer is defined as

\[\textbf{x}^{1:N}=\Phi(\textbf{x}^{1},...\textbf{x}^{N})=\frac{1}{N}\sum_{i=1}^{N}\phi^{i}(\textbf{x}^{i})\]

where **x** is the input sequence, \(\phi^{i},i\in[1,...N]\), is the Hadamard product with a fixed Gaussian random vector, and \(N\) is the number of input sequences that get multiplexed. The multiplexed representations, \(\textbf{x}^{1:N}\), are then processed by the Transformer model to generate hidden multiplexed representations, \(\textbf{h}^{1:N}\). The demultiplexer layer, in order to disentangle the hidden multiplexed representation, \(\textbf{h}^{1:N}\), into independent representations, learns N parameterized demultiplexing functions, \(\psi^{i}\). The independent representations, \(\textbf{h}^{i}\), are then used to make predictions.

\[\textbf{h}^{i}=\psi^{i}(\textbf{h}^{1:N})\quad\forall i\in 1,2,...N\]

### Observations

Both model compression and data multiplexing aim at trading small accuracy losses for large inference throughput improvements.
When CoFi prunes a Transformer at relatively low sparsities, its accuracy loss is minimal and throughput improvement is significant, but at 95% sparsity, its accuracy loss becomes relatively significant (Xia et al., 2022). DataMUX also shares this nonlinear property, as shown in Figure 1. In other words, the trade-off of each method is good only up to a certain point. The two methods improve the throughput of a model in two dimensions. CoFi reduces the latency of an inference, whereas DataMUX compresses multiple inferences into one. A natural question is whether combining the two methods can achieve higher throughput with a smaller accuracy loss than each method individually.

## 3 PruMUX

Our key motivational question is the following: _given an accuracy loss budget, can the combination of model compression and data multiplexing achieve better throughput than each method individually?_ In this section, we first present PruMUX, a method to combine the two methods, and then show that PruMUX achieves substantially better throughput than each method alone for various accuracy thresholds in our experimental results.

### Method

PruMUX is a method to convert any Transformer into a high-throughput model, capable of compressing multiple inference inputs into a single input and executing it at a low latency. For multiplexing, PruMUX uses the recently proposed DataMUX (Murahari et al., 2023), which appends a multiplexer and demultiplexer as described in Sec 2.2.

Figure 2: Illustration of PruMUX showing a multiplexer, sparse Transformer, and a demultiplexer, with multiplexing width of 10, where 10 input sequences are mixed into 1 input sequence. The multiplexed Transformer model is pruned to reduce inference time. The training for PruMUX consists of three steps including retrieval warm-up, multiplexed model training, and Transformer pruning.

With width \(N\), the inference throughput of the Transformer can be improved by a factor of
up to \(N\), as each multiplexed input takes the same amount of computing resources as performing inference over a single input. For model compression, PruMUX can use any method such as network pruning, distillation, or a combination of the two (such as CoFi). The goal is to substantially reduce the latency of processing an inference. For our experiments, PruMUX uses CoFi as the model compression method. Training a model with PruMUX consists of three phases as shown in Figure 2:

Phase 1: Priming the multiplexed model with the token retrieval objective. We first prime the multiplexed transformer model with a token retrieval task. Murahari et al. (2022) introduced this "retrieval warm-up" self-supervised objective (shown below) and found it to be critical to improve the performance of multiplexed models.

\[L_{retrieval}(\textbf{x}^{1:N})=\sum_{j=1}^{L}-\log P(\textbf{w}_{j}^{I}|\textbf{H}_{j}^{I})\]

Phase 2: Pre-training and fine-tuning multiplexed models. The multiplexed models from the previous stage are then pre-trained on large-scale text corpora with the masked language modeling (MLM) objective. The pre-trained multiplexed models are then fine-tuned on downstream tasks to yield task-specific multiplexed models.

Phase 3: Model compression. Finally, we use CoFi to jointly prune coarse-grained and fine-grained units in the multiplexed Transformer model. The coarse-grained units include entire attention heads, attention layers, and fully connected layers. The fine-grained units include hidden dimensions and intermediate dimensions of the Transformer model. The demultiplexer's input dimension is pruned in order to match the pruned hidden dimension of the Transformer model. During the pruning process, CoFi uses knowledge distillation to transfer knowledge from the teacher model, i.e., the task-specific multiplexed model, to the pruned model.
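The vector-based multiplexing these phases build on (Section 2.2) can be sketched with numpy. The fixed Gaussian vectors below play the role of \(\phi^{i}\); the learned demultiplexing functions \(\psi^{i}\) are omitted, so this is only an illustrative sketch of the input-mixing step:

```python
import numpy as np

def make_multiplexer(num_inputs, dim, seed=0):
    # One fixed Gaussian vector per input slot, used as a Hadamard product phi^i.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_inputs, dim))

def multiplex(xs, phis):
    """x^{1:N} = (1/N) * sum_i phi^i * x^i: N sequences of shape (L, dim)
    are mixed into one sequence of the same shape."""
    n = len(xs)
    return sum(phis[i] * xs[i] for i in range(n)) / n
```

Here \(N\) input sequences of shape (L, dim) become one (L, dim) multiplexed sequence, so a single forward pass through the Transformer serves all \(N\) inputs.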
### Implementation Details

We use the pre-trained multiplexed BERT-base models (Murahari et al., 2023) with the standard BERT pre-training recipe with the masked language modeling objective for \(N=2,5,10\) on the Wikipedia (Foundation) and BooksCorpus (Zhu et al., 2015) datasets. We prime the multiplexed model before pre-training with the token retrieval task in Section 2.2 on the Wikipedia and BooksCorpus datasets. We then train the pre-trained multiplexed models on the four largest GLUE tasks (Wang et al., 2018) - MNLI (Williams et al., 2018), QNLI (Wang et al., 2018), QQP (qqp), and SST-2 (Socher et al., 2013). We then use the CoFi structured pruning objective to get a pruned multiplexed model on each task dataset. The hyperparameters we use for the training process are shown in Appendix A.1. We perform a single run to train the model for each setting, i.e., task, multiplexer width \(N\), and model sparsity \(s\), following the training process.

### Experiments

Setup. We would like to answer the question of whether, given an accuracy threshold, the PruMUX method can achieve a higher throughput than either CoFi or DataMUX alone. We compare the PruMUXed BERT-base model to three baselines:

* **BERT-base**: BERT-base model trained without data multiplexing and model compression.
* **CoFi**: BERT-base model pruned by CoFi (Xia et al., 2022) with sparsity3 \(s\) = 0.50, 0.60, 0.70, 0.80, 0.90, and 0.95.
* **DataMUX**: BERT-base model pre-trained by DataMUX (Murahari et al., 2023) with the multiplexer width \(N=2\), 5, and 10.

Footnote 3: Sparsity of 0.95 means 95% of the Transformer model weights are set to zero.

We have applied PruMUX to the BERT-base model with all combinations of \((N,s)\) for all 4 tasks. We follow the procedure in Xia et al. (2022) to calculate throughput improvements for PruMUXed Transformers and all three baselines, i.e., BERT-base, DataMUX, and CoFi.
The evaluation batch size is 128*\(N\), where \(N\) is the multiplexer width.

Results. Figure 3 shows the throughput improvements and accuracies of PruMUXed, DataMUXed, and CoFi-pruned Transformers over the Transformer base model on the MNLI, QNLI, QQP, and SST-2 tasks with all available parameters. The main takeaway is that PruMUX achieves higher throughput than either CoFi or DataMUX individually in all cases starting at various accuracy thresholds:

* For MNLI, with the accuracy thresholds from 80% to 74%, PruMUX achieves 7.5-29.5X throughput improvement over the BERT-base model, whereas CoFi improves by 4.0-10.6X and DataMUX by 2.0-4.9X.
* For QNLI, with the accuracy thresholds from 87% to 82%, PruMUX achieves 4.1-26.6X improvement, whereas CoFi improves by 3.8-11.2X and DataMUX by 2.0-9.6X.
* For QQP, with the accuracy thresholds from 89% to 86%, PruMUX achieves throughput improvement over BERT-base by 7.6-29.7X, whereas CoFi improves by 10.6X and DataMUX by 2.0-9.8X.
* For SST-2, with the accuracy thresholds from 86.5% to 83%, PruMUX improves the throughput by 10.1-27.8X, whereas CoFi improves by 10.6X and DataMUX by 4.8-9.7X.

The results also confirm the intuition that PruMUX with \((N,s)\) incurs an accuracy loss, loosely speaking, close to the sum of the accuracy loss of DataMUX with \(N\) and that of CoFi with \(s\). In general, PruMUX can achieve substantial throughput improvement when there is a decent accuracy loss budget.

### Discussion

The results above find the top PruMUX performance with all parameter pairs \((N,s)\), where \(N\) = 2, 5, 10 and \(s=\) 0.60, 0.70, 0.80, 0.90, and 0.95, for each accuracy loss budget. Searching for top PruMUX parameters at a finer parameter granularity will require training and testing on all additional parameter pairs. Exhaustive tests are impractical. First, for each \(N\), pre-training a DataMUX model with multiplexing width \(N\) is time-consuming.
Second, given each pre-trained model with multiplexer width \(N\), different sparsities \(s\) provide different throughput and accuracy trade-offs. In order to find the sparsity \(s\) with the highest throughput given an accuracy budget, one has to train the model for all possible sparsities. The total training time for the sparsities from 0.60 to 0.95 at the granularity of 0.05 for each \(N\) takes over six thousand GPU hours on commodity GPUs, for a small original BERT-base model. A key question is whether one can automatically find a high-throughput (\(N,s\)) with a small number of PruMUX experiments.

Figure 3: Throughput Improvement (\(\times\)) of PruMUX (ours), DataMUX (Murahari et al., 2023), and CoFi pruning (Xia et al., 2022) over the BERT-base model for the MNLI, QNLI, QQP, and SST-2 tasks. The x-axis is the Transformer accuracy, which is inverted to better show throughput improvements of each method for different accuracy loss budgets.

## 4 Auto-PruMUX

To address the question above, we propose Auto-PruMUX, a method to search for top (\(N,s\)) parameters, to help practitioners balance the performance vs. throughput trade-off. Our research question is: _Suppose we have some experimental data of PruMUX and the experimental data of DataMUX and CoFi, how can we find and predict the top parameters \((N,s)\) given an accuracy loss budget?_ Our approach is to develop performance models for the accuracy and throughput of PruMUX. We first train PruMUX models for a set of (\(N,s\)) combinations and measure both the accuracy and the throughput improvement. We then use this data to fit a throughput model and an accuracy model to predict throughput and accuracy respectively given \((N,s)\) parameters. We first discuss how we fit the accuracy and throughput models with a set of sparse data points.
Given that we are working with a limited set of data points, we opt to use a simple class of interpolation models for modeling PruMUX accuracy and use an estimation model for modeling throughput. We then outline how we leverage these models to predict top \((N,s)\) parameters, given an accuracy loss budget, and demonstrate the effectiveness of Auto-PruMUX in predicting the top parameters across a wide range of accuracy loss budgets.

### Task Accuracy Model

We use linear interpolation for our task accuracy model. \[f_{A}(N,s)=\begin{cases}A_{1,1}(N,s)&\mathbf{N}_{0}\leq N\leq\mathbf{N}_{1},\mathbf{s}_{0}\leq s\leq\mathbf{s}_{1},\\ ...\\ A_{i,j}(N,s)&\mathbf{N}_{i-1}\leq N\leq\mathbf{N}_{i},\mathbf{s}_{j-1}\leq s\leq\mathbf{s}_{j},\\ ...\\ A_{p,q}(N,s)&\mathbf{N}_{p-1}\leq N\leq\mathbf{N}_{p},\mathbf{s}_{q-1}\leq s\leq\mathbf{s}_{q}\end{cases}\] Each term is a linear combination of data multiplexer width and model sparsity. \[A_{i,j}(N,s)=\sum_{a=0}^{1}\sum_{b=0}^{1}k_{ab}^{(i,j)}N^{a}s^{b}\] The model is fitted on the gathered data of model task accuracy at different multiplexer widths and sparsities, \[A_{i,j}(\mathbf{N}_{i},\mathbf{s}_{j})=Acc(\mathbf{N}_{i},\mathbf{s}_{j}),\qquad i=1,...,p,\;j=1,...,q,\] where \(\mathbf{N}\) and \(\mathbf{s}\) are the range of \(N\) and \(s\) values used to fit the model.

### Throughput Model

We collect the throughput values for all \(N\) and \(s\) on one task (\(task_{0}\)) and use the throughput values as the throughput estimations for all tasks. \[f_{T}(N,s)=Throu_{task_{0}}(N,s)\]

### Predicting \((N,s)\)

We use our models, \(f_{A}(N,s)\) and \(f_{T}(N,s)\), to model the accuracy and the throughput of PruMUX with \(N>1\) and \(s>0\%\). \(Acc(1,s)\) and \(Throu(1,s)\) are the measured accuracy and throughput of CoFi-pruned models. \(Acc(N,0)\) and \(Throu(N,0)\) are the measured accuracy and throughput of DataMUX models. \(Acc(1,0)\) and \(Throu(1,0)\) are the performance of the BERT-base model.
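Each cell \(A_{i,j}\) of the accuracy model is the unique bilinear function \(\sum_{a,b\in\{0,1\}}k_{ab}N^{a}s^{b}\) through the four measured corner accuracies of the cell. A minimal sketch of one such cell; the corner values below are hypothetical, not measurements from the paper:

```python
def bilinear_cell(N0, N1, s0, s1, A00, A01, A10, A11):
    """One cell A_{i,j} of the accuracy model: the unique bilinear function
    through the four corner accuracies of the cell [N0, N1] x [s0, s1].

    A00 = Acc(N0, s0), A01 = Acc(N0, s1), A10 = Acc(N1, s0), A11 = Acc(N1, s1).
    """
    def A(N, s):
        tn = (N - N0) / (N1 - N0)  # normalised coordinate along N
        ts = (s - s0) / (s1 - s0)  # normalised coordinate along s
        return ((1 - tn) * (1 - ts) * A00 + (1 - tn) * ts * A01
                + tn * (1 - ts) * A10 + tn * ts * A11)
    return A

# Hypothetical corner accuracies for the cell N in [2, 5], s in [0.60, 0.70]:
A = bilinear_cell(2, 5, 0.60, 0.70, 83.0, 82.0, 80.0, 78.5)
```

The interpolant reproduces the corners exactly and varies linearly along each axis in between, which is all the full model does cell by cell.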
We search for \((N,s)\) parameters that maximize \(\zeta_{f}\) defined below. \[\zeta_{f}(N,s)=Throu(N,s)\cdot g(Acc(N,s)) \tag{1}\] \[g(x)=\begin{cases}1&x\geq\xi\\ 0&x<\xi\end{cases}\] Intuitively, \(\zeta_{f}\) tries to trade off task performance and throughput, given an accuracy loss budget \(\xi\), with the goal of maximizing the throughput. \(g(x)\) provides a mechanism for a strict accuracy threshold, i.e. a model that does not meet the minimum required accuracy will have \(\zeta_{f}=0\).

### Experimental Results

**Experimental setting** In this section, we show Auto-PruMUX's prediction results by fitting the performance models using a set of parameter space and predicting top parameters on a larger set of parameter space. We define the set of \((N,s)\) parameter space (test set) as follows.

* \((N,s)\): \(N=1\), \(s=0.00\)
* \((N,s)\): \(N=1\), \(\forall s\in\{0.60, 0.70, 0.80, 0.90, 0.95\}\)
* \((N,s)\): \(\forall N\in\{2,5,10\}\), \(s=0.00\)
* \((N,s)\): \(\forall N\in\{2,5,10\}\), \(\forall s\in\{0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90, 0.95\}\)4

Footnote 4: High sparsity doesn’t work for some \(N\)s and some tasks, i.e., (5, 0.95), (10, 0.90), (10, 0.95) for QNLI, (10, 0.85), (10, 0.90), (10, 0.95) for SST-2. We exclude these points from our training and test set.

We fit the accuracy model with the model accuracies on \((N,s)\), \(\forall N\in\{2,5,10\}\), \(\forall s\in\{0.60, 0.70, 0.80, 0.90, 0.95\}\) (training set). We fit the throughput model with the throughput of one task on all parameter pairs. Our goal is to evaluate the task accuracy model, the throughput model, and parameter prediction performance.

**Performance Model Accuracy** To evaluate the accuracy of the task performance models on the training set, we perform leave-one-out cross-validation for each task. We show the fraction \(M_{A}\) of accuracy predictions with error falling within \(\Delta\xi=1.5\%\) from real accuracy in Table 1.
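The selection rule of Eq. (1) can be sketched as a filter-then-rank search over candidate pairs. The toy stand-ins `f_A` and `f_T` below and all their numbers are made up for illustration; they are not the fitted models or measurements from the paper:

```python
def best_parameters(candidates, base_acc, acc_budget, f_A, f_T, top_k=3):
    """Rank (N, s) pairs by zeta_f = f_T * g(f_A) as in Eq. (1):
    throughput counts only if predicted accuracy meets the threshold."""
    threshold = base_acc - acc_budget
    def zeta(p):
        N, s = p
        return f_T(N, s) if f_A(N, s) >= threshold else 0  # g is the strict cutoff
    feasible = [p for p in candidates if zeta(p) > 0]
    return sorted(feasible, key=zeta, reverse=True)[:top_k]

# Toy stand-ins for the fitted models (hypothetical numbers):
f_A = lambda N, s: 84.0 - 1.5 * (N - 1) - 4.0 * s   # accuracy falls with N and s
f_T = lambda N, s: N * (1.0 + 10.0 * s)             # throughput grows with N and s
grid = [(N, s) for N in (1, 2, 5, 10) for s in (0.0, 0.6, 0.9)]
picks = best_parameters(grid, base_acc=84.0, acc_budget=3.0, f_A=f_A, f_T=f_T)
```

With these toy models, aggressive pairs are cut off by \(g\) and the ranking keeps only the feasible ones, highest predicted throughput first.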
To evaluate the accuracy of the throughput model on the training set, we fit the model using PruMUX's performance of the QQP task. We show the fraction \(M_{T}\) of throughput predictions with error within 20% of real throughput improvement in Table 1. Across different tasks, our accuracy and throughput models are accurate across a broad set of parameter combinations.

**Top Parameter Prediction** We show Auto-PruMUX's prediction results by fitting the accuracy model on the training set, fitting the throughput model using the throughput of the QQP task, and predicting top parameters on the test set. We show Auto-PruMUX's top parameter predictions for an accuracy loss budget of 3% in Table 2. Auto-PruMUX predicts the actual best parameter pairs within its top 3 predictions. In Table 3, we use Auto-PruMUX to predict parameters for accuracy loss budgets in 0%, 0.5%,..., 10% and show the percentage of accuracy loss budgets for which Auto-PruMUX predicts the actual best parameter in its top 3 predictions. Auto-PruMUX is able to predict top parameters in most cases.

## 5 Related Work

**Model Compression** The compression ratio of structured pruning is typically lower than that of unstructured pruning for the same accuracy loss budget (Yu et al., 2017; Narang et al., 2017; Wen et al., 2017; Mao et al., 2017; Wang et al., 2019; McDanel et al., 2022). Structured pruning has been applied to transformers to improve inference throughput (Fan et al., 2019; Sajjad et al., 2023; Voita et al., 2019; Michel et al., 2019; Prasanna et al., 2020; Chen et al., 2020; McCarley et al., 2019; Hou et al., 2020; Yao et al., 2021). Distillation compresses a model by transferring knowledge from a large teacher model to a small student model (Hinton et al., 2015). General distillation for Transformer models learns from an unlabeled corpus (Sanh et al., 2019; Sun et al., 2020; Wang et al., 2020; Turc et al., 2019; Jiao et al., 2019). Task-specific distillation for Transformer models learns on task-specific data (Sun et al., 2019).
(Jiao et al., 2019) combines the two distillation methods to improve performance. Pruning with a distillation objective has been explored (Sanh et al., 2020; Lagunas et al., 2021). (Xia et al., 2022) proposes structured pruning with a distillation objective to reduce the Transformer parameters by up to 95% and achieve over 10x speedups with small accuracy drops.

**Multi-input Multi-output Models** Multi-input Multi-output models concurrently process multiple inputs within one neural network to reduce network over-parameterization. (Havasi et al., 2021) and (Rame et al., 2021) train independent sub-networks and ensemble them into a multi-input multi-output model to obtain better accuracy and uncertainty estimation with inference cost similar to a single network. (Murahari et al., 2022) proposes a data multiplexing technique to multiplex multiple input sequences into one input sequence to a Transformer model, which leads to up to 18x inference speedup. (Murahari et al., 2023) develops pre-trained multiplexed language models to improve model throughput.

**Performance Modeling** Various methods have been proposed to estimate the performance of machine learning models. (Justus et al., 2018) proposes a method to predict CNN execution time for training. They decompose CNN training into several components, estimate the time for each component, and predict the model execution time as the combination of the different components. (Qi et al., 2017; Cai et al., 2017) predict the performance of deep neural networks based on the neural network models' architecture. (Stamoulis et al., 2018) proposes predictive models for the power and memory of neural networks executing on GPUs. Machine-learning-based cost models (Chen et al., 2018; Bouzidi et al., 2020) have been explored to predict program running time.
Interpolation (Davis, 1975) is widely used in engineering and science (Oliver and Webster, 1990; Keys, 1981; Lehmann et al., 1999), where function values at discrete data points are collected in experiments and the function values at the intervals between discrete data points are estimated using interpolation methods.

## 6 Conclusion

We propose PruMUX, a method to combine model compression and data multiplexing to build high-throughput transformers. Our implementation of PruMUX makes use of CoFi and DataMUX, and we show that it achieves substantial throughput improvement over either CoFi or DataMUX for a large range of accuracy thresholds. We conclude that the reason PruMUX performs well in a certain range of accuracy loss budgets is that CoFi and DataMUX improve the throughput of a model in two different dimensions: reducing the latency of an inference and compressing multiple inferences. When the accuracy loss budget is large, both methods lead to non-linear drops in model accuracy; PruMUX can achieve much better performance than either approach because it uses more conservative parameters for CoFi and DataMUX before each reaches its bad trade-off point. We also present Auto-PruMUX, a meta-model to automatically predict high-performance parameter combinations for a desired accuracy on a task. We show it is promising in predicting parameters without individual data points and additional training.

## 7 Limitations

Our experiments are limited to 3 DataMUXed pretrained models (\(N=\) 2, 5, and 10) due to compute constraints. More pre-trained models with different \(N\)'s would provide PruMUX with more options to improve throughput and would allow us to conduct a more detailed evaluation of Auto-PruMUX. PruMUX uses CoFi as its model compression method. Experiments with other methods could improve our understanding of the interactions between model compression and data multiplexing.
2307.15261
Explicit Hopcroft's Trick in Categorical Partition Refinement
Algorithms for partition refinement are actively studied for a variety of systems, often with the optimisation called Hopcroft's trick. However, the low-level description of those algorithms in the literature often obscures the essence of Hopcroft's trick. Our contribution is twofold. Firstly, we present a novel formulation of Hopcroft's trick in terms of general trees with weights. This clean and explicit formulation -- we call it Hopcroft's inequality -- is crucially used in our second contribution, namely a general partition refinement algorithm that is functor-generic (i.e. it works for a variety of systems such as (non-)deterministic automata and Markov chains). Here we build on recent works on coalgebraic partition refinement but depart from them with the use of fibrations. In particular, our fibrational notion of $R$-partitioning exposes a concrete tree structure to which Hopcroft's inequality readily applies. It is notable that our fibrational framework accommodates such algorithmic analysis on the categorical level of abstraction.
Takahiro Sanada, Ryota Kojima, Yuichi Komorida, Koko Muroya, Ichiro Hasuo
2023-07-28T02:08:21Z
http://arxiv.org/abs/2307.15261v2
# Explicit Hopcroft's Trick in Categorical Partition Refinement

###### Abstract

Algorithms for _partition refinement_ are actively studied for a variety of systems, often with the optimisation called _Hopcroft's trick_. However, the low-level description of those algorithms in the literature often obscures the essence of Hopcroft's trick. Our contribution is twofold. Firstly, we present a novel formulation of Hopcroft's trick in terms of general trees with weights. This clean and explicit formulation--we call it _Hopcroft's inequality_--is crucially used in our second contribution, namely a general partition refinement algorithm that is _functor-generic_ (i.e. it works for a variety of systems such as (non-)deterministic automata and Markov chains). Here we build on recent works on coalgebraic partition refinement but depart from them with the use of _fibrations_. In particular, our fibrational notion of _\(R\)-partitioning_ exposes a concrete tree structure to which Hopcroft's inequality readily applies. It is notable that our fibrational framework accommodates such algorithmic analysis on the categorical level of abstraction.

Keywords: partition refinement, category theory, coalgebra, fibration, tree algorithm

Such a variety of target systems is uniformly addressed by a recent body of work on _coalgebraic partition refinement_ [4, 5, 14, 26]. Here, a target system is identified with a categorical construct called _coalgebra_ \(c\colon C\to FC\) (see e.g. [13]), where \(C\) represents the state space, the _functor_ \(F\) specifies the type of the system, and \(c\) represents the dynamics. By changing the functor \(F\) as a parameter, the theory accommodates many different systems such as DFAs and weighted automata. The coalgebraic partition refinement algorithms in [4, 5, 14, 26] are _functor-generic_: they apply uniformly to such a variety of systems.
The current work is inspired by [14] which successfully exploits Hopcroft's trick for generic coalgebraic partition refinement. In [14], their coalgebraic algorithm is described in parallel with its set-theoretic (or even binary-level) concrete representations, letting the latter accommodate Hopcroft's trick. Their experimental results witnessed its superior performance, beating some existing tools that are specialised in a single type of system. However, the use of Hopcroft's trick in [14] is formulated in low-level set-theoretic terms, which seems to obscure the essence of the algorithm as well as the optimisation by Hopcroft's trick, much like in the original paper [11]. Therefore, in this paper, we aim at 1) an explicit formulation of Hopcroft's trick, and 2) a categorical partition refinement algorithm that exposes an explicit data structure to which Hopcroft's trick applies. We achieve these two goals in this paper: 1) an explicit formulation that we call _Hopcroft's inequality_, and 2) a categorical algorithm that uses a _fibration_. Here is an overview.

**Hopcroft's Inequality** We identify _Hopcroft's inequality_ (Thm. 2.9) as the essence of Hopcroft's trick. Working on general trees with a general notion of vertex weight, it uses the classification of edges into _heavy_ and _light_ ones and bounds a sum of weights in terms of (only) the root and leaf weights. This inequality can be used to bound the complexity of many tree generation algorithms, including those for partition refinement. This general theory can accommodate different weights. We exploit this generality to systematically derive partition refinement algorithms with different complexities (§6.2).

**A Fibrational Partition Refinement Algorithm** Hopcroft's inequality does not directly apply to the existing coalgebraic partition refinement algorithms [4, 5, 14, 26] since the latter do not explicitly present a suitable tree structure.
To address this challenge, we found the categorical language of _fibrations_ [12] to be a convenient vehicle: it allows us to speak about the relationship between 1) an equivalence relation (an object in a fibre category) and 2) a partitioning of a state space (a mono-sink in the base category). The outcome is a partition refinement algorithm that is both _abstract_ (it is functor-generic and applies to a variety of systems) and _concrete_ (it explicitly builds a tree to which Hopcroft's inequality applies). Our development relies on the fibrational theory of bisimilarity [8, 17]; yet ours is the first fibrational partition refinement algorithm. More specifically, in a fibration \(p\colon\mathbb{E}\to\mathbb{C}\), an equivalence relation \(R\) on a set \(X\) is identified with an object \(R\in\mathbb{E}_{X}\) in the fibre over \(X\) (consider the well-known fibration \(\mathbf{EqRel}\to\mathbf{Set}\) of sets and equivalence relations over them). We introduce a categorical notion of _\(R\)-partitioning_; it allows \(R\in\mathbb{E}_{X}\) to induce a mono-sink (i.e. a family of monomorphisms) \(\{\kappa_{i}\colon C_{i}\mapsto C\}_{i\in I}\). The latter is identified with the set of \(R\)-equivalence classes. Fig. 1 illustrates one iteration of our fibrational partition refinement algorithm \(\mathsf{fPR}^{\mathrm{H}}\) (Algo. 2). In the last step (Fig. 1c), the mono-sink \(C_{010},C_{011},C_{012}\rightsquigarrow C_{01}\) arises as the \((c\circ\kappa)^{*}\overline{F}R\)-partitioning of \(C_{01}\). In this manner, a tree structure explicitly emerges in the base category \(\mathbb{C}\). Hopcroft's inequality directly applies to this tree, allowing us to systematically present the Hopcroft-type optimisation on the categorical level of abstraction. We note that, at this moment, our fibrational framework (with a fibration \(p\colon\mathbb{E}\to\mathbb{C}\)) has only one example, namely the fibration \(\mathbf{EqRel}\to\mathbf{Set}\) of equivalence relations over sets.
While it is certainly desirable to have other examples, their absence does not harm the value of our fibrational framework: we do not use fibrations for additional generality (beyond functor-genericity);1 we use them to explicate trees in the base category (cf. Fig. 1).

Footnote 1: In this sense, we can say that our use of fibrations is similar to some recent usages of string diagrams in _specific_ monoidal categories, such as in [2, 22].

**Contributions**: Summarising, our main technical contributions are as follows.

* _Hopcroft's inequality_ that explicates the essence of Hopcroft's trick.
* A fibrational notion of _\(R\)-partitioning_ that turns a fibre object into a mono-sink (§4).
* A fibrational partition refinement algorithm \(\mathsf{fPR}^{\mathrm{H}}\) that combines the above two (§6.1).
* Functor-generic partition refinement algorithms \(\mathsf{fPR}^{\mathrm{H}}_{\mathrm{RP}}\), \(\mathsf{fPR}^{\mathrm{H}}_{\mathrm{OC}}\), \(\mathsf{fPR}^{\mathrm{H}}_{\mathrm{RR}}\), obtained as instances of \(\mathsf{fPR}^{\mathrm{H}}\) but using different weights in Hopcroft's inequality. The three achieve slightly different, yet comparable to the best known, complexity bounds (§6.2).

## 2 Hopcroft's Inequality

We present our first contribution, _Hopcroft's inequality_. It is a novel formalisation of Hopcroft's trick in terms of rooted trees. It also generalises the trick, accommodating arbitrary _weights_ (Def. 2.2) besides the particular one that is typically and widely used (e.g. [11, 14, 15, 24]).

Let \(T\) be a rooted tree. We denote the set of leaves by \(L(T)\), the set of vertices by \(V(T)\), the set of edges in the path from \(v\) to \(u\) by \(\mathrm{path}(v,u)\), the set of children of \(v\in V(T)\) by \(\mathrm{ch}(v)\), and the subtree whose root is \(v\in V(T)\) by \(\mathrm{tr}(v)\).

**Definition 2.2** (weight function). Let \(T\) be a rooted finite tree.
A _weight function_ of \(T\) is a map \(w\colon V(T)\to\mathbb{N}\) satisfying \(\sum_{u\in\mathrm{ch}(v)}w(u)\leq w(v)\) for each \(v\in V(T)\). We call a weight function _tight_ if \(\sum_{u\in\mathrm{ch}(v)}w(u)=w(v)\) for all \(v\in V(T)\setminus L(T)\).

**Definition 2.3** (heavy child choice). For a weight function \(w\) of a tree \(T\), a _heavy child choice_ (hcc for short) is a map \(h\colon V(T)\setminus L(T)\to V(T)\) satisfying \(h(v)\in\mathrm{ch}(v)\) and \(w(h(v))=\max_{u\in\mathrm{ch}(v)}w(u)\) for every \(v\in V(T)\setminus L(T)\). We write \(h(v)\) as \(h_{v}\) and call the vertex \(h_{v}\) a _heavy child_ of \(v\), and a non-heavy child a _light child_. We define \(\mathrm{lch}_{h}(v)=\mathrm{ch}(v)\setminus\{h_{v}\}\). An edge \((v,u)\) is a _light edge_ if \(u\in\mathrm{lch}_{h}(v)\). We define \(\mathrm{lpath}(v,u)=\{e\in\mathrm{path}(v,u)\mid e\text{ is a light edge}\}\). Note that a heavy child choice always exists but is not unique in general.

Figure 1: An iteration in our algorithm \(\mathsf{fPR}^{\mathrm{H}}\) (Algo. 2). Fig. 1a shows an equivalence relation \(R\) over \(C\), and the corresponding partitioning \(C_{00},C_{01},C_{1}\to C\) of the state space \(C\). (The history of refinement is recorded as a tree; this is important for complexity analysis.) In Fig. 1b, the equivalence relation \(R\) is refined into \(c^{*}\overline{F}R\) along the one-step transition of the system dynamics \(c\), and is further restricted to the partition \(C_{01}\). In Fig. 1c, the resulting equivalence relation \((c\circ\kappa)^{*}\overline{F}R\) over \(C_{01}\) yields a partitioning of \(C_{01}\), expanding the tree.

Examples are in Fig. 2; the weight on the left is not tight while the right one is tight. In the rest of this section, our technical development is towards Hopcroft's inequality in Thm. 2.9.
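As a concrete reading of Defs. 2.2 and 2.3, the two conditions are easy to check on a finite tree encoded as a parent-to-children map. The tree and weights below are ad-hoc examples, not the trees of Fig. 2:

```python
def is_weight_function(children, w):
    """Def. 2.2: at every vertex, the children's weights sum to at most w(v)."""
    return all(sum(w[u] for u in ch) <= w[v] for v, ch in children.items())

def heavy_child_choice(children, w):
    """Def. 2.3: pick one child of maximum weight at every internal vertex."""
    return {v: max(ch, key=lambda u: w[u]) for v, ch in children.items() if ch}

# An ad-hoc tree as a parent -> list-of-children map (leaves map to []):
children = {"r": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
w = {"r": 10, "a": 7, "b": 3, "c": 4, "d": 2}   # not tight at "a": 4 + 2 < 7
h = heavy_child_choice(children, w)             # light children: "b" and "d"
```

Ties in `max` are broken arbitrarily, which matches the remark that an hcc exists but is not unique in general.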
It gives an upper bound for a sum of weights--we only count those for light children, which is the core of the optimisation in Hopcroft's partition refinement algorithm [11]--in terms of weights of the root and the leaves. This upper bound makes no reference to the tree's height or internal weights, making it useful for complexity analysis of tree generation algorithms. The following lemma crucially relies on the definition of weight function.

**Lemma 2.4**. Let \(T\) be a finite tree with a root \(r\), \(w\) be a weight function of \(T\), and \(S\) be an arbitrary set of edges of \(T\). Then \(\sum_{v\in V(T)}\sum_{\begin{subarray}{c}u\in\mathrm{ch}(v)\\ (v,u)\notin S\end{subarray}}w(u)\geq\sum_{l\in L(T)}\bigl{|}\mathrm{path}(r,l)\setminus S\bigr{|}\cdot w(l)\) holds. The equality holds when \(w\) is tight.

Lem. 2.5 is our first key lemma; we use Lem. 2.4 in its proof. It relates the sum of weights of the light children--for which we aim to give an upper bound in Thm. 2.9--with the leaf weights and (roughly) the tree height.

**Lemma 2.5**. Let \(T\) be a finite tree with a root \(r\), \(w\) be a weight function of \(T\), and \(h\) be an hcc for \(w\). Then the following inequality holds. The equality holds when \(w\) is tight. \[\sum_{v\in V(T)}\sum_{u\in\mathrm{lch}_{h}(v)}w(u)\;\geq\;\sum_{l\in L(T)}\bigl{|}\mathrm{lpath}(r,l)\bigr{|}\cdot w(l). \tag{1}\]

For the right tree in Fig. 2, the left-hand side of (1) is \((14+7)+5+(2+3)+0+2=33\), and the right-hand side is \(1\times 5+0\times 10+1\times 9+2\times 2+2\times 3+2\times 2+1\times 5=33\). The inequality in (1) is the opposite of what we want (namely an upper bound for the left-hand side). We thus force an equality using the following notion of tightening.

**Definition 2.6** (tightening). Let \(w\) be a weight function of a rooted finite tree \(T\), and \(h\) be its heavy child choice.
The _tightening_ \(w^{\prime}\colon V(T)\to\mathbb{N}\) of \(w\) along \(h\) is defined recursively by \[w^{\prime}(u)=\left\{\begin{array}{ll}w(u)&\text{if $u$ is the root of $T$}\\ w^{\prime}(v)-\sum_{u^{\prime}\in\mathrm{lch}_{h}(v)}w(u^{\prime})&\text{if $u=h_{v}$ for the parent $v$ of $u$}\\ w(u)&\text{otherwise.}\end{array}\right.\]

In Fig. 2, the weight function of the right tree is a tightening of that of the left tree. We observe that tightening maintains a heavy child choice:

**Lemma 2.7**. Let \(T\) be a rooted finite tree, \(w\) be a weight function of \(T\), \(h\) be an hcc for \(w\), and \(w^{\prime}\) be the tightening of \(w\) along \(h\). The following hold.

1. The map \(w^{\prime}\) is a tight weight function of \(T\).
2. The map \(h\) is also a heavy child choice for \(w^{\prime}\).
3. For the root \(r\), \(w(r)=w^{\prime}(r)\) holds, and for each \(v\in V(T)\), \(w(v)\leq w^{\prime}(v)\) holds.

Our second key lemma towards Thm. 2.9 is as follows, bounding \(|\mathrm{lpath}(r,v)|\) that occurs on the right in (1). Its proof is by what is commonly called _Hopcroft's trick_ [11, 1]: it observes that, along a light edge, weights decay at least by \(1/2\).

Figure 2: Examples of rooted trees, each with a weight function and an hcc. The heavy children are indicated by thick edges. A thin edge represents a light edge.
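The tightening of Def. 2.6 can be computed top-down: the heavy child absorbs the parent's slack, while light children keep their original weights. A sketch on an ad-hoc tree (not the trees of Fig. 2):

```python
def tighten(children, w, h, root):
    """Def. 2.6: w'(root) = w(root); a heavy child gets the parent's (already
    tightened) weight minus the original weights of its light siblings;
    light children keep their original weight."""
    wp = {root: w[root]}
    stack = [root]
    while stack:
        v = stack.pop()
        for u in children.get(v, []):
            if u == h.get(v):   # heavy child: absorb the parent's slack
                wp[u] = wp[v] - sum(w[x] for x in children[v] if x != u)
            else:               # light child: unchanged
                wp[u] = w[u]
            stack.append(u)
    return wp

# Ad-hoc example: h maps "r" -> "a" and "a" -> "c".
children = {"r": ["a", "b"], "a": ["c", "d"], "b": [], "c": [], "d": []}
w = {"r": 12, "a": 7, "b": 3, "c": 4, "d": 2}
wp = tighten(children, w, {"r": "a", "a": "c"}, "r")
```

The result is tight at every internal vertex, the root weight is unchanged, and every vertex weight can only grow, exactly as Lemma 2.7 states.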
**Theorem 2.9** (Hopcroft's inequality).: _Let \(T\) be a finite tree with root \(r\), \(w\) be a weight function of \(T\), and \(h\) be a heavy child choice for \(w\). The following inequality holds._ \[\sum_{v\in V(T)}\sum_{w\in\operatorname{clb}_{h}(v)}w(u)\;\leq\;w(r)\log_{2}w( r)-\sum_{l\in L(T),w(l)\neq 0}w(l)\log_{2}w(l). \tag{2}\] For complexity analysis, we use Hopcroft's inequality in the following form. Assume that a tree generation algorithm takes \(t(v)\) time to generate all the children (both heavy and light) of \(v\). If there exists \(K\) with \(t(v)\leq K\sum_{v\in\operatorname{clb}(v)}w(u)\)--note that the bound only refers to light children--then the time to generate the whole tree is bounded by \(Kw(r)\log_{2}w(r)\). **Corollary 2.10**.: _Let \(T\) be a rooted finite tree with root \(r\), \(w\) be a weight function of \(T\), and \(h\) be a heavy child choice for \(w\). If a map \(t\colon V(T)\to\mathbb{N}\) satisfies that there exists a constant \(K\in\mathbb{N}\) such that \(t(v)\leq K\sum_{u\in\operatorname{clb}_{h}(v)}w(u)\) for every \(v\in V(T)\), then the sum of \(t(v)\) is bounded by \(Kw(r)\log_{2}w(r)\), that is \(\sum_{v\in V(T)}t(v)\leq Kw(r)\log_{2}w(r)\)._ **Remark 2.11**.: Further adaptations of Hopcroft's trick are pursued in the literature, e.g. in [25], where the notion of heavy child choice is relaxed with an extra parameter \(\alpha\in[1/2,1)\). Our theory can easily be extended to accommodate \(\alpha\), in which case the above description corresponds to the special case with \(\alpha=1/2\). Details are deferred to another venue. ## 3 Categorical Preliminaries The rest of the paper is about our second contribution, namely a functor-generic partition refinement (PR) algorithm optimised by an explicit use of Hopcroft's inequality (Thm. 2.9). It is given by our novel formulation of coalgebraic PR algorithms in fibrational terms. Here we shall review some necessary categorical preliminaries. 
We use categorical formalisation of intersections and unions for monomorphisms.

**Definition 3.1**. For monomorphisms \(m\colon A\to C\) and \(n\colon B\to C\) in \(\mathbb{C}\), the _intersection_ \(m\cap n\colon A\cap B\to C\) and the _union_ \(m\cup n\colon A\cup B\to C\) are defined by the following pullback and pushout, respectively:

We say \(m\colon A\to C\) and \(m^{\prime}\colon A^{\prime}\to C\) are _equivalent_ if there is an isomorphism \(\phi\colon A\to A^{\prime}\) such that \(m=m^{\prime}\circ\phi\). The set \(\operatorname{\mathbf{Sub}}(\mathbb{C})_{C}\) of equivalence classes of monomorphisms whose codomains are \(C\) forms a lattice, assuming enough limits and colimits.

### Fibrations

A fibration \(p\colon\mathbb{E}\to\mathbb{C}\) is a functor satisfying some axioms. When \(p(R)=C\) for an object \(R\in\mathbb{E}\) and an object \(C\in\mathbb{C}\), we see that \(R\) equips \(C\) with some information, e.g. a predicate, a relation, a topology, etc. The main example in this paper is the fibration \(\mathbf{EqRel}\to\mathbf{Set}\) where \(C\) is a set and \(R\) is an equivalence relation over \(C\). Fibrational constructs that are the most relevant to us are the _inverse image_ \(f^{*}(R^{\prime})\) and the _direct image_ \(f_{*}(R)\) along a morphism \(f\colon S\to S^{\prime}\) in \(\mathbb{C}\). In the case of \(\mathbf{EqRel}\to\mathbf{Set}\), these are computed as follows: \(f^{*}(R^{\prime})=\{(x,y)\in S\times S\mid(f(x),f(y))\in R^{\prime}\}\), and \(f_{*}(R)\) is the equivalence closure of \(\{(f(x),f(y))\mid(x,y)\in R\}\).

In what follows we introduce some basics of fibrations; they formalise the intuition above. For details, see e.g. [12].

**Definition 3.2** (fibration). Let \(p\colon\mathbb{E}\to\mathbb{C}\) be a functor. A morphism \(f\colon P\to R\) in \(\mathbb{E}\) is _Cartesian_ if for any \(g\colon Q\to R\) in \(\mathbb{E}\) with \(pg=pf\circ v\) for some \(v\colon pQ\to pP\), there exists a unique \(h\colon Q\to P\) in \(\mathbb{E}\) above \(v\) (i.e. \(ph=v\)) with \(f\circ h=g\).
The functor \(p\) is a _fibration_ if for each \(R\in\mathbb{E}\) and \(u\colon C\to D\) in \(\mathbb{C}\) with \(pR=D\), there are an object \(u^{*}R\) and a Cartesian morphism \(\dot{u}(R)\colon u^{*}R\to R\) in \(\mathbb{E}\). See below.

The category \(\mathbb{E}\) is called the _total category_ and the category \(\mathbb{C}\) is called the _base category_ of the fibration. For an object \(C\in\mathbb{C}\), the objects in \(\mathbb{E}\) above \(C\) form a category \(\mathbb{E}_{C}\), called the _fibre category_ above \(C\). The fibre category \(\mathbb{E}_{C}\) is the category of "equivalence relations" on \(C\).

**Definition 3.3** (fibre category). Let \(p\colon\mathbb{E}\to\mathbb{C}\) be a fibration and \(C\in\mathbb{C}\). The _fibre category_ \(\mathbb{E}_{C}\) over \(C\) is the subcategory of \(\mathbb{E}\) whose objects are defined by \(\operatorname{ob}(\mathbb{E}_{C})=\{R\in\mathbb{E}\mid pR=C\}\), and morphisms are defined by \(\mathbb{E}_{C}(Q,R)=\{f\in\mathbb{E}(Q,R)\mid pf=\operatorname{id}_{C}\}\) for \(Q,R\in\operatorname{ob}(\mathbb{E}_{C})\).

**Example 3.4** (\(\mathbf{EqRel}\to\mathbf{Set}\) is a fibration). Let \(\mathbf{EqRel}\) be the category of equivalence relations. The objects of \(\mathbf{EqRel}\) are pairs \((S,R)\) of a set \(S\) and an equivalence relation \(R\) on \(S\). A morphism \(f\colon(S,R)\to(S^{\prime},R^{\prime})\) in \(\mathbf{EqRel}\) is a function \(f\colon S\to S^{\prime}\) satisfying \((f(x),f(y))\in R^{\prime}\) for all \((x,y)\in R\). We sometimes write just \(R\) for \((S,R)\) when no confusion arises. The functor \(p\colon\mathbf{EqRel}\to\mathbf{Set}\) defined by \(p(S,R)=S\) is a fibration.

For a morphism \(u\colon C\to D\) in the base category of a fibration, the map \(u^{*}\colon\operatorname{ob}(\mathbb{E}_{D})\to\operatorname{ob}(\mathbb{E}_{C})\) extends to a functor \(u^{*}\colon\mathbb{E}_{D}\to\mathbb{E}_{C}\) between fibre categories. We call the functor \(u^{*}\colon\mathbb{E}_{D}\to\mathbb{E}_{C}\) an _inverse image_ functor.
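For finite sets, the \(\mathbf{EqRel}\to\mathbf{Set}\) data above is directly executable: encode relations as sets of pairs and functions as dictionaries. A minimal sketch under that (assumed) encoding:

```python
def is_eqrel_morphism(f, R, R_prime):
    """f : (S, R) -> (S', R') is an EqRel morphism iff related pairs
    are mapped to related pairs."""
    return all((f[x], f[y]) in R_prime for (x, y) in R)

def inverse_image(f, S, R_prime):
    """The inverse image functor on objects: f*(R') relates x, y in S
    iff f(x) and f(y) are R'-related."""
    return {(x, y) for x in S for y in S if (f[x], f[y]) in R_prime}

# Ad-hoc finite instance: f collapses 0 and 1.
S = {0, 1, 2}
f = {0: "a", 1: "a", 2: "b"}
R_prime = {("a", "a"), ("b", "b")}          # the diagonal on {"a", "b"}
R = inverse_image(f, S, R_prime)            # an equivalence relation on S
```

By construction, \(f\) is then a morphism \((S,f^{*}(R^{\prime}))\to(S^{\prime},R^{\prime})\), which is the Cartesian-lifting situation of Definition 3.2 in this concrete case.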
Given a fibration \(\mathbb{E}\to\mathbb{C}\) and an endofunctor \(F\colon\mathbb{C}\to\mathbb{C}\) on the base category, if \(R\in\mathbb{E}\) is above \(C\in\mathbb{C}\), we would like to get an object in \(\mathbb{E}\) above \(FC\). A lifting of \(F\) specifies the choice of an object above \(FC\).

**Definition 3.5** (lifting, fibred lifting). Let \(p\colon\mathbb{E}\to\mathbb{C}\) be a fibration and \(F\colon\mathbb{C}\to\mathbb{C}\) be a functor. A _lifting_ of \(F\) is a functor \(\overline{F}\colon\mathbb{E}\to\mathbb{E}\) with \(p\circ\overline{F}=F\circ p\). A pair \((F,\overline{F})\) of a functor \(F\colon\mathbb{C}\to\mathbb{C}\) and its lifting \(\overline{F}\colon\mathbb{E}\to\mathbb{E}\) is _fibred_ if \(\overline{F}\) preserves Cartesian morphisms.

When \((F\colon\mathbb{C}\to\mathbb{C},\overline{F}\colon\mathbb{E}\to\mathbb{E})\) is fibred, for \(f\colon C\to D\) in \(\mathbb{C}\) and \(R\in\mathbb{E}_{D}\), we have \(\overline{F}(f^{*}R)=(Ff)^{*}(\overline{F}R)\) in \(\mathbb{E}_{FC}\). An important example of a lifting is a relation lifting.

**Definition 3.6** (relation lifting [13]). Let \(F\colon\mathbf{Set}\to\mathbf{Set}\) be a weak pullback preserving functor. We define a lifting \(\operatorname{Rel}(F)\colon\mathbf{EqRel}\to\mathbf{EqRel}\) of \(F\) along the fibration \(p\colon\mathbf{EqRel}\to\mathbf{Set}\), called the _relation lifting_ of \(F\), as follows. For an object \((C,R)\in\mathbf{EqRel}\), there is the inclusion \(\langle r_{1},r_{2}\rangle\colon R\to C\times C\). We define the relation lifting on the object \(R\) by \(\operatorname{Rel}(F)(R)=\operatorname{Im}\langle Fr_{1},Fr_{2}\rangle\), where \(\operatorname{Im}\langle Fr_{1},Fr_{2}\rangle\) is the image factorisation. By the assumption that \(F\) preserves weak pullbacks, we can show that \(\operatorname{Rel}(F)(R)\) is an equivalence relation. \(\operatorname{Rel}(F)\) can be extended to a functor.

In this paper, we deal with a restricted class of fibrations, called \(\mathbf{CLat}_{\cap}\)-fibrations.
**Definition 3.7** (\(\mathbf{CLat}_{\cap}\)-fibration). A fibration \(p\colon\mathbb{E}\to\mathbb{C}\) is a \(\mathbf{CLat}_{\cap}\)-_fibration_ if each fibre \(\mathbb{E}_{C}\) is a complete lattice and each inverse image functor \(u^{*}\colon\mathbb{E}_{D}\to\mathbb{E}_{C}\) preserves meets \(\cap\).

For a \(\mathbf{CLat}_{\cap}\)-fibration, there always exists the left adjoint \(u_{*}\colon\mathbb{E}_{C}\to\mathbb{E}_{D}\) to an inverse image functor \(u^{*}\), as is well-known (cf. Freyd's adjoint functor theorem). The functor \(u_{*}\) is defined by \(u_{*}(P)=\bigcap\{R\in\mathbb{E}_{D}\mid P\sqsubseteq u^{*}(R)\}\) on objects. We call \(u_{*}\) a _direct image_ functor.

**Example 3.8** (\(\mathbf{CLat}_{\cap}\)-fibration). The functor \(p\colon\mathbf{EqRel}\to\mathbf{Set}\) from Example 3.4 is a \(\mathbf{CLat}_{\cap}\)-fibration. We describe the inverse image functor \(f^{*}\) and the direct image functor \(f_{*}\) for a function \(f\colon S\to S^{\prime}\). For an equivalence relation \(R^{\prime}\) on \(S^{\prime}\), the inverse image \(f^{*}(R^{\prime})\) is the equivalence relation \(\{(x,y)\in S\times S\mid(f(x),f(y))\in R^{\prime}\}\) on \(S\). For an equivalence relation \(R\) on \(S\), the direct image \(f_{*}(R)\) is the _equivalence closure_ of the relation \(\left\{(f(x),f(y))\in S^{\prime}\times S^{\prime}\mid(x,y)\in R\right\}\).

### Coalgebras and Bisimulations

Coalgebras are widely used as a generalisation of state-based systems [13, 23].

**Definition 3.9** (\(F\)-coalgebra). Let \(\mathbb{C}\) be a category and \(F\colon\mathbb{C}\to\mathbb{C}\) be an endofunctor. An _\(F\)-coalgebra_ is a pair \((C,c)\) of an object \(C\in\mathbb{C}\) and a morphism \(c\colon C\to FC\).

For an \(F\)-coalgebra \(c\colon C\to FC\), \(F\) specifies the type of the system, the carrier object \(C\) represents the "set of states" of the system, and \(c\) represents the transitions in the system.
When \(\mathbb{C}=\mathbf{Set}\), for an \(F\)-coalgebra \(c\colon C\to FC\) and a state \(x\in C\), the element \(c(x)\in FC\) represents properties (e.g. acceptance) and successors of \(x\). A major benefit of coalgebras is that their theory is _functor-generic_: by changing a functor \(F\), the same theory uniformly applies to a vast variety of systems.

**Example 3.10**. We describe some \(F\)-coalgebras for functors \(F\) on \(\mathbf{Set}\).

1. For the powerset functor \(\mathcal{P}\), a \(\mathcal{P}\)-coalgebra \(c\colon C\to\mathcal{P}C\) is a _Kripke frame_. For a state \(x\in C\), \(c(x)\in\mathcal{P}C\) is the set of successors of \(x\).
2. Let \(\Sigma\) be an alphabet and \(N_{\Sigma}=2\times(\mathcal{P}-)^{\Sigma}\). An \(N_{\Sigma}\)-coalgebra \(c\colon C\to N_{\Sigma}C\) is a _non-deterministic automaton_ (NA). For a state \(x\in C\), let \((b,t)=c(x)\in 2\times(\mathcal{P}C)^{\Sigma}\). The state \(x\) is accepting iff \(b=1\), and there is a transition \(x\xrightarrow{a}y\) in the NA iff \(y\in t(a)\).
3. The distribution functor \(\mathcal{D}\) is defined on a set \(X\) by \(\mathcal{D}X=\{d\colon X\to[0,1]\mid\{x\in X\mid d(x)\neq 0\}\) is finite and \(\sum_{x\in X}d(x)=1\}\). A \(\mathcal{D}\)-coalgebra \(c\colon C\to\mathcal{D}C\) is a _Markov chain_. For a state \(x\), \(c(x)\in\mathcal{D}C\) is a probability distribution \(C\to[0,1]\), which represents the probabilities of transitions to successor states of \(x\).

We are interested in similarity of states of a state-transition system, where we consider two states to be similar if one state can mimic the transitions of the other. _Bisimilarity_ by Park [21] and Milner [19] is a notion that captures such behaviour of states. Hermida and Jacobs [8] formulated bisimilarity as a coinductive relation on a coalgebra, using a fibration.
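For finite state spaces, the three coalgebra types of Example 3.10 can be written down concretely as plain dictionaries. This is only an illustrative encoding of ours, not one prescribed by the paper:

```python
# A P-coalgebra (Kripke frame): each state maps to its set of successors.
kripke = {'x': {'y', 'z'}, 'y': {'z'}, 'z': set()}

# An N_Sigma-coalgebra (non-deterministic automaton) over Sigma = {'a', 'b'}:
# each state maps to (accepting?, {letter: set of successors}).
na = {
    'p': (False, {'a': {'q'}, 'b': set()}),
    'q': (True,  {'a': {'q'}, 'b': {'p'}}),
}

# A D-coalgebra (Markov chain): each state maps to a finitely supported
# probability distribution over successor states, summing to 1.
markov = {'u': {'u': 0.5, 'v': 0.5}, 'v': {'v': 1.0}}
```

In each case the dictionary is the function \(c\colon C\to FC\); only the shape of the values (the functor \(F\)) changes, which is the functor-genericity mentioned above.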
**Definition 3.11** (bisimulations and the bisimilarity). Let \(p\colon\mathbb{E}\to\mathbb{C}\) be a \(\mathbf{CLat}_{\cap}\)-fibration, \(F\colon\mathbb{C}\to\mathbb{C}\) be a functor, \(c\colon C\to FC\) be an \(F\)-coalgebra and \(\overline{F}\) be a lifting of \(F\). An \((F,\overline{F})\)_-bisimulation_ is a \(c^{*}\circ\overline{F}\)-coalgebra in \(\mathbb{E}_{C}\), that is, an object \(R\in\mathbb{E}_{C}\) with \(R\sqsubseteq c^{*}\circ\overline{F}(R)\). By the Knaster-Tarski theorem, there exists the greatest \((F,\overline{F})\)-bisimulation \(\nu(c^{*}\circ\overline{F})\) with respect to the order of \(\mathbb{E}_{C}\), and it is called the \((F,\overline{F})\)_-bisimilarity_.

In the above definition, the choice of \(\overline{F}\) determines a notion of bisimulation. The relation lifting \(\operatorname{Rel}(F)\) (Def. 3.6) is often used as a lifting of \(F\). For all the functors we consider, the bisimilarity wrt. \(\operatorname{Rel}(F)\) coincides with the _behavioural equivalence_, another well-known notion of bisimilarity [13, §4.5].

**Example 3.12** (\((F,\operatorname{Rel}(F))\)-bisimilarities). We illustrate \((F,\operatorname{Rel}(F))\)-bisimilarity (also called _logical \(F\)-bisimilarity_ [13]) for \(F\) in Example 3.10. Let \(C\in\mathbf{Set}\) and \(R\in\mathbf{EqRel}_{C}\).

1. (\(F=\mathcal{P}\)). The \((\mathcal{P},\operatorname{Rel}(\mathcal{P}))\)-bisimilarity \(\nu(c^{*}\circ\operatorname{Rel}(\mathcal{P}))\) for a \(\mathcal{P}\)-coalgebra \(c\colon C\to\mathcal{P}C\) is the maximum relation \(R\) on \(C\) such that if \((x,y)\in R\) then
   * for every \(x^{\prime}\in c(x)\), there is \(y^{\prime}\in c(y)\) such that \((x^{\prime},y^{\prime})\in R\), and
   * for every \(y^{\prime}\in c(y)\), there is \(x^{\prime}\in c(x)\) such that \((x^{\prime},y^{\prime})\in R\).
2. (\(F=N_{\Sigma}\)).
The \((N_{\Sigma},\operatorname{Rel}(N_{\Sigma}))\)-bisimilarity \(\nu(c^{*}\circ\operatorname{Rel}(N_{\Sigma}))\) for an \(N_{\Sigma}\)-coalgebra \(c\colon C\to N_{\Sigma}C\) is the ordinary bisimilarity for the NA \(c\), that is, the maximum relation \(R\) on \(C\) such that if \((x,y)\in R\) then \(\pi_{1}(c(x))=\pi_{1}(c(y))\) and
   * for each \(a\in\Sigma\) and \(x^{\prime}\in\pi_{2}(c(x))(a)\), there is \(y^{\prime}\in\pi_{2}(c(y))(a)\) such that \((x^{\prime},y^{\prime})\in R\), and
   * for each \(a\in\Sigma\) and \(y^{\prime}\in\pi_{2}(c(y))(a)\), there is \(x^{\prime}\in\pi_{2}(c(x))(a)\) such that \((x^{\prime},y^{\prime})\in R\).
3. (\(F=\mathcal{D}\)) [3]. The \((\mathcal{D},\operatorname{Rel}(\mathcal{D}))\)-bisimilarity \(\nu(c^{*}\circ\operatorname{Rel}(\mathcal{D}))\) for a \(\mathcal{D}\)-coalgebra \(c\colon C\to\mathcal{D}C\) is the maximum relation \(R\) such that if \((x,y)\in R\) then \(\sum_{z\in K}c(x)(z)=\sum_{z\in K}c(y)(z)\) for every equivalence class \(K\subseteq C\) of \(R\).

## 4 Fibrational Partitioning

We introduce the notion of fibrational partitioning, one that is central to our algorithm that grows a tree using fibre objects (cf. Fig. 1). Given an "equivalence relation" \(R\) over \(C\)--identified with an object \(R\in\mathbb{E}_{C}\) over \(C\in\mathbb{C}\) in a suitable fibration \(p\colon\mathbb{E}\to\mathbb{C}\)--a fibrational \(R\)-partitioning is a mono-sink \(\{\kappa_{i}\colon C_{i}\mapsto C\}_{i\in I}\) that is subject to certain axioms. The notion allows us to explicate equivalence classes (namely \(\{C_{i}\}_{i}\)) in the abstract fibrational language.

**Definition 4.1** (\(R\)-partitioning). Let \(\mathbb{C}\) be a category with pullbacks and an initial object \(0\), and \(p\colon\mathbb{E}\to\mathbb{C}\) be a \(\mathbf{CLat}_{\cap}\)-fibration. Let \(C\in\mathbb{C}\) and \(R\in\mathbb{E}_{C}\). An \(R\)_-partitioning_ is a mono-sink \(\{\kappa_{i}\colon C_{i}\mapsto C\}_{i\in I}\) to \(C\) that satisfies the following conditions. 1.
\(\kappa_{i}^{*}(R)=\top_{C_{i}}\) for all \(i\in I\),
2. \(\bigsqcup_{i\in I}(\kappa_{i})_{*}(\top_{C_{i}})=R\), and
3. \(C_{i}\not\cong 0\) and \(C_{i}\cap C_{j}\cong 0\) for each \(i,j\in I\) with \(i\neq j\).

We say a \(\mathbf{CLat}_{\cap}\)-fibration \(p\) _admits partitioning_ if (1) for each \(C\in\mathbb{C}\) and \(R\in\mathbb{E}_{C}\), there is an \(R\)-partitioning; and moreover, (2) the following _monotonicity_ holds: for each \(C\in\mathbb{C}\), \(R,R^{\prime}\in\mathbb{E}_{C}\) s.t. \(R^{\prime}\sqsubseteq R\), and each \(R\)-partitioning \(\{\kappa_{i}\colon C_{i}\mapsto C\}_{i\in I}\), we have \(\bigsqcup_{i\in I}(\kappa_{i})_{*}(\kappa_{i}^{*}R^{\prime})=R^{\prime}\).

Cond. 3 asserts that the components \(C_{i}\) are nontrivial and disjoint. Cond. 1 says the partitioning \(\{C_{i}\}_{i}\) is _not too coarse_--the original equivalence \(R\), when restricted to \(C_{i}\), should be trivial (which is false when \(C_{i}\) includes a pair that is not \(R\)-equivalent). Conversely, Cond. 2 means that \(\{C_{i}\}_{i}\) is _not too fine_--if it were finer than \(R\), then the relation \(\bigsqcup_{i\in I}(\kappa_{i})_{*}(\top_{C_{i}})\) over \(C\) would be finer than \(R\). See the concrete description of \((\kappa_{i})_{*}\) in Example 3.8.

**Example 4.2** (\(\mathbf{EqRel}\to\mathbf{Set}\) admits partitioning). \(\mathbf{EqRel}\to\mathbf{Set}\) admits partitioning. Indeed, given an equivalence relation \(R\in\mathbf{EqRel}_{C}\) over \(C\), the mono-sink \(\{\kappa_{S}\colon S\mapsto C\}_{S\in C/R}\), where \(S\in C/R\) is naturally identified with a subset of \(C\), is an \(R\)-partitioning. Cond. 1-3 are easily verified following Example 3.8.

An \(R\)-partitioning is not necessarily unique. This happens when \(R\in\mathbf{EqRel}_{C}\) has singleton equivalence classes. Let \(A\subseteq C\) be an arbitrary subset such that each \(x\in A\) composes a singleton \(R\)-equivalence class.
Then \(\{\kappa^{\prime}_{S}\colon S\mapsto C\}_{S\in I}\), where \(I=(C/R)\setminus\{\{x\}\mid x\in A\}\), is also an \(R\)-partitioning. With this mono-sink (which is "narrower" than the original \(\{\kappa_{S}\}_{S\in C/R}\)), Cond. 2 is satisfied since the equivalence closure operation included in the direct images \((\kappa_{i})_{*}(\top_{C_{i}})\) (see Example 3.2) compensates for the absence of \(x\in A\). We can easily check that \(\mathbf{EqRel}\to\mathbf{Set}\) satisfies monotonicity.

The fibration \(\mathbf{EqRel}\to\mathbf{Set}\) is our leading example, and unfortunately, the only example we know of that admits partitioning. There are many other examples of \(\mathbf{CLat}_{\cap}\)-fibrations (see [17]), but they fail to admit partitioning, typically due to the failure of Cond. 2 of Def. 4.1. This absence of examples does not harm the value of our fibrational framework: our goal is to explicate categorical essences of partition refinement; and we do not aim at new instances via categorical abstraction (although such are certainly desirable).

We introduce further conditions that make fibrations well compatible with partitioning. It is easy to see that \(\mathbf{EqRel}\to\mathbf{Set}\) satisfies the conditions on \(p\) in Assum. 4.3.

**Assumption 4.3**. Assume a \(\mathbf{CLat}_{\cap}\)-fibration \(p\colon\mathbb{E}\to\mathbb{C}\) satisfies the following conditions.

1. For each \(C\in\mathbb{C}\), the lattice \(\mathbf{Sub}(\mathbb{C})_{C}\) of subobjects of \(C\) in \(\mathbb{C}\) is distributive.
2. (Beck-Chevalley) For every pullback diagram along monomorphisms in \(\mathbb{C}\), shown in the first diagram in Fig. 3, the induced diagram, the second in Fig. 3, commutes.
3. For any monomorphisms \(\kappa\colon A\mapsto C\) and \(\lambda\colon B\mapsto C\), the third diagram in Fig. 3 is a fork.
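In Example 4.2 the \(R\)-partitioning for \(\mathbf{EqRel}\to\mathbf{Set}\) is just the quotient \(C/R\). For a finite carrier this is a few lines (our function name; relations as sets of pairs):

```python
def partitioning(C, R):
    """Compute the R-partitioning of Example 4.2: the set C/R of equivalence
    classes of an equivalence relation R (a set of pairs) on a finite set C."""
    classes = []
    for x in C:
        for cls in classes:
            if (x, next(iter(cls))) in R:   # R is an equivalence relation,
                cls.add(x)                  # so one representative suffices
                break
        else:
            classes.append({x})
    return classes
```

Conditions 1-3 of Def. 4.1 then read off directly: \(R\) restricted to each class is total, joining the classes' total relations recovers \(R\), and the classes are nonempty and pairwise disjoint.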
## 5 The Naive Fibrational Algorithm \(\mathsf{fPR}^{\mathrm{naive}}\)

We introduce a naive fibrational partition refinement algorithm, called \(\mathsf{fPR}^{\mathrm{naive}}\), as a preparation step to our main algorithm \(\mathsf{fPR}^{\mathrm{H}}\) (Algo. 2). In what follows, a prefix-closed set \(T\subseteq\mathbb{N}^{*}\) (where \(\mathbb{N}^{*}\) is the set of strings over \(\mathbb{N}\)) is identified with a rooted tree. We denote the leaves of \(T\) by \(L(T)\). Let \(p\colon\mathbb{E}\to\mathbb{C}\) be a \(\mathbf{CLat}_{\cap}\)-fibration that satisfies Assum. 4.3, \(F\colon\mathbb{C}\to\mathbb{C}\) be a functor, and \(\overline{F}\colon\mathbb{E}\to\mathbb{E}\) be its lifting along \(p\) (Def. 3.2). Algo. 1 shows our _naive fibrational partition refinement algorithm_. Given a coalgebra \(c\colon C\to FC\), it computes a \(\nu(c^{*}\overline{F})\)-partitioning of \(C\), i.e. modulo the \((F,\overline{F})\)-bisimilarity of \(c\) (Def. 3.11). Algo. 1 starts with \(R=\top_{C}\in\mathbb{E}_{C}\) and a singleton family of a monomorphism \(\{\kappa_{\varepsilon}\colon C_{\varepsilon}\to C\}\). With each iteration, the object \(R\) on \(C\) gets smaller and the \(R\)-partitioning gets finer (Fig. 4).

Figure 4: The \(R\)-partitioning gets finer as the algorithm runs.

Figure 3: Conditions for Assum. 4.3.

Combining the loop invariant (Lem. 5.2) and termination (Lem. 5.3), we can prove the correctness of the naive algorithm.

**Lemma 5.2** (loop invariant). At the beginning of each iteration of the main loop, we have
1. The mono-sink \(\{\kappa_{\sigma}\colon C_{\sigma}\rightsquigarrow C\}_{\sigma\in L(T)}\) is an \(R\)-partitioning.
2. \(\nu(c^{*}\overline{F})\sqsubseteq R\).

**Lemma 5.3** (termination). If \(\mathbb{E}_{C}\) is a well-founded lattice, then Algo. 1 terminates.

**Theorem** (correctness of the naive algorithm). If \(\mathbb{E}_{C}\) is well-founded, then Algo. 1 terminates and returns a \(\nu(c^{*}\overline{F})\)-partitioning \(\{\kappa_{i}\colon C_{i}\rightsquigarrow C\}_{i\in I}\).
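For \(F=\mathcal{P}\) and a finite Kripke frame, the object that \(\mathsf{fPR}^{\mathrm{naive}}\) computes a partitioning of can be reached in two equivalent ways: by iterating \(R\mapsto c^{*}\operatorname{Rel}(\mathcal{P})(R)\) downwards from \(\top_{C}\) (the Knaster-Tarski view of Def. 3.11), or by uniformly splitting blocks of a partition, which is the shape of Algo. 1. A minimal sketch of both views (our function names, not the paper's pseudocode):

```python
from itertools import product

def kripke_bisimilarity(c):
    """Greatest fixpoint of R |-> c* Rel(P)(R): start from the total relation
    and drop pairs violating the back-and-forth conditions until stable."""
    R = set(product(c, c))
    while True:
        keep = set()
        for x, y in R:
            fwd = all(any((xp, yp) in R for yp in c[y]) for xp in c[x])
            bwd = all(any((xp, yp) in R for xp in c[x]) for yp in c[y])
            if fwd and bwd:
                keep.add((x, y))
        if keep == R:
            return R
        R = keep

def naive_refinement(c):
    """Partition view: split every block by the set of successor blocks,
    uniformly at each round, until no block splits."""
    blocks = [set(c)]
    while True:
        index = {x: i for i, b in enumerate(blocks) for x in b}
        sig = {x: frozenset(index[y] for y in c[x]) for x in c}
        new = []
        for b in blocks:
            groups = {}
            for x in b:
                groups.setdefault(sig[x], set()).add(x)
            new.extend(groups.values())
        if len(new) == len(blocks):
            return new
        blocks = new
```

For finite Kripke frames the two agree: the blocks returned by `naive_refinement` are exactly the equivalence classes of the relation returned by `kripke_bisimilarity`.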
## 6 Optimised Algorithms with Hopcroft's Inequality

Recall that the naive algorithm grows a tree _uniformly_ so that every leaf has the same depth (see Fig. 4; note that, even if \(C_{\sigma}\) is fine enough, we extend the node by a trivial partitioning). By selecting leaves in a smart way and generating a tree selectively, the time cost of each iteration can be reduced, so that Hopcroft's inequality is applicable.

In §6.1, we present a functor-generic and fibrational algorithm enhanced with the Hopcroft-type optimisation, calling it \(\mathsf{fPR}^{\mathsf{H}}\). We use Hopcroft's inequality (§2) for complexity analysis. In §6.2 we instantiate \(\mathsf{fPR}^{\mathsf{H}}\) to the fibration \(\mathbf{EqRel}\to\mathbf{Set}\), obtaining three concrete (yet functor-generic) algorithms \(\mathsf{fPR}^{\mathsf{H}\cdot\mathbf{ER}}_{w_{C}},\mathsf{fPR}^{\mathsf{H}\cdot\mathbf{ER}}_{w_{\mathsf{P}}}\), \(\mathsf{fPR}^{\mathsf{H}\cdot\mathbf{ER}}_{w_{\mathsf{R}}}\) that use different weight functions. \(\mathsf{fPR}^{\mathsf{H}\cdot\mathbf{ER}}_{w_{C}}\) is essentially the algorithm in [14]. The other two \((\mathsf{fPR}^{\mathsf{H}\cdot\mathbf{ER}}_{w_{\mathsf{P}}},\mathsf{fPR}^{\mathsf{H}\cdot\mathbf{ER}}_{w_{\mathsf{R}}})\) use the weight functions from the works [6, 11, 16] on DFA partition refinement. The three algorithms exhibit slightly different asymptotic complexities.

### A Fibrational Algorithm \(\mathsf{fPR}^{\mathsf{H}}\) Enhanced by Hopcroft's Inequality

We fix a \(\mathbf{CLat}_{\cap}\)-fibration \(p\colon\mathbb{E}\to\mathbb{C}\), functors \(F\colon\mathbb{C}\to\mathbb{C}\) and \(\overline{F}\colon\mathbb{E}\to\mathbb{E}\), an \(F\)-coalgebra \(c\colon C\to FC\), and a map \(w\colon\operatorname{ob}(\mathbf{Sub}(\mathbb{C})_{C})\to\mathbb{N}\) (which we use for weights). We write \(w(C^{\prime})\) for \(w(\lambda\colon C^{\prime}\rightsquigarrow C)\) when no confusion arises.
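The Hopcroft-type bound invoked here can be illustrated numerically: if every split of a block charges only the smaller part, the total charge stays within \(w(C)\log w(C)\). A toy computation of ours, taking the even split as the worst case for this charging scheme:

```python
import math

def smaller_half_charge(n):
    """Total charge when a block of size n is split as evenly as possible,
    each split charging the smaller part, recursively down to singletons."""
    if n <= 1:
        return 0
    a = n // 2
    return a + smaller_half_charge(a) + smaller_half_charge(n - a)

def within_hopcroft_bound(n):
    """Hopcroft-style bound: total charge <= n * log2(n)."""
    return smaller_half_charge(n) <= n * math.log2(n)
```

For powers of two the charge is \((n/2)\log_{2}n\), comfortably within the \(n\log_{2}n\) bound; this is the mechanism that makes the selective tree growth below pay off.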
The following conditions clarify which properties of \(\mathbf{EqRel}\to\mathbf{Set}\) are necessary to make our optimised fibrational algorithm \(\mathsf{fPR}^{\mathsf{H}}\) work: the last one (Assum. 6.1.10) is for complexity analysis; all the other ones are for correctness of the algorithm.

**Assumption 6.1**.
1. \(\mathbb{C}\) has pullbacks, pushouts along monos, and an initial object \(0\).
2. The fibre category \(\mathbb{E}_{0}\) above an initial object \(0\) is trivial, that is \(\top_{0}=\bot_{0}\).
3. \(\overline{F}\) is a lifting of \(F\) along \(p\) and \((F,\overline{F})\) is fibred.
4. \(F\colon\mathbb{C}\to\mathbb{C}\) preserves monomorphisms whose codomain is not the initial object \(0\).
5. The fibration \(p\) admits monotone partitioning.
6. The fibration \(p\colon\mathbb{E}\to\mathbb{C}\) satisfies the three conditions in Assum. 4.3.
7. The fibre category \(\mathbb{E}_{C}\) is a well-founded lattice.
8. If \(C^{\prime}\to C\) and \(R\in\mathbb{E}_{C^{\prime}}\), every \(R\)-partitioning \(\{\lambda_{k}\colon D_{k}\mapsto C^{\prime}\}_{k\in K}\) is finite (\(|K|<\infty\)).
9. If \(\kappa\colon A\hookrightarrow C\) and \(\lambda\colon B\to C\) are monomorphisms and \(A\cap B\cong 0\), then the functor \(\mathbb{E}_{A}\times\mathbb{E}_{B}\xrightarrow{\kappa_{*}\times\lambda_{*}}\mathbb{E}_{C}\times\mathbb{E}_{C}\xrightarrow{\sqcup}\mathbb{E}_{C}\) is injective on objects.
10. For \(C^{\prime}\mapsto C\), \(R\in\mathbb{E}_{C^{\prime}}\), and \(R\)-partitioning \(\{\kappa_{i}\colon C_{i}\mapsto C^{\prime}\}\) of \(C^{\prime}\), \(\sum_{i=1}^{n}w(C_{i})\leq w(C^{\prime})\).

Assum. 6.1.3 is not overly restrictive. Indeed, the following functors on \(\mathbf{Set}\) have a fibred lifting. The functors described in Example 3.10 are examples of the functor defined by (3). Consider the class of endofunctors on \(\mathbf{Set}\) defined by the BNF below.
\[F ::= \operatorname{Id} \mid A \mid F\times F \mid \coprod_{i\in I}F_{i} \mid F^{A} \mid \mathcal{P} \mid \mathcal{D} \tag{3}\]

The **Partitioning** part selects one leaf \(C_{\rho}\) whose states include at least one dirty state (Line 4). The tree is expanded at this selected leaf only. This selection makes Algo. 2 different from Algo. 1, which expands the tree at every leaf (cf. Fig. 4 and Fig. 5). The **Relabelling** part then updates the clean/dirty marking. Firstly, it chooses one "heavy child" \(C_{\rho k_{0}}\) (Line 10) from the leaves generated in **Partitioning**. Then the iteration calls the MarkDirty procedure (Line 14-19). It first collects the states (\(B\) in Line 16) all of whose "successors" with respect to the coalgebra \(c\colon C\to FC\) are in the object \(C_{\rho k_{0}}\cup\left(\bigcup_{\sigma\in L(J)\setminus\{\rho\}}C_{\sigma}\right)\); the latter intuitively consists of states "unaffected" by tree expansion. The procedure marks only states in \(B\) as clean (Line 19), which means that the rest of the states are marked dirty.

Towards the correctness theorem of our optimised fibrational algorithm \(\mathsf{fPR}^{\mathrm{H}}\) (Thm. 6.8), we first make a series of preliminary observations. We use the following notations.

**Notation 6.5**. We write \(R_{i}\) for \(R\) defined at Line 3 of Algo. 2 at the \(i\)-th iteration. We write \(J_{i}\) for \(J\) at the beginning of the \(i\)-th iteration. We write \(C_{\sigma}^{\mathsf{cl},i}\) and \(\kappa_{\sigma}^{\mathsf{cl},i}\) for \(C_{\sigma}^{\mathsf{cl}}\) and the monomorphism \(\kappa_{\sigma}^{\mathsf{cl}}\colon C_{\sigma}^{\mathsf{cl}}\mapsto C\), respectively, at Line 16 at the \(i\)-th iteration.
We identify loop invariants in Prop. 6.6. Termination of \(\mathsf{fPR}^{\mathrm{H}}\) follows from Assum. 6.1.7 and 9 (Prop. 6.7). Combining these, we prove the correctness of \(\mathsf{fPR}^{\mathrm{H}}\) in Thm. 6.8.

Figure 5: At each iteration one leaf of the tree is selected and refined.

**Proposition 6.6** (loop invariants). At the beginning of the \(i\)-th iteration, the following hold.
1. \((c\circ\kappa_{\sigma}\circ\kappa_{\sigma}^{\mathrm{cl},i})^{*}\overline{F}(R_{i})=\top_{C_{\sigma}^{\mathrm{cl},i}}\) for each leaf \(\sigma\in L(J_{i})\).
2. The mono-sink \(\{\kappa_{\sigma}\colon C_{\sigma}\hookrightarrow C\}_{\sigma\in L(J_{i})}\) is an \(R_{i}\)-partitioning.
3. \(\nu(c^{*}\overline{F})\sqsubseteq R_{i}\).

Therefore, after Algo. 2 terminates, \((c\circ\kappa_{\sigma})^{*}\overline{F}R=\top_{C_{\sigma}}\) holds for each \(\sigma\in L(J)\), \(\{\kappa_{\sigma}\colon C_{\sigma}\hookrightarrow C\}_{\sigma\in L(J)}\) is an \(R\)-partitioning, and \(\nu(c^{*}\overline{F})\sqsubseteq R\), for \(R\in\mathbb{E}_{C}\) defined in Line 3.

**Proposition 6.7** (termination). Algo. 2 terminates.

**Theorem 6.8** (correctness). Algo. 2 terminates and returns a \(\nu(c^{*}\overline{F})\)-partitioning.

The explicit correspondence between \(\mathsf{fPR}^{\mathrm{H}}\) and §2 (Table 1) allows us to directly use Hopcroft's inequality. The following result, while it does not give a complexity bound for \(\mathsf{fPR}^{\mathrm{H}}\) itself, plays a central role in the amortised analysis of its concrete instances in §6.2. If each call of MarkDirty in Algo. 2 takes \(\mathcal{O}(K\sum_{k=0}^{n}w(C_{\rho k}))\) time for some \(K\), the total time taken by the repeated calls of MarkDirty is \(\mathcal{O}(Kw(C)\log w(C))\).

### Concrete Yet Functor-Generic Algorithms \(\mathsf{fPR}^{\mathrm{H-ER}}_{w_{\mathrm{C}}}\), \(\mathsf{fPR}^{\mathrm{H-ER}}_{w_{\mathrm{P}}}\), \(\mathsf{fPR}^{\mathrm{H-ER}}_{w_{\mathrm{R}}}\)

We instantiate the fibrational algorithm \(\mathsf{fPR}^{\mathrm{H}}_{(F,\overline{F}),w}\) with \(\mathbf{EqRel}\to\mathbf{Set}\) as a base fibration.
In this situation, the functor \(F\) is an endofunctor on \(\mathbf{Set}\) and \(\overline{F}\) is an endofunctor on \(\mathbf{EqRel}\) which is a fibred lifting of \(F\). This instantiation also enables a semantically equivalent reformulation of MarkDirty--its "implementation" is now "predecessor-centric" rather than "successor-centric"--and this aids more refined complexity analysis. For a weight function \(w\) (a parameter of \(\mathsf{fPR}^{\mathrm{H}}\)), we introduce three examples \(w_{\mathrm{C}},w_{\mathrm{P}},w_{\mathrm{R}}\), leading to three functor-generic algorithms \(\mathsf{fPR}^{\mathrm{H-ER}}_{(F,\overline{F}),w_{\mathrm{C}}}\), \(\mathsf{fPR}^{\mathrm{H-ER}}_{(F,\overline{F}),w_{\mathrm{P}}}\) and \(\mathsf{fPR}^{\mathrm{H-ER}}_{(F,\overline{F}),w_{\mathrm{R}}}\).

**Definition** (\(\mathsf{fPR}^{\mathrm{H-ER}}_{w}\)). Let \(F\colon\mathbf{Set}\to\mathbf{Set}\) and \(\overline{F}\colon\mathbf{EqRel}\to\mathbf{EqRel}\) be functors, \(c\colon C\to FC\) be a coalgebra, and \(w\colon P(C)\to\mathbb{N}\) be a function (which amounts to \(w\colon\mathrm{ob}(\mathbf{Sub}(\mathbb{C})_{C})\to\mathbb{N}\) in §6.1), all satisfying Assum. 6.1 (\(C\) must be finite, in particular). The algorithm \(\mathsf{fPR}^{\mathrm{H-ER}}_{w}\) is shown in Algo. 3; it computes a \(\nu(c^{*}\overline{F})\)-partitioning of \(C\).

Line 14-19 of Algo. 3 uses the categorical notion of predecessor (Line 17), given in Def. 6.11. Its equivalence to the original definition (Line 14-19 of Algo. 2) is easy; so \(\mathsf{fPR}^{\mathrm{H-ER}}_{w}\) is correct by Thm. 6.8. The successor-centric description is more convenient in the correctness proof, while the predecessor-centric one is advantageous for complexity analysis.

**Definition 6.11** (predecessor [14]). Let \(c\colon C\to FC\) be a coalgebra in \(\mathbf{Set}\).
For states \(x,y\in C\), we say \(x\) is a _predecessor_ of \(y\) if \(x\not\in B\), where \(B\) is a subset of \(C\) defined by a pullback (diagram omitted).

The complexity of the other parts of the algorithm is also bounded. We write \(C_{\sigma}^{\text{di}}\) for \(C_{\sigma}\setminus C_{\sigma}^{\text{cl}}\). The computation of \(R_{\rho}\) (Line 3-5 of Algo. 2) takes \(\mathcal{O}(f|C_{\rho}^{\text{di}}|)\), and the computation of an \(R_{\rho}\)-partitioning (Line 9 of Algo. 2) takes \(\mathcal{O}(|C_{\rho}^{\text{di}}|)\), using appropriate data structures. Hence it takes \(\mathcal{O}(f|C_{\rho}^{\text{di}}|)\) for each iteration of the main loop except for MarkDirty. Therefore the total time for Algo. 3 except for MarkDirty (let us write \(T_{\setminus\text{MarkDirty}}\)) is \(\mathcal{O}(\sum_{\rho}f|C_{\rho}^{\text{di}}|)\), summing over the leaf \(\rho\) selected at each iteration. We use amortised analysis to bound this sum. Specifically, it is easy to see that the sum \(\sum_{\rho}f|C_{\rho}^{\text{di}}|\) is bounded by the number of times that states are marked as dirty, multiplied by \(f\). Throughout the algorithm, the number of times that states are marked as dirty (at Line 19 of Algo. 3) is at most the time consumed by MarkDirty, which is \(\mathcal{O}(M|C|\log|C|)\). Therefore \(T_{\setminus\text{MarkDirty}}\) is \(\mathcal{O}(fM|C|\log|C|)\); so is the total time.
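For \(F=\mathcal{P}\) the predecessor-centric view is easy to make concrete. The sketch below (our names) precomputes predecessors and runs a much-simplified "smaller half" worklist loop; it elides the tree, weights, and clean/dirty bookkeeping of Algo. 2-3, as well as the full splitting logic needed in general (cf. Paige-Tarjan), keeping only the idea that after a split, further work is enqueued for the smaller part only:

```python
from collections import deque

def predecessors(c, Y):
    """States of a Kripke frame c with at least one successor in Y
    (the complement of the set B carved out in Def. 6.11)."""
    return {x for x, succs in c.items() if succs & Y}

def smaller_half_refinement(c):
    """Simplified worklist refinement of a Kripke frame c: C -> PC."""
    blocks = [set(c)]
    work = deque([0])
    while work:
        splitter = blocks[work.popleft()]
        touched = predecessors(c, splitter)
        for j in range(len(blocks)):
            inside, outside = blocks[j] & touched, blocks[j] - touched
            if inside and outside:
                small, big = sorted((inside, outside), key=len)
                blocks[j] = big
                blocks.append(small)
                work.append(len(blocks) - 1)  # re-examine the smaller half only
    return blocks
```

Only the states touched by the chosen (small) splitter are re-examined, which is exactly the cost profile that the amortised analysis above charges against the weight of the smaller half.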
2305.00773
Point Cloud Semantic Segmentation
Semantic segmentation is an important and well-known task in the field of computer vision, in which we attempt to assign a corresponding semantic class to each input element. When it comes to semantic segmentation of 2D images, the input elements are pixels. On the other hand, the input can also be a point cloud, where one input element represents one point in the input point cloud. By the term point cloud, we refer to a set of points defined by spatial coordinates with respect to some reference coordinate system. In addition to the position of points in space, other features can also be defined for each point, such as RGB components. In this paper, we conduct semantic segmentation on the S3DIS dataset, where each point cloud represents one room. We train models on the S3DIS dataset, namely PointCNN, PointNet++, Cylinder3D, Point Transformer, and RepSurf. We compare the obtained results with respect to standard evaluation metrics for semantic segmentation and present a comparison of the models based on inference speed.
Ivan Martinović
2023-05-01T11:20:51Z
http://arxiv.org/abs/2305.00773v1
# Semantic Segmentation of Point Clouds

###### Abstract

Semantic segmentation of 2D images aims to determine a semantic class for each pixel of the input image. Unlike pixels arranged in a regular 2D grid, point clouds are sets of points embedded in a continuous n-dimensional space [1]. Point clouds therefore differ structurally from images, for which a large number of standard computer-vision methods exist (e.g. the application of convolutional layers). Working with point clouds, and with 3D data in general, finds applications in autonomous driving, robotics, and augmented reality. According to [1], approaches to learning from point clouds are divided into: projection methods, voxel-based methods, and methods that operate directly on the input point cloud. Projection methods are based on projecting irregular 3D point clouds onto regular 2D grids from several different viewpoints [2, 3, 4]. With this approach, information can be lost during projection. Moreover, the performance of models that use the projection approach strongly depends on the chosen viewpoint from which the point cloud is projected. Unlike projection methods, voxelization methods transform irregular point clouds into a 3D grid and then apply 3D convolution over that grid [5, 6]. Since voxelization produces a 3D grid, voxelization methods are computationally and memory intensive. Instead of projecting or voxelizing point clouds, methods have also been developed that operate directly on irregular point clouds at the input; the pioneer among such architectures is the _PointNet_ model [7]. The goal of this paper is to reproduce the results of several models for semantic segmentation of point clouds. A brief overview of the models used is given in Section II. The same section also describes the S3DIS dataset used.
Section III presents and comments on the obtained results and gives directions for future work. As mentioned in Section I, there are several ways to learn from point clouds. When it comes to learning directly from point sets, the starting point and driver of further research is certainly the _PointNet_ model [7]. The basic idea of _PointNet_ [7] is to embed the points into a semantically richer feature space and then, using a symmetric max-pooling function, obtain a global representation of the input point cloud. The paper [7] lists three properties that models taking a set of points as input should satisfy: 1) invariance to the order of the input points, 2) invariance to transformations (e.g. rotation or translation), and 3) the ability to model local interactions between points. In [7], property 1) is addressed by applying the symmetric max-pooling function, property 2) by applying a shallow neural network that learns a transformation, while property 3) is addressed by applying fully connected layers.

### _PointNet++_

The _PointNet++_ model [8] upgrades _PointNet_ with hierarchical processing of the input point cloud, taking the distance between points into account. While _PointNet_ summarizes the input point cloud with a single application of max pooling, _PointNet++_ builds the representation of the input point cloud over several hierarchical levels. Following [8], one level of the hierarchy is called a _set abstraction level_. Each level consists of: 1) a sampling layer, 2) a grouping layer, and 3) a _PointNet_ layer. The sampling layer selects the points that will serve as centroids of local regions of the input point set, the grouping layer selects the groups of points that, together with a centroid, form one region, and the _PointNet_ layer encodes the local regions into feature vectors.
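The key ingredient in property 1) is that max pooling is symmetric in its inputs. A minimal pure-Python sketch (toy fixed weights standing in for the learned per-point MLP; not the actual PointNet implementation):

```python
def pointnet_global_feature(points, weights):
    """Embed each point independently (tiny linear map + ReLU, a stand-in for
    PointNet's learned MLP), then max-pool coordinate-wise over all points.
    Max is symmetric, so the output ignores the order of the points."""
    def embed(p):
        return [max(sum(w * x for w, x in zip(row, p)), 0.0) for row in weights]
    embedded = [embed(p) for p in points]
    return [max(col) for col in zip(*embedded)]

weights = [[1.0, 0.0, -1.0], [0.5, 2.0, 0.0]]   # 2 features from 3 coordinates
cloud = [(0.0, 1.0, 2.0), (3.0, -1.0, 0.5), (1.0, 1.0, 1.0)]
```

Reversing (or arbitrarily permuting) the input points leaves the pooled global feature unchanged, which is exactly the order invariance that PointNet relies on.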
### _PointCNN_

The _PointNet_ model achieves invariance to transformations with a learned transformation applied to each point separately, while invariance to the order of the points is achieved by
2307.16768
Congruences concerning quadrinomial coefficients
In this paper, we establish congruences (mod $p^2$) involving the quadrinomial coefficients $\dbinom{np-1}{p-1}_{3}$ and $\dbinom{np-1}{\frac{p-1}{2}}_{3}$. This is an analogue of congruences involving the trinomial coefficients $\dbinom{np-1}{p-1}_{2}$ and $\dbinom{np-1}{\frac{p-1}{2}}_{2}$ due to Elkhiri and Mihoubi.
Mohammed Mechacha
2023-07-27T18:11:46Z
http://arxiv.org/abs/2307.16768v1
# Congruences concerning quadrinomial coefficients

###### Abstract

In this paper, we establish congruences (mod \(p^{2}\)) involving the quadrinomial coefficients \(\binom{np-1}{p-1}_{3}\) and \(\binom{np-1}{\frac{p-1}{2}}_{3}\). This is an analogue of congruences involving the trinomial coefficients \(\binom{np-1}{p-1}_{2}\) and \(\binom{np-1}{\frac{p-1}{2}}_{2}\) due to Elkhiri and Mihoubi.

_2010 Mathematics Subject Classification. -- 11B65, 11A07, 05A10._

Quadrinomial coefficients, harmonic numbers, congruences.

## 1 Introduction and statements of results

Several mathematicians studied, for a prime number \(p\), congruences modulo powers of \(p\) involving the binomial coefficients \(\binom{2p-1}{p-1}\) and \(\binom{p-1}{\frac{p-1}{2}}\), see for instance [1, 12, 9, 6, 2, 3, 13]. In 2014, Sun [10] and Cao & Pan [4] described some properties and congruences involving the trinomial coefficients \(\binom{n}{k}_{2}\) defined by

\[\left(1+x+x^{2}\right)^{n}=\sum_{k=0}^{2n}\binom{n}{k}_{2}x^{k}.\]

In 2019, Elkhiri and Mihoubi [5] studied congruences modulo \(p^{2}\) for the trinomial coefficients \(\binom{np-1}{p-1}_{2}\) and \(\binom{np-1}{\frac{p-1}{2}}_{2}\). More precisely, they proved the following result.

**Theorem** ([5], Theorem 1). _Let \(p\geq 5\) be a prime number and \(n\) be a positive integer. We have_

\[\binom{np-1}{p-1}_{2}\equiv\left\{\begin{array}{ll}1+npq_{p}(3)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 1\,(\mathrm{mod}\,3),\\ -1-npq_{p}(3)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 2\,(\mathrm{mod}\,3),\end{array}\right.\]

and

\[\binom{np-1}{\frac{p-1}{2}}_{2}\equiv\left\{\begin{array}{ll}1+np\Big{(}2q_{p}(2)+\frac{1}{2}q_{p}(3)\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 1\,(\mathrm{mod}\,6),\\ -\frac{1}{2}npq_{p}(3)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 5\,(\mathrm{mod}\,6),\end{array}\right.\]

where \(q_{p}(a)\) is the Fermat quotient defined, for \(a\in\mathbb{Z}-p\mathbb{Z}\), by \(q_{p}(a)=\frac{a^{p-1}-1}{p}\).
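The first congruence of the quoted theorem is easy to check numerically for small primes. A sketch (our helper names; coefficients computed by repeated polynomial multiplication):

```python
def trinomial(n, k):
    """Coefficient of x^k in (1 + x + x^2)^n."""
    poly = [1]
    for _ in range(n):
        new = [0] * (len(poly) + 2)
        for i, coef in enumerate(poly):
            for j in range(3):
                new[i + j] += coef
        poly = new
    return poly[k] if 0 <= k < len(poly) else 0

def check_elkhiri_mihoubi(p, n):
    """Check the first congruence of [5, Theorem 1] mod p^2."""
    q3 = (pow(3, p - 1) - 1) // p              # Fermat quotient q_p(3)
    lhs = trinomial(n * p - 1, p - 1) % (p * p)
    sign = 1 if p % 3 == 1 else -1
    rhs = sign * (1 + n * p * q3) % (p * p)
    return lhs == rhs
```

For example, for \(p=7\equiv 1\ (\mathrm{mod}\,3)\) and \(n=1\): \(\binom{6}{6}_{2}=141\) and \(1+7\,q_{7}(3)=729\), and both are \(\equiv 43\ (\mathrm{mod}\,49)\).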
Analogously, one defines the quadrinomial coefficients \(\binom{n}{k}_{3}\) by

\[\left(1+x+x^{2}+x^{3}\right)^{n}=\sum_{k=0}^{3n}\binom{n}{k}_{3}x^{k}.\]

In this paper, we study congruences modulo \(p^{2}\) for the quadrinomial coefficients \(\binom{np-1}{p-1}_{3}\) and \(\binom{np-1}{\frac{p-1}{2}}_{3}\). Our purpose is to establish the following analogous result.

**Theorem A**. Let \(p\geq 5\) be a prime number and \(n\) be a positive integer. We have

\[\binom{np-1}{p-1}_{3}\equiv\left\{\begin{array}{ll}1+2npq_{p}(2)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 1\,(\mathrm{mod}\,4),\\ -\frac{1}{2}npq_{p}(2)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 3\,(\mathrm{mod}\,4),\end{array}\right. \tag{1.1}\]

and

\[\binom{np-1}{\frac{p-1}{2}}_{3}\equiv\left\{\begin{array}{ll}1+np\Big{(}\frac{13}{4}q_{p}(2)+\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 1\,(\mathrm{mod}\,8),\\ -1-np\Big{(}\frac{13}{4}q_{p}(2)+\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 3\,(\mathrm{mod}\,8),\\ -np\Big{(}\frac{1}{4}q_{p}(2)-\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 5\,(\mathrm{mod}\,8),\\ np\Big{(}\frac{1}{4}q_{p}(2)-\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 7\,(\mathrm{mod}\,8),\end{array}\right. \tag{1.2}\]

where \(\chi_{p}:=P_{p-(\frac{2}{p})}/p\) is the Pell quotient and \((P_{n})_{n}\) is the Pell sequence (OEIS A000129).

Using this theorem we obtain the following proposition, which is an analogue of Proposition 3 of [5].

**Proposition B**. Let \(p\geq 5\) be a prime number and \(n\) be a positive integer. Then

\[\sum_{k=0}^{p-1}\binom{np-1}{k}_{3}\equiv\left\{\begin{array}{ll}1+\frac{9}{4}npq_{p}(2)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 1\,(\mathrm{mod}\,4),\\ -\frac{1}{4}npq_{p}(2)\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 3\,(\mathrm{mod}\,4),\end{array}\right.
\tag{1.3}\]

and

\[\sum_{k=0}^{\frac{p-1}{2}}\binom{np-1}{k}_{3}\equiv\left\{\begin{array}{ll}1+\frac{3}{2}np\Big{(}2q_{p}(2)+\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 1\,(\mathrm{mod}\,8),\\ -\frac{1}{4}np\Big{(}q_{p}(2)-2\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 3\,(\mathrm{mod}\,8),\\ -\frac{1}{2}np\Big{(}q_{p}(2)-\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 5\,(\mathrm{mod}\,8),\\ -\frac{1}{4}np\Big{(}q_{p}(2)+2\chi_{p}\Big{)}\ (\mathrm{mod}\,p^{2})&\text{if }\ p\equiv 7\,(\mathrm{mod}\,8).\end{array}\right. \tag{1.4}\]

Corollary 4 of [5] gives congruences modulo \(p^{2}\) for the coefficients \(\binom{np^{2}-1}{k}_{2}\). A similar congruence for the coefficients \(\binom{np^{2}-1}{k}_{3}\) is given by the following statement.

**Corollary C**. Let \(p\geq 5\) be a prime number and \(n,k\) be integers with \(n\geq 1\) and \(k\in\left\{0,1,\ldots,p-1\right\}\). We have

\[\binom{np^{2}-1}{k}_{3}\equiv\left\{\begin{array}{ll}1\ (\bmod p^{2})&\text{ if }k\equiv 0\,(\bmod 4),\\ -1\ (\bmod p^{2})&\text{ if }k\equiv 1\,(\bmod 4),\\ 0\ (\bmod p^{2})&\text{ if }k\equiv 2\,(\bmod 4),\\ 0\ (\bmod p^{2})&\text{ if }k\equiv 3\,(\bmod 4).\end{array}\right. \tag{1.5}\]

## 2 Some intermediate results

In this section, we give some congruences which will be useful for establishing our main results.

**Lemma 2.1** (cf. [7, 11]). Let \(p\) be a prime number. We have

\[H_{[p/2]}\equiv-2q_{p}(2)\ (\bmod p),\,p\geq 3, \tag{2.1}\]
\[H_{[p/4]}\equiv-3q_{p}(2)\ (\bmod p),\,p\geq 5, \tag{2.2}\]
\[H_{[p/8]}\equiv-4q_{p}(2)-2\chi_{p}\ (\bmod p),\,p\geq 5, \tag{2.3}\]

where \((H_{n})_{n}\) is the harmonic sequence defined by \(H_{0}=0\) and \(H_{n}=1+\frac{1}{2}+\cdots+\frac{1}{n}\). Congruences (2.1) and (2.2) are due to Glaisher [7, pages 21-23]. Congruence (2.3) is derived from a paper of Williams [11, page 440].

**Lemma 2.2**. Let \(p\geq 5\) be a prime number.
**1.** If \(p\equiv 1\,(\bmod 4)\) then \[\sum_{k=0}^{\frac{p-5}{4}}\frac{1}{4k+1}\equiv\frac{3}{4}q_{p}(2)\,(\bmod p),\quad\sum_{k=0}^{\frac{p-1}{4}}\frac{1}{4k+2}\equiv 1-\frac{1}{4}q_{p}(2)\,(\bmod p),\quad\sum_{k=0}^{\frac{p-1}{4}}\frac{1}{4k+3}\equiv\frac{1}{2}+\frac{1}{4}q_{p}(2)\,(\bmod p). \tag{2.4}\] **2.** If \(p\equiv 3\,(\bmod 4)\) then \[\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+1}\equiv\frac{1}{4}q_{p}(2)\,(\bmod p),\quad\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+2}\equiv-\frac{1}{4}q_{p}(2)\,(\bmod p),\quad\sum_{k=0}^{\frac{p-7}{4}}\frac{1}{4k+3}\equiv\frac{3}{4}q_{p}(2)\,(\bmod p). \tag{2.5}\] Proof: **1.** Assume that \(p\equiv 1\,(\bmod 4).\) We have \[\sum_{k=0}^{\frac{p-5}{4}}\frac{1}{4k+1}=\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{4(\frac{p-1}{4}-j)+1}=\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{p-4j},\] \[\sum_{k=0}^{\frac{p-1}{4}}\frac{1}{4k+2}=\sum_{j=\frac{p-1}{4}}^{\frac{p-1}{2}}\frac{1}{4(\frac{p-1}{2}-j)+2}=\sum_{j=\frac{p-1}{4}}^{\frac{p-1}{2}}\frac{1}{2p-4j}\] and \[\sum_{k=0}^{\frac{p-1}{4}}\frac{1}{4k+3}=\sum_{j=\frac{p+3}{4}}^{\frac{p+1}{2}}\frac{1}{4(j-\frac{p+3}{4})+3}=\sum_{j=\frac{p+3}{4}}^{\frac{p+1}{2}}\frac{1}{4j-p},\] so that \[\sum_{k=0}^{\frac{p-5}{4}}\frac{1}{4k+1}\equiv-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}=-\frac{1}{4}H_{[\frac{p}{4}]}=\frac{3}{4}q_{p}(2)\,(\text{mod}\,p),\] \[\sum_{k=0}^{\frac{p-1}{4}}\frac{1}{4k+2}\equiv-\frac{1}{4}\sum_{j=\frac{p-1}{4}}^{\frac{p-1}{2}}\frac{1}{j}=-\frac{1}{p-1}-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{2}}\frac{1}{j}+\frac{1}{4}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}\] \[\equiv 1-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{2}}\frac{1}{j}+\frac{1}{4}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}=1-\frac{1}{4}H_{[\frac{p}{2}]}+\frac{1}{4}H_{[\frac{p}{4}]}=1-\frac{1}{4}q_{p}(2)\,(\text{mod}\,p)\] and \[\sum_{k=0}^{\frac{p-1}{4}}\frac{1}{4k+3}\equiv\frac{1}{4}\sum_{j=\frac{p+3}{4}}^{\frac{p+1}{2}}\frac{1}{j}=\frac{1}{2p+2}+\frac{1}{4}\sum_{j=1}^{\frac{p-1}{2}}\frac{1}{j}-\frac{1}{4}
\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}\] \[\equiv\frac{1}{2}+\frac{1}{4}\sum_{j=1}^{\frac{p-1}{2}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}=\frac{1}{2}+\frac{1}{4}H_{[\frac{p}{2}]}-\frac{1}{4}H_{[\frac{p}{4}]}=\frac{1}{2}+\frac{1}{4}q_{p}(2)\,(\text{mod}\,p).\] **2.** Assume now that \(p\equiv 3\,(\text{mod}\,4)\). From the equalities \[\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+1}=\sum_{j=\frac{p+1}{4}}^{\frac{p-1}{2}}\frac{1}{4(j-\frac{p-3}{4}-1)+1}=\sum_{j=\frac{p+1}{4}}^{\frac{p-1}{2}}\frac{1}{4j-p},\] \[\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+2}=\sum_{j=\frac{p+1}{4}}^{\frac{p-1}{2}}\frac{1}{4(\frac{p-1}{2}-j)+2}=\sum_{j=\frac{p+1}{4}}^{\frac{p-1}{2}}\frac{1}{2p-4j}\] and \[\sum_{k=0}^{\frac{p-7}{4}}\frac{1}{4k+3}=\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{4(\frac{p-7}{4}-j+1)+3}=\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{p-4j},\] we deduce the congruences \[\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+1}\equiv\frac{1}{4}\sum_{j=\frac{p+1}{4}}^{\frac{p-1}{2}}\frac{1}{j}=\frac{1}{4}H_{[\frac{p}{2}]}-\frac{1}{4}H_{[\frac{p}{4}]}=\frac{1}{4}q_{p}(2)\,(\text{mod}\,p),\] \[\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+2}\equiv-\frac{1}{4}\sum_{j=\frac{p+1}{4}}^{\frac{p-1}{2}}\frac{1}{j}=-\frac{1}{4}H_{[\frac{p}{2}]}+\frac{1}{4}H_{[\frac{p}{4}]}=-\frac{1}{4}q_{p}(2)\,(\text{mod}\,p)\] and \[\sum_{k=0}^{\frac{p-7}{4}}\frac{1}{4k+3}\equiv-\frac{1}{4}\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{j}=-\frac{1}{4}H_{[\frac{p}{4}]}=\frac{3}{4}q_{p}(2)\,(\text{mod}\,p).\] Lemma 2.3: Let \(p\geq 5\) be a prime number.
**1.** If \(p\equiv 1\,(\text{mod}\,8)\) then \[\sum_{k=0}^{\frac{p-1}{8}}\frac{1}{4k+1}=\sum_{j=\frac{p-1}{8}}^{\frac{p-1}{4}}\frac{1}{4(\frac{p-1}{4}-j)+1}=\sum_{j=\frac{p-1}{8}}^{\frac{p-1}{4}}\frac{1}{p-4j},\] \[\sum_{k=0}^{\frac{p-1}{8}}\frac{1}{4k+3}=\sum_{j=\frac{p-9}{8}}^{\frac{p-5}{4}}\frac{1}{4(\frac{p-5}{4}-j)+3}=\sum_{j=\frac{p-9}{8}}^{\frac{p-5}{4}}\frac{1}{p-2-4j},\] so that \[\sum_{k=0}^{\frac{p-1}{8}}\frac{1}{4k+1}\equiv-\frac{1}{4}\sum_{j=\frac{p-1}{8}}^{\frac{p-1}{4}}\frac{1}{j}=-\frac{2}{p-1}-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}+\frac{1}{4}\sum_{j=1}^{\frac{p-1}{8}}\frac{1}{j}\] \[\equiv 2-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}+\frac{1}{4}\sum_{j=1}^{\frac{p-1}{8}}\frac{1}{j}=2-\frac{1}{4}H_{[\frac{p}{4}]}+\frac{1}{4}H_{[\frac{p}{8}]}=2-\frac{1}{4}q_{p}(2)-\frac{1}{2}\chi_{p}\,(\text{mod}\,p),\] \[\sum_{k=0}^{\frac{p-1}{8}}\frac{1}{4k+2}\equiv\frac{2}{3}+\frac{1}{2}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-1}{8}}\frac{1}{j}=\frac{2}{3}+\frac{1}{2}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=\frac{2}{3}-\frac{1}{2}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\text{mod}\,p)\] and, using the second congruence in (2.4), \[\sum_{k=0}^{\frac{p-1}{8}}\frac{1}{4k+3}\equiv-\sum_{j=\frac{p-9}{8}}^{\frac{p-5}{4}}\frac{1}{4j+2}=-\sum_{j=0}^{\frac{p-1}{4}}\frac{1}{4j+2}+\sum_{j=0}^{\frac{p-1}{8}}\frac{1}{4j+2}+\frac{1}{p+1}-\frac{2}{p-5}-\frac{2}{p+3}\] \[\equiv-\sum_{j=0}^{\frac{p-1}{4}}\frac{1}{4j+2}+\sum_{j=0}^{\frac{p-1}{8}}\frac{1}{4j+2}+\frac{11}{15}=\frac{2}{5}+\frac{1}{4}H_{[\frac{p}{2}]}+\frac{1}{4}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}\] \[=\frac{2}{5}-\frac{1}{4}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\text{mod}\,p).\] **2.** Assume that \(p\equiv 3\,(\text{mod}\,8)\).
From the equalities \[\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+1}=\sum_{j=\frac{p-3}{8}}^{\frac{p-3}{4}}\frac{1}{4(\frac{p-7}{4}-j+1)+1}=\sum_{j=\frac{p-3}{8}}^{\frac{p-3}{4}}\frac{1}{p-4j-2},\] \[\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+2}=\frac{2}{p+1}+\frac{1}{2}\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-3}{8}}\frac{1}{j}\] and \[\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+3}=\sum_{j=\frac{p-3}{8}}^{\frac{p-3}{4}}\frac{1}{4\left(\frac{p-3}{4}-j\right)+3}=\sum_{j=\frac{p-3}{8}}^{\frac{p-3}{4}}\frac{1}{p-4j},\] we obtain the congruences \[\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+3}\equiv-\frac{1}{4}\sum_{j=\frac{p-3}{8}}^{\frac{p-3}{4}}\frac{1}{j}=-\frac{2}{p-3}-\frac{1}{4}\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{j}+\frac{1}{4}\sum_{j=1}^{\frac{p-3}{8}}\frac{1}{j}\] \[\equiv\frac{2}{3}-\frac{1}{4}\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{j}+\frac{1}{4}\sum_{j=1}^{\frac{p-3}{8}}\frac{1}{j}=\frac{2}{3}-\frac{1}{4}H_{[\frac{p}{4}]}+\frac{1}{4}H_{[\frac{p}{8}]}=\frac{2}{3}-\frac{1}{4}q_{p}(2)-\frac{1}{2}\chi_{p}\,(\text{mod}\,p),\] \[\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+2}\equiv 2+\frac{1}{2}\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-3}{8}}\frac{1}{j}=2+\frac{1}{2}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=2-\frac{1}{2}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\text{mod}\,p)\] and, using the second congruence in (2.5), \[\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+1}\equiv-\sum_{j=\frac{p-3}{8}}^{\frac{p-3}{4}}\frac{1}{4j+2}=-\frac{2}{p+1}+\sum_{j=0}^{\frac{p-3}{8}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-3}{4}}\frac{1}{4j+2}\] \[\equiv-2+\sum_{j=0}^{\frac{p-3}{8}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-3}{4}}\frac{1}{4j+2}=\frac{1}{4}H_{[\frac{p}{2}]}+\frac{1}{4}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=-\frac{1}{4}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\text{mod}\,p).\] **3.** Consider the case where \(p\equiv 5\,(\mathrm{mod}\,8)\).
We have \[\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+1}=\sum_{j=\frac{p+3}{8}}^{\frac{p-1}{4}}\frac{1}{4(\frac{p-5}{4}-j+1)+1}=\sum_{j=\frac{p+3}{8}}^{\frac{p-1}{4}}\frac{1}{p-4j};\] \[\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+2}=\frac{2}{p-1}+\frac{1}{2}\sum_{j=1}^{\frac{p-5}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-5}{8}}\frac{1}{j}\] and \[\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+3}=\sum_{j=\frac{p-5}{8}}^{\frac{p-5}{4}}\frac{1}{4\big{(}\frac{p-1}{4}-j-1\big{)}+3}=\sum_{j=\frac{p-5}{8}}^{\frac{p-5}{4}}\frac{1}{p-4j-2}.\] It follows that \[\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+1}\equiv-\frac{1}{4}\sum_{j=\frac{p+3}{8}}^{\frac{p-1}{4}}\frac{1}{j}=-\frac{1}{4}H_{[\frac{p}{4}]}+\frac{1}{4}H_{[\frac{p}{8}]}=-\frac{1}{4}q_{p}(2)-\frac{1}{2}\chi_{p}\,(\mathrm{mod}\,p),\] \[\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+2}\equiv\frac{1}{2}\sum_{j=1}^{\frac{p-1}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-5}{8}}\frac{1}{j}=\frac{1}{2}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=-\frac{1}{2}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\mathrm{mod}\,p)\] and \[\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+3}\equiv\sum_{j=0}^{\frac{p-13}{8}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-5}{4}}\frac{1}{4j+2}=\sum_{j=0}^{\frac{p-5}{8}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-1}{4}}\frac{1}{4j+2}+\frac{1}{p+1}-\frac{2}{p-1}\] \[\equiv 3+\sum_{j=0}^{\frac{p-5}{8}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-1}{4}}\frac{1}{4j+2}\] \[\equiv 2+\frac{1}{4}H_{[\frac{p}{2}]}+\frac{1}{4}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=2-\frac{1}{4}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\mathrm{mod}\,p).\] **4.** For \(p\equiv 7\,(\mathrm{mod}\,8)\), we have \[\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+1}=\sum_{j=\frac{p+1}{8}}^{\frac{p-3}{4}}\frac{1}{4(\frac{p-3}{4}-j)+1}=\sum_{j=\frac{p+1}{8}}^{\frac{p-3}{4}}\frac{1}{p-4j-2},\] \[\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+2}=\frac{2}{p-3}+\frac{1}{2}\sum_{j=1}^{\frac{p-7}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-7}{8}}\frac{1}{j}=\frac{1}{2}\sum_{j=1}^{\frac{
p-3}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-7}{8}}\frac{1}{j}\] and \[\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+3}=\sum_{j=\frac{p+1}{8}}^{\frac{p-3}{4}}\frac{1}{4\big{(}\frac{p-7}{4}-j+1\big{)}+3}=\sum_{j=\frac{p+1}{8}}^{\frac{p-3}{4}}\frac{1}{p-4j}.\] Thus \[\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+3}\equiv-\frac{1}{4}\sum_{j=\frac{p+1}{8}}^{\frac{p-3}{4}}\frac{1}{j}=-\frac{1}{4}H_{[\frac{p}{4}]}+\frac{1}{4}H_{[\frac{p}{8}]}=-\frac{1}{4}q_{p}(2)-\frac{1}{2}\chi_{p}\,(\operatorname{mod}p),\] \[\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+2}\equiv\frac{1}{2}\sum_{j=1}^{\frac{p-3}{4}}\frac{1}{j}-\frac{1}{4}\sum_{j=1}^{\frac{p-7}{8}}\frac{1}{j}=\frac{1}{2}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=-\frac{1}{2}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\operatorname{mod}p)\] and \[\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+1}\equiv-\sum_{j=\frac{p+1}{8}}^{\frac{p-3}{4}}\frac{1}{4j+2}=\frac{1}{4}H_{[\frac{p}{2}]}+\frac{1}{4}H_{[\frac{p}{4}]}-\frac{1}{4}H_{[\frac{p}{8}]}=-\frac{1}{4}q_{p}(2)+\frac{1}{2}\chi_{p}\,(\operatorname{mod}p).\] **Proposition 2.4**.: _Let \(p\geq 5\) be a prime number and \(n,k\) be positive integers. We have_ \[\binom{np-1}{4k}_{3}\equiv 1-np\left(\frac{3}{4}H_{k}+\sum_{j=0}^{k-1}\frac{1}{4j+3}\right)\,(\operatorname{mod}p^{2}), 4k\leq p-1, \tag{2.10}\] \[\binom{np-1}{4k+1}_{3}\equiv-1+np\left(\frac{3}{4}H_{k}+\sum_{j=0}^{k}\frac{1}{4j+1}\right)\,(\operatorname{mod}p^{2}), 4k+1\leq p-1,\] (2.11) \[\binom{np-1}{4k+2}_{3}\equiv np\left(\sum_{j=0}^{k}\frac{1}{4j+2}-\sum_{j=0}^{k}\frac{1}{4j+1}\right)\,(\operatorname{mod}p^{2}), 4k+2\leq p-1,\] (2.12) \[\binom{np-1}{4k+3}_{3}\equiv np\left(\sum_{j=0}^{k}\frac{1}{4j+3}-\sum_{j=0}^{k}\frac{1}{4j+2}\right)\,(\operatorname{mod}p^{2}), 4k+3\leq p-1.
\tag{2.13}\] Proof.: We have \[(1+x+x^{2}+x^{3})^{n}=(1+x^{2})^{n}(1+x)^{n}=\left(\sum_{j=0}^{n}\binom{n}{j}x^{2j}\right)\left(\sum_{l=0}^{n}\binom{n}{l}x^{l}\right).\] It follows that \[\binom{n}{k}_{3}=\sum_{\begin{subarray}{c}2j+l=k\\ 0\leq l,j\leq n\end{subarray}}\binom{n}{j}\binom{n}{l}=\sum_{j=0}^{\min(n,[\frac{k}{2}])}\binom{n}{j}\binom{n}{k-2j}. \tag{2.14}\] Combining (2.14) with the relation \(\binom{np-1}{k}=(-1)^{k}\prod\limits_{j=1}^{k}\left(1-\frac{np}{j}\right)\equiv(-1)^{k}\big{(}1-npH_{k}\big{)}\,\left(\operatorname{mod}p^{2}\right)\) we obtain that \[\binom{np-1}{k}_{3}=\sum_{j=0}^{\min(np-1,[\frac{k}{2}])}\binom{np-1}{j}\binom{np-1}{k-2j}\equiv\sum_{j=0}^{\min(np-1,[\frac{k}{2}])}(-1)^{k-j}\Big{(}1-np(H_{j}+H_{k-2j})\Big{)}\,\left(\operatorname{mod}p^{2}\right).\] Then, for the congruence (2.10), we have \[\begin{pmatrix}np-1\\ 4k\end{pmatrix}_{3}\equiv\sum_{j=0}^{2k}(-1)^{j}\Big{(}1-np(H_{j}+H_{4k-2j})\Big{)}\ (\text{mod}\,p^{2})\] \[=1-np\left(\sum_{j=0}^{2k}(-1)^{j}H_{j}+\sum_{j=0}^{2k}(-1)^{j}H_{4k-2j}\right)\] \[=1-np\sum_{j=0}^{2k}(-1)^{j}\big{(}H_{j}+H_{2j}\big{)}\] \[=1-np\left(H_{2k}+H_{4k}-\sum_{j=0}^{k-1}\Big{(}H_{2j+1}-H_{2j}+H_{4j+2}-H_{4j}\Big{)}\right)\] \[=1-np\left[\sum_{j=1}^{k}\frac{1}{2j}+\sum_{j=0}^{k-1}\frac{1}{2j+1}+\sum_{j=1}^{k}\frac{1}{4j}+\sum_{j=0}^{k-1}\left(\frac{1}{4j+1}+\frac{1}{4j+2}+\frac{1}{4j+3}\right)\right]\] \[\quad+np\sum_{j=0}^{k-1}\left(\frac{1}{2j+1}+\frac{1}{4j+1}+\frac{1}{4j+2}\right)\] \[=1-np\left(\frac{3}{4}H_{k}+\sum_{j=0}^{k-1}\frac{1}{4j+3}\right)\ (\text{mod}\,p^{2}).\] Now, we show the congruence (2.11).
We have \[\begin{pmatrix}np-1\\ 4k+1\end{pmatrix}_{3}\equiv-\sum_{j=0}^{2k}(-1)^{j}\Big{(}1-np(H_{j}+H_{4k+1-2j})\Big{)}\ (\text{mod}\,p^{2})\] \[=-1+np\sum_{j=0}^{2k}(-1)^{j}\big{(}H_{j}+H_{2j+1}\big{)}\] \[=-1+np\left(H_{2k}+H_{4k+1}-\sum_{j=0}^{k-1}\Big{(}H_{2j+1}-H_{2j}+H_{4j+3}-H_{4j+1}\Big{)}\right)\] \[=-1+np\left[\sum_{j=1}^{k}\frac{1}{2j}+\sum_{j=0}^{k-1}\frac{1}{2j+1}+\frac{1}{4k+1}+\sum_{j=1}^{k}\frac{1}{4j}+\sum_{j=0}^{k-1}\left(\frac{1}{4j+1}+\frac{1}{4j+2}+\frac{1}{4j+3}\right)\right]\] \[\quad-np\sum_{j=0}^{k-1}\left(\frac{1}{2j+1}+\frac{1}{4j+2}+\frac{1}{4j+3}\right)\] \[=-1+np\left(\frac{3}{4}H_{k}+\sum_{j=0}^{k}\frac{1}{4j+1}\right)\ (\text{mod}\,p^{2}).\] Next, we prove the congruence (2.12). We have \[\begin{pmatrix}np-1\\ 4k+2\end{pmatrix}_{3}\equiv\sum_{j=0}^{2k+1}(-1)^{j}\Big{(}1-np(H_{j}+H_{4k+2-2j})\Big{)}\,(\operatorname{mod}p^{2})\] \[=np\sum_{j=0}^{2k+1}(-1)^{j}\big{(}H_{2j}-H_{j}\big{)}\] \[=np\left(H_{2k+1}-H_{2k}+H_{4k}-H_{4k+2}+\sum_{j=0}^{k-1}\frac{1}{4j+2}-\sum_{j=0}^{k-1}\frac{1}{4j+1}\right)\] \[=np\left(\frac{1}{2k+1}-\frac{1}{4k+1}-\frac{1}{4k+2}+\sum_{j=0}^{k-1}\frac{1}{4j+2}-\sum_{j=0}^{k-1}\frac{1}{4j+1}\right)\] \[=np\left(\sum_{j=0}^{k}\frac{1}{4j+2}-\sum_{j=0}^{k}\frac{1}{4j+1}\right)\,(\operatorname{mod}p^{2}).\] Finally, for the congruence (2.13), we have \[\begin{pmatrix}np-1\\ 4k+3\end{pmatrix}_{3}\equiv-\sum_{j=0}^{2k+1}(-1)^{j}\Big{(}1-np\big{(}H_{j}+H_{4k+3-2j}\big{)}\Big{)}\] \[=np\left(\sum_{j=0}^{2k+1}(-1)^{j}\big{(}H_{j}-H_{2j+1}\big{)}\right)\] \[=np\left(H_{2k}-H_{2k+1}+H_{4k+3}-H_{4k+1}+\sum_{j=0}^{k-1}\frac{1}{4j+3}-\sum_{j=0}^{k-1}\frac{1}{4j+2}\right)\] \[=np\left(\frac{1}{4k+3}-\frac{1}{4k+2}+\sum_{j=0}^{k-1}\frac{1}{4j+3}-\sum_{j=0}^{k-1}\frac{1}{4j+2}\right)\] \[=np\left(\sum_{j=0}^{k}\frac{1}{4j+3}-\sum_{j=0}^{k}\frac{1}{4j+2}\right)\,(\operatorname{mod}p^{2}).\] ## 3 Proof of the main results Proof of Theorem A.: For \(p\equiv 1\,(\operatorname{mod}4)\) let \(4k=p-1\) in the
congruence (2.10). Then, by the congruences (2.2) and (2.4) we get \[\begin{pmatrix}np-1\\ p-1\end{pmatrix}_{3}\equiv 1-np\left(\frac{3}{4}H_{\frac{p-1}{4}}+\sum_{j=0}^{\frac{p-5}{4}}\frac{1}{4j+3}\right)\equiv 1+2npq_{p}(2)\,(\operatorname{mod}p^{2}).\] For \(p\equiv 3\,(\operatorname{mod}4)\) let \(4k+2=p-1\) in the congruence (2.12). Then, by the congruence (2.5) we get \[\begin{pmatrix}np-1\\ p-1\end{pmatrix}_{3}\equiv np\left(\sum_{j=0}^{\frac{p-3}{4}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-3}{4}}\frac{1}{4j+1}\right)=-\frac{1}{2}npq_{p}(2)\,(\operatorname{mod}p^{2}).\] For \(p\equiv 1\pmod{8}\) let \(4k=\frac{p-1}{2}\) in the congruence (2.10). Then, by the congruences (2.3) and (2.6) we get \[\binom{np-1}{\frac{p-1}{2}}_{3}\equiv 1-np\left(\frac{3}{4}H_{\frac{p-1}{8}}+\sum_{j=0}^{\frac{p-9}{8}}\frac{1}{4j+3}\right)\equiv 1+np\Big{(}\frac{13}{4}q_{p}(2)+\chi_{p}\Big{)}\ (\bmod{p^{2}}).\] For \(p\equiv 3\pmod{8}\) let \(4k+1=\frac{p-1}{2}\) in the congruence (2.11). Then, by the congruences (2.3) and (2.7) we get \[\binom{np-1}{\frac{p-1}{2}}_{3}\equiv-1+np\left(\frac{3}{4}H_{\frac{p-3}{8}}+\sum_{j=0}^{\frac{p-3}{8}}\frac{1}{4j+1}\right)=-1-np\left(\frac{13}{4}q_{p}(2)+\chi_{p}\right)\ (\bmod{p^{2}}).\] For \(p\equiv 5\pmod{8}\) let \(4k+2=\frac{p-1}{2}\) in the congruence (2.12). Then, by the congruence (2.8) we get \[\binom{np-1}{\frac{p-1}{2}}_{3}\equiv np\left(\sum_{j=0}^{\frac{p-5}{8}}\frac{1}{4j+2}-\sum_{j=0}^{\frac{p-5}{8}}\frac{1}{4j+1}\right)=-np\left(\frac{1}{4}q_{p}(2)-\chi_{p}\right)\ (\bmod{p^{2}}).\] For \(p\equiv 7\pmod{8}\) let \(4k+3=\frac{p-1}{2}\) in the congruence (2.13). Then, by the congruence (2.9) we get \[\binom{np-1}{\frac{p-1}{2}}_{3}\equiv np\left(\sum_{j=0}^{\frac{p-7}{8}}\frac{1}{4j+3}-\sum_{j=0}^{\frac{p-7}{8}}\frac{1}{4j+2}\right)=np\left(\frac{1}{4}q_{p}(2)-\chi_{p}\right)\ (\bmod{p^{2}}).\] Proof of Proposition B.
-- For \(k\in\left\{0,1,\ldots,\left[\frac{p}{4}\right]-1\right\},\) from Proposition 2.4 we deduce that \[\binom{np-1}{4k}_{3}+\binom{np-1}{4k+1}_{3}+\binom{np-1}{4k+2}_{3}+\binom{np-1}{4k+3}_{3}\equiv\frac{np}{4k+3}\ (\bmod{p^{2}}). \tag{3.1}\] To show the congruences (1.3), let \[\sum_{k=0}^{p-1}\binom{np-1}{k}_{3}=\sum_{k=0}^{[\frac{p-1}{4}]}\binom{np-1}{4k}_{3}+\sum_{k=0}^{[\frac{p-2}{4}]}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{[\frac{p-3}{4}]}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{[\frac{p-4}{4}]}\binom{np-1}{4k+3}_{3}.\] For \(p\equiv 1\pmod{4},\) by the congruences (3.1), (1.1) and (2.4), we obtain \[\sum_{k=0}^{p-1}\binom{np-1}{k}_{3}=\sum_{k=0}^{\frac{p-1}{4}}\binom{np-1}{4k}_{3}+\sum_{k=0}^{\frac{p-1}{4}-1}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{\frac{p-1}{4}-1}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{\frac{p-1}{4}-1}\binom{np-1}{4k+3}_{3}\] \[=\binom{np-1}{p-1}_{3}+\sum_{k=0}^{\frac{p-5}{4}}\left\{\binom{np-1}{4k}_{3}+\binom{np-1}{4k+1}_{3}+\binom{np-1}{4k+2}_{3}+\binom{np-1}{4k+3}_{3}\right\}\] \[\equiv 1+2npq_{p}(2)+np\sum_{k=0}^{\frac{p-5}{4}}\frac{1}{4k+3}\] \[\equiv 1+\frac{9}{4}npq_{p}(2).\] For \(p\equiv 3\pmod{4}\), by the congruences (3.1), (2.13) and (2.5), we obtain \[\sum_{k=0}^{p-1}\binom{np-1}{k}_{3}=\sum_{k=0}^{\frac{p-3}{4}}\binom{np-1}{4k}_{3}+\sum_{k=0}^{\frac{p-3}{4}}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{\frac{p-3}{4}}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{\frac{p-7}{4}}\binom{np-1}{4k+3}_{3}\] \[=-\binom{np-1}{p}_{3}+\sum_{k=0}^{\frac{p-3}{4}}\left\{\binom{np-1}{4k}_{3}+\binom{np-1}{4k+1}_{3}+\binom{np-1}{4k+2}_{3}+\binom{np-1}{4k+3}_{3}\right\}\] \[=np\sum_{k=0}^{\frac{p-3}{4}}\frac{1}{4k+2}\] \[=-\frac{1}{4}npq_{p}(2).\] To show the congruences (1.4), let \[\sum_{k=0}^{\frac{p-1}{2}}\binom{np-1}{k}_{3}=\sum_{k=0}^{[\frac{p-1}{8}]}\binom{np-1}{4k}_{3}+\sum_{k=0}^{[\frac{p-2}{8}]}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{[\frac{p-4}{8}]}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{[\frac{p-6}{8}]}\binom{np-1}{4k+3}_{3}.\] For \(p\equiv 1\pmod{8}\), by the congruences (3.1), (2.1) and (2.9),
we get \[\sum_{k=0}^{\frac{p-1}{2}}\binom{np-1}{k}_{3}=\sum_{k=0}^{\frac{p-1}{8}}\binom{np-1}{4k}_{3}+\sum_{k=0}^{\frac{p-9}{8}}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{\frac{p-9}{8}}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{\frac{p-9}{8}}\binom{np-1}{4k+3}_{3}\] \[=\binom{np-1}{\frac{p-1}{2}}_{3}+\sum_{k=0}^{\frac{p-9}{8}}\left\{\binom{np-1}{4k}_{3}+\binom{np-1}{4k+1}_{3}+\binom{np-1}{4k+2}_{3}+\binom{np-1}{4k+3}_{3}\right\}\] \[=\binom{np-1}{\frac{p-1}{2}}_{3}+np\left(\sum_{k=0}^{\frac{p-9}{8}}\frac{1}{4k+3}\right)\] \[\equiv 1+np\left(\frac{13}{4}q_{p}(2)+\chi_{p}-\frac{2}{5}+\sum_{k=0}^{\frac{p-1}{8}}\frac{1}{4k+3}\right)\] \[=1+\frac{3}{2}np\Big{(}2q_{p}(2)+\chi_{p}\Big{)}.\] For \(p\equiv 3\pmod{8}\), by the congruences (3.1), (2.1) and (2.9), we get \[\sum_{k=0}^{\frac{p-1}{2}}\binom{np-1}{k}_{3}=\sum_{k=0}^{\frac{p-3}{8}}\binom{np-1}{4k}_{3}+\sum_{k=0}^{\frac{p-3}{8}}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{\frac{p-11}{8}}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{\frac{p-11}{8}}\binom{np-1}{4k+3}_{3}\] \[=-\binom{np-1}{\frac{p+1}{2}}_{3}-\binom{np-1}{\frac{p+3}{2}}_{3}+np\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+3}\] \[=np\sum_{k=0}^{\frac{p-3}{8}}\frac{1}{4k+1}\] \[=-\frac{1}{4}np\Big{(}q_{p}(2)-2\chi_{p}\Big{)}.\] For \(p\equiv 5\pmod{8}\), by the congruences (3.1) and (2.9), we get \[\sum_{k=0}^{\frac{p-1}{2}}\binom{np-1}{k}_{3}=\sum_{k=0}^{\frac{p-5}{8}}\binom{np-1}{4k}_{3}+\sum_{k=0}^{\frac{p-5}{8}}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{\frac{p-5}{8}}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{\frac{p-5}{8}}\binom{np-1}{4k+3}_{3}\] \[=-\binom{np-1}{\frac{p+1}{2}}_{3}+\sum_{k=0}^{\frac{p-5}{8}}\left\{\binom{np-1}{4k}_{3}+\binom{np-1}{4k+1}_{3}+\binom{np-1}{4k+2}_{3}+\binom{np-1}{4k+3}_{3}\right\}\] \[=np\sum_{k=0}^{\frac{p-5}{8}}\frac{1}{4k+2}\] \[=-\frac{1}{2}np\Big{(}q_{p}(2)-\chi_{p}\Big{)}.\] For \(p\equiv 7\pmod{8}\), by the congruences (3.1) and (2.9), we get \[\sum_{k=0}^{\frac{p-1}{2}}\binom{np-1}{k}_{3}=\sum_{k=0}^{\frac{p-7}{8}}\binom{np-1}{4k}_{3}+\sum_{k=0}^{
\frac{p-7}{8}}\binom{np-1}{4k+1}_{3}+\sum_{k=0}^{\frac{p-7}{8}}\binom{np-1}{4k+2}_{3}+\sum_{k=0}^{\frac{p-7}{8}}\binom{np-1}{4k+3}_{3}\] \[=\sum_{k=0}^{\frac{p-7}{8}}\left\{\binom{np-1}{4k}_{3}+\binom{np-1}{4k+1}_{3}+\binom{np-1}{4k+2}_{3}+\binom{np-1}{4k+3}_{3}\right\}\] \[=np\sum_{k=0}^{\frac{p-7}{8}}\frac{1}{4k+3}\] \[=-\frac{1}{4}np\Big{(}q_{p}(2)+2\chi_{p}\Big{)}.\] Proof of Corollary C. -- This follows without difficulty from Proposition 2.4.
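The congruences above can be checked numerically for small primes. The following sketch is an independent verification (not part of the paper): it computes quadrinomial coefficients via the convolution identity (2.14) and tests congruence (1.1) for a few primes, using `pow(2, -1, p**2)` for the modular inverse of 2.

```python
from math import comb

def quadrinomial(n: int, k: int) -> int:
    """Coefficient of x^k in (1+x+x^2+x^3)^n, via the convolution (2.14)."""
    return sum(comb(n, j) * comb(n, k - 2 * j) for j in range(k // 2 + 1))

def fermat_quotient(p: int) -> int:
    """Fermat quotient q_p(2) = (2^(p-1) - 1) / p."""
    return (pow(2, p - 1) - 1) // p

def theorem_a1_rhs(p: int, n: int) -> int:
    """Right-hand side of congruence (1.1), reduced modulo p^2."""
    q = fermat_quotient(p)
    if p % 4 == 1:
        return (1 + 2 * n * p * q) % p**2
    # p ≡ 3 (mod 4): -(1/2) n p q_p(2), with 1/2 taken modulo p^2
    return (-pow(2, -1, p**2) * n * p * q) % p**2

# e.g. p = 5, n = 1: quadrinomial(4, 4) = 31 and 1 + 2*5*q_5(2) = 31
checks = [quadrinomial(n * p - 1, p - 1) % p**2 == theorem_a1_rhs(p, n)
          for p in (5, 7, 11, 13) for n in (1, 2, 3)]
```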
2307.01214
Automatic Counterfactual Augmentation for Robust Text Classification Based on Word-Group Search
Although large-scale pre-trained language models have achieved striking results for text classification, recent work has raised concerns about the challenge of shortcut learning. In general, a keyword is regarded as a shortcut if it creates a superficial association with the label, resulting in a false prediction. Conversely, shortcut learning can be mitigated if the model relies on robust causal features that help produce sound predictions. To this end, many studies have explored post-hoc interpretable methods to mine shortcuts and causal features for robustness and generalization. However, most existing methods focus only on single words in a sentence and lack consideration of word-groups, leading to wrong causal features. To solve this problem, we propose a new Word-Group mining approach, which captures the causal effect of any keyword combination and orders the combinations that most affect the prediction. Our approach is based on effective post-hoc analysis and beam search, which ensures the mining effect and reduces the complexity. Then, we build a counterfactual augmentation method based on the multiple word-groups, and use an adaptive voting mechanism to learn the influence of different augmented samples on the prediction results, so as to force the model to pay attention to effective causal features. We demonstrate the effectiveness of the proposed method through several tasks on 8 affective review datasets and 4 toxic language datasets, including cross-domain text classification, text attack and gender fairness test.
Rui Song, Fausto Giunchiglia, Yingji Li, Hao Xu
2023-07-01T02:26:34Z
http://arxiv.org/abs/2307.01214v1
# Automatic Counterfactual Augmentation for Robust Text Classification Based on Word-Group Search ###### Abstract Although large-scale pre-trained language models have achieved striking results for text classification, recent work has raised concerns about the challenge of shortcut learning. In general, a keyword is regarded as a shortcut if it creates a superficial association with the label, resulting in a false prediction. Conversely, shortcut learning can be mitigated if the model relies on robust causal features that help produce sound predictions. To this end, many studies have explored post-hoc interpretable methods to mine shortcuts and causal features for robustness and generalization. However, most existing methods focus only on single words in a sentence and lack consideration of word-groups, leading to wrong causal features. To solve this problem, we propose a new Word-Group mining approach, which captures the causal effect of any keyword combination and orders the combinations that most affect the prediction. Our approach is based on effective post-hoc analysis and beam search, which ensures the mining effect and reduces the complexity. Then, we build a counterfactual augmentation method based on the multiple word-groups, and use an adaptive voting mechanism to learn the influence of different augmented samples on the prediction results, so as to force the model to pay attention to effective causal features. We demonstrate the effectiveness of the proposed method through several tasks on 8 affective review datasets and 4 toxic language datasets, including cross-domain text classification, text attack and gender fairness test.
Automatic Counterfactual Augmentation, Counterfactual Causal Analysis, Robust Text Classification, Contrastive Learning ## 1 Introduction Text classification is a basic natural language processing (NLP) task which has been widely used in many fields, such as sentiment classification [1], opinion extraction [2], rumor detection [3], and toxic detection [4]. Recent studies have shown that fine-tuning of large-scale pre-training language models (LPLMs) can achieve optimal text classification results, such as BERT [5], ALBERT [6], and RoBERTa [7]. However, some work has raised concerns that existing text classification models often suffer from spurious correlations [8, 9], also called shortcut learning [10]. Although usually without compromising the prediction accuracy, shortcut learning results in low generalization on out-of-distribution (OOD) samples and low adversarial robustness [11]. Consider a widely used example _"This Spielberg film was wonderful"_: the term _Spielberg_ may be a shortcut, since it often appears alongside positive comments, even though it is not a reliable causal feature that causes the results [8]. This shortcut fails once the model is migrated to a dataset that is unfriendly to _Spielberg_. A more noteworthy example comes from the scenario of toxic text detection. Here, _"They are good at making money"_ is not regarded as a toxic description, but by replacing _They_ with _Jews_, the example may be seen as toxic [12]. The excessive focus on words related to certain groups leads to stereotypes, which is unfair to the relevant groups. Therefore, more and more studies work on shortcut mitigation and robustness improvement. In recent years, it has been proved that counterfactual augmentation can effectively improve the robustness of the classifier to shortcuts [13, 14]. Models trained on augmented data appear to rely less on semantically unrelated words and generalize better outside the domain [15].
Therefore, the human-in-the-loop process is designed to take advantage of human knowledge to modify text and obtain opposite labels for counterfactual augmentation [15]. But due to the high cost of human labor, many methods of automatic counterfactual augmentation have also been developed [14, 16, 17]. Edits against auto-mined causal features are used to obtain counterfactual samples. **However, the existing approaches still face two problems. Firstly, they overconsider the contribution of a single token and ignore the influence of word-groups. Second, automatically generated counterfactual samples may not have true opposite labels, which can also negatively affect model robustness.** Fig. 1: An example to illustrate the effect of word-group on prediction results. It is important to emphasize that the word-group here is a combination of any tokens. It is not mandatory for tokens to be adjacent. As a result, the automatic counterfactual samples may be insufficiently flipped due to the omission of causal features, further affecting the true semantics of the counterfactual samples. As shown in Figure 1, the emotional slant of a film criticism is determined by both _interesting_ and _important_, but a counterfactual for a single word simply replaces _interesting_ with _boring_, which results in a sentence with contradictory semantics, thus misleading the model. A sensible automated counterfactual framework, in contrast, should be able to find the corresponding word-group to generate a truly semantically flipped sample. Based on the above observation, a causal word-group mining method is proposed in this paper, whose purpose is to search for the set of keywords that have the most impact on the prediction. To prevent insignificant words from negatively affecting the search efficiency, a gradient-based post-hoc analysis method [18] is adopted to obtain the candidate causal words of the current sample.
Subsequently, a beam search method based on candidate causal words is proposed, whose goal is to counterfactually flip a word group to maximize the change in the probability distribution of predicted logits. This change in the predicted logits is known as the **Causal Effect**. The limited search width and depth ensure the mining efficiency of word-groups. Moreover, we propose an **Automatic Counterfactual** enhanced multi-instance contrastive learning framework based on **W**ord-**G**roups (**ACWG**). Specifically, for each sample, automatic counterfactual augmentation is performed on the searched word-groups to obtain augmented samples that are semantically opposite to the original sample, while random masking of some non-causal candidates yields a semantically identical positive sample. Based on the above augmented results, a multi-instance contrastive learning framework is proposed to force language models to rethink semantically identical and opposite samples. To mitigate potential errors from a single word-group augmentation, we select the top \(k\) word-groups with the largest causal effect, and jointly optimize the contrastive learning loss through an adaptive voting mechanism. To verify the generalization and robustness of the proposed method, cross-domain text classification and text attack experiments are performed on 12 public datasets, including 8 sentiment classification datasets and 4 toxic language detection datasets. In summary, the contributions of this paper are as follows: * We propose a word-group mining method to overcome the disadvantage of existing robust text classification methods based on automatic causal mining, which only focus on the causal feature of a single keyword. * Based on the word-group mining, we further propose an automatic counterfactual data augmentation method to obtain samples with opposite semantics by counterfactual substitution of the word-groups.
* Furthermore, we propose a word-group-based contrastive learning method, which aims to extract stable decision results from multiple word-groups by using an automatic voting mechanism. * Experimental results on 12 public datasets and 3 commonly used large-scale pre-trained language models confirm the validity of the proposed method. ## 2 Related Work We introduce some of the work related to the proposed methods in this section, including identification of shortcuts and causal features, and approaches to use them to improve model robustness. ### _Shortcuts and Causal Features_ How to identify shortcuts and causal features in text is the premise of many robust text classification approaches. One of the most intuitive ways is to use human prior knowledge to label keywords or spurious correlation patterns [19, 20, 21]. There are also approaches to make better use of human prior knowledge by designing human-in-the-loop frameworks [22]. But these methods rely on manual labor and have poor scalability. Therefore, interpretable methods are adopted to facilitate automatic identification of robust/non-robust regions at scale, e.g. attention score [23], mutual information [10] and integrated gradient [24, 25]. Besides, counterfactual causal inference is also used to determine the importance of a token by adding perturbation to the token [25, 26]. The greater the impact of perturbing a token, the higher the contribution of that token to the prediction result. Some work also seeks to obtain more explicit shortcuts by further integrating various interpretable methods [9, 10]. ### _Shortcut Mitigation and Robust Model Learning_ Multiple approaches have been studied for shortcut mitigation and robust model learning such as domain adaptation [27] and multi-task learning [28].
Given the shortcuts or causal features, it is easy to guide the model correctly by adversarial training [29], reweighting [30], Product-of-Experts [31], knowledge distillation [32], keyword regularization [23] and contrastive learning [25]. Recently, researchers have developed counterfactual data augmentation methods to build robust classifiers, achieving state-of-the-art results [13]. Similarly, counterfactual augmentation can be divided into manual and automatic parts. The former relies on human prior knowledge. [33] counterfactually augments the sample with predefined tokens to improve the fairness of the model. [34] builds a human-in-the-loop system by crowd-sourcing methods to counterfactually augment samples, while improving the robustness and out-of-domain generalization ability of the model. The latter automatically looks for causal features in the sample and flips them to generate counterfactual samples. [35] generates synthetic training data by randomly moving a pair of corruption and reconstruction functions over a data manifold. [26] uses a masked language model to perturb tokens to obtain adversarial examples. [8, 14] obtain counterfactual data by substituting antonyms for words that are highly correlated with the predicted results. Counterfactual texts are assigned opposite labels and help train a more robust classifier. [16] learns the rules by logical reasoning and gives faithful counterfactual predictions. C2L makes a collective decision based on a set of counterfactuals to overcome shortcut learning [17]. AutoCAD guides controllable generative models to automatically generate counterfactual data [58]. Similar to previous work, our approach is based on valid interpretable analysis. But the difference is that we automatically generate counterfactuals by searching for word groups with the greatest causal effect, rather than just focusing on the effects of individual words.
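The antonym-substitution style of automatic counterfactual augmentation described above can be sketched in a few lines. This is an illustrative toy only: the `ANTONYMS` lexicon is an assumption for the example (real systems draw on resources such as antonym dictionaries), and `causal_words` stands for whatever set a mining method returns.

```python
# Toy antonym lexicon (hypothetical; real systems use a proper resource).
ANTONYMS = {"wonderful": "terrible", "interesting": "boring", "important": "trivial"}

def counterfactual(tokens, causal_words):
    """Flip each mined causal word to its antonym, producing a candidate
    label-flipped (counterfactual) sample; other tokens are left untouched."""
    return [ANTONYMS.get(t, t) if t in causal_words else t for t in tokens]

flipped = counterfactual(["this", "film", "was", "wonderful"], {"wonderful"})
# flipped == ["this", "film", "was", "terrible"]
```

If only part of the causal feature set is flipped, the result can be semantically contradictory, which is exactly the failure mode the word-group approach targets.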
Then, multiple word-groups vote adaptively to learn the impact on the model, reducing the potential for miscalculation from a single word-group.

## 3 Method

In this section, we first define the model-related symbols in detail. Then, the detailed framework of the model is introduced in Figure 2. The overall framework of ACWG is divided into two parts. First, word-group search is performed by maximizing the causal effect on the language model. Subsequently, contrastive learning with multiple samples is performed over the searched word-groups to learn robust sample representations.

### _Task Definitions_

In this paper, we focus on cross-domain text classification, which aims to fine-tune the language model \(\mathcal{M}\) on the training set of the source domain \(\mathcal{X}^{train}_{source}\), and produce a trained model \(\hat{\mathcal{M}}\) and a mapping function \(f_{\hat{\mathcal{M}}}(x)=y\) with good generalization performance on the test set of the target domain \(\mathcal{X}^{test}_{target}\) by automatic counterfactual augmentation. Any sample \(x\) with label \(y\) consists of a token sequence \(x=\{t_{cls},t_{0},t_{1},...,t_{i},...,t_{sep}\}\), and a word-group \(g\) is treated as a combination of any number of tokens in the sequence. Ideally, \(g\) reflects the true causal feature of the sample. The goal of word-group mining is to provide a corresponding word-group set \(\mathcal{G}_{x}\) for each sample.

### _Word-Group Mining_

Given the observations in Figure 1, we find that a single token does not always cover the causal feature of the sample well, so we use an arbitrary combination of tokens, namely a word-group, to represent the real causal features. Theoretically, all the tokens in a sentence could be part of a word-group, but considering all the tokens would certainly complicate the search process.
A wise pre-consideration is that the presence of some words in the sample, such as \(A\) in Figure 1, has a weak effect on the final prediction, so they can be easily eliminated to reduce the search space. This process is called candidate causal word mining.

#### 3.2.1 Candidate Causal Words Mining

We use a post-hoc interpretable method to analyze candidate causal words in each sample. It is based on a model \(\mathcal{M}^{\prime}\) fine-tuned on \(\mathcal{X}^{train}_{source}\) and attributes the impact of each token on the model's prediction. Here, integrated gradients, a widely used post-hoc interpretable method, is adopted to determine causal words in training samples [18, 36]. For an input sample \(x\), the gradient of the \(i^{th}\) token \(t_{i}\) can be represented as: \[IG_{t_{i}}=(x_{i}-x_{\vec{0}})*\int_{0}^{1}\frac{\partial f_{\mathcal{M}^{ \prime}}(x_{\vec{0}}+\alpha*(x_{i}-x_{\vec{0}}))}{\partial x_{i}}d\alpha, \tag{1}\] where \(x_{i}\) denotes the embedding of \(t_{i}\) with \(d\) dimensions, \(f_{\mathcal{M}^{\prime}}(x)\) is the mapping function which maps \(x\) to the corresponding label \(y\) through the fine-tuned model \(\mathcal{M}^{\prime}\), and \(x_{\vec{0}}\) is an all-zero embedding. Subsequently, a Riemann-sum approximation is used to approximate the integral by summing small intervals along the straight-line path from \(x_{\vec{0}}\) to \(x_{i}\): \[IG_{t_{i}}=(x_{i}-x_{\vec{0}})*\sum_{j=1}^{m}\frac{\partial f_{\mathcal{M}^{ \prime}}(x_{\vec{0}}+\frac{j}{m}*(x_{i}-x_{\vec{0}}))}{\partial x_{i}}\frac{1} {m}, \tag{2}\] where \(m\) is the number of steps in the Riemann-sum approximation, which is set to 50 as advised by Captum1. The L2 norm is then used to convert the gradient vector corresponding to each token into a scalar as the final attribution score \(\|IG_{t_{i}}\|\).
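To make Eq. (2) concrete, here is a minimal sketch of the Riemann-sum integrated gradient for a single token embedding. The linear scorer and all names are illustrative stand-ins for the fine-tuned model \(\mathcal{M}^{\prime}\) (the paper uses Captum on an LPLM); a linear model is chosen so the approximation can be checked against the exact attribution:

```python
import numpy as np

# Hedged sketch of Eq. (2): Riemann-sum integrated gradients for one token
# embedding. grad_fn stands in for backprop through the fine-tuned model.
def integrated_gradient(x, baseline, grad_fn, m=50):
    total = np.zeros_like(x)
    for j in range(1, m + 1):
        # interpolation point x_0 + (j/m) * (x - x_0)
        point = baseline + (j / m) * (x - baseline)
        total += grad_fn(point)
    return (x - baseline) * total / m

# Toy linear model f(x) = w . x, whose gradient is w everywhere, so the
# Riemann sum recovers the exact attribution (x - x_0) * w.
w = np.array([0.5, -2.0, 1.0])
x = np.array([1.0, 2.0, 0.5])        # "embedding" of token t_i
baseline = np.zeros_like(x)          # the all-zero embedding x_0

ig = integrated_gradient(x, baseline, lambda p: w)
score = np.linalg.norm(ig)           # L2 norm -> scalar attribution score
```

For a real model, `grad_fn` would return the gradient of the class logit with respect to the token embedding at each interpolation point.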
Since a token may appear multiple times in a sample (that is, \(t_{i}\) may be the same as \(t_{j}\)), we calculate the corpus-level attribution score corresponding to \(w_{t_{i}}\) as: Footnote 1: https://github.com/pytorch/captum \[CS_{w_{t_{i}}}=\frac{1}{Freq(w_{t_{i}})}\sum_{j=1}^{Freq(w_{t_{i}})}\|IG_{w_{ t_{i}}}\|_{j}, \tag{3}\] where \(Freq(w_{t_{i}})\) is the total number of occurrences of \(w_{t_{i}}\) in \(\mathcal{X}^{train}_{source}\) and \(w_{t_{i}}\in\mathcal{W}\) is the word of \(t_{i}\), where \(\mathcal{W}\) is the vocabulary of the training corpus. According to \(CS_{w}\), a ranked list of causal words can be obtained, and we take the top 20% of tokens as the final candidate causal words \(\hat{\mathcal{W}}\). In this way, the number of tokens to be searched within a sample, and hence the complexity of the search, is reduced.

#### 3.2.2 Word-Group Search

Through the pre-selection of candidate causal words, each sample obtains a causal word list \(\hat{\mathcal{W}}_{x}=\{w_{0},w_{1},...,w_{l}\},\hat{\mathcal{W}}_{x}\subset \hat{\mathcal{W}}\). Then, by searching over combinations of the tokens in \(\hat{\mathcal{W}}_{x}\) and estimating the causal effect of each combination, we obtain a sorted set of word-groups \(\mathcal{G}_{x}\). For this purpose, we propose an improved beam search algorithm to search for word-groups with the greatest causal effects. Here, following the counterfactual framework of causal inference [37], the causal effect is defined as the disturbance to the probability distribution of a trained language model \(\mathcal{M}^{\prime}\) caused by automatic counterfactual augmentation against a word-group. For example, given the sample '_A interesting and important film_' and one of its word-groups \(\{interesting,important\}\), the corresponding automatic counterfactual result is '_A boring and unimportant film_', where each corresponding token is replaced by its antonym.
If a token does not have an antonym, we adopt the lazy counterfactual approach [33] and replace the token with the LPLM's mask token. The sample after the counterfactual augmentation is represented by \(\bar{x}_{g}\). Correspondingly, the probability distributions of \(\mathcal{M}^{\prime}\) are \(p(x)\) and \(p(\bar{x}_{g})\). To measure the agreement between the distributions, the Jensen-Shannon Divergence (JSD) [38], a symmetric and smooth variant of the Kullback-Leibler divergence (KLD), is used: \[JSD_{g}=\frac{1}{2}KLD(p(\bar{x}_{g})||p(x))+\frac{1}{2}KLD(p(x)||p(\bar{x}_{g})). \tag{4}\] The greater the value of \(JSD_{g}\), the greater the impact of perturbations against word-group \(g\), and thus the more likely \(g\) is to be a robust causal feature. Further, Algorithm 1 summarizes the proposed word-group search method. First, the algorithm retrieves the top \(K\) tokens by causal effect from the candidate causal words \(\hat{\mathcal{W}}_{x}\) in Line 4, where \([:K]\) represents the interception of the top \(K\) items of the sorted array. Then, Algorithm 1 takes these top \(K\) tokens as basic word-groups of length 1, and continues to generate word-groups of length 2 in Line 6. Here, \(g\oplus w\) indicates the extension of word-group \(g\) with a new word \(w\). In practice, we make sure that the new word \(w\) does not already exist in \(g\). Generation then continues on the basis of the new candidate word-groups \(\mathcal{G}_{cand}\) in the next cycle in Line 4, until the new word-groups in the current cycle reach the specified maximum length. Finally, we rank the causal effects of the generated word-groups (Line 8) and select the top \(L\) as true causal features (Line 9). A simple example with \(K=L=2\) can be found in Figure 2. In this paper, we adopt the configuration \(K=2\) and \(L=3\) to reduce the complexity of the search while ensuring that reasonable word-groups are taken into account as much as possible.
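A minimal sketch of the word-group search under simplifying assumptions: a toy probability function stands in for \(\mathcal{M}^{\prime}\), and the counterfactual simply deletes the group's tokens instead of substituting antonyms or mask tokens; the function names are illustrative, not the paper's implementation:

```python
import math

def kld(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def jsd(p, q):
    # Eq. (4): symmetrised KL between original and counterfactual predictions
    return 0.5 * kld(p, q) + 0.5 * kld(q, p)

def word_group_search(tokens, predict, K=2, L=3, max_len=3):
    """Beam search over token combinations, ranked by causal effect (JSD)."""
    p_orig = predict(tokens)

    def effect(group):
        # stand-in counterfactual: drop the group's tokens
        return jsd(predict([t for t in tokens if t not in group]), p_orig)

    beams = sorted(((w,) for w in set(tokens)), key=effect, reverse=True)[:K]
    scored = {g: effect(g) for g in beams}
    for _ in range(max_len - 1):
        cand = {g + (w,) for g in beams for w in set(tokens) if w not in g}
        beams = sorted(cand, key=effect, reverse=True)[:K]
        scored.update((g, effect(g)) for g in beams)
    return sorted(scored, key=scored.get, reverse=True)[:L]

# Toy model: the more sentiment words survive, the more positive the score.
def predict(tokens):
    s = 1.0 + tokens.count("interesting") + tokens.count("important")
    return [s / (s + 1.0), 1.0 / (s + 1.0)]

groups = word_group_search(["a", "interesting", "and", "important", "film"],
                           predict)
```

On this toy input, the top-ranked group contains both sentiment words, since removing them together perturbs the prediction distribution the most.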
### _Multiple Causal Contrastive Learning_

**Data augmentation based on word-groups**. After obtaining the word-groups \(\mathcal{G}_{x}\), a special multiple contrastive learning framework is designed to make full use of the mining results [39]. For contrastive learning, an important premise is to obtain the corresponding positive and negative examples through data augmentation. For the negative samples, we obtain them via the automatic counterfactual substitution of word-groups. Since word-groups represent the most likely causal features, samples obtained by counterfactual substitution are most likely to have opposite semantics. For positive samples, we capture them by randomly perturbing the tokens that do not belong to word-groups. Specifically, we represent the composition of word-groups as \(\mathcal{W}_{G_{x}}\), and mask \(\hat{\mathcal{W}}_{x}-\mathcal{W}_{G_{x}}\) randomly with a probability of 50% as in [23]. Thus, the collection of the obtained augmented samples is written as \((x,x^{+},x_{1}^{-},...,x_{l}^{-})\). **Multiple negative samples voting mechanism**. The negative samples correspond to word-groups with different causal effects, so we expect the model to distinguish among them. Inspired by research on collective decision making [40, 41], the losses of the multiple negative samples are combined to adaptively determine the contribution of each negative sample to the model optimization. Specifically, for the collection of augmented samples above, \(\mathcal{M}^{\prime}\) readily produces their corresponding representations \((h,h^{+},h_{1}^{-},...,h_{l}^{-})\). Mimicking SimCLR [42], a simple MLP with shared parameters maps them to a lower-dimensional representation space as \((z,z^{+},z_{1}^{-},...,z_{l}^{-})\).
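A hedged numpy sketch of the adaptive voting over the \(l\) negatives and the margin-based contrastive loss described in this subsection. Since the projection shape of the voting module is not fully specified, this assumes one shared linear scorer producing a logit per negative, and uses the conventional triplet form (margin minus positive similarity plus weighted negative similarity); all weights and embeddings are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

d, l = 8, 3
z      = rng.normal(size=d)                 # anchor representation
z_pos  = z + 0.05 * rng.normal(size=d)      # positive: mild perturbation
z_negs = [rng.normal(size=d) for _ in range(l)]

# One plausible reading of the voting module (Eq. 5): a shared linear
# scorer yields one voting logit per negative; w_vote/b_vote are random
# stand-ins for learned parameters.
w_vote = rng.normal(size=d)
b_vote = rng.normal(size=l)
alpha = softmax(np.array([zn @ w_vote for zn in z_negs]) + b_vote)

# Margin-based ranking loss in standard triplet form with similarities:
# pull the positive above the alpha-weighted negatives by a margin of 1.
delta = 1.0
neg_term = alpha @ np.array([cos(z, zn) for zn in z_negs])
loss_cl = max(0.0, delta - cos(z, z_pos) + neg_term)
```

In training, `loss_cl` would be added to the cross-entropy loss with a weight, and the voting parameters learned jointly with the encoder.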
Then, we design an attention-based adaptive voting module, which learns the contributions of the different word-groups as: \[\alpha_{l}=softmax(([z_{1}^{-},...,z_{l}^{-}])W+b), \tag{5}\] where \([,...,]\) represents the concatenation of the vectors, \(W\in\mathcal{R}^{ld_{z}\times l}\) is the learnable weight parameter, and \(b\in\mathcal{R}^{l}\) denotes the bias, where \(d_{z}\) denotes the hidden dimension of \(z\). \(softmax\) is used to normalize the learned contributions. Subsequently, the contrastive learning loss can be written as the following margin-based ranking loss [43]: \[\mathcal{L}_{CL}=max(0,\Delta-cos(z,z^{+})+\alpha_{l}\odot[cos(z,z_{1}^{-}),..., cos(z,z_{l}^{-})]), \tag{6}\] where \(\Delta\) is a margin value that we set to \(1\), \(cos\) denotes the cosine similarity of the vectors, and \(\odot\) represents the Hadamard product of the vectors. Finally, the total loss function is the weighted sum of the cross entropy and the above loss: \[\mathcal{L}=\mathcal{L}_{CE}+\lambda\mathcal{L}_{CL}, \tag{7}\] where \(\lambda\) is a weight that needs to be further explored and \(\mathcal{L}_{CE}\) is the cross entropy loss on \(\mathcal{X}_{source}^{train}\).

Fig. 2: The overall framework of the proposed ACWG. To simplify, we replace the candidate causal words _interesting_, _important_, and _film_ with **A**, **B** and **C**, respectively. The same samples, representations, and their logits are represented in the same color.

## 4 Datasets

To verify the validity of the proposed method, a variety of text classification tasks are explored on 12 different datasets. Specifically, the datasets can be divided into three groups as shown in Table I. * Multi-Domain Sentiment Dataset2[44]. It contains Amazon product reviews from four domains, **books**, **dvd**, **electronics** and **kitchen**, stored in four different directories. We select _positive.review_ and _negative.review_ as the training set, and use _unlabeled.review_ as the test set.
Footnote 2: https://www.cs.jhu.edu/~mdredze/datasets * Additional sentiment classification datasets, including: the Movie Review (**mr**) dataset with binary categories [45]. FineFood (**foods**) [46] for food reviews scored on a scale from 1 to 5; following [23], ratings of \(5\) are regarded as positive and ratings of \(1\) are regarded as negative. Stanford Sentiment Treebank (**sst2**) [47], a binary sentence classification task with human-annotated sentiment in movie reviews. Kindle reviews (**kindle**) [48] from the Kindle Store, where each review is rated from 1 to 5; following [8, 14], reviews with ratings \(4,5\) are positive and reviews with ratings \(1,2\) are negative. Footnote 3: https://huggingface.co/datasets/mc/232/toxicweets * Toxic detection datasets, including: **Davidson**[49], collected from Twitter, which contains three categories: hate speech, offensive, or neither. **OffEval**[50], collected from Twitter, which is divided into offensive and non-offensive. **ToxicTweets3** from Twitter, where toxic, severe toxic, obscene, threat, insult, and identity hate are marked; we choose toxic or not as our dichotomous task and obtain balanced categories by downsampling the non-toxic samples. **Abusive** from Kaggle4 for binary abusive language detection. We collectively treat offensive, hateful, and abusive speech as toxic, and convert toxic language detection into a binary text classification task. For all the toxic detection datasets, we delete non-English characters, web links, and dates, and convert all words to lowercase.
Footnote 4: https://www.kaggle.com/datasets/hinugtrung/abusive-language-detection

Then, we perform different tasks with a number of different baselines on the above datasets, including cross-domain text generalization, robustness testing against text attacks, and gender fairness analysis.

## 5 Tasks and Experimental Results

In this section, we introduce experimental results on the corresponding datasets to address the following key questions: * **Q1**. Does ACWG help to generalize LPLMs? * **Q2**. Does ACWG improve the robustness of LPLMs? * **Q3**. Does ACWG improve the fairness of LPLMs? * **Q4**. How does the proposed word-groups-based mining approach help improve the performance of different tasks?

### _Q1: In-domain and Cross-domain Text Classification_

To answer **Q1**, we test the OOD generalization performance on different datasets and further explore the experimental parameters.

#### 5.1.1 Baselines and Details

**Baselines**. Cross-domain generalization is verified by training on the source domain and testing on the target domain, which have different data distributions. Several different shortcut mitigation or automatic counterfactual augmentation approaches are compared. **Automatically Generated Counterfactuals (AGC)**[14], which augments the training data with automatically generated counterfactual data by substituting causal features with their antonyms and assigning the opposite labels; the augmented samples are then added to the training dataset to train a robust model. **MASKER**[23], which improves the cross-domain generalization of language models through keyword shortcut reconstruction and entropy regularization. It uses tokens with high LPLM attention scores as possible shortcuts.
**C2L**[17], which monitors the causality of each word collectively through a set of automatically generated counterfactual samples and uses contrastive learning to improve the robustness of the model. **Details**. As our main experiment, we conduct training on the training set of the source domain \(\mathcal{X}_{source}^{train}\), and save the optimal models which have the best results on \(\mathcal{X}_{source}^{test}\). Then, the optimal models are used to perform text attack testing and fairness testing. The batch size for all datasets and all baselines is uniformly set to 64, and the learning rate is \(1e-5\). We set the number of epochs to 5 and use Adam as the optimizer. All code is written in PyTorch and trained on four NVIDIA A40 GPUs. For the baselines, officially published code is used to replicate the experimental results. For AGC, we identify the causal features by picking the closest opposite matches with scores greater than 0.95, as suggested in the original paper. For MASKER, we set the weights of the two regularization terms to 0.001 and 0.0001 for cross-domain generalization.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Datasets & books & dvd & electronics & kitchen \\ Train/Test & 2,000/4,465 & 2,000/3,586 & 2,000/5,681 & 2,000/5,945 \\ 0/1 & 3,201/3,264 & 2,779/2,807 & 3,824/3,857 & 3,991/3,954 \\ \hline Datasets & mr & foods & sst2 & kindle \\ Train/Test & 7,108/3,554 & 21,085 g's,008 & 67,349/872 & 7,350,350 \\ 0/1 & 5,485,537,576 & 6,968/23,107 & 30,208/38,013 & 5,287,528 \\ \hline Datasets & Davidson & OffEval & ToxicTweets & Abusive \\ Train/Test & 17,346/7,436 & 13,240/860 & 21,410/9,178 & 2,677/1,187 \\ 0/1 & 4,163/20,619 & 9,460/4,640 & 15,294/15,294 & 1,998/1,996 \\ \hline \hline \end{tabular} \end{table} TABLE I: The datasets and the corresponding partitioning. 0/1 denotes the number of negative and positive samples.
For C2L, we set the number of positive/negative pairs for contrastive learning to 1, and search for the optimal weight of the contrastive learning loss in \([0.1,0.7,1.0]\). All the key parameters of the baselines are consistent with those reported in the original papers.

#### 5.1.2 Comparisons With State-of-the-Arts

The experimental results of BERT and RoBERTa, two commonly used LPLMs, are reported in Table II and Table III. In general, ACWG achieves the best average in all cases, with similar performance using BERT and RoBERTa as the backbones. First, we note that the attention-based shortcut extraction method MASKER is not always effective. For example, compared to basic BERT, MASKER shows performance degradation on electronics, mr, ToxicTweets, etc. This shows that attention scores may not be suitable for robust feature extraction, and also indicates the importance of reasonable keyword mining methods. In contrast, the counterfactual-augmentation-based methods AGC and C2L both achieve better results in most cases. But the former is superior to C2L in only a few cases, because it includes the oppositely augmented samples directly as part of the training dataset, which makes it susceptible to the quality of the augmented samples. C2L, in contrast, adopts contrastive learning and uses collective decision making to exploit counterfactual augmentation more robustly. Finally, the proposed ACWG obtains the optimal values on all datasets, which indicates that mining word-groups and reasonably using them to generate counterfactual augmentation can stimulate LPLMs' ability to learn robust features, and therefore contributes to LPLMs' generalization. Due to the similarity of BERT's and RoBERTa's results, we take BERT as the backbone for in-depth exploration in the follow-up experiments.
#### 5.1.3 Parameters Exploration

The two main parameters explored in relation to ACWG are the contrastive learning loss weight \(\lambda\) and the number of word-groups used \(l\). **Contrastive learning loss \(\lambda\)**. First, \(\lambda\) in Eq. 7 is analyzed to determine the loss ratio of the auxiliary contrastive learning. Since optimal parameters are difficult to select uniformly across datasets, our goal is to investigate the optimal magnitude of \(\lambda\). Specifically, we show cross-domain generalization results in all cases and average performance changes for \(\lambda\in\{0.1,0.01,0.001\}\) in Figure 3, compared with BERT. Although there are differences among datasets, the best results are produced at \(0.01\) or \(0.001\) for all averages. Therefore, we choose 0.01 or 0.001 as the optimal \(\lambda\) value. In addition, in most cases, regardless of the value of \(\lambda\), ACWG is better than the backbone, which further verifies the effectiveness of the proposed method. **Word-groups number \(l\)**. Subsequently, \(l\in\{1,2,3,4\}\) is further analyzed to determine a reasonable number of word-groups in Figure 4. Here, the average results over the target domains under each particular source domain are reported; because the results vary widely across different target domains, a comprehensive evaluation is used as the main decision basis, as in Figure 3.
We note the inadequacy of a single word-group as it has a low tolerance \begin{table} \begin{tabular}{c c c c c c c c c c c c c c} \hline \multicolumn{2}{c}{Datasets} & \multicolumn{4}{c}{Models} & \multicolumn{4}{c}{Models} \\ \hline Source & Target & BERT & AGC & MASKER & C2L & ACWG & Source & Target & BERT & AGC & MASKER & C2L & ACWG \\ \hline \multirow{4}{*}{books} & books & 81.27 & 80.25 & 81.85 & 82.39 & 82.68 & & books & 79.84 & 77.69 & 80.60 & 79.05 & 81.76 \\ & dvd & 80.51 & 79.55 & 80.00 & 79.66 & 81.04 & dvd & 76.68 & 78.81 & 75.85 & 75.29 & 80.87 \\ & electronics & 70.76 & 70.22 & 68.76 & 69.23 & 77.57 & dvd & electronics & 62.61 & 65.16 & 68.53 & 69.10 & 77.41 \\ & kitchen & 72.77 & 78.13 & 79.94 & 78.12 & 79.31 & kitchen & 68.44 & 75.46 & 73.02 & 80.18 & 83.30 \\ & Average & 77.70 & 77.04 & 77.64 & 77.35 & **80.15** & & Average & 71.89 & 74.28 & 74.50 & 75.91 & **80.84** \\ \hline \multirow{4}{*}{electronics} & books & 83.92 & 79.47 & 77.68 & 83.92 & 85.74 & & books & 84.61 & 81.33 & 87.16 & 85.52 & 87.96 \\ & dvd & 73.51 & 73.79 & 73.03 & 71.44 & 77.36 & & dvd & 67.32 & 73.92 & 68.96 & 73.21 & 76.57 \\ & electronics & 75.40 & 80.04 & 76.85 & 80.23 & 78.30 & kitchen & electronics & 69.74 & 77.16 & 73.47 & 76.73 & 78.50 \\ & kitchen & 84.22 & 85.31 & 84.42 & 84.40 & 84.95 & kitchen & 81.20 & 83.35 & 81.26 & 83.80 & 83.86 \\ & Average & 79.26 & 79.65 & 77.32 & 80.00 & **81.59** & & Average & 75.71 & 78.94 & 77.71 & **79.82** & **81.72** \\ \hline \multirow{4}{*}{mr} & mr & 85.51 & 85.14 & 84.81 & 85.57 & 85.34 & & mr & 61.31 & 56.44 & 80.79 & 65.14 & 68.35 \\ & foods & 69.95 & 75.61 & 60.95 & 77.48 & 83.95 & & foods & 96.40 & 94.66 & 96.12 & 96.20 & 96.29 \\ & stst2 & 91.97 & 92.32 & 92.43 & 91.40 & 91.74 & foods & sst2 & 70.64 & 72.71 & 72.33 & 73.97 & 76.38 \\ & kindle & 84.06 & 84.60 & 85.14 & 84.79 & 85.97 & & kindle & 72.44 & 76.57 & 74.05 & 78.76 \\ & Average & 82.87 & 84.42 & 80.83 & **84.76** & 86.76 & **86.75** & **Average** & 75.20 & 78.68 & 
76.89 & **79.95** \\ \hline \multirow{4}{*}{sst2} & mr & 87.76 & 88.04 & 87.59 & 85.81 & 82.87 & & mr & 80.47 & 81.40 & 79.40 & 80.39 & 81.04 \\ & foods & 81.01 & 82.24 & 82.65 & 81.56 & 86.45 & & foods & 83.47 & 83.98 & 82.76 & 86.00 & 84.62 \\ & sst2 & 91.74 & 91.85 & 91.74 & 92.20 & 91.86 & kindle & sst2 & 86.47 & 84.06 & 84.14 & 85.09 & 86.94 \\ & kindle & 85.11 & 85.36 & 85.87 & 85.81 & 85.21 & kindle & sst2 & 89.21 & 89.29 & 89.43 & 89.11 & 89.52 \\ & Average & 86.41 & 86.87 & 86.96 & 86.92 & 88.07 & Average & 84.91 & 84.68 & 83.93 & 85.15 & 85.53 \\ \hline \multirow{4}{*}{Davidson} & Davidson & 96.46 & 96.48 & 96.06 & 96.41 & 96.32 & Davidson & 82.23 & 82.41 & 82.52 & 83.58 & 83.84 \\ & OffEval & 79.53 & 80.70 & 80.35 & 80.47 & 80.81 & & OffEval & 83.27 & 85.35 & 82.83 & 83.07 & 84.53 \\ & Abusive & 76.91 & 77.78 & 77.06 & 80.20 & 79.11 & OffEval & Abusive & 80.73 & 81.58 & 88.85 & 83.07 \\ & ToxicTweets & 74.72 & 79.62 & 81.09 & 80.21 & 82.61 & ToxicTweets & 82.79 & 87.49 & 85.25 & 88.14 & 88.98 \\ & Average & 81.91 & 86.85 & 83.82 & 84.32 & **84.70** & & Average & 82.88 & 84.60 & 82.75 & 80.84 & **85.11** \\ \hline \multirow{4}{*}{Abusive} & Davidson & 81.86 & 84.44 & 83.51 & 82.37 & 84.37 & Davidson & 87.80 & 85.92 & 86.97 & 86.66 & 86.51 \\ & OffEval & 79.42 & 81.83 & 77.79 & 78.48 & 80.91 & OffEval & 79.42 & 81.83 & 77.79 \\ \hline \hline \end{tabular} \end{table}

for noise compared to the collective decision-making of multiple word-groups. But this does not mean that more word-groups will bring better results, because with the increase of word-groups, groups with lower causal effect will be included in the decision-making group, which will also introduce potential noise. As a result, the optimal results of most datasets are produced at 2 or 3, except for dvd and ToxicTweets. Therefore, we choose a stable value \(l=3\) as the parameter for all datasets, even though this parameter may not represent the optimal results.
#### 5.1.4 Ablation Study

Ablation experiments are performed to analyze the effectiveness of the proposed key components in Figure 5, to further answer **Q1**. Specifically, two ACWG variants are considered: **WO voting** and **WO word-groups**. The former deletes the word-groups voting mechanism of Section 3.3, that is, the word-group with the highest score is used directly. The latter means that the proposed word-group search method is not used, and only the keywords with the greatest causal effect are used for automatic counterfactual substitution. We observe that for all datasets, both variants of ACWG result in performance degradation. Among them, **WO voting** causes a smaller decline than **WO word-groups**, which indicates that word-group mining is the main cause of ACWG's performance improvement. The voting mechanism is built on top of word-groups, so the performance degradation of **WO voting** is lower.

Fig. 3: The optimum \(\lambda\) for different datasets and the comparison with BERT. The dataset titles under different subgraphs represent the source domains for training; the datasets in the legends represent the target domains for testing. Different combinations of colors and symbols represent different datasets.

### _Q2: Text Attack_

To further verify the robustness of the proposed method (to answer **Q2**), several basic text attack methods are used to perturb the original text. **Approaches. Probability Weighted Word Saliency (PWWS)**[51], a greedy algorithm with a new word substitution strategy based on both word saliency and classification probability. **TextBugger**[52] finds the most important sentence, uses a scoring function to look for keywords in the sentence, and then attacks those keywords. **TextFooler**[53] looks for the keywords that contribute most to the sentence prediction by deleting words in sequence, and attacks the text by replacing those keywords. **Details**.
We attack the test sets of all the above datasets except for the Multi-Domain Sentiment Dataset, then test the performance of the different models on the attacked test sets. However, attacks often act on more than one token in a sample, so to prevent the semantics of the sample from changing too much, a constraint is added to limit the number of attacked tokens to \(K\). For the Multi-Domain Sentiment Dataset, due to its long text length, the search time required for word replacement is estimated to exceed 24 hours on the four NVIDIA A40 GPUs, so it is not discussed further. **Attack Results**. We report the response of the different models to attacks on the test sets in Figure 6. The effectiveness of the different attack methods is demonstrated, because they perturb the sample by retrieving the most important words. As a result, we observe significant performance degradation due to text attacks on the 8 datasets, especially as the number of attacked words increases. But in most cases, a robust model can increase resistance to attacks, whether the word-based C2L or the word-groups-based ACWG. Furthermore, the word-groups-based approach is more effective at resisting attacks than the single-word-based models, because word-groups contain a more rational causal structure and are more diverse. ACWG shows a trend where its advantage over BERT increases as the number of attacked words increases. This is also due to ACWG's learning of word-groups, which makes it more robust when dealing with multiple attacked words.

### _Q3: Gender Fairness_

Furthermore, although our method does not specifically target fairness on minority groups, such as gender and race, robust feature learning still helps to alleviate the bias of the model [54]. To verify this idea and answer **Q3**, we explore gender bias, which has been extensively studied, using a set of gender attribute terms given by [55].
If a sample contains any of the keywords in the gender attribute terms, then we assume that the sample is likely to exhibit gender unfairness. We screen potential gender-bias samples in the test sets of Davidson and ToxicTweets, since they have more samples. Subsequently, the trained model is used to test fairness on the above subsets, using the following metrics. **Perturbation Consistency Rate (PCR)**. PCR is used to assess the robustness of the model to gender perturbations of the sample; it measures the percentage of predictions that do not change when a gender attribute term in a sample is replaced with the opposite word. For example, if the sample _'She is a good girl'_ is predicted by the model as positive, then its gender-perturbed counterpart _'He is a good boy'_ should have the same prediction result. If the results differ, the model may be gender-sensitive and make unfair judgments about _She_ and _He_. **False Positive Equality Difference (FPED) and False Negative Equality Difference (FNED)**[56]. They are relaxations of Equalized Odds (also known as Error Rate Balance), defined by [57] as follows: \[\begin{split} FPED=\sum_{z}|FPR_{z}-FPR_{all}|,\\ FNED=\sum_{z}|FNR_{z}-FNR_{all}|,\end{split} \tag{8}\] where \(FPR_{all}\) and \(FNR_{all}\) denote the False Positive Rate and False Negative Rate on the whole test set, and \(FPR_{z}\) and \(FNR_{z}\) represent the results in the corresponding gender group \(z\), where \(z=\{male,female\}\). The lower their values, the fairer the model.

Fig. 4: The average value of the target-domain generalization effect on different dataset groups as the number of word-groups \(l\) varies.

Fig. 5: The average performance under different source domains for the ablation study. 'WO' denotes 'without'.

**Fairness Results.** The measurement of fairness is reported in Table IV.
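For reference, the FPED and FNED of Eq. (8) can be computed as in the following sketch; the labels, predictions, and gender groups below are made up purely for illustration:

```python
# Hedged sketch of Eq. (8) on toy data.

def rates(y_true, y_pred):
    # FPR and FNR; assumes both classes are present in y_true
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / y_true.count(0), fn / y_true.count(1)

def fped_fned(y_true, y_pred, groups):
    fpr_all, fnr_all = rates(y_true, y_pred)
    fped = fned = 0.0
    for z in set(groups):
        idx = [i for i, g in enumerate(groups) if g == z]
        fpr_z, fnr_z = rates([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
        fped += abs(fpr_z - fpr_all)
        fned += abs(fnr_z - fnr_all)
    return fped, fned

# Toy example: the model over-flags the "male" group and under-flags "female".
y_true = [0, 0, 1, 1, 0, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 0, 0, 1]
groups = ["male"] * 4 + ["female"] * 4
fped, fned = fped_fned(y_true, y_pred, groups)
```

A perfectly group-balanced classifier would yield FPED = FNED = 0.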
For PCR, ACWG outperforms BERT on both Davidson and ToxicTweets, indicating that the proposed method is more stable when flipping the gender attributes, without misjudgments due to differences between male and female. In addition, the lower FPED and FNED also indicate that ACWG makes more balanced predictions for the male/female samples, further verifying its fairness. ACWG's fairness also stems from the more explicit causal features reflected in word-groups: since gender is not the actual cause of the model's predictions, it is easy for ACWG to exclude the influence of such non-causal features.

### _Q4: Label Flipping Rate_

Similar to [58], we want to measure the quality of the samples generated by the automatic counterfactual. If a counterfactual sample is effective, it should yield an oppositely labeled sample compared to the ground-truth label. Further, this opposite sample can be used to enrich the training data and induce the model to consider the word-groups that represent robust features. Therefore, the **Label Flipping Rate** (LFR) is adopted to measure the effectiveness of generating counterfactual data. It is defined as the ratio at which counterfactual flipping of a sample predicts a different result compared to the ground-truth label: \[LFR=1-\frac{\sum_{x\in\mathcal{X}}\Xi(y=argmax(p(\bar{x})))}{|\mathcal{X}|}, \tag{9}\] where \(\mathcal{X}\) is the data set on which the counterfactual is executed, \(y\) represents the ground-truth label of the sample corresponding to \(x\), and \(\Xi\) is the indicator function.
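The LFR of Eq. (9) reduces to a few lines; the toy ground-truth labels and counterfactual predictions below are illustrative only:

```python
# Hedged sketch of Eq. (9): the fraction of counterfactuals whose predicted
# label differs from the ground-truth label of the original sample.

def label_flipping_rate(y_true, y_pred_cf):
    unchanged = sum(y == p for y, p in zip(y_true, y_pred_cf))
    return 1.0 - unchanged / len(y_true)

# Toy predictions on 5 counterfactual samples: 3 of 5 flip away from y.
lfr = label_flipping_rate([1, 0, 1, 1, 0], [0, 0, 0, 1, 1])  # -> 0.6
```

In practice, `y_pred_cf` would be `argmax` over the model's probability distribution on each counterfactually substituted sample.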
The LFR scores for three cases are calculated: **single word** uses the keyword with the maximum causal effect for counterfactual substitution, **word group** \(l=1\) uses the word-group with the maximum causal effect for counterfactual substitution, and **word group** \(l=3\) denotes that if any one of the three word-groups causes a label flip, the counterfactual is successful. **LFR Results.** We show the corresponding results in Figure 7. For automatic counterfactual substitution of a single word, the values of LFR are smaller, especially for books, dvd, electronics, and kitchen, which contain longer texts (less than 10%). This is due to the fact that the longer the text, the greater the number of words required for semantic flipping. When the word-group search method is used (word group \(l=1\)), we can increase the LFR by searching for combinations of words of different lengths, because word-groups carry a stronger causal effect.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Davidson} & \multicolumn{2}{c}{ToxicTweets} \\ & BERT & ACWG & BERT & ACWG \\ \hline PCR (\(\%\uparrow\)) & 99.10 & 99.51 & 99.04 & 99.22 \\ FPED (\(\downarrow\)) & 0.0201 & 0.0116 & 0.0228 & 0.0181 \\ FNED (\(\downarrow\)) & 0.1441 & 0.0949 & 0.0406 & 0.0389 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Measurement results of gender fairness compared with BERT on Davidson and ToxicTweets. \(\uparrow\) indicates that the larger the value, the fairer the model, while \(\downarrow\) is the opposite.

Fig. 6: When the number of attacked words is different (1, 2, 3), the three attack methods lead to performance changes of BERT, C2L and ACWG. Different groups of graphs show the results of the same dataset, with different colors representing different fine-tuned models. Different styles of polylines represent different attack methods; different subgraphs represent different datasets.
For all datasets, \(l=3\) leads to the largest LFR, because the greater the number of word-groups, the more likely they are to contain true causal features. But we also notice that \(l=3\) provides only a small boost over \(l=1\), which again shows that more word-groups are not always better, as they introduce more noise and increase complexity. This also explains the need for the proposed voting mechanism. Therefore, based on the above observations, we can answer **Q4**. The effectiveness of ACWG comes from the richer semantics brought by automatic counterfactual substitution: the samples obtained after automatic counterfactual substitution are a useful augmentation for the LPLMs. The key causal features found by the word-groups search method enhance the efficiency of this counterfactual gain, thus inducing LPLMs to focus on more robust causal features.

### _Q5: Case Study_

Further, we carry out an in-depth analysis of the proposed framework through case studies, so as to show the working mechanism of the model more clearly. Specifically, several sets of cases are carefully studied to explore the true effects of the proposed word-groups, as shown in Table V. For the sample from sst2, the proposed method easily finds the word-group _'beautiful loves'_. The word-group contains two words with a significant positive predisposition, and thus determines that the prediction is positive. For the sample from foods, _no purchase recommend_ is found, expressing a negative assessment. Further, for the toxic cases, multiple insults are found in the sample, such as _ass, asshole, bitch_, and _pussy_. These words together constitute the toxicity of the sample, and deleting any one of them does not eliminate that toxicity. In addition, we find that there are similarities among different word-groups of the same sample, and the voting mechanism can further strengthen the causal features by capturing such similarities.
## 6 Conclusion and Future Work

In this paper, we propose a word-group mining method to enhance the robustness of large-scale pre-trained language models in the face of shortcut learning in text classification. Based on the maximum causal effect, we search combinations of keywords to obtain robust combinations of causal features. Further, word-groups are used for automatic counterfactual generation to augment the training samples, and finally, contrastive learning is used to induce model fine-tuning to improve robustness. We conduct extensive experiments on 8 sentiment classification and 4 toxic text detection datasets, and confirm that the proposed method can effectively improve the model's cross-domain generalization, robustness against attacks, and fairness. However, fine-tuning some of the existing hyperscale language models, such as GPT-3 [59] and LLAMA [60], is very difficult. Therefore, in future work, we will try to explore large-scale generative language models and analyze them from multiple perspectives for shortcut learning problems. In addition, we will study how to improve the robustness and fairness of language models by combining interpretability and prompt learning without fine-tuning.

\begin{table} \begin{tabular}{c l|c|c|c|c} \hline Datasets & \multicolumn{2}{c}{Text} & \multicolumn{2}{c}{Word-groups \(l=3\)} & Category \\ \hline sst2 & that loves & its characters and communicates something rather beautiful & beautiful loves; and beautiful loves; beautiful lovesomething & positive \\ & about human nature & & & & \\ \hline foods & this coffee is strong but no flavor, no taste, no aroma.
poor choice do not try: i would not recommend to purchase & no purchase recommend; no recommend & negative \\ \hline Abusive & ben affleck is the best those other people who are aguring with them & asshole screw; asshole beat screw; asshole screw & asshole screw & toxic \\ & and screw & the other guy beat him looks like gun with hair that long, ass & & \\ & uply nose been affleck is the best defending muslims even when he is not & its just the best brings a tear to my eyeawata & & \\ \hline Davidson & i got some lightsknin pussy & one time and the bitch & damn near had & bitch damn pussy; bitch damn i; bitch pussy & toxic \\ & me bout to propose. had some i had to immediately & & & \\ \hline \end{tabular} \end{table} TABLE V: Case studies from different datasets. The yellow text box shows the word-group with the highest causal effect score.

Fig. 7: The label flipping rate (%) caused by different counterfactual approaches. Single word denotes the **WO word-groups** method in Section 5.1.4, \(l=1\) denotes the **WO voting** method.

## Acknowledgments

The authors would like to thank....
2308.00957
Causal Inference with Differentially Private (Clustered) Outcomes
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their potentially sensitive responses. Of the many ways of ensuring privacy, label differential privacy is a widely used measure of an algorithm's privacy guarantee, which might encourage participants to share responses without running the risk of de-anonymization. Many differentially private mechanisms inject noise into the original data-set to achieve this privacy guarantee, which increases the variance of most statistical estimators and makes the precise measurement of causal effects difficult: there exists a fundamental privacy-variance trade-off to performing causal analyses from differentially private data. With the aim of achieving lower variance for stronger privacy guarantees, we suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure of the data while still allowing for the estimation of causal effects. We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees. We compare its performance, theoretically and empirically, to that of its unclustered version and a more extreme uniform-prior version which does not use any of the original response distribution, both of which are special cases of the Cluster-DP algorithm.
Adel Javanmard, Vahab Mirrokni, Jean Pouget-Abadie
2023-08-02T05:51:57Z
http://arxiv.org/abs/2308.00957v2
# Causal Inference with Differentially Private (Clustered) Outcomes

###### Abstract

Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their potentially sensitive responses. Of the many ways of ensuring privacy, label differential privacy is a widely used measure of an algorithm's privacy guarantee, which might encourage participants to share responses without running the risk of de-anonymization. Many differentially private mechanisms inject noise into the original data-set to achieve this privacy guarantee, which increases the variance of most statistical estimators and makes the precise measurement of causal effects difficult: there exists a fundamental privacy-variance trade-off to performing causal analyses from differentially private data. With the aim of achieving lower variance for stronger privacy guarantees, we suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure of the data while still allowing for the estimation of causal effects. We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees. We compare its performance, theoretically and empirically, to that of its unclustered version and a more extreme uniform-prior version which does not use any of the original response distribution, both of which are special cases of the Cluster-DP algorithm.

## 1 Introduction

Measurement and experimentation are essential tools to improve any user-facing product. Technology companies routinely run randomized experiments (Imbens and Rubin, 2015), also known as A/B tests, to compare the performance of a new product or iteration (the treatment) to some well-chosen baseline (the control). Randomized experiments are also used to evaluate the impact of new drugs, in the form of clinical trials, or to inform public policy.
Measuring causal effects from these randomized experiments assumes that participants are willing to share their potentially sensitive or private response to treatment. This assumption is constantly challenged by the rise of privacy concerns and the regulations for protecting individuals' online data. Many participants and regulatory guidelines agree with sharing some degree of information, as long as there is so-called plausible deniability, meaning no response can be tracked to any individual user, for example, by sharing only aggregated data. However, aggregating data is often not sufficient to entirely prevent the risk of de-anonymization (Sweeney, 2000; Narayanan and Shmatikov, 2008). Differential privacy has emerged as a solid contender among the possible frameworks under which user outcomes might be shared while diminishing the risk of de-anonymization. It formalizes the notion that two privatized datasets are unlikely to differ in any measurable way if the true responses differ by a single point. Ensuring such a privacy guarantee often comes at the cost of injecting additional noise into the original dataset, which increases the variance of statistical estimators. This _privacy-variance trade-off_ is crucial for causal inference applications, since randomized experiments aim to obtain the most precise measurements possible of a causal effect. The goal of our paper is to design a mechanism for estimating causal effects from differentially private outcomes in a way that improves this privacy-variance trade-off. We focus our investigation on the label-differentially-private setting, which is a more relaxed measure of privacy that only concerns the privacy of responses (labels), as opposed to all features. We also assume that outcomes enjoy some non-private cluster structure, for example geographic regions or broad demographic classes.
The clusters need not meet any cardinality or quality constraints--our results extend to all singleton clusters, random clusters, or even a single cluster. Finally, we postulate the existence of a central unit that can observe all outcomes and the cluster information, and computes privatized outcomes. We motivate this setting in Section 2. We propose a privacy-preserving mechanism where outcomes are either reported truthfully with some probability or re-sampled from a learned but modified distribution of outcomes within the same cluster by the central unit. We show that valid causal inference is possible while achieving a differentially-private guarantee on outcomes. We evaluate, theoretically and empirically, the privacy-variance trade-off of other private mechanisms, and show that ours improves upon these as the outcome distributions become more homogeneous within the given clusters. In Section 2, we introduce and motivate the differential privacy setting and causal objective of our work. In Section 3, we introduce our main clustered differentially private mechanism, Cluster-DP, and evaluate its privacy guarantee. We also consider a special case of the algorithm, Cluster-Free-DP, where the data is grouped into one single cluster. In Section 4, we provide a causal estimator which uses only the privatized outcomes, treatment assignments, and cluster memberships of each unit. We show that it is unbiased and consistent for the common average treatment effect estimand, and compute an upper-bound of its variance, which we relate to its non-differentially-private counterpart. In Section 5, we consider several baselines, starting with another special case of our differential privacy mechanism, Uniform-Prior-DP. This special case does not rely on a learned empirical distribution of outcomes and instead samples uniformly from the space of possible outcomes. We compare the performance of each mechanism theoretically. 
In Section 6, we validate these claims and evaluate the privacy-variance trade-off in a simulated example. Finally, we replicate some of these claims on a real YouTube social graph in Section 7.

### Related works

There are different approaches to preserving the privacy of user data regardless of any downstream analysis of it. A popular approach involves anonymizing data by removing, aggregating, or randomizing identifying details in some way before releasing the data to the public, in the hope of making 're-identification' of users difficult. Several privacy measures have been proposed under this approach, including \(k\)-anonymity (Sweeney, 2002, 2000), \(\ell\)-diversity (Machanavajjhala et al., 2007), and \(t\)-closeness (Li et al., 2006). While providing a layer of privacy protection, there are well-documented cases where de-identified data has been combined with other data sources to uniquely re-identify a large proportion of users (Narayanan and Shmatikov, 2008; Sweeney, 2000). In this work, we consider the differential privacy measure, which is a property of a data-processing algorithm, rather than of a data set (Dwork et al., 2006, 2006). Differential privacy is widely used and extensively covered. In this work, we focus on label differential privacy, introduced by Chaudhuri and Hsu (2011). The literature on label differential privacy is mostly dedicated to classification and regression tasks with the goal of improving excess risk while offering protection for labels (Beimel et al., 2013; Bassily et al., 2018; Wang and Xu, 2019). More recently, there have been several papers improving utility-privacy tradeoffs of label DP algorithms (Esfandiari et al., 2022; Ghazi et al., 2021, 2022). Our mechanism is inspired by a technique in (Esfandiari et al., 2022), which we adapt to the estimation of causal effects.
We further provide privacy-variance tradeoffs and a tighter analysis of the privacy guarantee than the proof methodology of (Esfandiari et al., 2022), which we extend to an \((\varepsilon,\delta)\)-type guarantee (See Theorem 3.2). Panigrahi et al. (2022) studies the problem of treatment effect estimation after adjusting for potential confounders from different independent studies. Using a Lasso estimator, a parsimonious model is selected in each study and an unbiased estimator is constructed by aggregating simple summary statistics. While sharing only the summary statistics provides some layer of protection, this work does not provide any differential privacy guarantees. Closer to our work, Kancharla and Kang (2021) also study the problem of average treatment effect estimation from a randomized control trial, where outcomes have been privatized using a differentially private mechanism. Specifically, they consider a binary outcome space and consider a mechanism, which for any given true response \(y_{i}\), either returns \(\tilde{y}_{i}=0\) with probability \(r_{0}\), \(\tilde{y}_{i}=1\) with probability \(r_{1}\) or returns the true outcome \(\tilde{y}_{i}=y_{i}\) with probability \(1-r_{0}-r_{1}\) for some choice of probabilities \(r_{0},r_{1}\in(0,1)\). Our setting and results differ significantly in that (1) we go beyond binary responses and consider a general discrete outcome space; (2) our mechanism leverages a clustering structure of responses to improve the privacy-variance trade-off, and we quantify the impact of cluster quality on the variance of the estimator; (3) we propose a stratified estimator of the average treatment effect. In the absence of non-compliers, the procedure in (Kancharla and Kang, 2021) can be viewed as a special case of ours, for binary outcomes and assuming no-cluster structure (see Equation 7 for further detail). Betlei et al. 
(2021) focus on the randomized control trial set-up and propose a differentially private method, called ADUM, to learn uplift models from data aggregated along a given partition of the feature space. The privacy-utility trade-off is studied by computing the mean-squared error of the estimator and its dependence on the underlying partition size and privacy budget. The analysis is for uni-dimensional feature spaces and makes the assumption that every bin has the same number of treated and controlled units. The ADUM mechanism adds Laplace noise to the count and the sum of responses from treated and controlled groups within each bin, and uses the (noisy) aggregate responses to estimate the conditional average treatment effect (CATE). While the bins can be thought of as clusters based on the features, their work does not use this clustering structure to reduce the variance of the estimator resulting from the added noise to achieve differential privacy. The ADUM mechanism is very similar to the noisy Horvitz-Thompson estimator baseline discussed in Section 5.1, with the difference that ADUM adds noise to the count and the sum of responses, while the noisy Horvitz-Thompson estimator baseline adds noise directly to the average response of treated and controlled groups in each cluster (cf. Equation (8)). Niu et al. (2022) propose a meta-algorithm for multi-stage learning under privacy constraints and apply that to CATE estimation. The methodology relies on multiple sample-splittings where different parts of the sample are used to estimate different components of the estimator. The approach uses DP-EBMs (Nori et al., 2021) as the base learner, and conducts a privacy analysis using the sample splitting structure of the algorithm and the parallel composition property of differential privacy. They study the privacy-accuracy trade-off empirically. This framework goes beyond randomized control trials and allows for heterogeneous causal effects. 
However, they do not leverage the cluster structure of the data to improve the variance-privacy tradeoff, nor do they provide private 'unit-level' data to the experimenters (e.g. advertisers in our motivating example in Section 2); instead their work aims to estimate the propensity model and the outcome model in a differentially-private way.

## 2 Privacy setting and causal objective

We are motivated by the following scenario of a technology company in the business of selling advertising space to advertisers. Advertisers wish to measure the effectiveness of their advertising campaigns by running A/B tests. Advertisers do not wish to rely on this technology company to provide estimates themselves and wish for the technology company to provide user-level data, such as whether they clicked on their ad as well as any meaningful covariates. One reason for this might be that advertisers wish to run their own proprietary covariate-adjustment methods, and would not be content with summary data of the treatment effect. On the other hand, this technology company seeks to protect the privacy of its users. Not all attributes are equally sensitive, however. For example, whether a user clicked on an ad might be more sensitive than the region the click came from. In this setting, illustrated in Figure 1, the central model is motivated by the idea that the technology company can privatize the dataset before passing it on to advertisers. The label-DP model is motivated by the advertising setting where only user outcomes are sensitive, whereas cluster information and other covariates are not. We consider a fixed population of \(n\) users, henceforth units, where we can assume the Stable Unit Treatment Value Assumption (Imbens and Rubin, 2015). In particular, there is no interference between units and no hidden levels of treatment. Let \(y_{i}(0)\) be the potential outcome of unit \(i\) if it is controlled, and \(y_{i}(1)\) if it is treated.
These potential outcomes are sampled from a finite response space \(\mathcal{Y}\) of cardinality \(K=|\mathcal{Y}|\). While finite response spaces are common in many discrete settings (e.g. number of clicks or impressions), outcomes are continuous in many real-world settings. We then suggest binning, as illustrated in Section 6.

Figure 1: Illustration of our label-DP mechanism with a central unit computing the (clustered) privatized outcomes for valid causal inference.

We further assume that there is some known clustered structure of these units into \(C=|\mathcal{C}|\) non-overlapping clusters of size \(n_{c}\). We let \(c_{i}\in\mathcal{C}\) be the cluster membership of unit \(i\). These clusters may be geographic regions or broad demographic groups the units belong to. We do not make any assumptions on the number of clusters or their size; our results hold for a single cluster and singleton clusters, balanced or unbalanced. The strength of our results, however, improves with a specific measure of cluster quality, which we will introduce in Section 4. While our results are expressed in the common finite sample setting, it is easy to consider their super-population equivalents. For this purpose, we will sometimes denote by \(x_{i}\in\mathbb{R}^{d}\) the covariate vector of each unit \(i\) such that each \((x_{i},y_{i}(0),y_{i}(1),c_{i})\) is drawn from some joint distribution \(\mathcal{P}\,\). Let \(z_{i}\) correspond to the treatment assignment of unit \(i\), \(z_{i}=1\) if treated and \(z_{i}=0\) if controlled. Let \(n_{1}\) be the total number of treated units and \(n_{0}\) be the total number of controlled units across units. The treatment assignment is sampled in a completely randomized way over clusters: a fixed number of \(n_{1,c}\) (resp. \(n_{0,c}\)) units is chosen uniformly at random to be treated (resp. controlled) within each cluster \(c\in\mathcal{C}\).
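The cluster-stratified completely randomized assignment just described can be sketched in a few lines; the helper name and dictionary interface below are illustrative assumptions, not part of the paper:

```python
import numpy as np

def stratified_assignment(cluster_ids, n_treated_per_cluster, rng=None):
    """Completely randomized assignment within each cluster: exactly
    n_treated_per_cluster[c] units of cluster c receive z_i = 1, chosen
    uniformly at random; all remaining units receive z_i = 0."""
    rng = np.random.default_rng(rng)
    cluster_ids = np.asarray(cluster_ids)
    z = np.zeros(len(cluster_ids), dtype=int)
    for c, n1c in n_treated_per_cluster.items():
        members = np.flatnonzero(cluster_ids == c)   # units in cluster c
        treated = rng.choice(members, size=n1c, replace=False)
        z[treated] = 1
    return z

# Two clusters of sizes 4 and 2; treat exactly 2 and 1 units respectively.
clusters = [0, 0, 0, 0, 1, 1]
z = stratified_assignment(clusters, {0: 2, 1: 1}, rng=7)
```

With a single cluster containing all units, this reduces to the usual completely randomized assignment mentioned in the text.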
Our results generalize to the single cluster setting where we recover the completely randomized assignment. Our objective is to estimate the average treatment effect estimand, defined by \(\tau=\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}(1)-y_{i}(0)\right)\). Unlike a typical experiment, we do not observe outcomes \(y_{i}(z_{i})\) directly. Instead, we observe privatized outcomes \(\tilde{y}_{i}\) for each unit \(i\), where \(\tilde{y}_{i}\) is returned by our proposed differentially private mechanism, which we describe in Section 3. We cover how to estimate \(\tau\) from \((z_{i},\tilde{y}_{i}(z_{i}),c_{i})_{i\in[n]}\) in Section 4. To guide the reader through abundant notation, we include a glossary at the beginning of the Appendix.

## 3 The Cluster-DP and Cluster-Free-DP Mechanisms

We now introduce our main differentially private mechanism, Cluster-DP, and then consider a special case, Cluster-Free-DP. The central unit observes the treatment assignments, realized potential outcomes, and cluster memberships for all units \((z_{i},y_{i}(z_{i}),c_{i})_{i\in[1,n]}\). In the super-population setting, it also observes the covariates \(x_{i}\) of each unit. It then returns a privatized potential outcome \(\tilde{y}_{i}\) for each unit, observed by the experimenter, using a mechanism similar to the one proposed by Esfandiari et al. (2022), which either returns the true outcome or samples from a transformed empirical distribution of responses from units in the same cluster. Randomization in each unit's response provides "plausible deniability", meaning that the returned response can be linked to the randomization, rather than the actual response, allowing the users to freely respond to the treatment, while preserving their privacy. The details of the mechanism are given in Algorithm 1, which we summarize below. The mechanism is applied to each cluster individually, independently of other clusters. The treated and controlled groups are also dealt with separately.
For the sake of exposition, we focus on the controlled units of a given cluster \(c\in\mathcal{C}\). The same procedure will be repeated for the treated units of cluster \(c\), and so on for all other clusters.

1. Compute the empirical response distribution of the controlled units in the cluster, \(\hat{p}_{0}(y|c)\).
2. Add noise drawn from a Laplace distribution with parameter \((\sigma/n_{0,c})\) to each response probability. Recall that \(n_{0,c}\) is the number of controlled units in cluster \(c\).
3. Truncate the response probabilities to be within the interval \([\gamma,1]\), with \(\gamma\in[0,\nicefrac{{1}}{{K}}]\).
4. Renormalize the response probabilities to form a distribution. We follow a specific renormalization so that the resulting response probabilities remain in \([\gamma,1]\), and add up to one.
5. With probability \(\lambda\), each original response is replaced by a random sample from the distribution constructed in the previous step.

``` Parameters: threshold \(\gamma\in[0,\nicefrac{{1}}{{K}}]\); noise scale \(\sigma\geq 0\); re-sampling probability \(\lambda\in[0,1]\) Input: Individual responses \(y_{1},\ldots,y_{n}\), treatment assignments \(z_{1},\ldots,z_{n}\).
Output: Privatized responses \(\tilde{y}_{1},\ldots,\tilde{y}_{n}\) // Compute noisy response distribution per cluster and treatment group for\(c\in\mathcal{C}\)do for\(a\in\{0,1\}\)do // Add noise to each empirical probability distribution \(\hat{p}_{a}(y|c)\) and truncate for\(y\in\mathcal{Y}\)do \(w\sim\text{Laplace}(\sigma/n_{a,c})\) \(q_{a}(y|c)\leftarrow\max\{\gamma,\min\{1,\hat{p}_{a}(y|c)+w\}\}\) for\(a\in\{0,1\}\)do // Renormalize each distribution for\(y\in\mathcal{Y}\)do if\(\sum_{y}q_{a}(y|c)>1\)then\(\zeta_{y}\gets q_{a}(y|c)-\gamma\); else\(\zeta_{y}\gets 1-q_{a}(y|c)\) for\(y\in\mathcal{Y}\)do \(\tilde{q}_{a}(y|c)\gets q_{a}(y|c)+\frac{\zeta_{y}}{\sum_{y^{\prime}} \zeta_{y^{\prime}}}\left(1-\sum_{y}q_{a}(y|c)\right)\) // Randomize responses for\(i\in\{1,\ldots n\}\)do \(\tilde{y}_{i}\leftarrow\begin{cases}y_{i}^{0}\sim\tilde{q}_{z_{i}}(\cdot|c_{i} )&\text{with probability }\lambda\\ y_{i}&\text{with probability }1-\lambda\end{cases}\) Return privatized responses \(\{\tilde{y}_{1},\ldots,\tilde{y}_{n}\}\). ``` **Algorithm 1**Our suggested differential privacy mechanism: Cluster-DP Before stating our result on the privacy guarantees of our suggested algorithm, we recall the formal definition of label-differential privacy. In the current context, a unit's "label" refers to its observed outcome; we use the words outcome and label interchangeably. **Definition 3.1**.: _(Label Differential Privacy) Consider a randomized mechanism \(M:D\rightarrow\mathcal{O}\) that takes as input a dataset \(D\) and outputs into \(\mathcal{O}\). Let \(\varepsilon,\delta\in\mathbb{R}_{\geq 0}\). 
A mechanism \(M\) is called \((\varepsilon,\delta)\)-label differentially private -- or \((\varepsilon,\delta)\)-label DP-- if for any two datasets \((D,D^{\prime})\) that differ in the label (outcome) of a single example and any subset \(O\subseteq\mathcal{O}\) we have \(\mathbb{P}[M(D)\in O]\leq e^{\varepsilon}\mathbb{P}[M(D^{\prime})\in O]+\delta\), where \(\varepsilon\) is the privacy budget and \(\delta\) is the failure probability. If \(\delta=0\), then \(M\) is said to be \(\varepsilon\)-label differentially private, or \(\varepsilon\)-label DP._ Achieving label-differential privacy implies that the output of a mechanism does not change too much if a single label in the input dataset is changed. The _privacy loss_\(\varepsilon\) controls the size of the possible change, and \(\delta\) is the _failure probability_ in providing such a guarantee. In other words, \((\varepsilon,0)\)-differential privacy ensures that, for _every_ run of the mechanism \(M\), the observed output is (almost) equally likely to be observed on every other neighboring dataset, simultaneously. The \((\varepsilon,\delta)\)-differential privacy property relaxes this constraint and states only that it is unlikely that the observed value \(M(D)\) has a much higher or lower chance to be generated under a dataset \(D\) compared to a neighboring dataset \(D^{\prime}\). Differential privacy can also be viewed from a statistical hypothesis testing framework, where an attacker aims to distinguish \(D\) from \(D^{\prime}\) based on the output of the mechanism. This viewpoint has been put forward by Wasserman and Zhou (2010) and Kairouz et al. (2015), who show that, by using the output of an \((\varepsilon,\delta)\)-DP mechanism, the power of any test with significance level \(\alpha\in[0,1]\) is bounded by \(e^{\varepsilon}\alpha+\delta\). 
For small enough \((\varepsilon,\delta)\), this bound is only slightly larger than \(\alpha\), and so any test which aims at distinguishing \(D\) from \(D^{\prime}\) is powerless. Our first result is the following theorem, which states the differential privacy guarantee of our proposed Cluster-DP mechanism. **Theorem 3.2**.: _Let \(\tilde{\varepsilon}>0\) and define \(\delta=\max(0,1-\lambda+\lambda\gamma(1-e^{\tilde{\varepsilon}}))\,\). The Cluster-DP mechanism described in Algorithm 1 is \((\varepsilon,\delta)\)-label DP with \(\varepsilon=\min\left(\frac{1}{\sigma},\frac{2}{\gamma}\right)+\tilde{ \varepsilon}\,\)._ **Corollary 3.3**.: _By setting \(\tilde{\varepsilon}=\log(1+\frac{1-\lambda}{\lambda\gamma})\), we have \(\delta=0\), and therefore, the Cluster-DP mechanism described in Algorithm 1 is also \(\varepsilon\)-label DP, with \(\varepsilon=\min\left(\frac{1}{\sigma},\frac{2}{\gamma}\right)+\log\left(1+ \frac{1-\lambda}{\lambda\gamma}\right)\,\)._ We refer the reader to the Appendix for a full proof of Theorem 3.2 and provide here some intuition for the privacy loss \(\varepsilon\) in Corollary 3.3. The first term, \(\min\left(\nicefrac{{1}}{{\sigma}},\nicefrac{{2}}{{\gamma}}\right)\), is the privacy budget used to privately estimate the empirical response distribution \(\tilde{q}_{a}(\cdot|c)\) for each cluster. Fixing the estimated distributions \(\tilde{q}_{a}(\cdot|c)\), the log term is the privacy budget used to generate the privatized responses \(\tilde{y}_{i}\). By the composition theorem for differential privacy (Dwork et al., 2014, Theorem B.1), the total privacy loss is given by the sum of these two losses. As expected, when the resampling probability goes to zero (\(\lambda\to 0\)), the privacy loss grows large (\(\varepsilon\to+\infty\)). Similarly, as the Laplace noise \(\sigma\) and truncation parameter \(\gamma\) grow large, our privacy guarantee improves (\(\varepsilon\to 0\)).
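The noise-truncate-renormalize step of Algorithm 1 and the budget of Corollary 3.3 can be sketched in a few lines of NumPy. This is an illustration of the mechanism as summarized above, with function names and toy parameters of our choosing, not the authors' implementation:

```python
import numpy as np

def privatize_distribution(p_hat, n_ac, sigma, gamma, rng):
    """Noise, truncate, and renormalize one empirical response
    distribution p_hat (length K) for a cluster/treatment arm, as in
    Algorithm 1. Requires gamma in [0, 1/K]."""
    K = len(p_hat)
    w = rng.laplace(scale=sigma / n_ac, size=K)        # Laplace(sigma/n_{a,c})
    q = np.clip(p_hat + w, gamma, 1.0)                 # truncate to [gamma, 1]
    # Renormalization keeping every entry in [gamma, 1] and summing to one.
    zeta = (q - gamma) if q.sum() > 1 else (1.0 - q)
    return q + zeta / zeta.sum() * (1.0 - q.sum())

def privatize_responses(y, q_tilde, lam, rng):
    """With probability lam, replace each response index by a draw
    from q_tilde; otherwise report it truthfully."""
    y = np.asarray(y)
    resample = rng.random(len(y)) < lam
    fresh = rng.choice(len(q_tilde), size=len(y), p=q_tilde)
    return np.where(resample, fresh, y)

def epsilon_label_dp(sigma, gamma, lam):
    """Privacy budget of Corollary 3.3 (delta = 0)."""
    return min(1 / sigma, 2 / gamma) + np.log(1 + (1 - lam) / (lam * gamma))

rng = np.random.default_rng(0)
p_hat = np.array([0.5, 0.3, 0.2])                      # empirical distribution, K = 3
q_tilde = privatize_distribution(p_hat, n_ac=100, sigma=2.0, gamma=0.05, rng=rng)
# q_tilde is a valid distribution with every entry >= gamma.
print(epsilon_label_dp(sigma=2.0, gamma=0.05, lam=0.5))
```

Consistent with the discussion above, increasing \(\sigma\) or \(\gamma\), or pushing \(\lambda\) toward one, shrinks the returned budget \(\varepsilon\).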
An important observation is that these privacy guarantees do not depend on the size, cardinality, or quality of the clusters. As a result, these results also hold for a special case of the Cluster-DP mechanism where there is no cluster structure to the data. In that case, we can repeat the mechanism in Algorithm 1 as if all units belong to the same large cluster, which we call the Cluster-Free-DP mechanism, with the same privacy guarantees.

## 4 An unbiased estimator and its variance

We now consider estimating causal effects from the privatized outcomes provided by our suggested Cluster-DP mechanism. Our estimand of interest is the average treatment effect \(\tau\), defined in Section 2, in the finite-sample setting. In the super-population setting, it would be defined as \(\tau:=\mathbb{E}_{\mathcal{P}}[y_{i}(1)-y_{i}(0)]\,\). We propose an estimator \(\hat{\tau}\) of the average treatment effect \(\tau\) using the privatized responses \(\{\tilde{y}_{i}\}\), cluster assignments \(\{c_{i}\}\), and treatment assignment values \(\{z_{i}\}\). For each cluster \(c\in\mathcal{C}\) and each value \(a\in\{0,1\}\) of treatment, we construct the response randomization matrix \(Q_{c,a}\in\mathbb{R}^{K\times K}\): \[Q_{c,a}[y^{\prime},y]:=(1-\lambda)\mathbb{I}(y^{\prime}=y)+\lambda\tilde{q}_{a }(y^{\prime}|c)\,. \tag{1}\] Conditional on its true outcome \(y_{i}\), treatment assignment \(z_{i}\), and cluster assignment \(c_{i}\), the privatized response \(\tilde{y}_{i}\) of unit \(i\) is distributed according to \(Q_{c_{i},z_{i}}[\tilde{y}_{i},y_{i}]\). Recall the notation \(\mathcal{Y}\) indicating the space of all potential outcomes. We use the notation \(\mathsf{y}\) to represent its vector form, with similar ordering of rows and columns as \(Q_{c_{i},z_{i}}\). We use the inverse of the response randomization matrix to debias the privatized responses.
Concretely, we show that the index \(\tilde{y}_{i}\) of the vector \(\mathsf{y}^{T}Q_{c,z_{i}}^{-1}\), which we write as \(\mathsf{y}^{T}Q_{c,z_{i}}^{-1}[\tilde{y}_{i}]\), is an unbiased estimate for \(y_{i}\), and propose the following estimator for \(\tau\): \[\hat{\tau}:=\sum_{c\in\mathcal{C}}\frac{n_{c}}{n}\sum_{i\in c}\left(\mathsf{y }^{T}Q_{c,z_{i}}^{-1}[\tilde{y}_{i}]\frac{z_{i}}{n_{1,c}}-\mathsf{y}^{T}Q_{c,z _{i}}^{-1}[\tilde{y}_{i}]\frac{1-z_{i}}{n_{0,c}}\right)\,, \tag{2}\] where \(n_{a,c}\) is the number of units in cluster \(c\) such that \(z_{i}=a\), and \(n_{c}=n_{0,c}+n_{1,c}\) is the total number of units in cluster \(c\). Our estimator \(\hat{\tau}\) takes the form of a stratified Horvitz-Thompson estimator, where each privatized outcome is reweighted by the inverse of its conditional probability of occurring \(Q_{c_{i},z_{i}}[\tilde{y}_{i},y_{i}]\). This implies that the central unit must pass along to the learner wishing to conduct causal inference: the cluster assignment, the treatment assignment, the privatized response \(\tilde{y}_{i}\), as well as the vector of probabilities \(\mathsf{y}^{T}Q_{c,z_{i}}^{-1}\) where \(\mathsf{y}=\{y\in\mathcal{Y}\}\) is the vector of all possible responses. Since \(\tilde{q}_{a}(\cdot|c)\) and the responses \(\tilde{y}_{i}\) are \(\varepsilon\)-DP, by the post-processing property of differential privacy (Dwork et al., 2014, Proposition 2.1), all the information passed to the learner, as well as any estimation based on this information, is also \(\varepsilon\)-DP. Our estimator has two\({}^{1}\) main sources of randomness. The first source of randomness is from the differentially private mechanism itself, which determines both the Laplace noise that is added to outcomes to form the learned distributions \(\tilde{q}_{a}(\cdot|c)\) and the probability \(\lambda\) of re-sampling each reported outcome. The second source of randomness is from the randomized assignment \(\mathbf{z}\) of units to treatment and control.
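The debiasing step in Eq. (2) rests on the identity \(\mathsf{y}^{T}Q^{-1}Q=\mathsf{y}^{T}\): averaging the debiased value over the reporting distribution \(Q[\cdot,y]\) recovers the true outcome \(y\). A small numerical check (toy outcome space and privatized distribution assumed for illustration) makes this concrete:

```python
import numpy as np

def response_matrix(q_tilde, lam):
    """Response randomization matrix of Eq. (1): Q[y', y] is the
    probability that the mechanism reports y' when the truth is y."""
    K = len(q_tilde)
    return (1 - lam) * np.eye(K) + lam * np.outer(q_tilde, np.ones(K))

yspace = np.array([0.0, 1.0, 2.0])    # outcome space Y (assumed values)
q_tilde = np.array([0.2, 0.5, 0.3])   # privatized distribution for one arm
Q = response_matrix(q_tilde, lam=0.4)

debias = yspace @ np.linalg.inv(Q)    # row vector y^T Q^{-1}

# Conditional unbiasedness: for each true outcome y, the expectation of
# the debiased report y^T Q^{-1}[y~] over Q[:, y] equals y exactly.
for y_idx, y in enumerate(yspace):
    assert np.isclose(debias @ Q[:, y_idx], y)
```

Note that each column of \(Q\) sums to one (it is a conditional distribution over reports), so \(Q\) is invertible whenever \(\lambda<1\).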
Our estimator \(\hat{\tau}\) is unbiased with respect to these two sources of randomness. Footnote 1: If we opt for a super-population view of the data, there is a third source of randomness where the potential outcomes and cluster assignments are sampled from the distribution \(\mathcal{P}\). **Theorem 4.1**.: _Conditionally on the randomness of the treatment assignment, \(\hat{\tau}\) is equal, in expectation over the randomness of the DP mechanism, to the stratified difference-in-means estimator, such that \(\hat{\tau}\) is an unbiased and consistent estimator of \(\tau\)._ \[\mathbb{E}_{DP}[\hat{\tau}|\mathbf{z}]=\sum_{c\in\mathcal{C}}\frac{n_{c}}{n}\left( \sum_{i=1}^{n}y_{i}(1)\frac{z_{i}}{n_{1,c}}-\sum_{i=1}^{n}y_{i}(0)\frac{1-z_{i }}{n_{0,c}}\right)\quad\text{and}\quad\mathbb{E}_{DP,\mathbf{z}}[\hat{\tau}]=\tau\] The proof of Theorem 4.1 can be found in the Appendix. Having an unbiased estimator for causal inference is an important but not entirely surprising result. In fact, many differentially private mechanisms can recover the true label in expectation; Kancharla and Kang (2021) also propose an unbiased differentially private estimator in their restricted setting of binary potential outcomes \(y_{i}\in\{0,1\}\). Instead, the main difficulty is to obtain a good upper-bound of the estimator variance. Since privatizing outcomes involves adding some randomness to the true outcomes, the variance of a differentially-private estimator is bound to be larger than the variance of its non-differentially-private counterpart. We first recall the variance of a non-differentially private stratified estimator \(\tau_{\text{No-DP}}\), given by \[\hat{\tau}_{\text{No-DP}}:=\sum_{c\in\mathcal{C}}\frac{n_{c}}{n}\sum_{i\in c }\left(\frac{y_{i}z_{i}}{n_{1,c}}-\frac{y_{i}(1-z_{i})}{n_{0,c}}\right)\,. 
\tag{3}\] Let \(\vec{y}_{c}:=\{y_{i}:c_{i}=c\}\in\mathcal{Y}^{n_{c}}\) be the vector of outcomes of units in cluster \(c\), and \(\vec{\tau}_{c}:=\vec{y}_{c}(1)-\vec{y}_{c}(0)\) be the vector of the differences between each unit's potential outcome in treatment and in control. It is well-known that \[\operatorname{Var}_{\boldsymbol{z}}[\hat{\tau}_{\text{No-DP}}]=\sum_{c\in\mathcal{C}}\frac{n_{c}^{2}}{n^{2}}\left(\frac{S^{2}(\vec{y}_{c}(1))}{n_{1,c}}+\frac{S^{2}(\vec{y}_{c}(0))}{n_{0,c}}-\frac{S^{2}(\vec{\tau}_{c})}{n_{c}}\right)\,,\] where, for any vector \(\vec{u}\in\mathbb{R}^{d}\), \(S^{2}(\vec{u}):=\frac{1}{d-1}\sum_{u\in\vec{u}}(u-\bar{u})^{2}\) and \(\bar{u}:=\frac{1}{d}\sum_{u\in\vec{u}}u\). Our goal is to make the gap between the variance of our differentially-private estimator \(\hat{\tau}\) and that of its non-differentially-private counterpart \(\hat{\tau}_{\text{No-DP}}\) as small as possible for a given privacy guarantee. We will show that this bound is greatly improved when clusters are homogeneous, which we define below. **Definition 4.2** (Cluster homogeneity).: _For \(a\in\{0,1\}\), define the average intra-cluster variance of outcomes \(\phi_{a}\), or cluster homogeneity, as_ \[\phi_{a}:=\sum_{c\in\mathcal{C}}\frac{n_{c}^{2}}{n^{2}}\frac{S^{2}(\vec{y}_{c}(a))}{n_{a,c}}\geq 0\,.\] The quantity \(\phi_{a}\) has a natural super-population interpretation. Taking the expectation of \(\phi_{a}\) over the distribution \(\mathcal{P}\), it holds that \(\phi_{a}=\mathbb{E}[\operatorname{Var}(y(a)|c)]=\operatorname{Var}(y(a))-\operatorname{Var}(\mathbb{E}[y(a)|c])\geq 0\). Holding \(\operatorname{Var}(y(a))\) constant, lower values of \(\phi_{a}\) imply that clusters are better separated. For \(\phi_{a}=0\), the outcome values of each cluster are contained within a singleton set. On the other hand, if \(\phi_{a}\) is high, clusters contain a wide range of responses, up to the variation of outcomes of the entire population.
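Definition 4.2 is straightforward to compute from data. The sketch below uses synthetic clusters with illustrative sizes (not from the paper's experiments) and confirms that tight, well-separated clusters yield a smaller \(\phi_{a}\) than diffuse clusters with the same overall spread:

```python
import numpy as np

def S2(u):
    # Sample variance with the 1/(d-1) normalization used in the text.
    return np.asarray(u).var(ddof=1)

def cluster_homogeneity(y_by_cluster, n_a_by_cluster):
    # phi_a = sum_c (n_c / n)^2 * S^2(y_c) / n_{a,c}   (Definition 4.2)
    n = sum(len(yc) for yc in y_by_cluster)
    return sum((len(yc) / n) ** 2 * S2(yc) / na
               for yc, na in zip(y_by_cluster, n_a_by_cluster))

rng = np.random.default_rng(1)
# Hypothetical data: tight clusters around well-separated centers ...
homog = [c + 0.1 * rng.standard_normal(200) for c in (-3.0, 0.0, 3.0)]
# ... versus clusters with the same overall spread but no separation.
diffuse = [3.0 * rng.standard_normal(200) for _ in range(3)]
na = [100, 100, 100]  # treated (or control) counts per cluster

print(cluster_homogeneity(homog, na) < cluster_homogeneity(diffuse, na))  # True
```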
The following theorem provides a bound on the variance of \(\hat{\tau}\) with respect to the randomness of the differentially private mechanism and the random assignment \(\boldsymbol{z}\). **Theorem 4.3**.: _The variance of the estimator \(\hat{\tau}\) defined in (2) is bounded by_ \[0\leq\operatorname{Var}_{DP,\boldsymbol{z}}[\hat{\tau}]-\operatorname{Var}_{ \boldsymbol{z}}[\hat{\tau}_{\text{No-DP}}]\leq\left(\frac{1}{(1-\lambda)^{2}} -1\right)\sum_{a\in\{0,1\}}\phi_{a}+\sum_{a\in\{0,1\}}\sum_{c\in\mathcal{C}} \frac{n_{c}^{2}}{n^{2}}\frac{A(n_{a,c})}{n_{a,c}},\] _where \(\phi_{a}\) is the measure of cluster homogeneity defined in Definition 4.2, and for any \(x\),_ \[A(x):=2K\left[\frac{3\|\mathsf{y}\|_{\infty}^{2}+(\lambda\sqrt{K}+1)^{2}+\| \mathsf{y}\|_{2}^{2}(1-\lambda(K-1)\gamma)}{(1-\lambda)^{2}}+2\|\mathsf{y}\|_ {\infty}^{2}\right]\left[\gamma+\frac{\sigma}{x}\Big{(}e^{-\gamma x/\sigma}-e^ {-x/\sigma}\Big{)}\right]\,,\] _with \(K\) the number of possible potential outcomes and \(\mathsf{y}\in\mathbb{R}^{K}\) the vector of all possible outcomes._ Theorem 3.2 and Theorem 4.3 together allow us to capture the privacy-variance trade-off of our proposed mechanism. Recall that the privacy guarantee of Theorem 3.2 is agnostic to the clustering. In Theorem 4.3, we see that the variance gap with respect to a non-differentially-private estimator is captured in two additive terms: The first term depends on the homogeneity of clusters, as defined in Definition 4.2, with more homogeneous clusters--those with low \(\phi_{a}\)--leading to a smaller variance gap. The second term depends on properties of the data (e.g. the \(\ell_{2}\)-norm of outcomes) and on the parameters of our mechanism, but not on the nature of the clusters. As a result, the biggest takeaway of Theorem 3.2 and Theorem 4.3 together is that more homogeneous clusters lead to a better privacy-variance trade-off than less homogeneous clusters, all else being equal. 
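To get a concrete feel for the bound, the function \(A(x)\) of Theorem 4.3 can be evaluated directly. The parameter values below are illustrative only, not taken from the paper; the check shows that shrinking \(\gamma\) and \(\sigma\) drives this cluster-independent contribution toward zero:

```python
import math

def A(x, K, y_inf, y_l2, lam, gamma, sigma):
    # The cluster-independent term of the variance bound in Theorem 4.3.
    lead = 2 * K * ((3 * y_inf**2 + (lam * math.sqrt(K) + 1)**2
                     + y_l2**2 * (1 - lam * (K - 1) * gamma)) / (1 - lam)**2
                    + 2 * y_inf**2)
    tail = gamma + (sigma / x) * (math.exp(-gamma * x / sigma) - math.exp(-x / sigma))
    return lead * tail

# Illustrative (not from the paper) parameters: x = n_{a,c}, K outcomes,
# y_inf = ||y||_inf, y_l2 = ||y||_2.
base = dict(x=500, K=8, y_inf=5.0, y_l2=12.0, lam=0.8)
print(A(gamma=1e-2, sigma=10.0, **base) > A(gamma=1e-4, sigma=0.1, **base))  # True
```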
We now provide some intuition for the second term on the right-hand side, which depends only on properties of the data, but not on the clustering. When setting \(\lambda=0\), our Cluster-DP mechanism always outputs the true outcome, and we no longer produce privatized outcomes. In that case, we can set the truncation parameter \(\gamma\) and the Laplace noise \(\sigma\) to be zero with no consequence to recover the trivial equality: \(\operatorname{Var}_{DP,\boldsymbol{z}}(\hat{\tau})=\operatorname{Var}_{ \boldsymbol{z}}[\hat{\tau}_{\text{No-DP}}]\,\). The more interesting setting from a privacy perspective is \(\lambda\in(0,1)\). By choosing \(\gamma\) and \(\sigma\) to be arbitrarily small, we can make this second term arbitrarily small. As expected, the privacy guarantees of Theorem 3.2 suffer in that regime. The conclusion from Theorem 4.3 then is that cluster homogeneity can improve the estimator variance penalty of our suggested mechanism with no loss in the privacy guarantee. In Section 5, we introduce another differentially-private mechanism for causal estimation which does not leverage a clustering of the outcomes and which generalizes the one suggested by Kancharla and Kang (2021) in their binary-outcome setting. We show that leveraging homogeneous clusters, as we suggest, outperforms these non-clustered mechanisms, but rather than computing the terms of Theorem 4.3 in closed-form, we encourage practitioners to compute the variance bounds empirically for different values of our mechanism's parameters \((\lambda,\gamma,\sigma)\), keeping track of the resulting privacy guarantee. See Section 6 for more details on empirical evaluations of the privacy-variance trade-off. 
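Such empirical evaluations need concrete learned priors \(\tilde{q}_{a}(\cdot|c)\). The sketch below is NOT the paper's Algorithm 1, which is not reproduced in this section; it is only a plausible stand-in satisfying the two properties quoted in Section 5 (\(\tilde{q}\geq\gamma\) everywhere and normalization), with the mixture weight chosen so that \(\gamma=\nicefrac{{1}}{{K}}\) recovers the uniform distribution, as the text requires:

```python
import numpy as np

def cluster_prior(y_c, outcome_space, gamma, sigma, rng):
    # NOT the paper's Algorithm 1: a plausible stand-in that matches the two
    # properties quoted in Section 5 -- q~ >= gamma everywhere, sum(q~) = 1.
    # Requires gamma <= 1/K so the mixture below is a valid distribution.
    K = len(outcome_space)
    counts = np.array([(np.asarray(y_c) == y).mean() for y in outcome_space])
    noisy = np.maximum(counts + rng.laplace(scale=sigma / len(y_c), size=K), 0.0)
    p = noisy / noisy.sum() if noisy.sum() > 0 else np.full(K, 1.0 / K)
    return (1 - K * gamma) * p + gamma  # floored at gamma, sums to one

rng = np.random.default_rng(0)
y_c = rng.integers(0, 8, size=500)          # a hypothetical cluster's outcomes
q = cluster_prior(y_c, np.arange(8), gamma=0.02, sigma=10.0, rng=rng)
assert q.min() >= 0.02 and abs(q.sum() - 1.0) < 1e-9
# gamma = 1/K collapses the prior to uniform, as noted in the text.
assert np.allclose(cluster_prior(y_c, np.arange(8), 1 / 8, 10.0, rng), 1 / 8)
```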
## 5 The Uniform-Prior-DP mechanism and other baselines We now introduce a few natural baselines, starting with the uniform-prior-DP mechanism, which does not leverage any clustering structure or any information about the empirical distribution of outcomes beyond its support: with some probability, report the true outcome, otherwise, report an outcome sampled uniformly at random from the space of possible outcomes, as formalized in Algorithm 2. ``` Input: Individual responses \(y_{1},\ldots,y_{n}\) Output: Privatized responses \(\tilde{y}_{1},\ldots,\tilde{y}_{n}\) for\(i\in\{1,\ldots n\}\)do \(\tilde{y}_{i}\leftarrow\begin{cases}y_{i}^{0}\sim\mathcal{U}(\mathcal{Y})& \text{with probability }\lambda\quad//\;\mathcal{U}\text{ is the uniform distribution}\\ y_{i}&\text{with probability }1-\lambda\end{cases}\) Return privatized responses \(\{\tilde{y}_{1},\ldots,\tilde{y}_{n}\}\). ``` **Algorithm 2**uniform-prior-DP mechanism The uniform-prior-DP mechanism is a generalization of the mechanism proposed by Kancharla and Kang (2021) in the binary-outcome setting, when there are no non-compliers. In fact, our cluster-DP mechanism is itself a generalization of the uniform-prior-DP mechanism, and by extension the one proposed by Kancharla and Kang (2021), with the right choice of parameters. We first observe that the distributions \(\tilde{q}_{a}(y|c)\) for \(a\in\{0,1\}\), constructed in the cluster-DP mechanism, obey the following properties: \(\tilde{q}_{a}(y|c)\geq\gamma\) for all \(y\in\mathcal{Y}\,,\text{ and }\sum_{y}\tilde{q}_{a}(y|c)=1\,.\) When setting the truncation parameter \(\gamma=\nicefrac{{1}}{{K}}\), these distributions \(\tilde{q}_{a}(y|c)\) reduce to a uniform distribution and the cluster-DP mechanism amounts to the simpler procedure from the uniform-prior-DP mechanism, regardless of the value of the Laplace noise variance \(\sigma^{2}\). 
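Algorithm 2 is a one-liner in practice; a sketch, together with the privacy loss implied by Corollary 5.1 (the values \(K=8\), \(\lambda=0.8\) below are illustrative):

```python
import numpy as np

def uniform_prior_dp(y, outcome_space, lam, rng):
    # Algorithm 2: with probability lam report a uniform draw from Y,
    # otherwise report the true outcome.
    y = np.asarray(y)
    resample = rng.random(len(y)) < lam
    noise = rng.choice(outcome_space, size=len(y))
    return np.where(resample, noise, y)

# Privacy loss from Corollary 5.1: eps = log(1 + (1 - lam) * K / lam).
K, lam = 8, 0.8
eps = np.log(1 + (1 - lam) * K / lam)
print(round(float(eps), 3))  # 1.099  (= log 3 for these values)
```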
As a result, we obtain the following corollary of Theorem 3.2 by setting \(\gamma=\nicefrac{{1}}{{K}}\) and \(\sigma\to\infty\) for the uniform-prior-DP mechanism: **Corollary 5.1**.: _For any \(\tilde{\varepsilon}>0\), the uniform-prior-DP mechanism is \((\tilde{\varepsilon},\delta)\)-label DP where we set \(\delta=\max(0,1-\lambda+\frac{\lambda}{K}(1-e^{\tilde{\varepsilon}}))\,.\) In particular, it is \(\varepsilon\)-label DP with \(\varepsilon=\log\left(1+\frac{(1-\lambda)K}{\lambda}\right)\)._ When viewing the Uniform-Prior-DP mechanism as a special case of the Cluster-DP mechanism, the response randomization matrices \(Q_{c,a}\), defined in Eq. 1, reduce to \(Q=(1-\lambda)I+\nicefrac{{\lambda}}{{K}}\mathbf{1}\mathbf{1}^{\mathsf{T}}\), for all \(c\) and \(a\in\{0,1\}\), with \(\mathbf{1}\in\mathbb{R}^{K}\) indicating the all-one vector. Our stratified estimator \(\hat{\tau}\) then reduces to the following simplified and familiar form \[\hat{\tau}=\frac{1}{1-\lambda}\sum_{c\in\mathcal{C}}\frac{n_{c}}{n}\sum_{i\in c}\left(\frac{\tilde{y}_{i}z_{i}}{n_{1,c}}-\frac{\tilde{y}_{i}(1-z_{i})}{n_{0,c}}\right)\,. \tag{4}\] We now express the closed-form variance of the estimator \(\hat{\tau}\) in this special case. The proof follows the same lines as the proof of Theorem 4.3, but with a more direct analysis using the special form of \(Q\), enabling us to characterize the variance of the estimator exactly. Recall the notation: \(\bar{\mathsf{y}}:=\nicefrac{{1}}{{\left|\mathcal{Y}\right|}}\sum_{y\in\mathcal{Y}}y\,\) and \(\overline{\mathsf{y}^{2}}:=\nicefrac{{1}}{{\left|\mathcal{Y}\right|}}\sum_{y\in\mathcal{Y}}y^{2}\,\) over all possible outcomes. For \(a\in\{0,1\}\), we also define \(\overline{y_{c}(a)}:=\nicefrac{{1}}{{n_{c}}}\sum_{i\in c}y_{i}(a)\) and \(\overline{y_{c}^{2}(a)}:=\nicefrac{{1}}{{n_{c}}}\sum_{i\in c}y_{i}^{2}(a)\) over the units of cluster \(c\).
**Theorem 5.2**.: _The variance of estimator \(\hat{\tau}\) in (4) under uniform-prior-DP is given by_ \[\operatorname{Var}_{DP,\mathbf{z}}(\hat{\tau}) =\operatorname{Var}_{\mathbf{z}}[\hat{\tau}_{\text{No-DP}}]+\sum_{c \in\mathcal{C}}\frac{n_{c}^{2}}{n^{2}}\left(\frac{1}{n_{0,c}}+\frac{1}{n_{1, c}}\right)\frac{\lambda\overline{\mathsf{y}^{2}}-\lambda^{2}\bar{\mathsf{y}}^{2}}{(1- \lambda)^{2}}\] \[+\sum_{c\in\mathcal{C}}\frac{n_{c}^{2}}{n^{2}}\left[\frac{\lambda }{1-\lambda}\left(\frac{\overline{y_{c}^{2}(0)}}{n_{0,c}}+\frac{\overline{y_{ c}^{2}(1)}}{n_{1,c}}\right)-\frac{2\lambda\bar{\mathsf{y}}}{1-\lambda}\left( \frac{\overline{y_{c}(0)}}{n_{0,c}}+\frac{\overline{y_{c}(1)}}{n_{1,c}} \right)\right]\,. \tag{5}\] As the sampling probability grows small \(\lambda\to 0\), we recover the non-private variance formula \(\operatorname{Var}_{DP,\mathbf{z}}(\hat{\tau})\to\operatorname{Var}_{\mathbf{z}}[\hat {\tau}_{\text{No-DP}}]\), but the \(\varepsilon\)-label-DP guarantee in Corollary 5.1 goes to infinity. The dependence on cluster properties in Equation (5) is only due to the definition of the stratified estimator, since the uniform-prior-DP mechanism does not depend on the clusters. We conclude this section by considering properties of the unstratified difference-in-means estimator \(\hat{\tau}^{u}\) under the uniform-prior-DP mechanism. \[\hat{\tau}^{u}=\frac{1}{1-\lambda}\sum_{i=1}^{n}\left(\frac{\tilde{y}_{i}z_{i }}{n_{1}}-\frac{\tilde{y}_{i}(1-z_{i})}{n_{0}}\right)\,. \tag{6}\] Because the unstratified estimator \(\hat{\tau}^{u}\) can be viewed as special case of the stratified estimator \(\hat{\tau}\) (4), where we assume all units belong to the same cluster (\(n_{c}=n\) and \(n_{1,c}=n_{1}\), \(n_{0,c}=n_{0}\)), we obtain the following variance result as a corollary of Theorem 5.2: **Corollary 5.3**.: _The variance of the unstratified estimator \(\hat{\tau}^{u}\) defined in Eq. 
6 is given by_ \[\operatorname{Var}_{DP,\mathbf{z}}[\hat{\tau}^{u}]=\operatorname{Var}_{\mathbf{z}}[\hat{\tau}^{u}_{\text{No-DP}}]+\frac{n}{n_{1}n_{0}}\frac{\lambda\overline{\mathsf{y}^{2}}-\lambda^{2}\bar{\mathsf{y}}^{2}}{(1-\lambda)^{2}}+\frac{\lambda}{1-\lambda}\left(\frac{\overline{y^{2}(0)}}{n_{0}}+\frac{\overline{y^{2}(1)}}{n_{1}}\right)-\frac{2\lambda\bar{\mathsf{y}}}{1-\lambda}\left(\frac{\overline{y(0)}}{n_{0}}+\frac{\overline{y(1)}}{n_{1}}\right)\] _where \(\operatorname{Var}_{\mathbf{z}}[\hat{\tau}^{u}_{\text{No-DP}}]\) denotes the finite-sample variance of the (non-private) unstratified estimator \(\hat{\tau}^{u}_{\text{No-DP}}\), commonly known to be \(\nicefrac{{S^{2}(\vec{y}(1))}}{{n_{1}}}+\nicefrac{{S^{2}(\vec{y}(0))}}{{n_{0}}}-\nicefrac{{S^{2}(\vec{\tau})}}{{n}}\)._ The special case of the unstratified estimator \(\hat{\tau}^{u}\) in (6) for binary outcomes \(\mathcal{Y}=\{0,1\}\) was previously proposed by Kancharla and Kang (2021). In this case, the variance of the estimator can be further simplified to the following: \[\mathrm{Var}_{DP,\mathbf{z}}[\hat{\tau}^{u}]=\mathrm{Var}_{\mathbf{z}}[\hat{\tau}^{u}_{\text{No-DP}}]+\frac{n}{n_{0}n_{1}}\frac{\frac{\lambda}{2}(1-\frac{\lambda}{2})}{(1-\lambda)^{2}}\,. \tag{7}\] **Remark 5.4**.: _The Cluster-DP mechanism assumes that a trusted data curator (e.g. a technology company, in the motivating example in Section 2) has access to the users' responses and computes a differentially private empirical distribution of responses within each cluster. In contrast, the Uniform-prior-DP can be implemented without such a curator. Each user can privatize their response before sharing it with the experimenter. In other words, the Uniform-prior-DP provides a local DP guarantee, a notion defined by Kasiviswanathan et al. (2011), which is stronger than a DP guarantee.
That said, in our motivating example, assuming the existence of a trusted curator--the technology company--is more natural than putting the burden of privatizing responses on each individual user._ ### Other baselines We further consider two alternatives to Algorithms 1 and 2 for achieving \(\varepsilon\)-differential privacy, and discuss why neither alternative is actually suitable for our setting: * _the noisy Horvitz-Thompson estimator:_ The central unit computes the Horvitz-Thompson estimator based on the original responses \(y_{i}\) and directly adds noise to the estimate before sharing it externally. \[\hat{\tau}:=\sum_{c\in\mathcal{C}}\frac{n_{c}}{n}\sum_{i\in c}\left\{\left(\frac{y_{i}z_{i}}{n_{1,c}}-\frac{y_{i}(1-z_{i})}{n_{0,c}}\right)+w_{c}\right\}\,,\quad w_{c}\sim\text{Laplace}(\eta_{c})\,. \tag{8}\] The noise parameters \(\eta_{c}\) depend on the chosen privacy loss \(\varepsilon\). Let us define the sensitivity of a real-valued function as the maximum change in its value when changing only one label in the data set. For the inner function \(\nicefrac{{1}}{{n_{1,c}}}y_{i}z_{i}-\nicefrac{{1}}{{n_{0,c}}}y_{i}(1-z_{i})\), it is easy to see that its sensitivity works out to \(\Delta_{c}:=\|\mathbf{y}\|_{\infty}(\nicefrac{{1}}{{n_{0,c}}}+\nicefrac{{1}}{{n_{1,c}}})\). From (Dwork et al., 2014, Theorem 3.6) on the differential privacy of the Laplace mechanism, the estimator \(\hat{\tau}\) is \(\varepsilon\)-DP when setting \(\eta_{c}=\nicefrac{{\Delta_{c}}}{{\varepsilon}}\) for every cluster. * _the noisy histogram estimator:_ Since the estimated treatment effect depends only on the histogram of responses of treated and controlled units in each cluster, the central unit adds noise to the frequency of responses in each cluster before sharing the histogram externally.
Since the \(K\) bins (corresponding to the \(K\) elements of \(\mathcal{Y}\)) are disjoint, and the sensitivity of the value of each histogram bin is \(\nicefrac{{1}}{{n_{a,c}}}\), the central unit can share the histogram privately by adding independent draws from Laplace(\((n_{a,c}\varepsilon)^{-1}\)) to the frequency of each value. Both approaches have serious drawbacks in the real-world setting of Section 2, where the central unit shares private data with advertisers. First, advertisers expect "user-level" outcomes to measure campaign effectiveness, which neither alternative provides, unlike our proposal which provides individually privatized user outcomes. One scientific reason for this might be that advertisers wish to run their own analyses on slices of the user population or apply their own proprietary covariate-adjustment method. For example, an advertiser may have built a predictive model of user responses, fit out-of-sample, based on non-private covariates \(X_{i}\) (e.g. demographic information, geography): \(f:X_{i}\mapsto f(X_{i})\). Since \(f(X_{i})\) is a common term for the response under both treatment and control, the advertiser can then substitute \(\mathsf{y}^{T}Q_{a,c}^{-1}[\tilde{y}_{i}]\) by \(\mathsf{y}^{T}Q_{a,c}^{-1}[\tilde{y}_{i}]-f(X_{i})\) in the definition of our estimator \(\hat{\tau}\) in Equation (2) to obtain another unbiased and consistent estimator, with lower variance if \(f(X_{i})\) is predictive of \(Y_{i}\). Second, in the case of one-shot communication between the central unit and the advertisers, the noise would be averaged over \(C\) (the number of clusters) estimates in the first suggested alternative and over at most \(K\) (the number of possible outcomes) estimates in the second, whereas it would be averaged over \(N\) samples, the number of users, in our approach. This is because we incorporate a randomized response per individual.
This randomized response per individual allows our method to be unbiased, conditionally on _any prior on the empirical distribution_, even with a fixed, noised one like \(\hat{p}_{a}(y|c)+w\) in the second alternative. As a result, our estimator achieves lower finite-sample conditional bias than the other two baselines when \(N\gg K\) and \(N\gg C\), where we condition on the noise and the randomization in the DP mechanism and consider the bias with respect to the randomization in the sub-population. We demonstrate this point in Experiment 5 in the next section. ## 6 Numerical experiments In this section, we perform a series of simulated experiments to validate the theoretical claims we make in the paper and to illustrate their usefulness. We start by considering a Gaussian Mixture Model setting where for every unit \(i\) in cluster \(c\), a continuous quantity \(y_{i}^{\prime}\) is given by \[y_{i}^{\prime}=\sqrt{\beta}\mu_{c}+\sqrt{v-\beta}w_{i}\,, \tag{9}\] where \(w_{i}\) and \(\mu_{c}\) are drawn from the standard normal distribution. The coefficient \(\beta\in[0,v]\) measures the dependence of the response on the cluster center. The specific parameterization in (9) is chosen to fix the variance of the response, equal to \(v\), as \(\beta\) varies. Since the proposed mechanism is for discrete outcome spaces, we quantize the response in the following way: \[y_{i}(1)=y_{i}(0)+\tau\quad\text{and}\quad y_{i}(0)=\begin{cases}K^{\prime}\text{ if }y_{i}^{\prime}>2\sqrt{v}\\ -K^{\prime}\text{ if }y_{i}^{\prime}<-2\sqrt{v}\\ \lfloor y_{i}^{\prime}/\Delta\rceil\text{ otherwise}\end{cases}\] where \(\Delta:=2\sqrt{v}/K^{\prime}\) and \(\lfloor x\rceil\) denotes the rounding of \(x\) to the nearest integer. The treatment effect is an additive \(\tau\) term on the potential outcome under control. We fix \(\tau=1\), such that the outcomes take values in the set \(\mathcal{Y}=\{-K^{\prime},\ldots,0,\ldots,K^{\prime},K^{\prime}+1\}\). We denote by \(K:=2(K^{\prime}+1)\) the size of the outcome space.
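The outcome-generation model of Eq. (9) and its quantization can be reproduced directly; a sketch using the parameter values stated below (the two saturation branches of the quantizer are folded into a single clip):

```python
import numpy as np

rng = np.random.default_rng(0)
v, beta, Kp, tau = 5.0, 4.5, 5, 1        # v, beta, K' and tau from the text
sizes = [500, 1000, 2000]                # the three cluster sizes
Delta = 2 * np.sqrt(v) / Kp              # quantization step

y0 = []
for n_c in sizes:
    mu_c = rng.standard_normal()         # cluster center mu_c
    yp = np.sqrt(beta) * mu_c + np.sqrt(v - beta) * rng.standard_normal(n_c)  # Eq. (9)
    # Round to the nearest level; clipping reproduces the saturation branches.
    y0.append(np.clip(np.rint(yp / Delta), -Kp, Kp).astype(int))

y0 = np.concatenate(y0)
y1 = y0 + tau                            # additive treatment effect on control
assert y0.min() >= -Kp and y1.max() <= Kp + 1   # outcomes stay inside Y
```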
Unless otherwise specified, and with no particular reason to fix parameters one way or another, we take \(K^{\prime}=5\), \(v=5\), and \(\beta=4.5\). We consider \(C=3\) clusters of sizes 500, 1k, 2k with an equal number of controlled and treated units in each cluster. To display confidence intervals around certain results, we consider a super-population of three clusters of sizes 2.5k, 5k, and 10k units, and repeatedly draw uniformly at random sub-populations of three clusters from these original clusters. For any given sub-population, we compute \(\operatorname{Var}_{DP,\boldsymbol{z}}(\hat{\tau})\) by empirically computing the variance (or histogram) of \(\hat{\tau}\) over 500 realizations of the randomness in the corresponding DP mechanism (e.g. Laplace noise and response randomization), as well as the treatment assignments, which are done by choosing a balanced set of treated and controlled units uniformly at random within each cluster. Unless otherwise specified, for the Cluster-DP mechanism, we set the truncation parameter \(\gamma=0.02\), the Laplace noise \(\sigma=10\), and the resampling probability \(\lambda=0.8\). Experiment 1. (Bias and Gaussianity)We first verify that our Cluster-DP estimator \(\hat{\tau}\), given by (2), is unbiased and admits an asymptotically Gaussian distribution by plotting the histogram and the qq-plot of \(\hat{\tau}-\tau\) in Figures 2 and 3. Experiment 2. (Privacy-variance trade-off)We next study the privacy-variance trade-off of our suggested Cluster-DP mechanism as compared to the Cluster-Free-DP mechanism, introduced in Section 3, and the stratified and unstratified versions of the Uniform-prior-DP mechanism, introduced in Section 5.1. We show that for the same privacy loss \((\varepsilon,\delta)\), the Cluster-DP estimator can have significantly lower variance compared to the other mechanisms.
Recall from Section 5.1 that, when setting the truncation parameter \(\gamma=\nicefrac{{1}}{{K}}\) and \(\sigma=\infty\), the privacy guarantee in Theorem 3.2 for the Cluster-DP and Cluster free-DP mechanisms reduces to the guarantee of the uniform-prior-DP mechanism in Corollary 5.1. Yet, because both Cluster-DP and Cluster-free-DP mechanisms use data-dependent priors, there may exist choices of \(\sigma\), \(\gamma\), \(\lambda\) which result in better privacy-variance trade-offs than the latter for certain outcome distributions. In Figure 4, we aim to fix the privacy loss to \(\varepsilon=0.2\) and \(\delta=10^{-4}\) for all three mechanisms. For the Cluster-DP and Cluster-Free-DP, we set the Laplace parameter to \(\sigma=10\), and vary the truncation parameter \(\gamma\in[0.1/K,1/K]\). Following Theorem 3.2, we first choose \(\tilde{\varepsilon}\) so that the corresponding privacy loss \(\varepsilon\) is equal to its target \(\varepsilon=0.2\), and then choose the re-sampling probability \(\lambda\) to obtain the failure probability \(\delta=10^{-4}\). Likewise, for the Uniform-prior-DP, we set the re-sampling probability \(\lambda\) according to Corollary 5.1 such that \(\varepsilon=0.2\) and \(\delta=10^{-4}\). In summary, as the truncation parameter \(\gamma\) varies, we compare the three mechanisms at the same privacy loss. As we observe in Figure 4, for small values of \(\gamma\), the Cluster-DP achieves significantly lower variance compared to the other mechanisms. When \(\gamma=\nicefrac{{1}}{{K}}\) and \(\sigma=\infty\), the theory tells us that Cluster-DP reduces to uniform-prior-DP (stratified) and the Cluster free-DP reduces to uniform-prior-DP (unstratified). However, since we have set \(\sigma=10\), we observe that the variance for the uniform-prior-DP becomes lower than the other mechanisms for \(\gamma=\nicefrac{{1}}{{K}}\). The error-bars here correspond to 50 independent draws of the sub-population.
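The calibration of \(\lambda\) for the Uniform-prior-DP at a target privacy loss, as used for Figure 4, follows by inverting Corollary 5.1; a sketch:

```python
import math

def lam_for_eps(eps, K):
    # Invert eps = log(1 + (1 - lam) * K / lam)  =>  lam = K / (K + e^eps - 1).
    return K / (K + math.exp(eps) - 1)

# Target used above: eps = 0.2 with K = 8 possible outcomes.
lam = lam_for_eps(0.2, 8)
# Round trip: the calibrated lam reproduces the target privacy loss.
assert abs(math.log(1 + (1 - lam) * 8 / lam) - 0.2) < 1e-12
```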
In Figure 5, we plot the variance of the estimators versus the privacy loss \(\varepsilon\), as we fix \(\delta=10^{-4}\). Here, we optimize the choice of Laplace parameter \(\sigma\in\{10,20,\infty\}\) and the truncation parameter \(\gamma\in\{\nicefrac{{0.01}}{{K}},\nicefrac{{0.1}}{{K}},\nicefrac{{1}}{{K}}\}\). We observe that both Cluster-DP and Cluster free-DP estimators, which use data-dependent priors achieve a better trade-off than either version of the Uniform-prior-DP mechanism. Furthermore, the Cluster-DP mechanism, which also leverages the clustering structure, showcases an even better trade-off compared to the Cluster free-DP mechanism. Experiment 3. (Role of clustering structure)In this experiment we show that, as the clustering quality improves, the variance of the estimator for the cluster-DP mechanism decreases when compared to the variance of the estimator for the cluster free-DP mechanism, without affecting their privacy guarantees, since these are agnostic to the clustering according to Theorem 3.2. Under our specified potential outcome model (9), the cluster homogeneity \(\phi_{a}\), as defined in Definition 4.2, is given by \(\phi_{0}=\mathbb{E}(\mathrm{Var}(y_{i}(0)|c))\propto v-\beta=\phi_{1}\), hence our clusters become more homogeneous as \(\beta\) increases. From Theorem 4.3, the clustering structure reduces the variance of the estimator at more homogeneous clusters, i.e. lower values of \(\phi_{0},\phi_{1}\), and \(\lambda\). We verify this in Figure 6, which plots the ratio of the variances for two values of \(\lambda\in\{0.5,0.8\}\) as we vary \(\beta\). As \(\beta\) grows, we observe a stronger reduction in the variance using the clustering structure of data. This effect is stronger at smaller values of \(\lambda\). Experiment 4. (Validation of theoretical bound)In Theorem 4.3, we bounded the excess variance of the private estimator (2) compared to the non-private estimator (3). The bound had two additive terms. 
The first one depends on the cluster structure of the data, namely the cluster homogeneity quantities \(\phi_{0}\), \(\phi_{1}\), while the second term does not depend on the clusters, capturing instead an increase in the variance due to the randomness of the Cluster-DP mechanism. In Figure 7, we compute the gap \(\mathrm{Var}_{DP,\mathbf{z}}[\hat{\tau}]-\mathrm{Var}_{\mathbf{z}}[\hat{\tau}_{\mathrm{No-DP}}]\) empirically, by averaging over 500 different realizations of the randomness in the DP mechanism and the treatment assignments in the same setting as the previous experiment. We plot this gap as we vary \(\beta\), along with a shaded region whose upper boundary corresponds to the upper bound given in Theorem 4.3 and whose lower boundary corresponds to only the first term in that bound. We observe that the variance gap remains in the shaded area, which validates the theoretical upper bound given by Theorem 4.3 and shows that the derived bound is tight, up to the second term.

Figure 6: Ratio of the variance of the estimators under the cluster-DP and cluster free-DP mechanisms in Experiment 3. The benefit of the cluster-DP mechanism is stronger at larger \(\beta\) and smaller values of \(\lambda\).

Figure 7: The variance gap between the private estimator \(\hat{\tau}\), given by (2), and the non-private estimator \(\hat{\tau}_{\mathrm{No-DP}}\) in the setting of Experiment 4. The upper boundary of the shaded area corresponds to the upper bound derived in Theorem 4.3, and its lower boundary corresponds to the first term in that bound. As we see, the gap remains between the two boundaries.

Experiment 5. (Comparisons with other baselines)We next compare the privacy-variance trade-off of the estimator based on the cluster-DP mechanism with the other baselines discussed in Section 5.1, namely the noisy Horvitz-Thompson estimator and the noisy histogram estimator.
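The noisy Horvitz-Thompson baseline of Eq. (8) used in this comparison can be sketched as follows; the Laplace scale per cluster is the sensitivity \(\Delta_{c}\) divided by \(\varepsilon\):

```python
import numpy as np

def noisy_ht(y, z, cluster, eps, rng):
    # Eq. (8): stratified Horvitz-Thompson with per-cluster Laplace noise,
    # scale Delta_c / eps where Delta_c = ||y||_inf * (1/n0c + 1/n1c).
    y, z, cluster = (np.asarray(a) for a in (y, z, cluster))
    n, est = len(y), 0.0
    for c in np.unique(cluster):
        m = cluster == c
        n1, n0 = z[m].sum(), (1 - z[m]).sum()
        diff = (y[m] * z[m] / n1 - y[m] * (1 - z[m]) / n0).sum()
        delta_c = np.abs(y).max() * (1 / n0 + 1 / n1)
        est += m.sum() / n * (diff + rng.laplace(scale=delta_c / eps))
    return est
```

As \(\varepsilon\) grows, the injected noise vanishes and the estimate approaches the non-private stratified difference-in-means.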
The goal of this experiment is to show that in the case of one-shot communication between the central unit and the advertisers, the cluster-DP estimator achieves lower finite-sample conditional bias than the other two baselines. To demonstrate this point, we fix the noise and randomization in each DP mechanisms for the super-population and compute the bias of each estimator with respect to random draws from the super-population and of the treatment assignments. Specifically we compute the expectation of the treatment effect estimator over 500 sub-populations, each consisting of 500, 1000, 2000 units from each cluster, uniformly at random with a balanced number of treated and controlled units in each cluster. The bias is then computed as the difference between the expectation of the estimator and the true treatment effect. As we see in Figure 8, Cluster-DP estimator achieves a lower conditional bias compared to the other two baselines, as we vary the privacy loss \(\varepsilon\). The error bars are obtained by considering 50 different realizations of the noise/randomization in the DP mechanisms. ## 7 Simulation on the Youtube social network In this section, we use a subset of the Youtube social network (Leskovec and Krevl, 2014) to replicate two experiment results in a setting with natural clusters. First, we demonstrate that the proposed stratified estimator combined with the Cluster-DP mechanism is unbiased and admits a Gaussian distribution, replicating the results of Experiment 1. We then compare the variance of our suggested estimator for the Cluster-DP mechanism with its variance when using the Cluster free-DP mechanism to show the benefit of leveraging the clustering structure of the data for the DP mechanism in order to achieve a better privacy-variance trade-off, replicating the results of Experiment 2. 
The Youtube social network dataset contains the friendship links of a set of users on Youtube, and the ground-truth clusters correspond to groups created by users. We form a smaller dataset, by considering only the 50 largest communities, which includes a total of 22,179 users with a minimum cluster size of 199.

Figure 8: Bias of the cluster-DP, noisy Horvitz-Thompson and noisy histogram estimators under one-shot communication between the central unit and the advertisers in the setting of Experiment 5.

We generate the potential outcomes for the users as follows: \[y_{i}(0)=x_{i}^{\mathsf{T}}\beta+w_{i}\,,\quad y_{i}(1)=y_{i}(0)+\tau\,,\] with \(w_{i}\sim\mathsf{N}(0,v^{2})\) capturing individual \(i\)'s effect and the \(x_{i}^{\mathsf{T}}\beta\) term capturing the cluster-level effect. We follow a similar model as in (Zhou et al., 2020) and consider a four-dimensional feature vector \(x_{i}\), with \(x_{i1}\) being the number of nodes in cluster \(c_{i}\) (the cluster of user \(i\)), \(x_{i2}\) the number of edges in \(c_{i}\), \(x_{i3}\) the number of edges between \(c_{i}\) and other clusters, and \(x_{i4}\) the density\({}^{2}\) of cluster \(c_{i}\). Footnote 2: For a cluster with \(n\) nodes and \(e\) edges, its density is defined as \(\nicefrac{{e}}{{n}}\). Since the proposed mechanism is for discrete outcome spaces, we quantize the responses into \(K=8\) levels. We standardize the features by making each of the four features zero mean and unit norm across clusters, and setting the standard deviation of the Gaussian noise \(w_{i}\) to \(v=0.1\). In our experiments, we set \(\beta=(1,1,1,1)^{\mathsf{T}}\) and \(\tau=1\). In the Cluster-DP mechanism, we set the truncation threshold to \(\gamma=0.1/K\) and the Laplace noise level to \(\sigma=5\). Figure 10 shows the qq-plot of \(\hat{\tau}-\tau\) with \(\hat{\tau}\) obtained under the Cluster-DP mechanism, at \(\varepsilon=2\), using \(500\) realizations of the randomness in the outcomes and the DP mechanism.
As the plot demonstrates, \(\hat{\tau}\) is an unbiased and Gaussian estimator. In Figure 10, we also plot the privacy-variance trade-off for the Cluster-DP and the Cluster free-DP mechanisms, along with the variance of the non-private stratified estimator, finding once again that the Cluster-DP mechanism achieves a better trade-off by leveraging the natural cluster structure of the Youtube users.

## 8 Conclusion

Our proposed mechanism and estimator allow for causal estimation from privatized outcomes, in a differentially private sense, while leveraging any given clustered structure of the responses. We show theoretically and empirically that an intuitive measure of cluster homogeneity, which we introduce, can improve the fundamental privacy-variance trade-off of other differentially private mechanisms. We quantify this improvement by analyzing the trade-off of two related privacy mechanisms, both of which our mechanism generalizes. The first one is cluster-free and does not leverage any clustered structure. The second one discards all information from the empirical distribution of outcomes except for its support.

## Acknowledgement

We would like to thank Nick Doudchenko, Ian Waudby-Smith, and many others for helpful discussions on this work.
2304.07335
Generic properties of eigenvalues of the fractional Laplacian
We consider the Dirichlet eigenvalues of the fractional Laplacian $(-\Delta)^s$, with $s\in (0,1)$, related to a smooth bounded domain $\Omega$. We prove that there exists an arbitrarily small perturbation $\tilde\Omega=(I+\psi)(\Omega)$ of the original domain such that all Dirichlet eigenvalues of the fractional Laplacian associated to $\tilde\Omega$ are simple. As a consequence we obtain that all Dirichlet eigenvalues of the fractional Laplacian on an interval are simple. In addition, we prove that for a generic choice of parameters all the eigenvalues of some non-local operators are also simple.
Mouhamed Moustapha Fall, Marco Ghimenti, Anna Maria Micheletti, Angela Pistoia
2023-04-14T18:14:11Z
http://arxiv.org/abs/2304.07335v3
# Generic properties of eigenvalues of the fractional Laplacian ###### Abstract. We consider the Dirichlet eigenvalues of the fractional Laplacian \((-\Delta)^{s}\), with \(s\in(0,1)\), related to a smooth bounded domain \(\Omega\). We prove that there exists an arbitrarily small perturbation \(\tilde{\Omega}=(I+\psi)(\Omega)\) of the original domain such that all Dirichlet eigenvalues of the fractional Laplacian associated to \(\tilde{\Omega}\) are simple. As a consequence we obtain that all Dirichlet eigenvalues of the fractional Laplacian on an interval are simple. In addition, we prove that for a generic choice of parameters all the eigenvalues of some non-local operators are also simple. Key words and phrases:Eigenvalues, fractional Laplacian, generic properties, simplicity 2020 Mathematics Subject Classification: 35J60, 58C15 The last three authors are partially supported by the group GNAMPA of Istituto Nazionale di Alta Matematica (INdAM). The second author is partially supported by the GNAMPA project "Modelli nonlineari in presenza di interazioni punctual". In the following, to simplify notation, we will omit the renormalization constant \(C_{n,s}\). It is well known (see e.g. [1] and the references therein for an exhaustive introduction to these topics) that (1.2) admits an ordered sequence of eigenvalues \[0<\lambda_{1,s}<\lambda_{2,s}\leq\lambda_{3,s}\leq\cdots\leq\lambda_{k,s}\leq\cdots\to+\infty.\] Since the first eigenvalue is strictly positive, we can endow \(\mathcal{H}^{s}_{0}(\Omega)\) with the norm \[\|u\|^{2}_{\mathcal{H}^{s}_{0}(\Omega)}=\mathcal{E}^{\Omega}_{s}(u,u).\] In the local case, i.e. \(s=1\), it is well known (see [8, 9]) that all the eigenvalues are simple for _generic_ domains \(\Omega\). It is natural to ask if the same results hold true in the non-local case, i.e. \(s\in(0,1)\). As far as we know, there are only two results dealing with the simplicity issue.
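Before recalling these results, it is convenient to record the standard Rayleigh-quotient characterization of the first eigenvalue (a well-known fact, stated here for concreteness; it is consistent with the min-max formulas for \(\mu^{\Omega}_{k}=1/\lambda_{k}\) recalled in Section 2):

```latex
\lambda_{1,s}
  \;=\;
  \min_{u\in\mathcal{H}^{s}_{0}(\Omega)\smallsetminus\{0\}}
  \frac{\mathcal{E}^{\Omega}_{s}(u,u)}{\int_{\Omega}u^{2}\,dx}\,.
```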
Very recently, in [2] the authors prove the simplicity of radial eigenvalues in a ball or an annulus. In [5, 6], the authors prove that all the eigenvalues of the fractional Laplacian \((-\Delta)^{s}\) with \(s\in[1/2,1)\) in the interval \(\Omega=(-1,1)\) are simple. However, to our knowledge, the simplicity of the eigenvalues on an interval for all \(s\in(0,1)\) remains an open problem. The present paper solves this open question, as a consequence of our main result. To study domain perturbations we will consider the space \[C^{1}(\mathbb{R}^{n},\mathbb{R}^{n}):=\left\{\psi:\mathbb{R}^{n}\to\mathbb{R}^{n}\ :\ \psi^{(i)}\text{ continuous and bounded, }i=0,1\right\}\] endowed with the norm \[\|\psi\|_{1}=\sup_{x\in\mathbb{R}^{n}}\max_{i=0,1}|\psi^{(i)}(x)|.\] The first question is the following: if \(\bar{\lambda}\) is an eigenvalue of multiplicity \(\nu>1\) of the operator \((-\Delta)^{s}_{\Omega}\) associated with the domain \(\Omega\) with Dirichlet boundary condition, and \(U\) is an interval such that the intersection of the spectrum of \((-\Delta)^{s}_{\Omega}\) with \(U\) consists only of the number \(\bar{\lambda}\), does there exist a perturbation \(\Omega_{\psi}=(I+\psi)(\Omega)\) of the domain \(\Omega\) such that the intersection of the spectrum of \((-\Delta)^{s}_{\Omega_{\psi}}\) with the interval \(U\) consists exactly of \(\nu\) simple eigenvalues of \((-\Delta)^{s}_{\Omega_{\psi}}\)? Consequently, a second question arises: does there exist a perturbed domain \(\Omega_{\psi}=(I+\psi)(\Omega)\) such that _all_ the eigenvalues of \((-\Delta)^{s}_{\Omega_{\psi}}\) are simple? The answer is affirmative and our main result reads as follows. **Theorem 1**.: _Let \(\Omega\) be a smooth bounded domain with \(C^{1,1}\) boundary.
Then for any \(\varepsilon>0\) there exists \(\psi\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\), with \(\|\psi\|_{C^{1}}<\varepsilon\), such that all the eigenvalues of the problem_ \[(-\Delta)^{s}\varphi=\lambda\varphi\text{ in }\Omega_{\psi}=(I+\psi)(\Omega), \qquad\varphi=0\text{ in }\mathbb{R}^{n}\smallsetminus\Omega_{\psi}\] _are simple._ In other words, it can be said that all the eigenvalues of the problem (1.1) are simple for _generic_ domains \(\Omega\), where by generic we mean that, given a domain \(\Omega\), there exists an arbitrarily close domain \(\tilde{\Omega}=(I+\psi)\Omega\) for which all eigenvalues of (1.1) are simple. As a consequence of Theorem 1, we obtain the simplicity of eigenvalues of the fractional Laplacian on intervals. **Corollary 2**.: _Let \(s\in(0,1)\). Then all eigenvalues of the eigenvalue problem_ \[(-\Delta)^{s}\varphi=\lambda\varphi\quad\text{ in }(-1,1),\qquad\varphi=0 \quad\text{ in }\mathbb{R}\smallsetminus(-1,1)\] _are simple._ Corollary 2 follows from Theorem 1, which implies that there exists an open interval \(\tilde{\Omega}\) (a perturbation of an open bounded interval \(\Omega\)) such that all its Dirichlet eigenvalues are simple. Since the dimensions of the eigenspaces are invariant under scalings and translations, Corollary 2 follows immediately. In the spirit of Theorem 1, we obtain a similar result considering Dirichlet eigenvalue problems for the fractional Laplacian with nonconstant coefficients, of the type \[(-\Delta)^{s}\varphi+a(x)\varphi=\lambda\varphi\text{ in }\Omega,\qquad\varphi=0 \text{ in }\mathbb{R}^{n}\smallsetminus\Omega \tag{1.2}\] and \[(-\Delta)^{s}\varphi=\lambda\alpha(x)\varphi\text{ in }\Omega,\qquad\varphi=0 \text{ in }\mathbb{R}^{n}\smallsetminus\Omega, \tag{1.3}\] where \(a,\alpha\in C^{0}(\mathbb{R}^{n})\). Again, if \((-\Delta)^{s}+a(x)I\) is a positive operator (e.g.
\(\min_{\overline{\Omega}}a>0\) or \(\|a\|_{C^{0}(\Omega)}\) is small enough) or \(\min_{\overline{\Omega}}\alpha>0\), from a fractional analogue of Rellich's compactness lemma it is quite standard to deduce that there is an unbounded ordered sequence of eigenvalues \((\lambda_{i})_{i\in\mathbb{N}}\) (see [1, 3] and the references therein), that each eigenvalue has finite multiplicity, and that the first one is simple. In the local case, the simplicity of the eigenvalues with respect to a perturbation of the coefficients was proved in [11], and we are able to show the nonlocal counterpart of this result. In particular, we prove that all the eigenvalues of (1.2) and (1.3) are simple for _generic_ functions \(a\) and \(\alpha\), respectively, as stated in the following two results. **Theorem 3**.: _Let \(a\in C^{0}(\mathbb{R}^{n})\) such that \(\min_{\overline{\Omega}}a>0\) or \(\|a\|_{C^{0}(\Omega)}\) is small enough. For any \(\varepsilon>0\) there exists \(b\in C^{0}(\mathbb{R}^{n})\), with \(\|b\|_{C^{0}}<\varepsilon\), such that all the eigenvalues of the problem_ \[(-\Delta)^{s}\varphi+\left(a(x)+b(x)\right)\varphi=\lambda\varphi\text{ in } \Omega,\qquad\varphi=0\text{ in }\mathbb{R}^{n}\smallsetminus\Omega\] _are simple._ **Theorem 4**.: _Let \(\alpha\in C^{0}(\mathbb{R}^{n})\) such that \(\min_{\overline{\Omega}}\alpha>0\). For any \(\varepsilon>0\) there exists \(\beta\in C^{0}(\mathbb{R}^{n})\), with \(\|\beta\|_{C^{0}}<\varepsilon\), such that all the eigenvalues of the problem_ \[(-\Delta)^{s}\varphi=\lambda\left(\alpha(x)+\beta(x)\right)\varphi\text{ in } \Omega,\qquad\varphi=0\text{ in }\mathbb{R}^{n}\smallsetminus\Omega\] _are simple._ The strategy of the proofs of the above theorems relies on an abstract result which is presented in Section 3. In particular, Theorem 13 provides us with a so-called _splitting condition_, which is crucial to find the perturbation term \(\psi\) (or \(b\), \(\beta\)) for which all eigenvalues are simple as claimed in Theorem 1 (Th. 3 and Th.
4, respectively). Throughout the paper we will give a detailed proof of Theorem 1, from Section 2 to Section 5, while in Section 6 and in Section 7 we will only describe the main steps to get Theorem 3 and Theorem 4.

### Acknowledgments

The authors would like to thank Matteo Cozzi, Nicola Soave and Enrico Valdinoci for some helpful discussions.

## 2. Domain perturbations

In this section we study how a perturbation of the domain affects the multiplicity of eigenvalues. The main point is, given a smooth perturbation of the domain of the form \(I+\psi\), to introduce, by a suitable change of variables, the bilinear form \(\mathcal{B}_{s}^{\psi}\) in (2.1) to which we apply the splitting condition of Theorem 13. The problem of the splitting of the eigenvalues with respect to domain perturbation was studied for the standard Laplacian in [4, 7, 8, 9], from which we derive this strategy and which we refer to for a bibliography on the subject. For a function \(\psi\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\), we define \[\Omega_{\psi}:=(I+\psi)\Omega.\] If \(\|\psi\|_{C^{1}}\leq L\) for some \(L<1\), then \((I+\psi)\) is invertible on \(\Omega_{\psi}\) with inverse mapping \((I+\psi)^{-1}=I+\chi\). In the following we always consider \(\psi\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\) with \(\|\psi\|_{C^{1}}\leq L\). Also, we denote by \(J_{I+\psi}\) the Jacobian determinant of the mapping \(I+\psi\). Whenever no ambiguity is possible, we also use the short notation \(J_{\psi}:=J_{I+\psi}\). _Remark 5_.: It is well known that, if \(\psi\) is sufficiently regular, the following expansion holds for \(\varepsilon\) small \[J_{I+\varepsilon\psi}=1+\varepsilon\mathrm{div}\psi+\varepsilon^{2}a_{2}+ \cdots+\varepsilon^{n}a_{n}\] for suitable \(a_{i}\).
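For instance, in dimension \(n=2\) the expansion of Remark 5 can be written out in full by a direct computation of the determinant:

```latex
J_{I+\varepsilon\psi}
  = \det\begin{pmatrix}
      1+\varepsilon\,\partial_{1}\psi_{1} & \varepsilon\,\partial_{2}\psi_{1}\\
      \varepsilon\,\partial_{1}\psi_{2} & 1+\varepsilon\,\partial_{2}\psi_{2}
    \end{pmatrix}
  = 1+\varepsilon\,\mathrm{div}\,\psi+\varepsilon^{2}\det D\psi\,,
```

so that \(a_{2}=\det D\psi\); in general, \(a_{k}\) is the \(k\)-th elementary symmetric function of the eigenvalues of the Jacobian matrix \(D\psi\).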
By the change of variables given by the mapping \((I+\psi)\), and denoting \(\tilde{u}(\xi):=u(\xi+\psi(\xi))\), we obtain the bilinear form \(\mathcal{B}^{\psi}_{s}\) on \(\mathcal{H}^{s}_{0}(\Omega)\) \[\mathcal{E}^{\Omega_{\psi}}_{s}(u,v)=\frac{1}{2}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}dxdy\\ =\frac{1}{2}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(\tilde{u}(\xi)-\tilde{u}(\eta))(\tilde{v}(\xi)-\tilde{v}(\eta))}{|\xi-\eta+\psi(\xi)-\psi(\eta)|^{n+2s}}J_{\psi}(\xi)J_{\psi}(\eta)d\xi d\eta\\ =:\mathcal{B}^{\psi}_{s}(\tilde{u},\tilde{v}), \tag{2.1}\] for \(\tilde{u},\tilde{v}\in\mathcal{H}^{s}_{0}(\Omega)\). Notice that \(\mathcal{B}^{0}_{s}(\tilde{u},\tilde{v})=\mathcal{E}^{\Omega}_{s}(\tilde{u},\tilde{v})\). At this point, one can prove by direct computation the following result. **Lemma 6**.: _Let \(\psi\in C^{1}\), and take \(\tilde{u}\in\mathcal{H}^{s}_{0}(\Omega)\). Then_ \[\mathcal{B}^{\psi}_{s}(\tilde{u},\tilde{u})=\mathcal{E}^{\Omega_{\psi}}_{s}(u,u)\leq C_{1}\left[\mathcal{E}^{\Omega}_{s}(\tilde{u},\tilde{u})+\|\tilde{u}\|^{2}_{L^{2}(\Omega)}\right]\leq C_{2}\mathcal{E}^{\Omega}_{s}(\tilde{u},\tilde{u})\] _for some positive constants \(C_{1},C_{2}\)._ _Remark 7_.: Let us define the map \[\gamma_{\psi} :\mathcal{H}^{s}_{0}(\Omega_{\psi})\rightarrow\mathcal{H}^{s}_{0}(\Omega);\] \[\gamma_{\psi}(u) :=\tilde{u}(\xi)=u(\xi+\psi(\xi)).\] By the previous lemma we have that, if \(\|\psi\|_{C^{1}}\) is sufficiently small, the following maps are continuous isomorphisms \[\gamma_{\psi} :\mathcal{H}^{s}_{0}(\Omega_{\psi})\rightarrow\mathcal{H}^{s}_{0}(\Omega)\] \[\gamma_{\psi}^{-1} =\gamma_{\chi} :\mathcal{H}^{s}_{0}(\Omega)\rightarrow\mathcal{H}^{s}_{0}(\Omega_{\psi}).\] In addition, \(\mathcal{B}^{\psi}_{s}(\tilde{u},\tilde{v})\) is a scalar product on \(\mathcal{H}^{s}_{0}(\Omega)\), and the norm induced by \(\mathcal{B}^{\psi}_{s}(\cdot,\cdot)\) is equivalent to the one induced by
\(\mathcal{E}^{\Omega}_{s}(\cdot,\cdot)\). It is well known that the embedding \(i:\mathcal{H}^{s}_{0}(\Omega)\to L^{2}(\Omega)\) is compact, so we consider the adjoint operator, with respect to \(\mathcal{E}^{\Omega}_{s}\), \[i^{*}:L^{2}(\Omega)\rightarrow\mathcal{H}^{s}_{0}(\Omega).\] The composition \(E_{\Omega}:=(i^{*}\circ i)_{\Omega}:\mathcal{H}^{s}_{0}(\Omega)\rightarrow \mathcal{H}^{s}_{0}(\Omega)\) is selfadjoint, compact, injective with dense image in \(\mathcal{H}^{s}_{0}(\Omega)\) and it holds \[\mathcal{E}^{\Omega}_{s}\left((i^{*}\circ i)_{\Omega}v,u\right)=\int_{\Omega} uv. \tag{2.2}\] _Remark 8_.: If \(\varphi_{k}\in\mathcal{H}^{s}_{0}(\Omega)\) is an eigenfunction of the fractional Laplacian with eigenvalue \(\lambda_{k}\), then \(\varphi_{k}\) is an eigenfunction of \((i^{*}\circ i)_{\Omega}\) with eigenvalue \(\mu^{\Omega}_{k}:=1/\lambda_{k}\). In fact, it holds \[\mathcal{E}^{\Omega}_{s}(\varphi_{k},v)=\lambda_{k}\int_{\mathbb{R}^{n}} \varphi_{k}vdx=\int_{\mathbb{R}^{n}}\lambda_{k}\varphi_{k}vdx=\mathcal{E}^{ \Omega}_{s}\left(\lambda_{k}(i^{*}\circ i)_{\Omega}\varphi_{k},v\right),\] thus \(\lambda_{k}(i^{*}\circ i)_{\Omega}\varphi_{k}=\varphi_{k}\). We recall two min-max characterizations of eigenvalues \(\mu^{\Omega}_{k}\). 
We have that \[\mu^{\Omega}_{1}:=\sup_{u\in\mathcal{H}^{s}_{0}(\Omega)\smallsetminus\{0\}}\frac{\int_{\Omega}u^{2}dx}{\mathcal{E}^{\Omega}_{s}(u,u)};\qquad\mu^{\Omega}_{\nu}:=\sup_{\begin{array}{c}u\in\mathcal{H}^{s}_{0}(\Omega)\smallsetminus\{0\}\\ \mathcal{E}^{\Omega}_{s}(u,e_{t})=0\\ t=1,\ldots,\nu-1\end{array}}\frac{\int_{\Omega}u^{2}dx}{\mathcal{E}^{\Omega}_{s}(u,u)};\] where \((i^{*}\circ i)_{\Omega}e_{t}=\mu^{\Omega}_{t}e_{t}\); equivalently, \[\mu^{\Omega}_{\nu}:=\inf_{V=\{v_{1},\ldots,v_{\nu-1}\}}\ \sup_{\begin{array}{c}u\in\mathcal{H}^{s}_{0}(\Omega)\smallsetminus\{0\}\\ \mathcal{E}^{\Omega}_{s}(u,v_{t})=0\\ t=1,\ldots,\nu-1\end{array}}\frac{\int_{\Omega}u^{2}dx}{\mathcal{E}^{\Omega}_{s}(u,u)}.\] By this characterization, and by (2.1), it is easy to prove the following result. **Lemma 9**.: _Every eigenvalue \(\mu_{k}\) of the operator \(E_{\psi}:=E_{\Omega_{\psi}}\) is continuous at \(\psi=0\) with respect to \(\psi\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\)._ Finally, since in Remark 8 we proved that if \(\varphi_{k}\) is an eigenfunction of \((-\Delta)^{s}\) with Dirichlet boundary conditions on \(\Omega_{\psi}\) with eigenvalue \(\lambda_{k}\), then \(\varphi_{k}\) is an eigenfunction of \(E_{\psi}\) with eigenvalue \(\mu_{k}:=1/\lambda_{k}\), to obtain the main result of this paper we study the multiplicity of the eigenvalues \(\mu_{k}\) of the operator \(E_{\psi}\). For this purpose, in the next section we collect an abstract result which we will apply to the operator \(E_{\psi}\).

## 3. An abstract result

We recall a series of abstract results which hold in general in a Hilbert space \(X\) endowed with a scalar product \(<\cdot,\cdot>_{X}\). Later in the paper, we will apply these abstract results to derive a splitting condition for multiple eigenvalues. The proofs of these results are contained in [9, Section 2]. However, to make this paper self-contained, we recall them in the appendix.
Let \[F_{ij}:=\{A\in L(X,X)\ :\ \text{codim}\ \text{Im}A=i\ \text{and}\ \text{dim}\ \ker A=j\}\] be the set of Fredholm operators with indices \(i\) and \(j\) in the Banach space \(L(X,X):=\{A:X\to X\ :\ A\ \text{linear and continuous}\}\). We show first that \(F_{ij}\) is a smooth submanifold of codimension \(ij\) in \(L(X,X)\). It is well known that if \(A\in F_{ij}\), there exist closed subspaces \(V,W\subset X\) such that \[X=\ker A\oplus V\ \text{and}\ X=W\oplus\text{Im}A.\] Let us call \(P,Q,\bar{P}\) and \(\bar{Q}\) the projections onto \(\ker A\), \(V\), \(W\), \(\text{Im}A\), respectively. The following holds. **Lemma 10**.: _We have_ \[L(X,X)=L\oplus\mathcal{V},\] _where_ \[\mathcal{V} :=\left\{T\in L(X,X)\ :\ T(\ker A)\subset\mathrm{Im}A\right\}\] \[L :=\left\{\bar{P}HP\in L(X,X)\text{ with }H\in L(X,X)\right\}.\] Proof.: The claim can be shown immediately by noticing that \(T=\bar{P}TP+\bar{Q}TQ+\bar{P}TQ+\bar{Q}TP\) and that \(\bar{Q}TQ+\bar{P}TQ+\bar{Q}TP\in\mathcal{V}\). **Lemma 11**.: _We have that \(F_{ij}\) is an analytic submanifold of \(L(X,X)\). In addition, for any \(A\in F_{ij}\), the tangent space at \(A\) to \(F_{ij}\) is \(T_{A}F_{ij}=\mathcal{V}\)._ The proof of this result is postponed to the appendix. Here we limit ourselves to giving the main idea. Given \(A_{0}\in F_{ij}\), and given \(H\) such that \(A_{0}+H\) still belongs to \(F_{ij}\), it is possible to write \(H=\bar{P}HP+f(V)\) where \(V\in\mathcal{V}\) and \(f\) is an analytic function. Then \(F_{ij}\) near \(A_{0}\) is a smooth graph on \(\mathcal{V}\). **Lemma 12**.: _Let \(A\in F_{ij}\) such that \(\ker A\not\subset\mathrm{Im}A\). Then_ \[M=\left\{A+H+\lambda I\in L(X,X)\ :\ \lambda\in\mathbb{R},A+H\in F_{ij}\text{ and }H\text{ sufficiently
small}\right\}\] _is an analytic manifold at \(A+\lambda I\), and \(T_{A+\lambda I}M=\mathcal{V}\oplus\mathrm{Span}<I>\), where \(T_{A+\lambda I}M\) is the tangent space at \(A+\lambda I\) to \(M\)._ Proof.: By definition of \(\mathcal{V}\), we have that \(I\in\mathcal{V}\) if and only if \(\ker A\subset\mathrm{Im}A\), which is not possible by our hypothesis on \(A\). Thus, by Lemma 11 we have that \(M\) is a ruled manifold and the thesis follows immediately. We can recast the previous result by considering \(T:X\to X\) a selfadjoint compact operator with an eigenvalue \(\bar{\lambda}\) of multiplicity \(\nu\). By the Riesz theory we have that \(T-\bar{\lambda}I\in F_{\nu\nu}\) and that \(\ker(T-\bar{\lambda}I)\cap\mathrm{Im}(T-\bar{\lambda}I)=\{0\}\). Moreover, by Lemma 12, if \(U\) is a suitable neighborhood of \(T-\bar{\lambda}I\) we have that \[\tilde{M}=\left\{\tilde{T}+\lambda I\in L(X,X)\ :\ \lambda\in\mathbb{R}\text{ and }\tilde{T}\in F_{\nu\nu}\cap U\right\}\] is a smooth manifold and \(T_{T-\bar{\lambda}I}\tilde{M}=\tilde{\mathcal{V}}\oplus\mathrm{Span}<I>\) where \[\tilde{\mathcal{V}}=\left\{H\in L(X,X)\ :\ H(\ker(T-\bar{\lambda}I))\subset\mathrm{Im}(T-\bar{\lambda}I)\right\}. \tag{3.1}\] At this point we are in a position to state the main result of this section. **Theorem 13**.: _Let \(T_{b}:X\to X\) be a selfadjoint compact operator which depends smoothly on a parameter \(b\) belonging to a real Banach space \(B\). Let \(T_{0}=T\) and let \(T_{b}\) be Fréchet differentiable at \(b=0\). Let \(x_{1}^{0},\ldots,x_{\nu}^{0}\) be an orthonormal basis for the eigenspace relative to the eigenvalue \(\bar{\lambda}\) of \(T\). If \(T_{b}\in\tilde{M}\) for all \(b\) with \(\|b\|_{B}\) small, then for all \(b\) there exists a \(\rho=\rho(b)\in\mathbb{R}\) such that_ \[\left\langle T^{\prime}(0)[b]x_{j}^{0},x_{i}^{0}\right\rangle_{X}=\rho\delta_{ij}\text{ for }i,j=1,\ldots,\nu.
\tag{3.2}\] Proof.: By Lemma 12 we have that, if \(T_{b}\in\tilde{M}\) for all \(b\), then \[T^{\prime}(0)[b]\in\tilde{\mathcal{V}}\oplus\mathrm{Span}<I>.\] So, by (3.1), for all \(b\), there exists \(\bar{\lambda}(b)\in\mathbb{R}\), such that \[\left[T^{\prime}(0)[b]-\bar{\lambda}(b)I\right](\ker(T-\bar{\lambda}I))\subset \mathrm{Im}(T-\bar{\lambda}I),\] that is, \[\left\langle\left[T^{\prime}(0)[b]-\bar{\lambda}(b)I\right]x_{j}^{0},x_{i}^{0} \right\rangle_{X}=0\] for all \(i,j=1,\ldots,\nu\), which implies (3.2). This theorem says that condition (3.2) is necessary in order for the eigenvalue \(\bar{\lambda}(b)\) to keep multiplicity \(\nu\) in a neighborhood of \(b=0\).

## 4. Splitting of a single eigenvalue

We recall that \(E_{\psi}=(i^{*}\circ i)_{\Omega_{\psi}}\). Also, by (2.2), and by the definition of \(\tilde{u}\), we have \[\mathcal{E}_{s}^{\Omega_{\psi}}(E_{\psi}u,v)=<u,v>_{L^{2}(\Omega_{\psi})}=\int_{\Omega}\tilde{u}\tilde{v}J_{\psi}.\] By the definition of \(\mathcal{B}_{s}^{\psi}\), we can rewrite the previous formula as \[\mathcal{B}_{s}^{\psi}(\gamma_{\psi}E_{\psi}u,\tilde{v})=\mathcal{E}_{s}^{\Omega_{\psi}}(E_{\psi}u,v)=\int_{\Omega}\tilde{u}\tilde{v}J_{\psi}.\] Setting \[T_{\psi}\tilde{u}:=\gamma_{\psi}E_{\psi}\gamma_{\psi}^{-1}\tilde{u}, \tag{4.1}\] we get that \(T_{\psi}:\mathcal{H}_{0}^{s}(\Omega)\to\mathcal{H}_{0}^{s}(\Omega)\) is a compact selfadjoint operator such that \[\mathcal{B}_{s}^{\psi}(T_{\psi}\tilde{u},\tilde{v})=\int_{\Omega}\tilde{u}\tilde{v}J_{\psi}\] for all \(\psi\). _Remark 14_.: One can prove that \(T_{\psi}\) and \(\mathcal{B}_{s}^{\psi}\) are differentiable in the \(\psi\) variable at \(0\). Then it holds \[\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](T_{0}\tilde{u},\tilde{v})+\mathcal{B}_{s}^{0}(T_{\psi}^{\prime}(0)[\psi]\tilde{u},\tilde{v})=\int_{\Omega}\tilde{u}\tilde{v}\mathrm{div}\psi.
\tag{4.2}\] **Lemma 15**.: _Let \(\tilde{u},\tilde{v}\in\mathcal{H}_{0}^{s}(\Omega)\) such that \((-\Delta)^{s}\tilde{u},(-\Delta)^{s}\tilde{v}\in C^{\alpha}_{\text{loc}}(\Omega)\cap L^{\infty}(\Omega)\) with \(\alpha>(1-2s)_{+}\). Then_ \[\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](\tilde{u},\tilde{v})=-\Gamma^{2}(1+s)\int_{\partial\Omega}\frac{\tilde{u}}{\delta^{s}}\frac{\tilde{v}}{\delta^{s}}\psi\cdot Nd\sigma-\int_{\Omega}[\nabla\tilde{u}\cdot\psi(-\Delta)^{s}\tilde{v}+\nabla\tilde{v}\cdot\psi(-\Delta)^{s}\tilde{u}]dx \tag{4.3}\] _where \(\delta(x)=\mathrm{dist}(x,\mathbb{R}^{n}\smallsetminus\Omega)\) and \(N\) is the exterior normal of \(\Omega\)._ Proof.: If \(\|\psi\|_{C^{1}}\) is small, by direct computation we have that \[\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](\tilde{u},\tilde{v})=\\ \frac{1}{2}\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\frac{(\tilde{u}(\eta)-\tilde{u}(\xi))(\tilde{v}(\eta)-\tilde{v}(\xi))}{|\xi-\eta|^{n+2s}}\left\{\mathrm{div}\psi(\xi)+\mathrm{div}\psi(\eta)-\frac{(n+2s)(\xi-\eta)\cdot(\psi(\xi)-\psi(\eta))}{|\xi-\eta|^{2}}\right\}d\xi d\eta\\ =\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}\left(\tilde{u}(\eta)-\tilde{u}(\xi)\right)(\tilde{v}(\eta)-\tilde{v}(\xi))K(\xi,\eta)d\xi d\eta, \tag{4.4}\] where \[K(\xi,\eta):=\frac{1}{2}\left\{\mathrm{div}\psi(\xi)+\mathrm{div}\psi(\eta)-\frac{(n+2s)(\xi-\eta)\cdot(\psi(\xi)-\psi(\eta))}{|\xi-\eta|^{2}}\right\}\frac{1}{|\xi-\eta|^{n+2s}}.\] At this point we use the result of Theorem 1.3 of [2], which allows us to compute integrals of the form (4.4), and we obtain the conclusion. We want to apply the previous result to eigenfunctions of \((-\Delta)^{s}\) on \(\Omega\) with Dirichlet boundary conditions. We recall that, by Remark 8, this is equivalent to considering eigenfunctions of the operator \(T_{0}\). **Corollary 16**.: _Let \(u,v\in\mathcal{H}_{0}^{s}(\Omega)\) satisfy \(T_{0}u=\frac{1}{\lambda_{0}}u\), and \(T_{0}v=\frac{1}{\lambda_{0}}v\).
Then we have_ \[\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](T_{0}u,v)=-\frac{\Gamma^{2}(1+s)}{\lambda_{0}}\int_{\partial\Omega}\frac{u}{\delta^{s}}\frac{v}{\delta^{s}}\,\psi\cdot N\,d\sigma+\int_{\Omega}uv\mathrm{div}(\psi)dx.\] Proof.: By elliptic regularity the eigenfunctions belong to \(C^{\alpha}_{\mathrm{loc}}(\Omega)\cap L^{\infty}(\Omega)\) with \(\alpha>(1-2s)_{+}\). Then, by Lemma 15 we have \[\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](T_{0}u,v)=-\frac{\Gamma^{2}(1+s)}{\lambda_{0}}\int_{\partial\Omega}\frac{u}{\delta^{s}}\frac{v}{\delta^{s}}\,\psi\cdot N\,d\sigma-\frac{1}{\lambda_{0}}\int_{\Omega}\nabla u\cdot\psi(-\Delta)^{s}v\,dx-\frac{1}{\lambda_{0}}\int_{\Omega}\nabla v\cdot\psi(-\Delta)^{s}u\,dx.\] Combining this with Remark 8 and integration by parts, we obtain \[\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](T_{0}u,v)=-\frac{\Gamma^{2}(1+s)}{\lambda_{0}}\int_{\partial\Omega}\frac{u}{\delta^{s}}\frac{v}{\delta^{s}}\,\psi\cdot N\,d\sigma-\int_{\Omega}\nabla u\cdot\psi v\,dx-\int_{\Omega}\nabla v\cdot\psi u\,dx\\ =-\frac{\Gamma^{2}(1+s)}{\lambda_{0}}\int_{\partial\Omega}\frac{u}{\delta^{s}}\frac{v}{\delta^{s}}\,\psi\cdot N\,d\sigma+\int_{\Omega}uv\mathrm{div}(\psi)dx,\] as desired. Now we apply Theorem 13 to the operator \(T_{\psi}\) defined in (4.1). This is the fundamental block in the proof of Theorem 1. Let \(\mu_{0}\) be an eigenvalue of \(T_{0}=E_{\Omega}=(i^{*}\circ i)_{\Omega}\) which has multiplicity \(\nu>1\). If, for all \(\psi\) with \(\|\psi\|_{C^{1}}\) small, the operator \(T_{\psi}\) has an eigenvalue \(\mu(\psi)\) of multiplicity \(\nu\) such that \(\mu(\psi)\to\mu_{0}\) as \(\psi\to 0\), then Theorem 13 yields \[\mathcal{B}_{s}^{0}(T_{\psi}^{\prime}(0)[\psi]\varphi_{i},\varphi_{j})=\rho\,\delta_{ij}\] for some \(\rho=\rho(\psi)\in\mathbb{R}\). Here \(\left\{\varphi_{i}\right\}_{i=1,\ldots,\nu}\) is an orthonormal basis for the eigenspace associated with \(\mu(0)\).
This, in light of (4.2) and Corollary 16, can be recast as \[\rho\delta_{ij} =-\left(\mathcal{B}_{s}^{\psi}\right)^{\prime}(0)[\psi](T_{0}\varphi_{i},\varphi_{j})+\int_{\Omega}\varphi_{i}\varphi_{j}\mathrm{div}\psi dx\\ =\Gamma^{2}(1+s)\mu_{0}\int_{\partial\Omega}\frac{\varphi_{i}}{\delta^{s}}\frac{\varphi_{j}}{\delta^{s}}\,\psi\cdot N\,d\sigma. \tag{4.5}\] So, for all \(\psi\) with \(\|\psi\|_{C^{1}}\) small, \[\int_{\partial\Omega}\frac{\varphi_{i}}{\delta^{s}}\frac{\varphi_{j}}{\delta^{s}}\,\psi\cdot N\,d\sigma=0\text{ for }i\neq j;\qquad\int_{\partial\Omega}\left(\frac{\varphi_{1}}{\delta^{s}}\right)^{2}\,\psi\cdot N\,d\sigma=\cdots=\int_{\partial\Omega}\left(\frac{\varphi_{\nu}}{\delta^{s}}\right)^{2}\,\psi\cdot N\,d\sigma.\] Since \(\psi\cdot N\) is arbitrary, for \(i\neq j\) the products \(\frac{\varphi_{i}}{\delta^{s}}\frac{\varphi_{j}}{\delta^{s}}\) vanish on \(\partial\Omega\), while all the squares \(\left(\frac{\varphi_{i}}{\delta^{s}}\right)^{2}\) coincide with a common function \(g\); then \(g^{2}=\left(\frac{\varphi_{1}}{\delta^{s}}\right)^{2}\left(\frac{\varphi_{2}}{\delta^{s}}\right)^{2}=\left(\frac{\varphi_{1}}{\delta^{s}}\frac{\varphi_{2}}{\delta^{s}}\right)^{2}=0\). This implies that \((\frac{\varphi_{i}}{\delta^{s}})^{2}\equiv 0\) on \(\partial\Omega\) for \(i=1,\ldots,\nu\). On the other hand, by the fractional Pohozaev identity (see [10] and [2, formula (1.6)]), \[\Gamma^{2}(1+s)\int_{\partial\Omega}\left(\frac{\varphi_{i}}{\delta^{s}}\right)^{2}x\cdot N\,d\sigma=\frac{2s}{\mu_{0}}\int_{\Omega}\varphi_{i}^{2}dx=\frac{2s}{\mu_{0}}\neq 0.\] This leads to a contradiction, and thus the eigenvalue \(\mu(\psi)\) of \(T_{\psi}\) cannot have multiplicity \(\nu\) for all \(\psi\) with \(\|\psi\|_{C^{1}}\) small. This fact can be summarized in the next proposition, which is the main tool to prove Theorem 1. **Proposition 17**.: _Let \(\bar{\lambda}\) be an eigenvalue of the operator \((-\Delta)^{s}_{\Omega}\) with Dirichlet boundary condition which has multiplicity \(\nu>1\).
Let \(U\) be an open bounded interval such that_ \[\bar{U}\cap\sigma\left((-\Delta)^{s}_{\Omega}\right)=\left\{\bar{\lambda}\right\},\] _where \(\sigma\left((-\Delta)^{s}_{\Omega}\right)\) is the spectrum of \((-\Delta)^{s}_{\Omega}\)._ _Then, there exists \(\psi\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\) such that for \(\Omega_{\psi}=(I+\psi)\Omega\) it holds_ \[\bar{U}\cap\sigma\left((-\Delta)^{s}_{\Omega_{\psi}}\right)=\left\{\lambda^{\Omega_{\psi}}_{1},\ldots,\lambda^{\Omega_{\psi}}_{k}\right\},\] _where \(\lambda^{\Omega_{\psi}}_{i}\) is an eigenvalue of the operator \((-\Delta)^{s}_{\Omega_{\psi}}\) associated to the set \(\Omega_{\psi}\) with Dirichlet boundary condition. Here \(k>1\) and the multiplicity of \(\lambda^{\Omega_{\psi}}_{i}\) is \(\nu_{i}\) with \(\sum_{i=1}^{k}\nu_{i}=\nu\)._ We recall that if \(\|\psi\|_{C^{1}}\) is small, the multiplicity of an eigenvalue \(\lambda^{\Omega_{\psi}}\) near \(\bar{\lambda}\) can only be equal to or smaller than the multiplicity of \(\bar{\lambda}\). Here, in Proposition 17, we proved the existence of perturbations for which the multiplicity is strictly smaller. The next corollary follows from Proposition 17, composing a finite number of perturbations. **Corollary 18**.: _There exists \(\psi\in C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\) such that for \(\Omega_{\psi}=(I+\psi)\Omega\) it holds_ \[\bar{U}\cap\sigma\left((-\Delta)^{s}_{\Omega_{\psi}}\right)=\left\{\lambda^{\Omega_{\psi}}_{1},\ldots,\lambda^{\Omega_{\psi}}_{\nu}\right\},\] _where \(\lambda^{\Omega_{\psi}}_{i}\) is a simple eigenvalue of the operator \((-\Delta)^{s}_{\Omega_{\psi}}\) associated to the set \(\Omega_{\psi}\) with Dirichlet boundary condition._ At this point we are in a position to prove the main result of this paper.

## 5. Proof of Theorem 1

We start by proving the following splitting property for a finite number of multiple eigenvalues.
**Lemma 19**.: _Given a sequence \(\{\sigma_{l}\}\) of positive real numbers there exist_ * _a sequence of bijective maps_ \(\{F_{l}\}\subset C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\)_,_ \(F_{l}=(I+\psi_{l})\) _with_ \(\|\psi_{l}\|_{C^{1}}\leq\sigma_{l}\)__ * _a sequence of open bounded_ \(C^{1}\) _sets with_ \(\Omega_{0}=\Omega\) _and_ \(\Omega_{l}=F_{l}(\Omega_{l-1})\)__ * _a sequence of increasing integer numbers_ \(\{q_{l}\}\) _with_ \(q_{l}\nearrow+\infty\)__ * _a sequence of open bounded intervals_ \(\{U_{t}\}_{t=1,\ldots,q_{l}}\) _with_ \(\bar{U}_{i}\cap\bar{U}_{j}=\emptyset\) _for_ \(i\neq j\)__ _such that the eigenvalues \(\lambda^{\Omega_{l}}_{i}\) of the operator \((-\Delta)^{s}_{\Omega_{l}}\) are simple for \(i=1,\ldots,q_{l}\) and \(\lambda^{\Omega_{l}}_{i}\in U_{i}\) for all \(i=1,\ldots,q_{l}\)._ Proof.: Take \(q\in\mathbb{N}\) such that \(\lambda_{1},\ldots,\lambda_{q}\) are simple eigenvalues for \((-\Delta)^{s}_{\Omega}\) and that \(\lambda_{q+1}\) is the first eigenvalue with multiplicity \(\nu_{q+1}\). For \(t=1,\ldots,q\) let \(U_{t}\) be open intervals such that \(\bar{U}_{i}\cap\bar{U}_{j}=\emptyset\) for \(i\neq j\) and \(\lambda_{t}\in U_{t}\). Let us take an open interval \(W\) such that \(\bar{W}\cap\bar{U}_{t}=\emptyset\) for all \(t=1,\ldots,q\) and \(\bar{W}\cap\sigma((-\Delta)^{s}_{\Omega})=\{\lambda_{q+1}\}\). At this point, by Corollary 18 we can choose \(\bar{\psi}\) such that \(\bar{W}\cap\sigma((-\Delta)^{s}_{\Omega_{\bar{\psi}}})\) contains exactly \(\nu_{q+1}\) simple eigenvalues. Also, we can choose a number \(\sigma_{q+1}\) sufficiently small, with \(\|\bar{\psi}\|_{C^{1}}\leq\sigma_{q+1}\), so that \(\lambda^{\bar{\psi}}_{t}\in U_{t}\) for all \(t=1,\ldots,q\), since the eigenvalues depend continuously on \(\psi\). At this point, by iterating this procedure a finite number of times we get the proof.
At this point we are in a position to prove the first result of our paper. Proof of Theorem 1.: Let us take a sequence \(\{\sigma_{l}\}\) with \(0<\sigma_{l}<\frac{1}{4^{l}}\), and a sequence \(F_{l}=(I+\psi_{l})\) associated to \(\sigma_{l}\) as in the previous lemma. We set \[\mathcal{F}_{l}=F_{l}\circ F_{l-1}\circ\cdots\circ F_{1}.\] We can prove that, by the choice of \(\sigma_{l}\), the sequence \(\{\mathcal{F}_{l}-I\}_{l}\) converges to some function \(\bar{\psi}\) in \(C^{1}(\mathbb{R}^{n},\mathbb{R}^{n})\). In fact, by the previous lemma we have \[\|\mathcal{F}_{i+1}-\mathcal{F}_{i}\|_{\infty} \leq \|\psi_{i+1}\|_{C^{1}}<\left(\frac{1}{4}\right)^{i+1} \tag{5.1}\] \[\|\mathcal{F}_{i+1}^{\prime}-\mathcal{F}_{i}^{\prime}\|_{\infty} \leq \|\psi_{i+1}\|_{C^{1}}\|\mathcal{F}_{i}^{\prime}\|_{\infty}\leq \left(\frac{1}{4}\right)^{i+1}\|\mathcal{F}_{i}^{\prime}\|_{\infty}. \tag{5.2}\] By induction, using (5.2), we can prove that \[\|\mathcal{F}_{i}^{\prime}\|_{\infty}\leq\left(1+\frac{1}{4}\right)^{i}\leq\left(\frac{5}{4}\right)^{i} \tag{5.3}\] and, combining all these equations, that \[\|\mathcal{F}_{i+1}-\mathcal{F}_{i}\|_{C^{1}} \leq \left(\frac{1}{4}\right)^{i+1}\left(\frac{5}{4}\right)^{i} \tag{5.4}\] and, by iterating, that, for all \(p\in\mathbb{N}\), \[\|\mathcal{F}_{i+p}-\mathcal{F}_{i}\|_{C^{1}} \leq \sum_{t=0}^{p}\left(\frac{1}{4}\right)^{i+t+1}\left(\frac{5}{4}\right)^{i+t}\leq \frac{1}{4}\left(\frac{5}{16}\right)^{i}\sum_{t=0}^{p}\left(\frac{5}{16}\right)^{t}\to 0\text{ as }i\rightarrow\infty. \tag{5.5}\] Thus the sequence \(\{\mathcal{F}_{i}-I\}\) converges in \(C^{1}\) to some \(\bar{\psi}=\bar{\mathcal{F}}-I\) and, by (5.5), \(\|\bar{\psi}\|_{C^{1}}\leq 1/2\), so \(\bar{\mathcal{F}}\) is invertible. We claim that all the eigenvalues of \((-\Delta)^{s}_{\Omega_{\bar{\psi}}}\) are simple.
By contradiction, suppose that there exists a \(\bar{q}\) such that \(\lambda^{\bar{\psi}}_{\bar{q}}\) is the first multiple eigenvalue. Let us call \(\Omega_{l}=\mathcal{F}_{l}(\Omega)\) and \(\{\lambda^{\Omega_{l}}_{i}\}_{i}\) the eigenvalues of \((-\Delta)^{s}_{\Omega_{l}}\) on \(\Omega_{l}\) with Dirichlet boundary conditions. By Lemma 19 we have that there exists an \(l\in\mathbb{N}\) such that \((-\Delta)^{s}_{\Omega_{l}}\) has the first \(\bar{q}+1\) eigenvalues simple, and that there exist \(U_{1},\ldots,U_{\bar{q}+1}\) open intervals, with disjoint closure, such that \(\lambda^{\Omega_{l}}_{t}\in U_{t}\) for \(t=1,\ldots,\bar{q}+1\). On the one hand, \(\lambda^{\Omega_{N}}_{\bar{q}}\rightarrow\lambda^{\bar{\psi}}_{\bar{q}}\) as well as \(\lambda^{\Omega_{N}}_{\bar{q}+1}\rightarrow\lambda^{\bar{\psi}}_{\bar{q}}\) when \(N\rightarrow\infty\) by continuity of the eigenvalues. On the other hand, \(\lambda^{\Omega_{N}}_{\bar{q}}\in U_{\bar{q}}\) and \(\lambda^{\Omega_{N}}_{\bar{q}+1}\in U_{\bar{q}+1}\) for all \(N\), by Lemma 19. So \(\lambda^{\bar{\psi}}_{\bar{q}}=\lambda^{\bar{\psi}}_{\bar{q}+1}\in\bar{U}_{ \bar{q}}\cap\bar{U}_{\bar{q}+1}\), which leads us to a contradiction, and the theorem is proved. ## 6. Proof of Theorem 3 In this case we call \[\mathcal{B}^{a}(u,v)=\mathcal{E}(u,v)+\int_{\mathbb{R}^{n}}auv\,dx\] and, by the hypothesis on \(a\), we can endow \(\mathcal{H}^{s}_{0}(\Omega)\) with the norm \[\|u\|^{2}_{\mathcal{H}^{s}_{0}(\Omega)}=\mathcal{B}^{a}(u,u)=\mathcal{E}(u,u)+ \int_{\mathbb{R}^{n}}au^{2}dx.\] We call \(\varphi^{a}\in\mathcal{H}^{s}_{0}(\Omega)\) an eigenfunction of \(((-\Delta)^{s}+a)\) corresponding to the eigenvalue \(\lambda^{a}\). 
Given the embedding \(i:\mathcal{H}^{s}_{0}(\Omega)\to L^{2}(\Omega)\) we consider its adjoint operator, with respect to the scalar product \(\mathcal{B}^{a}\), \[i^{*}:L^{2}(\Omega)\to\mathcal{H}^{s}_{0}(\Omega).\] It holds \[\mathcal{B}^{a}\left((i^{*}\circ i)_{a}u,v\right)=\mathcal{E}\left((i^{*}\circ i )_{a}u,v\right)+\int_{\Omega}a\left[(i^{*}\circ i)_{a}u\right]v=\int_{\Omega}uv, \tag{6.1}\] and, as before, if \(\varphi^{a}_{k}\in\mathcal{H}^{s}_{0}(\Omega)\) is an eigenfunction of the fractional Laplacian with eigenvalue \(\lambda^{a}_{k}\), then \(\varphi^{a}_{k}\) is an eigenfunction of \((i^{*}\circ i)_{a}\) with eigenvalue \(\mu^{a}_{k}:=1/\lambda^{a}_{k}\). In addition, (1.2) admits an ordered sequence of eigenvalues \[0<\lambda^{a}_{1}<\lambda^{a}_{2}\leq\lambda^{a}_{3}\leq\cdots\leq\lambda^{a}_ {k}\leq\cdots\to+\infty\] and all the eigenvalues \(\lambda^{a}_{k}\) depend continuously on \(a\). In the following, for \(b\in C^{0}(\Omega)\) with \(\|b\|_{L^{\infty}}\) small enough we consider \(\mathcal{B}^{a+b}\) and \((i^{*}\circ i)_{a+b}\) and we put \[B_{b}:=\mathcal{B}^{a+b}\text{ and }E_{b}:=(i^{*}\circ i)_{a+b}. \tag{6.2}\] Similarly to what we proved in Section 4 we have the following lemma. **Lemma 20**.: _The maps \(b\mapsto B_{b}\) and \(b\mapsto E_{b}\) are differentiable at \(0\) and it holds_ \[(B^{\prime}(0)[b]u,v)=\int_{\Omega}buv,\] \[0=\left(B^{\prime}(0)[b]E_{0}u,v\right)+B_{0}\left(E^{\prime}(0)[b]u,v\right). 
\tag{6.3}\] _for all \(u,v\in\mathcal{H}^{s}_{0}(\Omega)\)._ _Remark 21_.: Notice that, by Lemma 20 and by (6.3), it holds \[-B_{0}\left(E^{\prime}(0)[b]u,v\right)=\left(B^{\prime}(0)[b]E_{0}u,v\right)= \int_{\Omega}b(E_{0}u)v=\int_{\Omega}b\left[(i^{*}\circ i)_{a}u\right]v.\] _Remark 22_.: If \(\mu^{a}=\mu\) is an eigenvalue of the map \(E_{0}=(i^{*}\circ i)_{a}\) with multiplicity \(\nu>1\), and \(\varphi^{a}_{1},\ldots,\varphi^{a}_{\nu}\) are orthonormal eigenvectors associated to \(\mu\), then, by the previous remark we have \[\left(B^{\prime}(0)[b]E_{0}\varphi^{a}_{i},\varphi^{a}_{j}\right)=\int_{\Omega }bE_{0}(\varphi^{a}_{i})\varphi^{a}_{j}=\mu\int_{\Omega}b\varphi^{a}_{i} \varphi^{a}_{j},\] for all \(i,j=1,\ldots,\nu\). Now we apply the condition (3.2) to prove the splitting property for a chosen multiple eigenvalue. **Proposition 23**.: _Let \(a\in C^{0}(\mathbb{R}^{n})\) be positive on \(\Omega\) or with \(\|a\|_{C^{0}(\Omega)}\) sufficiently small. Let \(\bar{\lambda}\) be an eigenvalue of the operator \((-\Delta)^{s}_{\Omega}+aI\) on \(\mathcal{H}^{s}_{0}\) with Dirichlet boundary condition with multiplicity \(\nu>1\). Let \(U\) be an open bounded interval such that_ \[\bar{U}\cap\sigma\left((-\Delta)^{s}_{\Omega}+aI\right)=\left\{\bar{\lambda} \right\},\] _where \(\sigma\left((-\Delta)^{s}_{\Omega}+aI\right)\) is the spectrum of \((-\Delta)^{s}_{\Omega}+aI\)._ _Then, there exists \(b\in C^{0}(\mathbb{R}^{n})\) such that_ \[\bar{U}\cap\sigma\left((-\Delta)^{s}_{\Omega}+(a+b)I\right)=\left\{\lambda^{b }_{1},\ldots,\lambda^{b}_{k}\right\},\] _where \(\lambda^{b}_{i}\) is an eigenvalue of the operator \((-\Delta)^{s}_{\Omega}+(a+b)I\). Here \(k>1\) and the multiplicity of \(\lambda^{b}_{i}\) is \(\nu_{i}\) with \(\sum_{i=1}^{k}\nu_{i}=\nu\)._ The next corollary follows from the previous proposition, after composing a finite number of perturbations. 
**Corollary 24**.: _There exists \(b\in C^{0}(\mathbb{R}^{n})\) such that_ \[\bar{U}\cap\sigma\left((-\Delta)^{s}_{\Omega}+(a+b)I\right)=\left\{\lambda^{b}_ {1},\ldots,\lambda^{b}_{\nu}\right\},\] _where \(\lambda^{b}_{i}\) is a simple eigenvalue of the operator \((-\Delta)^{s}_{\Omega}+(a+b)I\) with Dirichlet boundary condition._ Proof of Proposition 23.: We apply Theorem 13 to the operator \(E_{b}=(i^{*}\circ i)_{a+b}\) introduced in (6.2). If \(\mu^{a+b}\) is an eigenvalue of \(E_{b}\) which has multiplicity \(\nu\) at \(b=0\) and at any \(b\) with \(\|b\|_{C^{0}}\) small, then by condition (3.2) of Theorem 13 we have \[B_{0}(E^{\prime}(0)[b]\varphi_{i},\varphi_{j})=\rho\delta_{ij}\text{ for some }\rho\in\mathbb{R},\] where \(\left\{\varphi_{i}\right\}_{i=1,\ldots,\nu}\) is an \(L^{2}\)-orthonormal basis for the eigenspace relative to \(\mu^{a}\). Then, in light of Remark 22, we should have that for any \(b\in C^{0}\) small, there exists \(\rho=\rho(b)\) such that \[\mu^{a}\int_{\Omega}b\varphi_{i}\varphi_{j}=\rho(b)\delta_{ij}.\] Then, in particular, we deduce that \[\int_{\Omega}b\varphi_{1}\varphi_{2}=0\text{ and }\int_{\Omega}b\varphi_{1}^{2} =\int_{\Omega}b\varphi_{2}^{2}\text{ for all }b\in C^{0}.\] Thus \(\varphi_{1}\varphi_{2}\equiv 0\) and \(\varphi_{1}^{2}\equiv\varphi_{2}^{2}\) almost everywhere in \(\Omega\). Thus \(\varphi_{1}\equiv\varphi_{2}\equiv 0\) a.e. in \(\Omega\), which leads us to a contradiction. Then there exists \(b\in C^{0}\) small such that the multiplicity of \(\mu^{a+b}\) is smaller than \(\nu\). Since the eigenvalue \(\mu^{a+b}\) depends continuously on \(b\), given a neighborhood \(U\) of \(\mu^{a}\), for \(\|b\|_{C^{0}}\) small we have that \(\bar{U}\cap\sigma(E_{b})=\left\{\mu_{1}^{a+b},\ldots,\mu_{k}^{a+b}\right\}\) with \(\nu_{i}\) the multiplicity of \(\mu_{i}^{a+b}\), and where \(\sum_{i=1}^{k}\nu_{i}=\nu\), and \(k>1\). Remembering the definition of \(E_{b}\) and that \(\mu^{a+b}=1/\lambda^{a+b}\) we have the claim. 
We proceed similarly to the proof of Theorem 1 to obtain Theorem 3. **Lemma 25**.: _Given \(a\in C^{0}(\mathbb{R}^{n})\) as in the hypothesis of Theorem 3, and a sequence \(\{\sigma_{l}\}\) of positive real numbers there exist_ * _a sequence of functions_ \(\{b_{l}\}\subset C^{0}(\mathbb{R}^{n})\) _with_ \(\|b_{l}\|_{C^{0}}\leq\sigma_{l}\)__ * _a sequence of increasing integer numbers_ \(\{q_{l}\}\) _with_ \(q_{l}\nearrow+\infty\)__ * _a sequence of open bounded intervals_ \(\{U_{t}\}_{t=1,\ldots,q_{l}}\) _with_ \(\bar{U}_{i}\cap\bar{U}_{j}=\emptyset\) _for_ \(i\neq j\)__ _such that the eigenvalues \(\lambda^{a+\sum_{j=1}^{l}b_{j}}_{i}\) of the operator \((-\Delta)^{s}_{\Omega}+(a+\sum_{j=1}^{l}b_{j})I\) are simple for \(i=1,\ldots,q_{l}\) and \(\lambda^{a+\sum_{j=1}^{l}b_{j}}_{i}\in U_{i}\) for all \(i=1,\ldots,q_{l}\)._ Proof.: Take \(q\in\mathbb{N}\) such that \(\lambda^{a}_{1},\ldots,\lambda^{a}_{q}\) are simple eigenvalues for \((-\Delta)^{s}_{\Omega}+aI\) and that \(\lambda^{a}_{q+1}\) is the first eigenvalue with multiplicity \(\nu_{q+1}\). For \(t=1,\ldots,q\) let \(\{U_{t}\}\) be open intervals such that \(\bar{U}_{i}\cap\bar{U}_{j}=\emptyset\) for \(i\neq j\) and \(\lambda^{a}_{t}\in U_{t}\). Let us take \(W\) an open interval such that \(\bar{W}\cap\bar{U}_{t}=\emptyset\) for all \(t=1,\ldots,q\) and \(\bar{W}\cap\sigma((-\Delta)^{s}_{\Omega}+aI)=\left\{\lambda^{a}_{q+1}\right\}\). At this point, by Corollary 24 we can choose \(\bar{b}\) such that \(\bar{W}\cap\sigma((-\Delta)^{s}_{\Omega}+(a+\bar{b})I)\) contains exactly \(\nu_{q+1}\) simple eigenvalues. Also, we can choose a number \(\sigma_{q+1}\) sufficiently small, with \(\|\bar{b}\|_{C^{0}}\leq\sigma_{q+1}\), so that \(\lambda^{a+\bar{b}}_{t}\in U_{t}\) for all \(t=1,\ldots,q\), since the eigenvalues depend continuously on \(b\). At this point, by iterating this procedure a finite number of times we get the proof. At this point we can conclude. 
Proof of Theorem 3.: Let us take a sequence \(\{\sigma_{l}\}\) with \(0<\sigma_{l}<\frac{1}{2^{l}}\), and a sequence \(b_{l}\) associated to \(\sigma_{l}\) as in the previous lemma. By the choice of \(\sigma_{l}\), we have that \(\sum_{l}b_{l}\) converges to some function \(b\) in \(C^{0}(\mathbb{R}^{n})\). We claim that all the eigenvalues of \((-\Delta)^{s}_{\Omega}+(a+b)I\) are simple. By contradiction, suppose that there exists a \(\bar{q}\) such that \(\lambda^{a+b}_{\bar{q}}\) is the first multiple eigenvalue. By Lemma 25 we have that \((-\Delta)^{s}_{\Omega}+(a+\sum_{l=1}^{\bar{q}+1}b_{l})I\) has the first \(\bar{q}+1\) eigenvalues simple, and that there exist \(U_{1},\ldots,U_{\bar{q}+1}\) open intervals, with disjoint closure, such that \(\lambda^{a+\sum_{l=1}^{\bar{q}+1}b_{l}}_{t}\in U_{t}\) for \(t=1,\ldots,\bar{q}+1\). On the one hand, \(\lambda^{a+\sum_{l=1}^{N}b_{l}}_{\bar{q}}\to\lambda^{a+b}_{\bar{q}}\) as well as \(\lambda^{a+\sum_{l=1}^{N}b_{l}}_{\bar{q}+1}\to\lambda^{a+b}_{\bar{q}}\) when \(N\to\infty\) by continuity of the eigenvalues. On the other hand, \(\lambda^{a+\sum_{l=1}^{N}b_{l}}_{\bar{q}}\in U_{\bar{q}}\) and \(\lambda^{a+\sum_{l=1}^{N}b_{l}}_{\bar{q}+1}\in U_{\bar{q}+1}\) for all \(N\), by Lemma 25. So \(\lambda^{a+b}_{\bar{q}}=\lambda^{a+b}_{\bar{q}+1}\in\bar{U}_{\bar{q}}\cap\bar{ U}_{\bar{q}+1}\), which leads us to a contradiction, and the theorem is proved. ## 7. Sketch of the proof of Theorem 4. In this section we adapt the abstract scheme to the last result of this paper. Since the proof is very similar to the one of Theorem 3, we provide only the main tools. Since \(\alpha>0\) on \(\bar{\Omega}\), we endow the space \(L^{2}(\Omega)\) with scalar product and norm given, respectively, by \[\langle u,v\rangle_{L^{2}}=\int_{\Omega}\alpha uv;\ \ \ \ \ \|u\|_{L^{2}}^{2}=\int_{\Omega} \alpha u^{2},\] while on \(\mathcal{H}^{s}_{0}\) we consider the usual scalar product \(\mathcal{E}(u,v)\). 
We consider the embedding \(i:\mathcal{H}^{s}_{0}\to L^{2}\) and its adjoint operator \(i^{*}:L^{2}\to\mathcal{H}^{s}_{0}\). Then we have \[\mathcal{E}((i^{*}\circ i)_{\alpha}v,u)=\int_{\Omega}\alpha uv\ \ \forall u,v\in \mathcal{H}^{s}_{0}.\] As before, the map \((i^{*}\circ i)_{\alpha}\) is self-adjoint, compact and injective from \(\mathcal{H}^{s}_{0}\) into itself. In addition, if \(\varphi^{\alpha}\) is an eigenfunction associated to the eigenvalue \(\mu^{\alpha}\) for \((i^{*}\circ i)_{\alpha}\), then \[\mu^{\alpha}(-\Delta)^{s}\varphi^{\alpha}=\alpha(x)\varphi^{\alpha}\ \text{in}\ \Omega,\ \varphi^{\alpha}=0\ \text{in}\ \mathbb{R}^{n}\smallsetminus\Omega,\] thus \(\lambda^{\alpha}=1/\mu^{\alpha}\) is an eigenvalue with \(\varphi^{\alpha}\) as eigenvector for Problem (1.3). We want to prove that there exists \(\beta\in C^{0}(\Omega)\), with \(\|\beta\|_{L^{\infty}}\) sufficiently small, such that \((i^{*}\circ i)_{\alpha+\beta}\) has all eigenvalues simple. Set \[E_{\beta}:=(i^{*}\circ i)_{\alpha+\beta};\] we have the following lemma. **Lemma 26**.: _The map \(\beta\mapsto E_{\beta}\) from a neighborhood of \(0\) in \(C^{0}(\Omega)\) to the space of linear maps from \(\mathcal{H}^{s}_{0}(\Omega)\) to \(\mathcal{H}^{s}_{0}(\Omega)\) is continuous and differentiable at \(0\) and it holds_ \[\mathcal{E}(E^{\prime}(0)[\beta]u,v)=\int_{\Omega}\beta uv.\] Proof.: Since \(\Lambda_{1}\int_{\Omega}u^{2}\leq\mathcal{E}(u,u)\), and \(\Lambda_{1}>0\), where \(\Lambda_{1}\) is the first eigenvalue of \((-\Delta)^{s}\), we have \(\|E_{\beta}u\|_{L^{2}}\leq c\|u\|_{L^{2}}\). 
Indeed \[\Lambda_{1}\int_{\Omega}\left(E_{\beta}u\right)^{2}\leq\mathcal{E}(E_{\beta}u,E_{\beta}u)=\int_{\Omega}(\alpha+\beta)uE_{\beta}u\leq c\|u\|_{L^{2}}\|E_{\beta}u \|_{L^{2}}.\] We can now show that \(\mathcal{E}\left((E_{\beta}-E_{0})u,(E_{\beta}-E_{0})u\right)\to 0\) as \(\|\beta\|_{L^{\infty}}\to 0\), proving the continuity of \(\beta\mapsto E_{\beta}\) at \(\beta=0\); in fact \(\mathcal{E}\left((E_{\beta}-E_{0})u,w\right)=\int_{\Omega}\beta uw\), so \[\mathcal{E}\left((E_{\beta}-E_{0})u,(E_{\beta}-E_{0})u\right)=\int_{\Omega} \beta u(E_{\beta}-E_{0})u\leq c\|\beta\|_{L^{\infty}}\|u\|_{L^{2}}\mathcal{E} \left((E_{\beta}-E_{0})u,(E_{\beta}-E_{0})u\right)^{\frac{1}{2}},\] which proves the claim. Finally, given \(\beta\in C^{0}(\Omega)\) and \(u\in\mathcal{H}_{0}^{s}\), there exists \(L(\beta,u)\in\mathcal{H}_{0}^{s}\) such that \[\int_{\Omega}\beta uw=\mathcal{E}\left(L(\beta,u),w\right).\] Thus, for any \(w\in\mathcal{H}_{0}^{s}\) it holds \[\mathcal{E}\left(\left(E_{\beta}u-E_{0}u-L(\beta,u)\right),w\right)=\int_{ \Omega}(\alpha+\beta)uw-\int_{\Omega}\alpha uw-\int_{\Omega}\beta uw\equiv 0.\] Thus \(L(\beta,u)=E^{\prime}(0)[\beta]u\) and \(\mathcal{E}(E^{\prime}(0)[\beta]u,v)=\int_{\Omega}\beta uv\), as claimed. It remains to apply Theorem 13 to conclude the proof of Theorem 4. Proof of Theorem 4.: If \(\mu^{\alpha}\) is an eigenvalue of multiplicity \(\nu>1\) of the operator \((i^{*}\circ i)_{\alpha}=E_{0}\) and \(\varphi_{1}^{\alpha},\ldots,\varphi_{\nu}^{\alpha}\) are orthonormal eigenfunctions associated to \(\mu^{\alpha}\), the condition of non-splitting is that for any \(\beta\) with \(\|\beta\|_{C^{0}}\) small there exists \(\rho=\rho(\beta)\in\mathbb{R}\) such that \[\int_{\Omega}\beta\varphi_{i}\varphi_{j}=\rho\delta_{ij},\text{ for all }i,j=1,\ldots,\nu.\] At this point, the proof can be achieved as in the proof of Theorem 3. ## 8. Appendix Proof of Lemma 11.: It is known that the set of Fredholm operators of a given index is open in \(L(X,X)\). 
So, if \(A_{0}\in F_{ij}\), then \(A_{0}+H\in F_{ij}\) (if \(H\) is small) if and only if \(\dim\ker(A_{0}+H)=\dim\ker(A_{0})\), that is, if there exist \(j\) linearly independent solutions of \((A_{0}+H)x=0\). By means of the projections \(P,Q,\bar{P},\bar{Q}\), this is equivalent to solving \[\left\{\begin{array}{l}\bar{P}Hx=0\\ \bar{Q}A_{0}x+\bar{Q}Hx=0\end{array}\right. \tag{8.1}\] Furthermore, by Lemma 10, we can decompose \(H=Y+S+Z+T\) where \(Y=\bar{P}HP\), \(S=\bar{Q}HP\), \(Z=\bar{P}HQ\) and \(T=\bar{Q}HQ\). Setting \(x=u+v\), where \(u\in\ker A_{0}\) and \(v\in\mathcal{V}\), we can recast (8.1) as \[\left\{\begin{array}{l}Yu+Zv=0\\ \bar{Q}A_{0}v+Su+Tv=0\end{array}\right. \tag{8.2}\] Now, \(\bar{Q}A_{0}:\mathcal{V}\rightarrow\mathrm{Im}A_{0}\) is invertible, and let us call \(R\) its inverse. Then the second equation of (8.2) becomes \[v=-RSu-RTv.\] If \(H\) is sufficiently small, then the operator \(w\mapsto-RSu-RTw\) is a contraction from \(\mathcal{V}\) to \(\mathcal{V}\). Then we can find \(v\) as the sum of the Neumann series \[v=-\sum_{i=0}^{\infty}(-1)^{i}\left(RT\right)^{i}RSu.\] Plugging this expression in the first equation of (8.2) we obtain \[\left[Y-Z\sum_{i=0}^{\infty}(-1)^{i}\left(RT\right)^{i}RS\right]u=0.\] Recalling that \(u\in\ker A_{0}\), we have that this equation has \(j\) linearly independent solutions if and only if \[Y=Z\sum_{i=0}^{\infty}(-1)^{i}\left(RT\right)^{i}RS\] as operators on \(\ker A_{0}\). Then, when \(H\) is small, the set \(\{A_{0}+H\in F_{ij}\}\) is the graph of an analytic function with domain \(\mathcal{V}\), and the claim follows easily.
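The inversion by iteration used above is just a Neumann series for \((I+RT)^{-1}\). As a quick numerical illustration (a sketch only, with small random-looking matrices standing in for the abstract operators \(RT\) and \(RS\); none of this is part of the proof), one can check that the fixed-point iteration, which resums the series, solves \(v=-RSu-RTv\) once \(RT\) is a contraction:

```python
# Toy stand-ins for the operators in the proof: 2x2 matrices acting on R^2.
# RT must be a contraction (small entries here) so that the series converges.
RT = [[0.10, -0.05], [0.02, 0.08]]
RS = [[1.0, 2.0], [-1.0, 0.5]]
u = [1.0, -2.0]

def matvec(M, x):
    # Plain matrix-vector product.
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

# Fixed-point iteration for v = -RS u - RT v; unrolling it reproduces the
# Neumann series v = -sum_i (-1)^i (RT)^i RS u term by term.
v = [0.0, 0.0]
for _ in range(200):
    v = [-(a + b) for a, b in zip(matvec(RS, u), matvec(RT, v))]

# Verify the fixed-point equation v = -RS u - RT v directly.
residual = [vi + a + b for vi, a, b in zip(v, matvec(RS, u), matvec(RT, v))]
assert all(abs(r) < 1e-12 for r in residual)
print("fixed point v =", v)
```

Here the contraction constant is roughly \(\|RT\|_{\infty}=0.15\), so 200 iterations are far more than enough for the residual to vanish to machine precision.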
2301.02493
Muon Beam for Neutrino CP Violation: connecting energy and neutrino frontiers
We present here a proposal to connect neutrino and energy frontiers, by exploiting collimated muon beams for neutrino oscillations, which generate symmetric neutrino and antineutrino sources: $\mu^+\rightarrow e^+\,\bar{\nu}_{\mu}\, \nu_{e}$ and $\mu^-\rightarrow e^-\, \nu_{\mu} \,\bar{\nu}_{e}$. Interfacing with long baseline neutrino detectors such as DUNE and T2K, this experiment can be applied to measure tau neutrino properties, and also to probe the neutrino CP phase, by measuring muon electron (anti-)neutrino mixing or tau (anti-)neutrino appearance, and differences between neutrino and antineutrino rates. There are several significant benefits leading to large neutrino flux and high sensitivity on the CP phase, including 1) collimated and manipulable muon beams, which lead to a larger acceptance of neutrino sources in the far detector side; 2) symmetric $\mu^+$ and $\mu^-$ beams, and thus symmetric neutrino and antineutrino sources, which make this proposal ideally suited for measuring neutrino CP violation. More importantly, $\bar{\nu}_{e,\mu}\rightarrow\bar{\nu}_\tau$ and $\nu_{e,\mu}\rightarrow \nu_\tau$, and $\bar{\nu}_{e}\rightarrow\bar{\nu}_\mu$ and $\nu_{e}\rightarrow \nu_\mu$ oscillation signals can be collected simultaneously, with no need for separate specific runs for neutrinos or antineutrinos. Based on a simulation of the neutrino oscillation experiment, we estimate $10^4$ tau (anti-)neutrinos can be collected within 5 years, which makes this proposal suitable for a brighter tau neutrino factory. Moreover, more than 7 standard deviations of sensitivity can be reached for $\delta_{\rm CP} = |\pi/2|$, within only five years of data taking, by combining tau and muon (anti-)neutrino appearances. With the development of a more intensive muon beam targeting a future muon collider, the neutrino potential of the current proposal will surely be further improved.
Alim Ruzi, Tianyi Yang, Dawei Fu, Sitian Qian, Leyun Gao, Qiang Li
2023-01-06T13:10:06Z
http://arxiv.org/abs/2301.02493v5
# Muon Beam for Neutrino CP Violation: connecting energy and neutrino frontiers ###### Abstract We present here a proposal to connect neutrino and energy frontiers, by exploiting collimated muon beams for neutrino oscillations, which generate symmetric neutrino and antineutrino sources: \(\mu^{+}\to e^{+}\,\bar{\nu}_{\mu}\,\nu_{e}\) and \(\mu^{-}\to e^{-}\,\nu_{\mu}\,\bar{\nu}_{e}\). Interfacing with long baseline neutrino detectors such as DUNE and T2K, this experiment can be applied to measure tau neutrino properties, and also to probe the neutrino CP phase, by measuring muon electron (anti-)neutrino mixing or tau (anti-)neutrino appearance, and differences between neutrino and antineutrino rates. There are several significant benefits leading to large neutrino flux and high sensitivity on the CP phase, including 1) collimated and manipulable muon beams, which lead to a larger acceptance of neutrino sources in the far detector side; 2) symmetric \(\mu^{+}\) and \(\mu^{-}\) beams, and thus symmetric neutrino and antineutrino sources, which make this proposal ideally suited for measuring neutrino CP violation. More importantly, \(\bar{\nu}_{e,\mu}\to\bar{\nu}_{\tau}\) and \(\nu_{e,\mu}\to\nu_{\tau}\), and \(\bar{\nu}_{e}\to\bar{\nu}_{\mu}\) and \(\nu_{e}\to\nu_{\mu}\) oscillation signals can be collected simultaneously, with no need for separate specific runs for neutrinos or antineutrinos. Furthermore, it is possible to exchange the \(\mu^{+}\) and \(\mu^{-}\) flying routes, thus further reducing possible bias or systematic uncertainties. Optimistically, we estimate \(10^{4}\) tau (anti-)neutrinos can be collected per year, so this proposal can serve as a brighter tau neutrino factory. Moreover, 5 standard deviations of sensitivity can easily be reached for a CP phase of \(|\pi/2|\), with only 1-2 years of data taking, by combining tau and muon (anti-)neutrino appearances. 
With the development of a more intensive muon beam targeting a future muon collider, the neutrino potential of the current proposal will surely be further improved. Novel collision methods and rich phenomena are crucial to keeping high-energy collision physics robust and attractive [1]. Recent years have witnessed vast development towards next generation high energy colliders, including various proposals on Higgs factories [2; 3], revived interest in muon colliders [4; 5; 6; 7; 8], etc. As for the muon collider design, we take the positron on target method (LEMMA) as an example, which has been proposed for high quality muon beam production [9; 10] (earlier studies using proton-on-target muon beams can be found in Refs. [11; 12; 13]). Although it is still quite challenging to achieve high enough luminosity for muon beam collisions [6; 7], we find it quite promising for neutrino oscillation studies, with comparable or even larger neutrino flux than other long baseline neutrino experiments, with more details to be discussed below. In the LEMMA approach, the incident positron energy is around 45 GeV, producing collimated muon pairs with opening angles of around 0.005 rad. and a large boost of about \(\gamma\sim 200\), which extends the muon lifetime by a factor of order \(\mathcal{O}(10^{2})\). Generally, the number of muon pairs produced per positron bunch on target can be expressed as \[n(\mu^{+}\mu^{-})=n^{+}\rho_{e^{-}}l\sigma(\mu^{+}\mu^{-}) \tag{1}\] where \(n^{+}\) is the number of \(e^{+}\) in each positron bunch, \(\rho_{e^{-}}\) is the electron density in the medium, \(l\) is the thickness of the target, and \(\sigma(\mu^{+}\mu^{-})\) is the cross section of muon pair production. The number of muon pairs per positron bunch on target can be maximally estimated as \(n(\mu^{+}\mu^{-})_{\rm max}\approx n^{+}\times 10^{-5}\). On the other hand, neutrinos are among the most abundant and least understood of all particles in the SM that make up our universe. 
The history of neutrino physics has been full of novel discoveries. One of them is the observation of neutrino oscillations, confirming that at least two types of SM neutrinos have a tiny, but strictly nonzero, mass. Nowadays, the neutrino system is described by nine parameters: the masses \(m_{1}\), \(m_{2}\) and \(m_{3}\) of the three mass eigenstates, three mixing angles, \(\theta_{12}\), \(\theta_{23}\), and \(\theta_{13}\), and three phases, one Dirac phase, \(\delta_{\rm CP}\), and two Majorana phases. The Majorana phases only play a part in neutrinoless double beta decay [14]. The mixing angles and the phases are the elements of a unitary matrix called the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [15; 16]. The available experiments on neutrino oscillations to date have measured five of the neutrino mixing parameters, the three mixing angles \(\theta_{12}\), \(\theta_{23}\), \(\theta_{13}\), and the two squared-mass differences \(\Delta m^{2}_{21}\), \(\Delta m^{2}_{32}\), within the 3\(\sigma\) range [17; 18; 19; 20; 21]. Neutrino oscillation probabilities from one flavor into another are functions of these mixing angles and the squared-mass differences. The masses of the individual neutrinos and the mass hierarchy are not known. The determination of the CP-violating phase, the Dirac phase, has been the core research program in neutrino physics for decades because it provides a potential source of CP violation in the SM lepton sector. It is known that leptonic CP violation could generate the matter-antimatter asymmetry through leptogenesis [22]. CP violation in neutrino oscillation can be measured through the difference between the oscillation probabilities of the neutrino and the antineutrino, expressed as \(\Delta P^{\rm CP}_{\alpha\beta}=P_{\alpha\beta}-\overline{P}_{\alpha\beta}\), which is well quantified by \(\delta_{\rm CP}\). 
There are several experiments worldwide dedicated to the measurement of the neutrino parameters, especially the CP phase, performing searches for short-baseline and long-baseline neutrino oscillations. To ensure that enough neutrinos oscillate from the source flavor and can be detected by the far detector (FD), a long-baseline neutrino oscillation experiment is preferable to a short-baseline one. Recently, the long-baseline experiments T2K (Tokai to Kamioka) [23; 24] and NOvA [25] reported their results. T2K reports a measured value for the CP phase, \(\delta_{\rm CP}=-1.97^{+0.97}_{-0.70}\), while excluding \(\delta_{\rm CP}\) = 0 and \(\pi\) at 90% CL, indicating CP violation in the lepton sector. The FD in this case is Super-Kamiokande, a 50 kton water Cherenkov detector. A narrow band neutrino beam is produced at an angle of 2.5\({}^{\circ}\) by a 30 GeV proton beam hitting a graphite target. With this off-axis method, the narrow band neutrino energy has a peak at 0.6 GeV. The secondary neutrinos produced from kaon or pion decays travel a distance of 295 Km to reach the Super-Kamiokande detector. T2K plans to extend its term to 2026, followed by the Hyper-K project with the mass of the far detector increased by a factor of 10, which will offer a broad science program [26]. On the other hand, the NOvA experiment [25] in the US is also a long-baseline accelerator-based neutrino oscillation experiment. It uses the upgraded Fermilab NuMI beam and measures electron-neutrino appearance and muon-neutrino disappearance at its far detector in Ash River, Minnesota. The reported NOvA result shows no strong preference for any particular value of the neutrino CP phase and has a slight tension with T2K's. 
Another promising long-baseline neutrino experiment under construction is DUNE (Deep Underground Neutrino Experiment), whose goals are the determination of the neutrino mass ordering, observation of CP violation (up to the 50% level), and precise measurements of oscillation parameters, such as \(\delta_{\rm CP}\) and \(\sin^{2}(2\theta_{21})\). The idea is to send a wide-band high-intensity muon neutrino beam from Fermilab to the Sanford Underground Facility in Homestake, at a distance of 1300 Km. The detector technology of the DUNE experiment is based on liquid argon time projection chambers (LArTPC). Unlike the T2K experiment, the neutrino beam energy has a peak at 2.5 GeV with a broad range of neutrino energies. The neutrino beam is produced from proton collisions on a graphite target. In the corresponding DUNE TDR report, it is shown that, for favorable values of \(\delta_{\rm CP}\), a 3\(\sigma\) (5\(\sigma\)) determination can be achieved after five (ten) years of running. In this letter, we are interested in applying collimated muon beams to neutrino mixing and CP phase measurements. Although the beam density is lower than in the proton-on-target scenario, there are several significant benefits leading to large neutrino flux and high sensitivity on the CP phase, including 1) collimated and manipulable muon beams, which lead to a larger acceptance of neutrino sources in the far detector side; 2) symmetric \(\mu^{+}\) and \(\mu^{-}\) beams, and thus symmetric neutrino and antineutrino sources, which make this proposal ideally suited for measuring neutrino CP violation. What is more important is that the antineutrino and neutrino fluxes here are similar, and thus, for example, \(\bar{\nu}_{e}\rightarrow\bar{\nu}_{\mu}\) and \(\nu_{e}\rightarrow\nu_{\mu}\) oscillation signals can be collected simultaneously, with no need for separate specific runs for neutrinos or antineutrinos as done in the DUNE experiment [27; 28]. 
As to be discussed below, the estimated neutrino flux in our proposal is comparable to or even larger than that of the DUNE experiment [27; 28; 29]. The neutrino energy has a wide distribution in the 1-15 GeV region, and peaks at around 7 GeV (the neutrino energy can be further tuned with on-axis and off-axis techniques), suggesting our proposal is also suitable for tau neutrino studies. Taking into account both muon and electron neutrinos and antineutrinos, the signal yields can indeed be doubled or more. Finally, we point out that it is possible to exchange the \(\mu^{+}\) and \(\mu^{-}\) flying routes, thus further reducing possible bias or systematic uncertainties. Fig. 1 shows a proposed neutrino oscillation experiment to probe the neutrino CP phase by measuring muon electron (anti-)neutrino mixing and their differences. We are especially interested in the oscillation modes \(\nu_{e,\mu}\rightarrow\nu_{\tau}\), \(\nu_{e}\rightarrow\nu_{\mu}\) and their antineutrino correspondents, with more details to be given later in this paper. This proposal is based on collimated muon beams achieved from, e.g., the positron on target method, where 45 GeV positron beams are shed on the target. Dipoles are used to separate \(\mu^{+}\) and \(\mu^{-}\) with an angle around 0.01 rad., with direction changeable. Muon beams fly about 10 Km and radiate neutrinos before being swept away. Neutrinos then further fly, e.g., 1300 Km to reach DUNE or T2K types of detectors [30]. Fig. 2 shows a similar but simpler proposal focusing mainly on tau (anti-)neutrino appearance and related physics studies. * Muon production rates \(n(\mu^{+}\mu^{-})_{max}\approx n^{+}\times 10^{-5}\)[10]. Assuming a positron bunch density of \(10^{12}\)/bunch and a bunch crossing frequency of \(10^{5}\)/sec, we get **muon production rates \(dN_{\mu}/dt\sim 10^{12}\)/sec (or \(10^{19}\)/year)**. (Notice the future TeV scale muon collider is indeed targeting a more intense beam by 1-2 orders of magnitude [5; 6; 7].) 
* For muons with an energy around 20 GeV, the mean flying distance is around 100 Km. If there is a straight tube with a length of around 5-10 Km to let muons go through, with quadrupoles to keep them merged, **the decayed fraction can reach \(10^{-1}\)** in a realistic case. On the other hand, we can also refer to a muon complex as discussed in Refs. [8; 31], where the muon beam is accelerated in a circular section and then extracted into the rectangular section for decays. The intensity of the neutrino beam compared with the incoming muon beam is suppressed by a ratio of around \(10^{-1}\), i.e., the fraction of the collider ring circumference occupied by the production straight section. * The opening angles between the muon beam axis and the momenta of the decay products are around 0.005 rad, as shown in Fig. 3, and may be kept smaller with quadrupoles. For neutrinos traveling 1300 Km to reach far detectors, the spread size can be around 1-5 Km. For a DUNE-like detector with a cubic size of about 20 m [30], **the neutrino acceptance is then \(10^{-4}\)**. * Muon/electron neutrinos and antineutrinos interacting with detectors. With an \(L=20\) m long detector (the DUNE far detector indeed has a length around 50 m [30]), **the expected event yield rate can be roughly estimated with: \(dN_{\mu}/dt\times L\times\sigma_{n\nu}\times\rho N_{A}\sim 10^{-9} \times dN_{\mu}/dt\)**, where \(N_{A}\) is the Avogadro constant, \(\rho\sim 2\) g/cm\({}^{3}\), and \(\sigma_{n\nu}\) denotes the neutrino-nucleon cross section, which is around \(10^{-37}\)cm\({}^{2}\) for a 10 GeV neutrino [32; 33]. Combining all the above numbers, **with the conservative option** (e.g., a 20-meter cubic size detector), we get the muon/electron (anti-)neutrino Charged Current (CC) events per year as \[N_{\nu_{\mu,e}}^{cc}\sim 10^{19}\times 10^{-1}\times 10^{-4}\times 10^{-9}=10^{5 }/\text{year}, \tag{2}\] On top of measuring muon (anti-)neutrino rates, our proposal can also be used to probe tau neutrino appearance. 
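The chain of order-of-magnitude factors entering Eq. (2) can be multiplied out explicitly. The sketch below is only a back-of-the-envelope check using the rounded values quoted in the bullets above (all inputs are order-of-magnitude assumptions from the text, not measurements):

```python
# Rounded order-of-magnitude factors from the bullets above.
muons_per_year = 1e19   # dN_mu/dt ~ 1e12 /sec over ~1e7 sec of running
decay_fraction = 1e-1   # muons decaying in the 5-10 Km straight section
acceptance     = 1e-4   # ~20 m detector face vs Km-scale neutrino spread

# Interaction probability L * sigma_{n nu} * rho * N_A for a 20 m detector,
# rounded to 1e-9 in the text (the literal product is ~2.4e-10).
L_cm, sigma_cm2, rho, N_A = 20 * 100, 1e-37, 2.0, 6.022e23
interaction_prob = L_cm * sigma_cm2 * rho * N_A

N_cc = muons_per_year * decay_fraction * acceptance * interaction_prob
print(f"nu_(mu,e) CC events per year ~ {N_cc:.1e}")  # order 1e4-1e5, cf. Eq. (2)
```

With the rounded \(10^{-9}\) interaction probability the product is exactly the \(10^{5}\)/year of Eq. (2); the literal cross-section product gives a few \(\times 10^{4}\)/year, the same order of magnitude.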
Figure 1: A proposed neutrino oscillation experiment to probe the neutrino CP-violating phase by measuring muon electron (anti-)neutrino oscillation and the differences of the resulting \(\nu_{e,\mu}\rightarrow\nu_{\tau}\), \(\nu_{e}\rightarrow\nu_{\mu}\), and their antineutrino correspondents. This proposal is based on collimated muon beams achieved from, e.g., the positron on target method, where 45 GeV positron beams are fired. Dipoles are used to separate \(\mu^{+}\) and \(\mu^{-}\) with an angle around 0.01 rad., with direction changeable. Muon beams fly about 10 Km and radiate neutrinos before being swept away. Neutrinos then further fly 1300 Km to reach DUNE type of detectors. Figure 2: A proposed neutrino oscillation experiment to probe tau neutrino physics by measuring tau (anti-)neutrino appearance. This proposal is based on collimated muon beams achieved from, e.g., the positron on target method, where 45 GeV positron beams are shed. Neutrinos are radiated along the muon beam line and fly 100 to 1300 Km to reach DUNE type of detectors. Quadrupoles can be applied to keep muon beams more collimated. One additional factor needs to be considered, i.e., the fraction of muon neutrinos oscillating into tau neutrinos [34]: \[\begin{split} P(\nu_{\mu}\to\nu_{\tau})&\simeq\sin^{2 }\left(2\theta_{23}\right)\cos^{4}(\theta_{13})\sin^{2}\left(1.27\frac{\Delta m _{32}^{2}L}{E_{\nu}}\right)\\ &\pm 1.27\Delta m_{21}^{2}\frac{L}{E_{\nu}}\sin^{2}\left(1.27\frac{ \Delta m_{32}^{2}L}{E_{\nu}}\right)\times 8J_{\rm CP}\end{split} \tag{3}\] where the "Jarlskog invariant" [35; 36], \[\begin{split} J_{\rm CP}&\equiv\sin\theta_{13}\cos ^{2}\theta_{13}\sin\theta_{12}\cos\theta_{12}\sin\theta_{23}\cos\theta_{23} \sin\delta_{\rm CP}\\ &=\left[0.03359\pm 0.0006\ (\pm 0.0019)\right]\sin\delta_{\rm CP}\end{split} \tag{4}\] is a function of \(\sin\delta_{\rm CP}\). 
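Eqs. (3)-(4) are straightforward to evaluate numerically. The sketch below uses representative global-fit-like values for the mixing parameters (assumed here for illustration; they are not necessarily the exact inputs used in this letter, so the results come out close to, but not identical with, the numeric values quoted later for \(E_{\nu}=7\) GeV and \(L=1300\) Km). The sign convention for the CP term ("+" for neutrinos) is likewise an assumption matching the "\(\pm\)" of Eq. (3):

```python
import math

# Representative oscillation parameters (assumed global-fit-like values; dm^2 in eV^2).
s12sq, s23sq, s13sq = 0.307, 0.545, 0.022   # sin^2(theta_ij)
dm21sq, dm32sq = 7.5e-5, 2.45e-3

def jarlskog(delta_cp):
    """Jarlskog invariant J_CP of Eq. (4)."""
    s12, s23, s13 = (math.sqrt(x) for x in (s12sq, s23sq, s13sq))
    c12, c23 = math.sqrt(1 - s12sq), math.sqrt(1 - s23sq)
    return s13 * (1 - s13sq) * s12 * c12 * s23 * c23 * math.sin(delta_cp)

def p_mu_tau(E_gev, L_km, delta_cp, antineutrino=False):
    """Leading plus CP-violating term of P(nu_mu -> nu_tau), Eq. (3)."""
    d32 = 1.27 * dm32sq * L_km / E_gev
    lead = 4 * s23sq * (1 - s23sq) * (1 - s13sq) ** 2 * math.sin(d32) ** 2
    cp = 8 * jarlskog(delta_cp) * 1.27 * dm21sq * (L_km / E_gev) * math.sin(d32) ** 2
    return lead - cp if antineutrino else lead + cp

J_max = jarlskog(math.pi / 2)                  # ~0.033, cf. Eq. (4)
P = p_mu_tau(7.0, 1300.0, math.pi / 2)         # ~0.28, dominated by the leading term
dP = P - p_mu_tau(7.0, 1300.0, math.pi / 2, antineutrino=True)
print(J_max, P, dP)
```

The neutrino-antineutrino difference `dP` computed this way is twice the CP term, i.e. the quantity written out as Eq. (5) below.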
The oscillation probability difference between \(P(\nu_{\mu}\to\nu_{\tau})\) and \(P(\bar{\nu}_{\mu}\to\bar{\nu}_{\tau})\) reads \[\Delta P(\nu_{\mu}\to\nu_{\tau})=16J_{\rm CP}\times 1.27\Delta m_{21}^{2}\frac{L}{E_{\nu}}\sin^{2}\left(1.27\frac{\Delta m_{32}^{2}L}{E_{\nu}}\right) \tag{5}\] Thus we can estimate the tau neutrino CC events per year as (for simplicity we ignore the difference between the neutrino-nucleon and antineutrino-nucleon cross sections) \[N_{\nu_{\tau}}^{cc}\sim[(3\times 10^{4})\pm(2.6\times 10^{2})]/\rm year, \tag{6}\] where \((2.6\times 10^{2})\) corresponds to the CP-violation term. Notice that the yearly expected tau neutrino yield is already comparable to, or even surpasses, the rates at the SHiP experiment at CERN [37]. Thus our proposal can serve as a 'brighter' factory for tau neutrinos. The oscillation probability \(P(\nu_{e}\to\nu_{\tau})\) can be estimated as \[\begin{split} P(\nu_{e}\to\nu_{\tau})&\simeq\sin^{2}(2\theta_{13})\cos^{2}(\theta_{23})\sin^{2}\left(1.27\Delta m_{32}^{2}\frac{L}{E_{\nu}}\right)\\ &\mp 1.27\Delta m_{21}^{2}\frac{L}{E_{\nu}}\sin^{2}\left(1.27\Delta m_{32}^{2}\frac{L}{E_{\nu}}\right)\times 8J_{\rm CP}\end{split} \tag{7}\] Because of the tau lepton's heavy mass and very short lifetime, producing tau neutrinos in abundance at conventional accelerators is almost impossible. In contrast, we have a rich tau neutrino flux here because of the large \(P(\nu_{\mu}\to\nu_{\tau})\) oscillation probability. The tau neutrino flux can be further strengthened by the \(P(\nu_{e}\to\nu_{\tau})\) oscillation. With this promising feature of the proposal, some new physics models, such as charged Higgs doublets [38] or leptoquarks [39], may be tested through neutrino-type collisions [40]. Figure 3: 2D distributions of energy and angle with respect to the muon flying direction, for muon and electron neutrinos from 22.5 GeV \(\mu^{+}\to e^{+}\,\bar{\nu}_{\mu}\,\nu_{e}\) decays (similarly for \(\mu^{-}\) decay).
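The yearly tau-neutrino estimate of Eq. (6) follows from multiplying the CC rate of Eq. (2) by the oscillation probability; a two-line arithmetic check, using the \(E_{\nu}=7\) GeV probability coefficients quoted for this baseline (cf. Eq. (10) below):

```python
# Cross-check of Eq. (6): yearly tau-neutrino CC events from Eq. (2)
# times the E ~ 7 GeV oscillation probability quoted in the text.
n_cc_per_year = 1e5            # Eq. (2)
p_lead, p_cp = 0.2916, 0.0026  # leading and CP coefficients of P(nu_mu -> nu_tau)

n_tau    = n_cc_per_year * p_lead   # ~3e4, the central value in Eq. (6)
n_tau_cp = n_cc_per_year * p_cp     # ~2.6e2, the CP-violating piece
```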
Apart from tau neutrino appearance, the oscillation probability with CP-phase dependence for \(\nu_{\mu}\rightarrow\nu_{e}\) is shown below [23; 24]: \[\begin{split} P(\nu_{\mu}\rightarrow\nu_{e})&\simeq\sin^{2}(2\theta_{13})\sin^{2}(\theta_{23})\sin^{2}\left(1.27\Delta m_{32}^{2}\frac{L}{E_{\nu}}\right)\\ &\mp 1.27\Delta m_{21}^{2}\frac{L}{E_{\nu}}\sin^{2}\left(1.27\Delta m_{32}^{2}\frac{L}{E_{\nu}}\right)\times 8J_{\rm CP}\end{split} \tag{8}\] The corresponding oscillation probability for \(\nu_{e}\rightarrow\nu_{\mu}\) differs from that for \(\nu_{\mu}\rightarrow\nu_{e}\) only by a minus sign in the CP-violating term: \[\begin{split} P(\nu_{e}\rightarrow\nu_{\mu})&\simeq\sin^{2}(2\theta_{13})\sin^{2}(\theta_{23})\sin^{2}\left(1.27\Delta m_{32}^{2}\frac{L}{E_{\nu}}\right)\\ &\pm 1.27\Delta m_{21}^{2}\frac{L}{E_{\nu}}\sin^{2}\left(1.27\Delta m_{32}^{2}\frac{L}{E_{\nu}}\right)\times 8J_{\rm CP}\end{split} \tag{9}\] Using the current measured values of the mixing angles and squared mass differences [32] and taking the distance of neutrino propagation as \(L=1300\) Km, we have the numeric values for the neutrino oscillations at \(E_{\nu}=7\,(5)\) GeV (see more in Fig. 4): \[P(\nu_{\mu}\rightarrow\nu_{\tau}) =0.2916\pm 0.0026\sin\delta_{\rm CP}\,(0.5093\pm 0.0048\sin\delta_{\rm CP}), \tag{10}\] \[P(\nu_{\mu}\rightarrow\nu_{e}) =0.0151\mp 0.0026\sin\delta_{\rm CP}\,(0.0264\mp 0.0048\sin\delta_{\rm CP}),\] (11) \[P(\nu_{e}\rightarrow\nu_{\mu}) =0.0151\pm 0.0026\sin\delta_{\rm CP}\,(0.0264\pm 0.0048\sin\delta_{\rm CP}),\] (12) \[P(\nu_{e}\rightarrow\nu_{\tau}) =0.0119\mp 0.0026\sin\delta_{\rm CP}\,(0.0209\mp 0.0048\sin\delta_{\rm CP}).
\tag{13}\] We will now evaluate the sensitivities to neutrino CP violation, taking \(\delta_{\rm CP}=\pm\pi/2\) and \(E_{\nu}=7\) GeV as benchmark parameters: * **1)** Firstly, we consider the tau (anti-)neutrino appearance from muon and electron neutrino oscillations: \[\mu^{+}\to e^{+}\,\bar{\nu}_{\mu 1}\,\nu_{e1}\implies\bar{\nu}_{\mu 1}\rightarrow\bar{\nu}_{\tau 1},\ \nu_{e1}\rightarrow\nu_{\tau 1},\] (14) \[\mu^{-}\to e^{-}\,\nu_{\mu 2}\,\bar{\nu}_{e2}\implies\nu_{\mu 2}\rightarrow\nu_{\tau 2},\ \bar{\nu}_{e2}\rightarrow\bar{\nu}_{\tau 2},\] (15) where '1' and '2' denote the two far detectors as shown in Fig. 1. Notice that the CP dependences of \(P(\nu_{e}\rightarrow\nu_{\tau})\) and \(P(\bar{\nu}_{\mu}\rightarrow\bar{\nu}_{\tau})\), as shown in Eq. 10, vary in the same direction. If we count tau-related events in the far detectors inclusively, our signal therefore doubles. The sensitivity can then be estimated as \[\frac{\bar{\nu}_{\tau 2}+\nu_{\tau 2}-\bar{\nu}_{\tau 1}-\nu_{\tau 1}}{\sqrt{\bar{\nu}_{\tau 2}+\nu_{\tau 2}+\bar{\nu}_{\tau 1}+\nu_{\tau 1}}}\] (16) which is around \(4\times 260/\sqrt{60000}\sim 4.2\) standard deviations (\(\sigma\)) in one year, and can reach near \(13.4\sigma\) in 10 years. Although only statistics are taken into account here, systematics should be efficiently reducible thanks to the symmetric property of the proposed device. Furthermore, it is possible to exchange the \(\mu^{+}\) and \(\mu^{-}\) flying routes, thus further reducing possible biases or systematics. * **2)** Secondly, if the far detector can distinguish tau neutrinos from antineutrinos, as in the CERN SHiP experiment [37], then with only \(P(\nu_{e}\rightarrow\nu_{\tau})\) we can already have higher CP sensitivity. The sensitivity can then be estimated as \[\frac{\bar{\nu}_{\tau 2}-\nu_{\tau 1}}{\sqrt{\bar{\nu}_{\tau 2}+\nu_{\tau 1}}}\] (17) which is around \(2\times 260/\sqrt{2200}\sim 11\ \sigma\) in one year.
* **3)** Finally, one can also exploit the electron-to-muon oscillation, which also has clear sensitivity to the neutrino CP phase, if the far detector can distinguish muon neutrinos from antineutrinos, which can possibly be achieved with moderate magnets. The sensitivity can then be estimated as \(2\times 260/\sqrt{3000}\sim 9.5\ \sigma\) in one year. By combining all three options and taking into account statistical errors, one can achieve a sensitivity of far more than 5 \(\sigma\) in one year, for \(\delta_{\rm CP}=\pm\pi/2\). Option 1) corresponds to the most conservative scenario, where one can reuse, e.g., DUNE-like detectors. Options 2) and 3) lead to much larger sensitivities on neutrino CP violation, yet put additional requirements on the experimental side, such as distinguishing \(\mu^{+}\) from \(\mu^{-}\), or \(\tau^{+}\) from \(\tau^{-}\), with the help of a moderate magnetic field. Notice that both HyperK [26] and DUNE [29] can only cover around half of all possible values of the CP phase. For example, \(|\delta_{\rm CP}|\sim 0.1\pi\) is below their 3 \(\sigma\) reach. Our proposal, on the other hand, can already deliver 1.3 \(\sigma\), 3.4 \(\sigma\), and 2.9 \(\sigma\) of sensitivity in one year via the three channels mentioned above, respectively. Moreover, the T2K [23; 24] and NOvA [25] tension on the neutrino CP measurement may appear again, which makes an independent probe indispensable, for example, through tau neutrino appearance or electron-to-muon neutrino oscillations. The proposal here should also be useful to detect new CP phases in case of the presence of a light sterile neutrino [41]. Last but not least, our proposal exploits a muon beam with looser requirements (e.g., lower intensity) compared with the needs of a future muon collider, and thus can serve as a realistic intermediate step. Moreover, the GALEX and SAGE solar neutrino Gallium experiments reported that only 88\(\pm\)5% of the expected number of \(\nu_{e}\) events were observed [42; 43].
The deficit of \(\nu_{e}\) events observed in these experiments can be explained by electron neutrino to sterile neutrino oscillation at short baseline. The explanation of the LSND and MiniBooNE [44; 45; 46] experimental results could also indicate the possible existence of a sterile neutrino. There is one more advantage of our proposal when it comes to searching for sterile neutrinos. The rich flux of both muon- and electron-type neutrinos produced after muon decay increases the possibility of observing oscillations related to a sterile neutrino. We can examine two oscillation modes simultaneously, \(\nu_{e}\rightarrow\nu_{e}\) and \(\nu_{\mu}\rightarrow\nu_{e}\), while DUNE and T2K mainly focus on electron neutrino appearance, \(\nu_{\mu}\rightarrow\nu_{e}\). The probabilities of disappearance and appearance for a neutrino flavor \(\alpha\), taking only the large mass difference into account, can be approximated as \[P(\nu_{\alpha}\rightarrow\nu_{\alpha})\approx 1-4|U_{\alpha 4}|^{2}(1-|U_{\alpha 4}|^{2})\sin^{2}\left(\frac{\Delta m^{2}_{41}L}{4E_{\nu}}\right) \tag{18}\] \[P(\nu_{\alpha}\rightarrow\nu_{\beta})\approx 4|U_{\alpha 4}|^{2}|U_{\beta 4}|^{2}\sin^{2}\left(\frac{\Delta m^{2}_{41}L}{4E_{\nu}}\right) \tag{19}\] With the fourth neutrino, the PMNS matrix is a \(4\times 4\) matrix that contains six mixing angles and three CP phases. Determination of these parameters would be a huge undertaking that requires a large number of neutrino oscillation experiments. Considering the abundance of tau neutrinos and the wider energy range, we can also collect events related to tau neutrino disappearance. So our proposal not only enables us to further confirm the results of the LSND and MiniBooNE experiments but also helps us determine the aforementioned parameters, especially \(\Delta m^{2}_{41}\), the active-sterile neutrino mixing angles, as well as the additional CP phases.
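Eqs. (18)-(19) are simple to code. The sketch below rewrites the \(\sin^{2}(\Delta m^{2}_{41}L/4E_{\nu})\) argument with the usual 1.27 conversion factor for \(L\) in km, \(E_{\nu}\) in GeV, and \(\Delta m^{2}\) in eV\(^{2}\); the parameter values in any call are purely illustrative.

```python
import numpy as np

def p_survive(U_a4_sq, dm41_sq, L_km, E_GeV):
    """Eq. (18): flavor-alpha survival probability in the 3+1 short-baseline limit.
    The 1.27 factor converts dm^2 [eV^2] * L [km] / E [GeV] into the phase in radians."""
    s2 = np.sin(1.27 * dm41_sq * L_km / E_GeV)**2
    return 1.0 - 4.0 * U_a4_sq * (1.0 - U_a4_sq) * s2

def p_appear(U_a4_sq, U_b4_sq, dm41_sq, L_km, E_GeV):
    """Eq. (19): alpha -> beta appearance through the fourth mass state."""
    s2 = np.sin(1.27 * dm41_sq * L_km / E_GeV)**2
    return 4.0 * U_a4_sq * U_b4_sq * s2
```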
In Summary, we propose here a new idea to exploit collimated muon beams which generate symmetric neutrino and antineutrino sources: \(\mu^{+}\to e^{+}\,\bar{\nu}_{\mu}\,\nu_{e}\) and \(\mu^{-}\to e^{-}\,\nu_{\mu}\,\bar{\nu}_{e}\). Interfacing with long-baseline neutrino detectors such as those in DUNE or HyperK, this experiment can be useful to measure tau neutrino properties, and also to probe the neutrino CP phase, by measuring muon (anti-)neutrino disappearance or tau (anti-)neutrino appearance, and differences between neutrino and antineutrino rates. There are several significant benefits leading to a large neutrino flux and high sensitivity to the CP phase, including 1) collimated and manipulable muon beams, which lead to a larger acceptance of the neutrino sources on the far detector side; 2) symmetric \(\mu^{+}\) and \(\mu^{-}\) beams, and thus symmetric neutrino and antineutrino sources, which make this proposal ideal for measuring neutrino CP violation. More importantly, the \(\bar{\nu}_{e,\mu}\to\bar{\nu}_{\tau}\) and \(\nu_{e,\mu}\to\nu_{\tau}\), and \(\bar{\nu}_{e}\to\bar{\nu}_{\mu}\) and \(\nu_{e}\to\nu_{\mu}\) oscillation signals can be collected simultaneously, with no need for separate specific runs for neutrinos or antineutrinos. It is also possible to exchange the \(\mu^{+}\) and \(\mu^{-}\) flying routes, thus further reducing possible biases or systematics. Optimistically, we estimate that \(10^{4}\) tau (anti-)neutrinos can be collected per year; thus our proposal can serve as a brighter tau neutrino factory. Moreover, 5 standard deviations of sensitivity can be reached easily for a CP phase with \(|\delta_{\rm CP}|=\pi/2\), with only 1 year of data taking, by combining tau and muon (anti-)neutrino appearances. In this draft, we mainly provide a preliminary feasibility estimate; a detailed study is surely necessary as a follow-up. On the other hand, there also exists rich potential to be further explored with such a proposal that connects the energy and neutrino frontiers.
Especially, one can imagine a post-DUNE (or in parallel to DUNE, as the probe channels are indeed orthogonal and thus complementary) experiment with neutrinos from an intense muon source located at the Fermilab site. This connection between the energy and neutrino frontiers can also serve as a precursor for future high-energy muon colliders. Notice that a muon collider requires a beam 1-2 orders of magnitude more intense than the number (\(dN_{\mu}/dt\sim 10^{12}\)/sec) listed above as our benchmark. Thus the development of a more intense muon beam targeting future muon colliders will surely further improve the neutrino potential of the current proposal. ###### Acknowledgements. This work is supported in part by the National Natural Science Foundation of China under Grants No. 12150005, No. 12075004, and No. 12061141002, and by MOST under Grant No. 2018YFA0403900.
2306.06652
Audio-Visual Mandarin Electrolaryngeal Speech Voice Conversion
Electrolarynx is a commonly used assistive device to help patients with removed vocal cords regain their ability to speak. Although the electrolarynx can generate excitation signals like the vocal cords, the naturalness and intelligibility of electrolaryngeal (EL) speech are very different from those of natural (NL) speech. Many deep-learning-based models have been applied to electrolaryngeal speech voice conversion (ELVC) for converting EL speech to NL speech. In this study, we propose a multimodal voice conversion (VC) model that integrates acoustic and visual information into a unified network. We compared different pre-trained models as visual feature extractors and evaluated the effectiveness of these features in the ELVC task. The experimental results demonstrate that the proposed multimodal VC model outperforms single-modal models in both objective and subjective metrics, suggesting that the integration of visual information can significantly improve the quality of ELVC.
Yung-Lun Chien, Hsin-Hao Chen, Ming-Chi Yen, Shu-Wei Tsai, Hsin-Min Wang, Yu Tsao, Tai-Shih Chi
2023-06-11T11:25:17Z
http://arxiv.org/abs/2306.06652v1
# Audio-Visual Mandarin Electrolaryngeal Speech Voice Conversion ###### Abstract Electrolarynx is a commonly used assistive device to help patients with removed vocal cords regain their ability to speak. Although the electrolarynx can generate excitation signals like the vocal cords, the naturalness and intelligibility of electrolaryngeal (EL) speech are very different from those of natural (NL) speech. Many deep-learning-based models have been applied to electrolaryngeal speech voice conversion (ELVC) for converting EL speech to NL speech. In this study, we propose a multimodal voice conversion (VC) model that integrates acoustic and visual information into a unified network. We compared different pre-trained models as visual feature extractors and evaluated the effectiveness of these features in the ELVC task. The experimental results demonstrate that the proposed multimodal VC model outperforms single-modal models in both objective and subjective metrics, suggesting that the integration of visual information can significantly improve the quality of ELVC. Yung-Lun Chien\({}^{1,2}\), Hsin-Hao Chen\({}^{1,2}\), Ming-Chi Yen\({}^{2}\), Shu-Wei Tsai\({}^{3}\), Hsin-Min Wang\({}^{2}\), Yu Tsao\({}^{2}\), and Tai-Shih Chi\({}^{1}\)\({}^{1}\) National Yang Ming Chiao Tung University, \({}^{2}\) Academia Sinica \({}^{3}\) National Cheng Kung University Hospital [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected] **Index Terms**: Electrolaryngeal speech, voice conversion, lip images, multimodal learning, feature extractor. ## 1 Introduction The ability to speak and communicate is fundamental for human life. However, individuals who undergo laryngectomy lose the ability to produce excitation signals because of the removal of their vocal cords. This loss significantly affects their ability to speak normally, decreasing their overall quality of life. 
To address this issue, the use of the electrolarynx is the primary method for speech recovery. However, this device often produces a relatively flat fundamental frequency (F0) and generates noise that affects the voice quality, highlighting the need for improved electrolaryngeal (EL) speech techniques. Voice conversion (VC) is a technique that converts a human voice from a source speaker to target speaker without changing the underlying content. One of the applications of VC is to improve the naturalness and intelligibility of EL speech [1, 2]; this VC task is called electrolaryngeal speech voice conversion (ELVC). A typical ELVC approach first extracts the acoustic features of EL speech and target natural (NL) speech and then trains a conversion model. When in use, the converted features are synthesized back into a waveform using a vocoder. For frame-based VC, aligning the acoustic features of paired EL and NL speech is critical before training the conversion model. Dynamic time warping (DTW) is the most commonly used algorithm for determining the best alignment path over two feature sequences based on a predefined distance (e.g., the Euclidean distance). However, in ELVC, the DTW algorithm often fails to find the correct alignment path and causes the model to fail in learning the correct conversion function, which seriously affects the performance of ELVC. To address this issue, Liou _et al._ used lip images instead of acoustic features for alignment [3]. Although this method achieved better ELVC results, it was not the best alignment method. In this study, we explored different alignment methods to improve the performance of ELVC. In addition to its role in alignment, the lip shape may play an important role in speech signal processing [4]. Although users of the electrolarynx cannot speak normally, their lip movements are similar to those of healthy people. Therefore, the use of lip-shape information to improve the ELVC model is worth studying. 
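The dynamic time warping step described above can be sketched in a few lines. The quadratic-time implementation below uses the Euclidean frame distance mentioned in the text and is only an illustration: a practical system would use an optimized library routine, with MCC vectors as the per-frame features.

```python
import numpy as np

def dtw_path(X, Y):
    """Optimal alignment path between feature sequences X (Tx, D) and Y (Ty, D)
    under the Euclidean frame distance."""
    Tx, Ty = len(X), len(Y)
    D = np.full((Tx + 1, Ty + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Tx + 1):
        for j in range(1, Ty + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # backtrack from the end to recover the frame pairing
    path, i, j = [], Tx, Ty
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

The returned list of (source frame, target frame) pairs is what a frame-based conversion model would be trained on; the failure mode discussed in the text corresponds to this path wandering far from the true correspondence when the two feature spaces differ too much.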
Multimodal training methods have been employed in many speech-processing studies [5, 6], including the VC task [7]. In this study, we evaluated different visual feature extractors and determined the best one for the ELVC task. The main contributions of this study are twofold: i) the proposal of a new feature-alignment method suitable for frame-based ELVC, and ii) a novel multimodal VC architecture that uses both acoustic and visual features. The remainder of this paper is organized as follows. Section 2 introduces the alignment methods, including the traditional and proposed methods. Section 3 introduces different lip-image feature extractors and their uses. Section 4 presents the experimental setup and various objective and subjective evaluations. Finally, Section 5 presents the conclusions of this study and directions for future research. ## 2 Alignment methods In this section, we introduce previous alignment methods and our proposed alignment method for ELVC. ### Previous alignment methods As shown in Fig. 1, EL speech is generally longer than NL speech, even with the same linguistic content. Differences in speech length can cause distortion of the NL speech owing to length stretching during alignment. In addition to the very different acoustic properties of EL and NL speech, the length difference is one of the key challenges in aligning these two types of speech. As a baseline, we used the WORLD vocoder [8] to decompose EL and NL speech into acoustic features, such as mel-cepstral coefficients (MCC). Subsequently, an alignment was performed based on the DTW algorithm using MCC. This method is referred to as DTW-MCC. The path calculation is based on the mel-cepstral distortion (MCD). Liou _et al._ used the lip images of EL speech and NL speech to align the two [3].
This approach involves first obtaining 20 lip landmarks using the dlib library [9], relocating the coordinates according to their centroid, and then calculating the Euclidean distance between the source and target landmark sets. Although the DTW-lip-landmark method was shown to outperform the DTW-MCC method in the ELVC task in [3], room for improvement remains. ### Proposed alignment method To address the misalignment caused by the difference in length of EL and NL speech, we applied the waveform similarity overlap-and-add (WSOLA) algorithm [10], which is a time-scale modification method that can adjust the speed of speech while preserving F0. Specifically, we used WSOLA to adjust the length of NL speech to match that of the EL speech, thereby reducing the distortion caused by the length difference. The modified DTW-MCC method that uses length-adjusted NL speech is referred to as the DTW-WSOLA method. We conducted preliminary listening tests and confirmed that the intelligibility of the NL speech was not compromised after length adjustment. ## 3 Multimodal system architecture The overall architecture of the proposed multimodal ELVC system, which consists of a VC model and lip image feature extractor, is illustrated in Fig. 2. The VC model and lip-image feature extractor are described in detail in the following sections. ### Voice conversion model The VC model is implemented based on the CLDNN model proposed in [11]. CLDNN has been used in ELVC with satisfactory results in [12]. Using the MCC features as the model input, three independent CLDNN models were trained to predict the target speaker's MCC, aperiodicity (AP), and F0 and unvoiced/voiced (U/V) symbols. To reduce the experimental variability, we changed the input to a logarithmic Mel spectrogram (LMS) and trained a single CLDNN to convert the input LMS into the target LMS. To synthesize the waveform from the LMS, we used parallel WaveGAN [13] as the vocoder in our experiments.
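Returning briefly to the alignment stage, the WSOLA time-scale modification used in the DTW-WSOLA method can be illustrated with a compact, self-contained sketch: each output frame is taken from the input near its nominal position, shifted within a tolerance to maximize waveform similarity with the natural continuation of the previous frame, then overlap-added. The frame size, hop, and tolerance below are arbitrary illustrative choices; a production system would use a tuned TSM implementation.

```python
import numpy as np

def wsola(x, alpha, frame=1024, hop_s=256, tol=256):
    """Stretch x to ~alpha times its length while preserving pitch (WSOLA sketch)."""
    hop_a = int(round(hop_s / alpha))            # analysis hop
    win = np.hanning(frame)
    n_out = int(len(x) * alpha)
    y = np.zeros(n_out + frame)
    wsum = np.full(n_out + frame, 1e-12)
    q = 0                                        # start of previously chosen frame
    for m in range(n_out // hop_s):
        p = m * hop_a                            # nominal analysis position
        if m > 0:
            # "natural continuation" of the last chosen frame, advanced by hop_s
            ref = x[q + hop_s : q + hop_s + frame]
            lo, hi = max(0, p - tol), min(len(x) - frame, p + tol)
            if len(ref) < frame or hi < lo:
                break
            # choose the candidate start maximizing waveform similarity
            cand = np.arange(lo, hi + 1)
            scores = [float(np.dot(ref, x[c:c + frame])) for c in cand]
            q = int(cand[int(np.argmax(scores))])
        if q + frame > len(x):
            break
        out = m * hop_s
        y[out:out + frame] += win * x[q:q + frame]
        wsum[out:out + frame] += win
    return (y / wsum)[:n_out]
```

Because segments are reused (or skipped) rather than resampled, the local waveform shape, and hence F0, is preserved while the overall duration changes by the factor `alpha`.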
### Lip image feature extractor The compressed visual features were obtained using a lip-image feature extractor. The feature extractor can be completely removed during the training phase. The lip-image feature extractors used in this study are described below. #### 3.2.1 CNN encoder The overall architecture of the CNN-based lip-image feature extractor includes an encoder and a decoder [14]. The encoder consists of three 2D convolutional layers and one linear layer. The decoder architecture is similar to that of the encoder; however, the convolutional layers are replaced by 2D transposed convolutional layers. The CNN-based model was trained in a self-supervised manner by reconstructing input lip images. Then, the lip images were processed by the pre-trained encoder to obtain latent representations of dimension 768, and these representations were used as the visual features for the multimodal VC model. #### 3.2.2 Vision Transformer Vision Transformer (ViT) [15] is an image classification model with Transformer [16] as the backbone. We used a pre-trained ViT model1 as a lip image feature extractor. The lip images were processed using the ViT model, and 768-dimensional representations of the last hidden layer were used as the visual features for the multimodal VC model. Footnote 1: [https://github.com/google-research/vision_transformer](https://github.com/google-research/vision_transformer) #### 3.2.3 AV-HuBERT In recent years, many model architectures for self-supervised learning (SSL) have been developed, including AV-HuBERT [17], which inputs both acoustic features and lip images during training. AV-HuBERT enables the model to learn better features through the complementarity of information provided between the two modalities, leading to better results for downstream tasks that utilize lip information. We used a pre-trained AV-HuBERT model 2 as the lip image feature extractor. Figure 1: Spectrogram plots of EL speech and NL speech. 
Figure 2: Overall architecture of the proposed multimodal ELVC system. When using AV-HuBERT as a feature extractor, it is possible to analyze whether the output of each layer of the transformer encoder is helpful for the ELVC task. Inspired by [18], a weighted-sum (WS) method was used for the output of each layer to combine the best-fit features. During VC model training, the AV-HuBERT model was fixed, but the weights were learned and updated. To balance the values of the output features of each layer, the output features were normalized and multiplied by the weight values. In our experiments, we compared the performance of the output features of the last hidden layer (LL) with that of the features using the WS method. ## 4 Experiments This section presents the experimental setup, including the data and evaluation metrics, and the experimental results. ### Datasets and evaluation metrics We conducted experiments on the Mandarin parallel ELVC corpus, which was recorded by a doctor imitating a total laryngectomy patient using an electrolaryngeal device. The doctor read each sentence in the phonetically balanced TMHINT [19] dataset with and without the use of electrolarynx, while the audio and video were simultaneously recorded. We used 288 and 18 utterances as training and test data, respectively. All the speech utterances were sampled at a frequency of 16 kHz. Each speech waveform was converted into an 80-dimensional LMS with a window size of 512 points and frame shift of 160 points. The layer parameters of the CLDNN model architecture in Fig. 2 are similar to those in [12], except for the last fully connected layer. Since the input acoustic feature is an 80-dimensional LMS, the number of hidden units in the last fully connected layer is set to 80 to ensure that the input and output dimensions are consistent. The parallel WaveGAN used to synthesize the LMS back into a waveform was trained using the TMSV dataset [14]. 
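The weighted-sum (WS) combination of layer outputs described above can be sketched as a simple forward pass: softmax weights over layers, with each layer normalized first so magnitudes are comparable. In the actual system the logits would be trainable parameters updated jointly with the VC model; the NumPy version below, with illustrative layer count and dimensions, only shows the computation.

```python
import numpy as np

def weighted_sum(layer_feats, logits):
    """Combine per-layer features (n_layers, T, D) with learnable scalar logits.
    Each layer is normalized first to balance the output magnitudes across layers."""
    a = np.exp(logits - np.max(logits))
    a = a / a.sum()                               # softmax over layers
    z = np.stack([(f - f.mean()) / (f.std() + 1e-8) for f in layer_feats])
    return np.tensordot(a, z, axes=1)             # -> (T, D)
```

Training then only has to learn one scalar per transformer layer, which is how one can inspect, after the fact, which AV-HuBERT layers the ELVC task actually relies on.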
The frame rate of the video was 50 FPS, and we downsampled the frame rate to 25 FPS, such that one image corresponded to four acoustic frames. Lip images were acquired by the lip-image extractor in [20] and converted into lip-image features using a lip-image feature extractor. In the experiments, the lip-image feature sequence was aligned with the acoustic frame sequence for model training. The batch size was 16, the learning rate was set to 0.0005, and the Adam optimizer was used. Three objective metrics were used to evaluate the ELVC systems, including MCD, the syllable error rate (SER) measured by an ASR system trained on the MATBN dataset [21], and the estimated mean opinion score (MOS) of the pre-trained MOSA-Net [22]3. The SER and predicted MOS values were 7.3% and 3.052 for NL speech and 82.3% and 1.556 for EL speech, respectively. These values were considered the upper and lower bounds of the performance of the ELVC models. Footnote 3: [https://github.com/dimarsyan/MOSA-Net-Cross-Domain](https://github.com/dimarsyan/MOSA-Net-Cross-Domain) ### Experimental results Experiments were conducted in two stages. First, the ELVC results obtained using different alignment methods were compared, and the best alignment method for use in subsequent experiments was determined. Subsequently, we compared the ELVC results obtained using different lip-image feature extractors. #### 4.2.1 Comparison of alignment methods Table 1 lists the results obtained by applying different alignment methods to ELVC. The best-performing method was DTW-WSOLA, which stretched the target speech length so that more corresponding acoustic frames were aligned with the EL speech. While this could lead to distortion, it performed better than the DTW-lip-landmark method, which uses lip images for alignment. DTW-WSOLA cannot fully solve the alignment problem caused by the large difference in the acoustic characteristics of EL and NL speech; however, it is much better than other alignment methods. 
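Among the objective metrics, MCD has a simple closed form worth writing down. The helper below assumes length-aligned MCC sequences with the 0th (energy) coefficient already dropped, which is the usual convention; this is our assumption, since the text does not spell out the exact variant used.

```python
import numpy as np

def mcd_db(mc_ref, mc_conv):
    """Mean mel-cepstral distortion in dB between two aligned MCC sequences
    of shape (T, D). Lower is better; identical inputs give 0."""
    diff = np.asarray(mc_ref) - np.asarray(mc_conv)
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff**2, axis=1))
    return float(np.mean(per_frame))
```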
Therefore, DTW-WSOLA was used as the alignment method in subsequent experiments. #### 4.2.2 Comparison of visual feature extractors Table 2 lists the results of applying different lip image feature extractors to ELVC. The visual features extracted by the CNN encoder and ViT showed no notable improvement in all three metrics. However, the visual features extracted by AV-HuBERT, both LL and WS, had a significant improvement in MCD, and the WS visual features were more helpful than the LL visual features. Compared with the CNN encoder and ViT, AV-HuBERT used both acoustic features and lip images as model input, which can extract meaningful features and provide more information to better train the conversion model. #### 4.2.3 Fine-tuning visual features In our previous experiments, we concatenated the visual features extracted using a lip-image feature extractor with the acoustic features and trained a conversion model. In this experiment, we aimed to improve the conversion ability by fine-tuning (FT) the extracted visual features. We fed the extracted visual features to a unidirectional GRU layer and maintained the dimensionality of the features, enabling the model to learn dynamic information between images. The GRU module was trained together with the VC model. Comparing the results in Tables 2 and 3, it is found that the simple FT method can effectively improve the usability of the visual features extracted by all the lip image feature extractors. on a scale of 1-5, regardless of the speech quality. 
The evaluation criteria are as follows: 5 means that every word in the sentence can be understood; 4 means that a few words in the sentence cannot be understood, but it does not affect the understanding of the sentence; 3 means that nearly half of the words in the sentence can be understood, and the content of the sentence can be roughly judged; 2 means that only a few words in the sentence can be understood, but not the whole sentence; and 1 means that the sentence cannot be understood at all. Table IV presents the subjective evaluation results of three ELVC systems. The listening test was conducted on 12 untrained but experienced normal hearing subjects. Among them, 8 were male, and 4 were female. The average age of these 12 subjects was 24 years old. For each test sample, participants were not informed which ELVC system was used to generate it. We selected 18 speech utterances converted from each ELVC system to conduct the subjective test. Both audio-visual systems (AV-HuBERT(WS) and AV-HuBERT(WS)+FT) using the AV-HuBERT features achieved higher intelligibility than the Audio-only CLDNN system; and the system with fine-tuned visual features (AV-HuBERT(WS)+FT) achieved the best intelligibility. The subjective evaluation results confirm that multi-modal learning can help with the ELVC task. ## V Conclusions and future work In this study, we proposed a multimodal ELVC approach. The experimental results show that the quality and intelligibility of converted EL speech can be improved. The features of the SSL models that have been frequently used in recent years also play a pivotal role in our model. In future research, we will attempt to fine-tune the pre-trained AV-HuBERT model to generate more useful features for ELVC. We will also leverage the features of AV-HuBERT to help align EL and NL speech for better ground truth when training the conversion model.
2308.01491
Longer-Lived Mediators from Charged Mesons and Photons at Neutrino Experiments
Since many of the dark-sector particles interact with Standard Model (SM) particles in multiple ways, they can appear in experimental facilities where SM particles appear in abundance. In this study, we explore a particular class of longer-lived mediators that are produced from photons, charged mesons, neutral mesons, and $e^\pm$ that arise in proton-beam fixed-target-type neutrino experiments. This class of mediators encompasses light scalars that appear in theories like extended Higgs sectors, muon(electro)philic scalars, etc. We evaluate the sensitivities of these mediators at beam-based neutrino experiments such as the finished ArgoNeuT, ongoing MicroBooNE, SBND, ICARUS, and the upcoming DUNE experiment. We realize that scalars are more enhanced while produced from three-body decay of charged mesons, especially if they are muonphilic in nature. For scenarios that contain muonphilic scalars, these experiments can probe unexplored regions of parameter space that can explain the current discrepancy in the anomalous magnetic moment of muons. The sensitivity of electrophilic scalars at the DUNE Near Detector can explore new regions. We also show that Bethe-Heitler scattering processes can be used to probe flavor-specific lepton final states even for the mediator masses below twice the lepton mass.
Bhaskar Dutta, Aparajitha Karthikeyan, Doojin Kim
2023-08-03T01:20:06Z
http://arxiv.org/abs/2308.01491v2
# Longer-Lived Mediators from Charged Mesons and Photons at Neutrino Experiments ###### Abstract Since many of the dark-sector particles interact with Standard Model (SM) particles in multiple ways, they can appear in experimental facilities where SM particles appear in abundance. In this study, we explore a particular class of longer-lived mediators that are produced from photons, charged mesons, neutral mesons, and \(e^{\pm}\) that arise in proton-beam fixed-target-type neutrino experiments. This class of mediators encompasses light scalars that appear in theories like extended Higgs sectors, muon(electro)philic scalars, etc. We evaluate the sensitivities of these mediators at beam-based neutrino experiments such as the finished ArgoNeuT, ongoing MicroBooNE, SBND, ICARUS, and the upcoming DUNE experiment. We realize that scalars are more enhanced while produced from three-body decay of charged mesons, especially if they are muonphilic in nature. For scenarios that contain muonphilic scalars, these experiments can probe unexplored regions of parameter space that can explain the current discrepancy in the anomalous magnetic moment of muons. The sensitivity of electrophilic scalars at the DUNE Near Detector can explore new regions. We also show that Bethe-Heitler scattering processes can be used to probe flavor-specific lepton final states even for the mediator masses below twice the lepton mass. 
+ Footnote †: preprint: MI-HET-809 ###### Contents * I Introduction * II Models * II.1 Higgs Portal Scalars (HPS) * II.2 Muonphilic scalars * II.3 Electrophilic scalars * III Benchmark Experiments * IV Production of mediators * IV.1 Charged meson decays * IV.2 Photons * IV.3 Electrons and positrons * IV.4 Simulation methods * V Detection of mediators * V.1 Decay widths of mediators * V.2 Detection channels * VI Results * VI.1 Sensitivity estimates * VI.2 Sensitivities with detection constraints * VII Vector Mediators * VIII Conclusions ## I Introduction There are compelling reasons for the existence of a particle sector (often called dark sector or hidden sector) beyond the Standard Model (SM) of particle physics, for example, dark matter, non-zero neutrino masses, mass-flavor hierarchy puzzle, etc. An attractive scenario with the new particle sector is one which is very weakly or feebly connected to the SM sector via portal particles that are often called mediators [1]. A myriad of models with mediators has been built along this line to address these issues and similarly, many experiments are being developed towards unraveling these mysteries. These efforts are also motivated by anomalies in the experimental results such as the LSND excess [2], the MiniBooNE anomaly [3; 4; 5], and the discrepancy in the anomalous magnetic moments of the muon [6; 7] and electron [8]. A subset of these experiments are fixed-target-type experiments involving high-intensity protons on target (POT) and they are widely adopted at neutrino facilities with beam energy being \(O(1)\) to a few hundred GeV. While neutrino facilities serve as neutrino factories, copious amounts of charged mesons, (secondary) photons, electrons, positrons, and neutral mesons are also produced. 
Therefore, given the beam energy and intensity of neutrino facilities, they have the capability to test MeV-to-sub-GeV-scale new physics particles interacting with those SM particles that can be found at these facilities. While the landscape of dark-sector models is vast, we will focus on a particular class of models that comprises scalar mediators coupling either to all SM matter particles or to a subset of flavors. We find that the flux of the above mediators produced from charged mesons inside a proton target can be quite enhanced. Scalars could abundantly appear through the three-body decay of charged mesons such as charged pions and kaons; their corresponding two-body decay is helicity-suppressed, but adding another particle in the final state evades the suppression and enhances the branching fraction of three-body decay modes [9]. We also notice an enhancement when they are sourced from photons. For example, scalars can be copiously produced as a consequence of Primakoff scattering [10; 11; 12] of photons as they interact with atoms present in the target. This process is coherently enhanced by a factor of \(Z^{2}\) from the nuclear form factor [13]. Similar to scalars, flavor-specific massive vector mediators can be produced from the above sources as well. One such example we considered in this paper is a muonphilic gauge boson that appears in the context of a gauged \(L_{\mu}-L_{\tau}\) model [14; 15; 16; 17]. We find that these can also appear in good abundance from charged mesons, neutral mesons, photons, electrons, and bremsstrahlung. We investigate the detection prospects of the signals induced by the aforementioned mediators at experiments along the NuMI [18] and BNB [19] beamlines at Fermi National Laboratory: ArgoNeuT [20], ICARUS [21], MicroBooNE [22], and SBND [23]. We also probe them at the upcoming DUNE Near Detector [24] (DUNE ND), which is placed along the LBNF beam line [25].
These detectors feature different baselines and angular distances with respect to their respective beam axis. The magnetic horns, which are designed to focus or deflect charged particles and, in turn, their corresponding neutrino decay products, affect the mediator production via charged mesons. As a result of the magnetic horn effect along with the position of detectors, we expect to take advantage of the multiple experiments and probe regions of parameter space in a complementary manner. Once a mediator is produced inside the proton target of these beam facilities, it should be safely delivered to a detector of interest. Due to the feebly-interacting nature of the above-mentioned mediators, they tend to be long-lived rather than decaying immediately. Of those that survive, some fraction can leave detectable signatures within the detector fiducial volume. These signatures include electron-positron pairs, muon-antimuon pairs, photon pairs, electron-photon pairs, and single photons from scattering and decay processes. We again expect to benefit from different baselines and detector sizes in the search for these long-lived mediators as they provide complementarity. We emphasize that while one can look for electron-positron pairs and muon-antimuon pairs from decay processes, the same final states can arise through the splitting process, also known as the Bethe-Heitler scattering process [26]. Owing to the energy-dependent nature of this process, these final states can also be produced from a mediator with a keV-range mass, which is not kinematically allowed for decay processes. For example, a 10 keV (muonphilic) scalar with a total energy greater than 210 MeV can split into a muon-antimuon pair through the Bethe-Heitler scattering process. From the above example, we see that the appearance of lepton-antilepton final states for all possible masses helps us to probe flavor-specific models directly.
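The 210 MeV figure quoted above follows from the splitting-process threshold kinematics (requiring the mediator-nucleus invariant mass to reach \(2m_{\mu}+m_{N}\)). A minimal numerical sketch; the argon nucleus mass used here is an illustrative value we supply:

```python
import math

def bh_threshold(m_l, m_N, m_X):
    """Minimum mediator energy for X + N -> l+ l- + N (Bethe-Heitler splitting).

    From s = m_X**2 + m_N**2 + 2*E_X*m_N >= (2*m_l + m_N)**2:
        E_min = (4*m_l**2 + 4*m_l*m_N - m_X**2) / (2*m_N)
    All masses and energies in GeV.
    """
    return (4 * m_l**2 + 4 * m_l * m_N - m_X**2) / (2 * m_N)

m_mu = 0.10566           # muon mass [GeV]
m_ar = 39.95 * 0.9315    # argon nucleus mass [GeV], ~ A * 1 amu (illustrative)
m_X = 10e-6              # 10 keV muonphilic scalar [GeV]

E_min = bh_threshold(m_mu, m_ar, m_X)
print(f"E_min = {E_min * 1e3:.0f} MeV")  # just above 2*m_mu ~ 211 MeV
```

For a heavy nucleus the threshold approaches \(2m_{\mu}\approx 211\) MeV, consistent with the 210 MeV quoted in the text.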
With all the above-mentioned detection channels, we investigate the Higgs Portal Scalar parameter space and the \((g-2)\)-motivated parameter space of muonphilic/electrophilic scalar mediators, which can be efficiently probed by experiments operating at the sub-GeV scale. In Sec. II, we discuss essential features of the example models that we explore. Section III is reserved for a brief overview of the benchmark short-baseline experiments for which we study sensitivity reaches. We then explain how the aforementioned mediators are produced at generic proton-on-target experiments in Sec. IV and elaborate on the signals that the mediators produce at the detectors in Sec. V. In Sec. VI, we explain our analysis methodology and report our main results including sensitivity plots. In Sec. VII, we extend the above study to vector mediator scenarios, using the gauged \(L_{\mu}-L_{\tau}\) model as an example. We finally summarize and conclude our study in Sec. VIII. ## II Models We apply the above production and detection mechanisms of scalars in the context of three benchmark spin-0 mediator models.1 Although we focus on scalars in the paper, one can also look at spin-1 gauge boson mediator models. We will briefly explore these aspects in Sec. VII. Footnote 1: We emphasize that our study here can be straightforwardly applied to mediators with different Lorentz structures as well. ### Higgs Portal Scalars (HPS) This model contains a singlet dark scalar with mass \(m_{S}\) that interacts with the SM scalar doublet via a portal interaction [27]: \[\mathcal{L}_{S}\supset(AS+BS^{2})H^{\dagger}H, \tag{1}\] where \(A\) and \(B\) are free parameters. Under \(SU(2)_{L}\) symmetry breaking, the neutral Higgs decomposes into a sum \(v+h\). Therefore, the interaction in Eq.
(1) induces a mass mixing between the dark scalar and the SM Higgs in two ways: (\(a\)) if \(A\neq 0\), then the mass mixing is naturally induced regardless of whether the dark scalar acquires a zero or a non-zero dark scalar vacuum expectation value (vev), and (\(b\)) if \(A=0\), the dark scalar can acquire a non-zero vev by an appropriate choice of potential and thus induce mass mixing. After diagonalizing the mass-like terms, we see that the scalar \(S\) mixes with the SM Higgs \(h\) via a small mixing angle, that is, \[h\to h+\theta S. \tag{2}\] Therefore, the dark scalar can interact with the SM particles that acquire mass via the Higgs scalar: \[\mathcal{L}_{\mathcal{S}}\supset\frac{1}{2}m_{S}^{2}S^{2}+\theta S \big{(}\sum_{f}\frac{m_{f}}{v}\bar{f}f+\frac{2m_{W}^{2}}{v}W_{\mu}^{+}W_{-}^{ \mu}+\frac{m_{Z}^{2}}{v}Z_{\mu}Z^{\mu}\big{)}, \tag{3}\] where \(f\) runs over all quark and charged lepton flavors. This model is of particular interest in various contexts. Examples include scalar-to-pion decays [28], MicroBooNE searches for the HPS to explain the KOTO excess [29; 30], and a search for the HPS-induced signatures at ICARUS [31]. In addition to the above-shown interactions, HPS can also couple to two photons via a fermion loop and thus widen the phenomenology [32]. ### Muonphilic scalars These scalars (henceforth denoted by \(\phi_{\mu}\)) can appear in effective field theories containing singlet scalars that have minimal flavor violation [33] or in other extensions to the SM with additional doublets/singlets, which contain Yukawa couplings unique to each flavor [34]. The Lagrangian has the following form: \[\mathcal{L}_{\phi,\mu}\supset y_{22}\bar{\mu}\mu\phi_{\mu}. \tag{4}\] Muonphilic scalars contribute to the anomalous magnetic moment of the muon (\(a_{\mu}\)) at one-loop level.
Therefore, their phenomenology is useful to explain the \(3.7\sigma\) discrepancy between the experimental and theoretical values of \(a_{\mu}\)[6; 7]: \[\Delta a_{\mu}=a_{\mu}^{\rm exp}-a_{\mu}^{\rm th}=(2.74\pm 0.73)\times 10^{-9}. \tag{5}\] Similar phenomenology has been explored via muon-coupled axion models, which bring in constraints from SN1987a data [35]. These scalars can also couple to two photons via a muon loop, and the effect of this has been studied in the context of axions [36; 37]. The lack of photon events at the E137 SLAC experiment imposes stringent constraints on this model as well [38]. ### Electrophilic scalars Along a similar line of thought, there are models with scalars (henceforth denoted by \(\phi_{e}\)) that solely couple to electrons. One such example is an effective field theory where all heavy fermions and bosons are integrated out such that we end up with scalars that exclusively couple to electrons via a Yukawa coupling. Such a model has bounds from stellar cooling [39], SN1987a [40], NA64 [41], Orsay [42], E141 [43] and E774 [44], which look at electron-positron as well as electron-photon final states. The relevant Lagrangian is given by \[\mathcal{L}_{\phi,e}\supset y_{11}\bar{e}e\phi_{e}. \tag{6}\] In a manner similar to \(\phi_{\mu}\) searches, we can study electrophilic scalars to explain the discrepancy in the electron anomalous magnetic moment \(a_{e}\), which, based on a recent measurement with \({}^{87}\)Rb [8], is \[\Delta a_{e}=a_{e}^{\rm exp}-a_{e}^{\rm th}=(4.8\pm 3.0)\times 10^{-13}\,. \tag{7}\] ## III Benchmark experiments We explore the sensitivity of these models in several neutrino experiments, as mentioned earlier. We tabulate key specifications of the experiments in Table 1. The aforementioned mediators that reach ArgoNeuT, ICARUS, and MicroBooNE are sourced from the 120 GeV NuMI beam, those at SBND are from the 8 GeV BNB beam, and finally, those at DUNE ND are produced by the 120 GeV LBNF beam.
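Before detailing the experiments further, it is worth estimating the coupling sizes targeted by Eq. (5). One can invert the standard one-loop formula for a scalar's contribution to a lepton's anomalous magnetic moment, \(\Delta a=\frac{y^{2}}{8\pi^{2}}\int_{0}^{1}dx\,\frac{x^{2}(2-x)}{x^{2}+(1-x)(m_{\phi}/m_{\ell})^{2}}\), a textbook result not written out in the text. A sketch:

```python
import math

def delta_a(y, m_phi, m_l, n=200_000):
    """One-loop contribution of a scalar with Yukawa coupling y to the lepton g-2,
    via midpoint-rule evaluation of the Feynman-parameter integral."""
    r2 = (m_phi / m_l) ** 2
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        total += x**2 * (2 - x) / (x**2 + (1 - x) * r2)
    return y**2 / (8 * math.pi**2) * total / n

m_mu = 0.10566  # muon mass [GeV]
# Coupling needed to reproduce Delta a_mu = 2.74e-9; Delta a scales as y^2.
target = 2.74e-9
for m_phi in (0.01, 0.1):  # GeV
    y = math.sqrt(target / delta_a(1.0, m_phi, m_mu))
    print(f"m_phi = {m_phi} GeV -> y_22 ~ {y:.1e}")
```

The resulting couplings come out at the few \(\times 10^{-4}\) level, the same ballpark as the benchmark \(y_{22}=10^{-4}\) used in Fig. 1.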
MicroBooNE and ICARUS receive contributions from the BNB beam as well, but in this paper, we consider the signals only from the NuMI beam. The magnetic horn system present near the target plays an integral role: the horns operate in either neutrino mode (focusing positive mesons) or antineutrino mode (focusing negative mesons). We remark that DUNE ND, ArgoNeuT, and SBND are located on the beamline,2 whereas ICARUS and MicroBooNE are placed off-axis. Since the magnetic horns focus charged particles along the beam axis, most of the high-energy mediators (originating from high-energy charged mesons) are boosted along the beam direction, whereas softer mediators are less focused and diverge away from the axis. Hence, on-axis detectors are more sensitive to high-energy mediators as compared to those that are off-axis. Similarly, (secondary) high-energy photons are directed more in the forward direction and softer photons are directed more away from the beam axis. Therefore, high-energy mediators are likely to travel along the beam axis. These expectations are depicted in Fig. 1, which contains the energy spectra of muonphilic scalars at DUNE ND (see Fig. 1a), which is one of the on-axis detectors, and ICARUS (see Fig. 1b), which is off-axis. We clearly see that the energies of the scalars that reach ICARUS are much lower than those at DUNE ND. Footnote 2: The BNB beam axis passes through SBND, and the detector center is off the beam axis by 0.3 degrees. ## IV Production of mediators In this section, we explain how long-lived scalars can be produced from charged mesons, photons, electrons, and positrons. ### Charged meson decays Charged pions and kaons dominantly decay into a charged lepton \(\ell\) (\(=\mu\) or \(e\)) and its neutrino counterpart through an off-shell intermediate \(W\) boson, for example, \(K^{+}/\pi^{+}\rightarrow\mu^{+}\nu_{\mu}\) and \(K^{+}/\pi^{+}\to e^{+}\nu_{e}\).
However, the above two-body decay processes are suppressed due to the required helicity of final-state particles, which constrains the allowed phase space. This enables us to explore the production of long-lived mediators as the third decay product of charged mesons. Unlike the corresponding two-body decay, this three-body decay is not limited by the helicity suppression [9; 45; 46]. However, the branching fraction for a choice of coupling must not exceed the upper limit on three-body decay branching fractions of charged kaons [47] and charged pions [48]. We use the three-body decay of kaons as our example in this section since the kinematically allowed phase space and mass range are larger than those of pions, although the same argument can be applied to pions. HPS can emanate from the charged lepton leg as in Fig. 2a as well as from the \(W\) boson leg [49] as in Fig. 2b. Since they couple to leptons with a strength of \(m_{\ell}/v\) and to the \(W\) boson with a strength of \(2m_{W}^{2}/v\), the latter contribution dominates the relevant decay matrix element. The HPS can also couple to charged kaons, with a strength that can be calculated from chiral perturbation theory, but since this term is subdominant in the relevant decay matrix element, we do not include its contribution to the decay width. Since we take the neutrinos to be massless, they do not couple to the HPS, and we omit their contribution. HPS from \(K^{+}\to\mu^{+}\nu_{\mu}S\) are kinematically restricted to a maximum mass of \(m_{K}-m_{\mu}=388\) MeV, whereas those from \(K^{+}\to e^{+}\nu_{e}S\) can be as heavy as \(m_{K}-m_{e}=492\) MeV. HPS can also be produced via a kaon two-body decay, i.e., \(K^{+}\to\pi^{+}S\) (Fig. 2c), where the scalar couples to an intermediate top quark [50; 51; 31]. This strong coupling to the top quark makes the branching ratio of the above two-body decay process dominate over all the three-body decays, but the HPS mass is limited to \(m_{K}-m_{\pi}=354\) MeV.
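The helicity suppression invoked above is easy to quantify: at tree level, \(\Gamma(\pi^{+}\to e^{+}\nu_{e})/\Gamma(\pi^{+}\to\mu^{+}\nu_{\mu})=(m_{e}^{2}/m_{\mu}^{2})\,[(m_{\pi}^{2}-m_{e}^{2})/(m_{\pi}^{2}-m_{\mu}^{2})]^{2}\), a standard result that is not written out in the text. A quick numerical check:

```python
def leptonic_ratio(m_pi, m_l1, m_l2):
    """Tree-level ratio Gamma(pi -> l1 nu) / Gamma(pi -> l2 nu).
    The (m_l/m_pi)^2 factor is the helicity suppression; the squared
    bracket is the phase-space factor."""
    return (m_l1**2 / m_l2**2) * ((m_pi**2 - m_l1**2) / (m_pi**2 - m_l2**2)) ** 2

m_pi, m_mu, m_e = 0.13957, 0.10566, 0.000511  # masses in GeV
R = leptonic_ratio(m_pi, m_e, m_mu)
# The electron mode is suppressed by ~1e-4 despite its larger phase space.
print(f"Gamma(pi->e nu)/Gamma(pi->mu nu) ~ {R:.2e}")
```

Adding a third light particle in the final state lifts this chiral suppression, which is why the three-body modes can compete despite the extra phase-space cost.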
HPS can also be produced via \(B^{+}\to K^{+}S\), a process that has been searched for at LHCb [52], but the flux of \(B\) mesons is not large enough at the aforementioned neutrino facilities. Hence, HPS sourced from \(B^{+}\) do not produce a sizable signal flux here. For scalars with flavor-specific couplings, such as muonphilic (electrophilic) scalars, the only possible diagrams are those where scalars emerge from muon (electron) legs (Fig. 2a). Therefore, they can appear from kaons via the process \(K^{+}\to\mu^{+}\nu_{\mu}\phi_{\mu}\) (\(K^{+}\to e^{+}\nu_{e}\phi_{e}\)), where the amplitude depends on the Yukawa coupling squared \(y_{22}^{2}\) (\(y_{11}^{2}\)). ### Photons Scalars couple to two photons at the one-loop level. This coupling can be written as [53], \[S^{\mu\nu}=g_{\phi\gamma}\big{(}p_{1}\cdot p_{2}\,\eta^{\mu\nu}-p_{1}^{\nu}p_{2}^{\mu} \big{)}\,, \tag{8}\] Here, the coupling strength \(g_{\phi\gamma}\) is written in terms of the non-divergent one-loop factor with mass dimension \(-1\)3: Footnote 3: The subscript \(f\) in \(g_{\phi\gamma}\) is used to denote that all fermions, leptons and quarks, that couple to the scalar contribute to the loop. \[g_{\phi\gamma}=\frac{\alpha_{\rm em}}{\pi}\sum_{f}y_{ff}\frac{N_{c}Q_{f}^{2}} {m_{f}}I\Big{(}\frac{m_{\phi}^{2}}{4m_{f}^{2}}\Big{)}, \tag{9}\] where \(\alpha_{\rm em}=1/137\) is the electromagnetic fine structure constant, \(p_{1}\) and \(p_{2}\) denote the momenta of the two photons, and the function \(I(\beta)\) carries information about the non-divergent fermion loop. It is generally expressed as \[\begin{split} I(\beta)&=\int_{0}^{1}dx\int_{0}^{1- x}dy\frac{1-4xy}{1-4xy\beta}\\ &=\frac{1}{2\beta^{2}}[\beta+(\beta-1)f(\beta)],\end{split} \tag{10}\] where \(f(\beta)\) is defined as \[f(\beta)=\begin{cases}\arcsin\big{(}\sqrt{\beta}\big{)}^{2}&\text{for } \beta\leq 1\\ -\frac{1}{4}\big{(}-2\,\mathrm{arccosh}\sqrt{\beta}+i\pi\big{)}^{2}&\text{for }\beta>1\end{cases}. \tag{11}\] The coupling that appears in Eq.
(9) is \[\begin{split} y_{\ell\ell}=\theta m_{\ell}/v&\text{for HPS}, \\ y_{\mu\mu}=y_{22}&\text{for }\phi_{\mu},\\ y_{ee}=y_{11}&\text{for }\phi_{e}.\end{split} \tag{12}\] The dot product \(p_{1}\cdot p_{2}\) equals \(m_{\phi}^{2}/2\) if the scalar decays into two photons, and \((m_{\phi}^{2}-q^{2})/2\) if one of the photons is an off-shell propagator that appears in the Primakoff scattering Feynman diagram (Fig. 3a). From Fig. 4, we see that the loop factor is maximized when \(\beta\) lies between \(1\) and \(3\), and it drops as \(\beta\to 0,\infty\). For a scalar that appears in the HPS model, all massive fermions contribute to the loop, but the dominant contribution is from those fermions with masses comparable to that of the scalar, as can be seen from the argument of \(I\) in Eq. (9). However, for those that appear in the muonphilic (electrophilic) scalar model, the only contribution to the fermion loop is from muons (electrons), which is proportional to \(y_{22}\) (\(y_{11}\)). \begin{table} \begin{tabular}{|l||l|l|l|l|l|} \hline Detectors & Beam, Energy & Distance & Angle off-axis & Detector dimensions & POT \\ & [GeV] & [m] & [degrees] & [m \(\times\) m \(\times\) m] & \\ \hline SBND [23] & BNB, \(8\) & \(110\) & \(0.3\) & \(4\times 4\times 5\) & \(6.6\times 10^{21}\) \\ DUNE ND [24] & LBNF, \(120\) & \(574\) & \(0\) & \(3\times 5\times 4\) & \(7\times 10^{21}\) \\ ArgoNeuT [20] & NuMI, \(120\) & \(1040\) & \(0\) & \(0.4\times 0.48\times 0.9\) & \(1.35\times 10^{20}\) \\ ICARUS [21] & NuMI, \(120\) & \(803\) & \(5.56\) & \(2\times(2.63\times 2.86\times 17)\) & \(6.6\times 10^{20}\) \\ MicroBooNE [22] & NuMI, \(120\) & \(685\) & \(8\) & \(2.26\times 2.03\times 10.4\) & \(6\times 10^{20}\) \\ \hline \end{tabular} \end{table} Table 1: List of experiments at the NuMI, BNB, and LBNF baselines and their key specifications. The quoted numbers in the last column are the POT that we use in our study.
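The closed form of the loop factor in Eqs. (10)-(11) can be cross-checked against the double integral directly (the integral representation is regular for \(\beta<1\)). A minimal sketch, also verifying that \(I(\beta)\) is continuous at \(\beta=1\):

```python
import cmath
import math

def f(beta):
    """Loop function of Eq. (11)."""
    if beta <= 1:
        return complex(math.asin(math.sqrt(beta)) ** 2)
    return -0.25 * (-2 * cmath.acosh(math.sqrt(beta)) + 1j * math.pi) ** 2

def I(beta):
    """Closed form of Eq. (10)."""
    return (beta + (beta - 1) * f(beta)) / (2 * beta**2)

def I_numeric(beta, n=400):
    """Midpoint evaluation of the double integral in Eq. (10) (use beta < 1)."""
    s = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        for j in range(n):
            y = (j + 0.5) / n * (1 - x)  # y runs over [0, 1 - x]
            s += (1 - 4 * x * y) / (1 - 4 * x * y * beta) * (1 - x) / n**2
    return s

for beta in (0.3, 0.9):
    print(f"beta={beta}: closed form {I(beta).real:.4f}, integral {I_numeric(beta):.4f}")
print(f"I(1-) = {I(1 - 1e-9).real:.4f}, I(1+) = {I(1 + 1e-9).real:.4f}")
```

Both one-sided limits at \(\beta=1\) equal \(1/2\), as required by continuity of the loop factor.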
Through the above one-loop coupling, scalars can be produced from photons at the target via the Primakoff process, which is enhanced by a factor of \(Z^{2}\) from the nuclear form factor, as mentioned earlier. These scalars are also highly forward-directed, i.e., in the same direction as the incoming photon. Despite this enhancement, more HPS are produced from kaon two-body decays than via Primakoff scattering, due to the presence of the \(W\) boson and top quark couplings in the former scenario. Since these couplings do not exist in the muonphilic and electrophilic scalar models, the Primakoff production here is not as suppressed as it is in the case of HPS. In fact, we observe that this contribution exceeds that from kaon decays for electrophilic scalars with masses close to 1 MeV (twice the electron mass). Electrophilic scalars can also appear when photons interact with electrons via Compton-like scattering (Fig. 3b). Although the enhancement factor here is only proportional to \(Z\), unlike the Primakoff enhancement proportional to \(Z^{2}\), this process dominates over the scalar Primakoff process as it occurs at the tree level. For both the Primakoff and Compton-like scatterings, the minimum energy required to produce a scalar is \[E_{\gamma}=\frac{m_{\phi}^{2}}{2m_{T}}+m_{\phi}, \tag{13}\] where \(m_{T}\) is the electron mass \(m_{e}\) for Compton-like scattering and the nucleus mass \(m_{N}\) for Primakoff scattering. Figure 1: Energy spectra of muonphilic scalars at (a) DUNE ND and (b) ICARUS. Three scalar mass values are shown: \(m_{\phi}=0.01\) GeV, \(0.1\) GeV, and \(0.16\) GeV with Yukawa coupling \(y_{22}=10^{-4}\). Figure 3: Feynman diagrams depicting the production of scalars from photons. (a) Primakoff scattering of a photon to produce HPS as well as flavor-specific scalars. (b) Compton-like scattering of a photon to produce an electrophilic scalar. Figure 2: Feynman diagrams depicting the production of scalars from charged kaons.
(a) Production of both HPS and flavor-specific scalars from the charged lepton leg. (b) HPS produced from the \(W\) boson leg. (c) Feynman diagram for two-body decay of \(K^{+}\) to \(\pi^{+}\) and HPS \(S\), which couples to an intermediate top quark. ### Electrons and positrons When fast-moving positrons interact with electrons at the target, they can annihilate to a photon and an electrophilic scalar, \(e^{+}e^{-}\rightarrow\gamma\phi_{e}\), as shown in Fig. 5(a). However, if the energy of the incoming positron satisfies the resonance condition for a particular scalar mass, the electrophilic scalar can be produced directly, \(e^{+}e^{-}\rightarrow\phi_{e}\) (Fig. 5(b)). The cross-section of this process is given by \[\sigma_{\phi_{e}}=4\pi y_{11}^{2}\frac{m_{e}}{m_{\phi}^{2}}\sqrt{m_{\phi}^{2}- 4m_{e}^{2}}\,\delta(E_{+}+m_{e}-\frac{m_{\phi}^{2}}{2m_{e}}), \tag{14}\] where \(E_{+}\) is the energy of the incident positron. In order to produce scalars via resonance processes, the center-of-mass energy of the \(e^{+}e^{-}\) system must exactly match the mass of the scalar (modulo its decay width), as enforced by the delta function. Since the center-of-mass energy \(\sqrt{s}\) is \(\sim 10\) MeV at NuMI and LBNF, where the peak energy of positrons is \(\sim 100\) MeV, scalars as heavy as \(10\) MeV could appear through this process. Similarly, at BNB, where the peak energy of positrons is \(10\) MeV, a scalar of mass \(3\) MeV is preferred for resonant production. However, we find that this process is still subdominant at these resonant masses as compared to other processes. Therefore, we ignore the contributions from resonance while simulating scalars. ### Simulation methods We simulate the fluxes of source particles, i.e., mesons, photons, electrons, and positrons, using the GEANT4 code package [54]. The distribution and normalization of charged-meson fluxes as they pass by the magnets are adjusted according to the magnet specifications of these experiments [55; 31].
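Before moving on to the mediator simulation itself, the resonance kinematics encoded in the delta function of Eq. (14) can be checked against the numbers quoted above. A small sketch:

```python
import math

m_e = 0.000511  # electron mass [GeV]

def resonant_positron_energy(m_phi):
    """Positron energy E+ that sets the delta-function argument in Eq. (14)
    to zero: E+ = m_phi**2 / (2*m_e) - m_e. All energies in GeV."""
    return m_phi**2 / (2 * m_e) - m_e

def sqrt_s(E_plus):
    """Center-of-mass energy of a positron of energy E+ on an electron at rest."""
    return math.sqrt(2 * m_e * (E_plus + m_e))

# ~100 MeV positrons (peak at NuMI/LBNF) give sqrt(s) ~ 10 MeV, so scalars
# up to ~10 MeV can be produced resonantly; at BNB (~10 MeV positrons) the
# preferred mass is ~3 MeV.
print(f"sqrt(s) at E+ = 100 MeV: {sqrt_s(0.100) * 1e3:.1f} MeV")
print(f"sqrt(s) at E+ =  10 MeV: {sqrt_s(0.010) * 1e3:.1f} MeV")
print(f"E+ on resonance for m_phi = 10 MeV: {resonant_positron_energy(0.010) * 1e3:.1f} MeV")
```

By construction, \(\sqrt{s}\) evaluated at the resonant positron energy returns the scalar mass exactly.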
In order to simulate mediators, we first calculate the probability of producing them in the center-of-mass (C.O.M) frame of the production process. Using this probability distribution, we simulate mediators in the C.O.M frame and then boost them to the laboratory frame. The probability functions in the C.O.M frame are explained below: 1. If a mediator is produced as a product of a two-body decay, such as \(K^{+}\rightarrow\pi^{+}S\), the branching ratio and the energy of the mediator are fixed for a given mass of the mediator. 2. If produced from a three-body decay such as \(K^{+}\to e^{+}\nu_{e}\phi_{e}\), we calculate the differential branching ratio as a function of energy in the rest frame of the decaying particle. We generate random energies between the minimum and maximum energies in this frame, weighted by the flux times the Monte Carlo volume. 3. If the mediator is from a 2-to-2 scattering process, for example, Compton-like scattering, we calculate \((1/\sigma_{\rm tot})d\sigma_{2\to 2}/dt\) in the C.O.M frame of the process. In the above example, \(\sigma_{\rm tot}\) is the total scattering cross section of a photon of a given energy and \(\sigma_{2\to 2}\) is the cross section of the Compton-like scattering. Also, \(t\) is one of the Lorentz-invariant Mandelstam variables. In this choice of frame, the energy and momenta of all the final-state particles are fixed. The angular distributions of the decay processes are approximately uniform (modulo the internal propagator effect). Hence, we randomly choose the angles in the rest frame of the decaying particle, and then boost the four-momentum along the direction of the decaying particle to the laboratory frame. For the scattering processes, however, the angular distribution is not necessarily uniform, but a function of the Mandelstam \(t\). Thus, we simulate \(t\)'s and the appropriate Monte Carlo weights in the C.O.M frame, and then boost them back to the laboratory frame.
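The rest-frame sampling plus boost described in items 1-3 can be sketched for the simplest case, a two-body decay \(K^{+}\to\pi^{+}S\) with an isotropic angle in the kaon rest frame; the 50 MeV scalar mass and 20 GeV kaon energy below are illustrative values we supply:

```python
import math
import random

def boost_z(E, pz, gamma):
    """Boost (E, pz) along +z with Lorentz factor gamma (beta > 0)."""
    beta = math.sqrt(1 - 1 / gamma**2)
    return gamma * (E + beta * pz), gamma * (pz + beta * E)

def sample_scalar_lab(E_K, m_K=0.4937, m_pi=0.1396, m_S=0.050):
    """K+ -> pi+ S: fixed two-body kinematics in the kaon rest frame,
    isotropic decay angle, then boost along the kaon flight direction (z).
    All masses and energies in GeV."""
    # Rest-frame energy/momentum of the scalar (standard two-body formulas).
    E_S = (m_K**2 + m_S**2 - m_pi**2) / (2 * m_K)
    p_S = math.sqrt(E_S**2 - m_S**2)
    cos_t = random.uniform(-1, 1)  # isotropic in cos(theta)
    E_lab, _ = boost_z(E_S, p_S * cos_t, gamma=E_K / m_K)
    return E_lab

random.seed(1)
energies = [sample_scalar_lab(E_K=20.0) for _ in range(10_000)]
print(f"mean lab-frame scalar energy: {sum(energies) / len(energies):.2f} GeV")
```

Averaged over the isotropic angle, the mean lab energy tends to \(\gamma E_{S}^{\rm rest}\), i.e., the boost simply scales the fixed rest-frame energy.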
We record the (1) energy, (2) polar and azimuthal angles, and (3) production point of the mediators in the laboratory frame for every possible mass. For each recorded event, we check whether the direction of the mediator is within the acceptance cone that the detector subtends at the point where it is produced. If so, we accept the event for the detector of interest; if not, we reject it. ## V Detection of mediators In this section, we discuss methods to detect the above-mentioned mediators at our benchmark detectors, all of which adopt the liquid argon time projection chamber (LArTPC) technology. Figure 4: Variation of \(I(\beta)\) with \(\beta\). Figure 5: Feynman diagrams depicting the production of scalars from electrons and positrons: (a) associated production with positrons impinging on target electrons and (b) resonance production. Of the mediators collected within the solid angle of the detector, only those whose lifetime is long enough to survive up to the detector, without decaying into (in)visible particles beforehand, can leave detectable signatures. To calculate the lifetime of the mediator for a given mass and momentum, we calculate the total decay width, which is the sum of the individual decay widths of all allowed decay channels. ### Decay widths of mediators Scalars can decay into a lepton and an anti-lepton if the scalar mass is greater than twice the mass of the lepton. The decay width of a scalar with mass \(m_{\phi}\) that couples to a lepton \(\ell\) with strength \(y_{\ell\ell}\) is \[\Gamma_{\phi\ell\ell}=y_{\ell\ell}^{2}\frac{m_{\phi}}{8\pi}\bigg{(}1-\frac{4 m_{\ell}^{2}}{m_{\phi}^{2}}\bigg{)}^{3/2}\,, \tag{15}\] where \(y_{\ell\ell}\) is given in Eq. (12).4 Scalars of any mass can decay into two photons with a decay width Footnote 4: Here we use the subscript \(\ell\) to denote leptons, as opposed to \(f\) in Eq. (9), which denotes fermions. This is because only leptonic final states (not quarks) can appear from these decays.
\[\Gamma_{\phi\gamma\gamma}=\frac{g_{\phi\gamma}^{2}m_{\phi}^{3}}{64\pi^{3}}. \tag{16}\] Here, the coupling \(g_{\phi\gamma}\) is given by Eq. (9). The \(-1\) mass dimension in Eq. (9) is manifested in the \(1/v\) proportionality for HPS, \(1/m_{\mu}\) for muonphilic scalars, and \(1/m_{e}\) for electrophilic scalars. Therefore, the inverse-mass scale of the photon mixing is the smallest for HPS, followed by muonphilic scalars and then electrophilic scalars. Additionally, HPS could decay into two pions as well, whose decay width can be calculated from chiral perturbation theory [56]: \[\Gamma_{\phi\pi\pi}=\bigg{(}\frac{2}{9}m_{\phi}^{2}+\frac{11}{9}m_{\pi}^{2} \bigg{)}^{2}\frac{3\theta^{2}}{32\pi v^{2}m_{\phi}}\bigg{(}1-\frac{4m_{\pi}^{ 2}}{m_{\phi}^{2}}\bigg{)}^{1/2}. \tag{17}\] We will now summarize all the possible decay channels in the context of the three scalar models. HPS decay into di-photons if their mass is less than 1 MeV. If they are heavier than 1 MeV, the electron-positron decay channel opens up. Above 210 MeV, the muon-antimuon decay channel dominates over the electron-positron channel, and above 276 MeV, the \(\pi^{+}\pi^{-}\) decay channel adds up. For the two flavor-specific models, muonphilic (electrophilic) scalars lighter than 210 MeV (1 MeV) predominantly decay into two photons. However, if they are heavier than 210 MeV (1 MeV), the muon-antimuon (electron-positron) decays take over. ### Detection channels For a given decay width of a mediator, the probability that it survives until it reaches the detector is \[P_{\rm surv}=e^{-D/\lambda_{L}}, \tag{18}\] where \(D\) is the distance between the production point of the mediator and the front end of the detector and \(\lambda_{L}\) is the laboratory-frame mean decay length. \(\lambda_{L}\) can be related to the lifetime in the laboratory frame (\(t_{L}\)) and the total decay width (\(\Gamma_{0}\)) by the following equality.
\[\begin{split}\lambda_{L}&=vt_{L}\\ &=0.197\times 10^{-15}[\rm GeV\cdot m]\frac{p_{X}}{m_{X}} \frac{1}{\Gamma_{0}[\rm GeV]},\end{split} \tag{19}\] where \(m_{X}\) and \(p_{X}\) are the mass and momentum of the mediator, respectively. After reaching the detector, a mediator can be detected if it either decays into visible particles or scatters off an argon nucleus to give rise to visible particles. For the decays, we require that the mediator decays within the fiducial volume of the detector. Therefore, the probability of detecting a mediator inside a detector of length \(\Delta\) is given by \[P_{\rm decay}=e^{-D/\lambda_{L}}(1-e^{-\Delta/\lambda_{L}}). \tag{20}\] Surviving mediators can give rise to a signal by scattering off electrons, nucleons, and/or nuclei at the LArTPC detector. The various possible scattering processes of scalar mediators are: 1. Scalar inverse Primakoff: Scalars can produce a single-photon signal through inverse Primakoff scattering by exchanging a photon with the nucleus. This is also a coherent process that is enhanced by \(Z^{2}\) from the nuclear form factor (Fig. 6a). Figure 6: Feynman diagrams illustrating the various scattering channels that mediators can undergo once they reach the detector. (a) Inverse Primakoff process where incoming scalars (\(\ell=e,\mu\)) scatter into a single photon. (b) Bethe-Heitler splitting process. (c) Inverse Compton-like scattering of \(\phi_{e}\) into an electron and a photon. 2. Bethe-Heitler process/splitting: This energy-dependent 2-to-3 scattering process gives rise to two leptons by scattering off a nucleus. This is enhanced by the form factor, and it can give rise to a lepton-antilepton signal even for mediators with masses less than twice the mass of the lepton (Fig. 6b). However, the mediator (\(X\)) requires a minimum threshold energy for this process to occur: \[E_{X,\text{min}}=\frac{4m_{\ell}^{2}+4m_{\ell}m_{N}-m_{X}^{2}}{2m_{N}}. \tag{21}\] Therefore, this scattering process is energy-dependent. 3.
Inverse Compton-like scattering: This occurs when mediators scatter off an electron to produce a photon-electron signal at the detector. This channel is possible for HPS as well as electrophilic scalars (Fig. 6c). The Bethe-Heitler process and the inverse Compton-like scattering occur for vector mediators too. The inverse Compton-like channel can appear as a tree-level process (if the vector mediator is electrophilic) or as a one-loop process (if not electrophilic) by mixing with the SM photon. By calculating the cross sections of the above scattering processes, we can arrive at the probability of scattering, \(P_{\text{scat}}\), within the detector length \(\Delta\). This is given by \[P_{\text{scat}}=P_{\text{surv}}\times n_{T}\sigma_{\phi}\Delta\,, \tag{22}\] where \(n_{T}\) is the number of target electrons/nucleons/nuclei per unit volume and \(\sigma_{\phi}\) is the scattering cross section of the process of interest.

## VI Results

In this section, we report the main results of our study. We present the sensitivity reaches for our benchmark scalar mediator models delineated in Sec. II, and we also discuss the dependence of our findings on the background assumptions.

### Sensitivity estimates

The sensitivities discussed in this section are obtained under zero-background assumptions. Therefore, the contour for each experiment has been plotted for 3 events, which is the upper bound of the 95% confidence level (C.L.) interval for zero backgrounds. Any parameter point enclosed inside the contours of an experiment yields more than 3 events. These are all-inclusive sensitivity plots where the sensitivities include all possible events from decays and scatterings of the mediators. Fig. 7 shows the sensitivity plots for the three models. The nature of the induced signals is explained in the caption below. 1. **Higgs Portal Scalars.** Figure 7a shows the sensitivity plot of HPS.
We include the limits for DUNE ND, while the lines for the other experiments have been taken from Ref. [31]. The contribution to the scalars from two-body decays has been considered in Ref. [57]. Here, however, we plot the contributions of scalars produced via two-body decays (red line) and three-body decays (blue line) separately in order to compare them. We observe that the majority of scalars are produced from the two-body decay process \(K^{+}\to\pi^{+}S\). This is due to the top quark coupling. However, only scalars lighter than 354 MeV can be produced via this process. Although HPS with masses greater than 354 MeV can be produced via the three-body decays \(K^{+}\to e^{+}\nu_{e}S\) and \(K^{+}\to\mu^{+}\nu_{\mu}S\), the flux of these scalars is suppressed. This is not only because they come from a three-body decay, but also because the couplings are weaker than the aforementioned top quark coupling. We find that the sensitivity at DUNE ND is enhanced in comparison to ICARUS, especially for larger couplings. For the masses and couplings of our interest, the number of scalars produced from Primakoff processes is subdominant. They are prominent only for \(\theta\) values greater than \(10^{-3}\), which are constrained by LHCb. For the masses and couplings of our interest, all signals produced by these scalars are from decay processes. We do not see signals from scattering processes because they are subdominant for weak couplings (\(\theta<10^{-2}\)). 2. **Muonphilic scalars.** Figure 7b depicts the sensitivity plot of the muonphilic scalar model. Since these scalars do not couple to quarks, \(W\) gauge bosons, or first- and third-generation leptons, their production modes are limited to the kaon three-body decay \(K^{+}\to\mu^{+}\nu_{\mu}\phi_{\mu}\) through the muon coupling, and the Primakoff process \(\gamma N\to\phi_{\mu}N\).
In the region corresponding to the current \(g-2\) discrepancy (the pink band), the detected signals are mostly di-photons. We also see a dip in the sensitivity plot at 210 MeV, where muon-antimuon decays start to appear, thus reducing the scalar lifetimes. Stringent constraints come from the 20 GeV electron beam experiment E137. Since forward detectors such as DUNE ND, SBND, and ArgoNeuT (existing data) are more sensitive to high-energy mediators (which have longer lifetimes), the ceiling of the sensitivity curve for these forward detectors is higher than for off-axis detectors. Thus, the sensitivities of forward detectors extend beyond the E137 bounds and into the \(g-2\) band. We also notice that large detectors are exposed to a great intensity of muonphilic scalars; hence, they are sensitive to a large region of parameter space allowed by E137. Since ArgoNeuT is smaller in size, it is challenging for it to probe couplings smaller than those excluded by E137. Some regions of this allowed parameter space that are explored by the neutrino experiments are ruled out by SN1987a data.

Figure 7: 95% C.L. lines for our three benchmark scalar mediator models under the assumption of zero backgrounds.

However, the astrophysical bounds can be avoided in light-mediator models by the chameleon effect [58; 59], where the masses of the mediators can depend on the background matter density. It must be noted that the environment in the core of a supernova (or a star) is extremely dense. In such a highly dense environment, the effective mass of the particle can be larger than it is on Earth, expanding the allowed parameter space. In any case, it is important to probe this parameter space using laboratory experiments. Unlike for HPS, scattering channels are relevant to this model.
Of the two scattering channels in this model, inverse Primakoff and Bethe-Heitler splitting, the latter dominates over the former despite the phase-space suppression. This is because the Bethe-Heitler process occurs at tree level. Additionally, there are two Feynman diagrams that contribute to the matrix element of the Bethe-Heitler process. 3. **Electrophilic scalars.** The sensitivity of this model is depicted in Fig. 7(c). The coupling of these scalars to electrons opens up many other production channels, such as Compton-like scattering and associated production. For scalars heavier than 100 MeV, \(K^{+}\) decays are the dominant source of electrophilic scalars. For scalars lighter than 100 MeV, Compton-like scattering of photons with target electrons produces the largest number of scalars. Primakoff scattering contributes mainly for scalars with mass around 1 MeV. The availability of multiple sources of electrophilic scalars comes at the cost of many constraints, with E137 being the most stringent one. However, we find that the DUNE ND is sensitive to parameters that are still unconstrained by HB Stars and E137. These scalars are detected via electron-positron decay pairs for masses greater than 1 MeV, di-photon decays for masses between 0.01 and 1 MeV, and through scattering channels for masses below 0.01 MeV. There are three possible scattering channels for electrophilic scalars: inverse Compton-like scattering, the splitting contribution, and inverse Primakoff scattering. Of these three, the splitting contribution is the dominant scattering channel for all detectors because the minimum threshold required for electron-positron splitting is very low, much lower than the beam-energy scales of NuMI and BNB. The splitting process, as seen in Fig. 6(b), is unique because it results in lepton-antilepton final states even if the mediator mass is less than twice the lepton mass, in which case the mediator would not be kinematically allowed to decay.
Like all other scattering channels, they become subdominant for higher mediator masses (masses close to twice the lepton mass). However, they give us a unique way of identifying photon-less, purely leptonic final states.

### Sensitivities with detection constraints

In our analysis and the resulting sensitivities reported in the previous section, we envisioned the situation where associated backgrounds are sufficiently suppressed. While careful background estimates would lead to more precise sensitivity estimates, they certainly depend on signal channels and detector capabilities such as energy threshold, energy/angular resolutions, and particle identification. In particular, since the LArTPC technology is being developed and matured, higher-capability detectors would allow for rejecting more backgrounds while retaining signals. Nevertheless, in this section, we investigate how our sensitivity results are affected by the background assumption, especially in the context of DUNE ND. The backgrounds for the HPS model at ICARUS and SBND have been investigated in Ref. [31]. Figure 8(a) roughly demonstrates the effect of backgrounds at DUNE ND by plotting the sensitivity contours for 100 events along with those for 3 events. We plot the 100-event line as it roughly corresponds to the worst-case reduction in sensitivity seen in Fig. 15 of Ref. [31]. Though the parameter-space coverage does not shrink drastically when the required number of signal events is increased, an improved background analysis is expected to minimize that reduction in the future. We see that the forward DUNE ND can continue to probe parameters with short lifetimes even if a larger number of events is required to determine the sensitivity, i.e., the expected sensitivity reaches are not very sensitive to the underlying background assumption.
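The chain from decay width to expected signal, Eqs. (18)-(20) for decays in flight together with the 3-event zero-background criterion used for the contours, can be sketched numerically. A minimal Python sketch: the conversion constant is \(\hbar c\approx 1.973\times 10^{-16}\,\mathrm{GeV\cdot m}\), and the mediator mass, momentum, width, and detector geometry below are illustrative placeholders, not values from the text.

```python
import math

HBARC = 1.973e-16  # GeV*m; converts 1/Gamma[GeV] into a length, cf. Eq. (19)

def decay_length(p, m, gamma):
    """Laboratory-frame mean decay length, lambda_L = (p/m) * hbar*c / Gamma."""
    return HBARC * (p / m) / gamma

def p_decay(D, delta, lam):
    """Eq. (20): survive a distance D, then decay within the detector length delta."""
    return math.exp(-D / lam) * (1.0 - math.exp(-delta / lam))

# Zero-background 95% C.L.: Poisson mean mu with P(0 | mu) = 5%, i.e. mu = -ln(0.05),
# which is approximately the "3 events" criterion quoted in the text.
mu95 = -math.log(0.05)

# Hypothetical example: m = 0.1 GeV, p = 10 GeV, Gamma = 1e-16 GeV,
# with a detector 500 m downstream and a 5 m fiducial length.
lam = decay_length(10.0, 0.1, 1e-16)
print(f"lambda_L = {lam:.1f} m, P_decay = {p_decay(500.0, 5.0, lam):.3g}, mu95 = {mu95:.2f}")
```

For a scattering signal, Eq. (22) would instead multiply the survival factor \(e^{-D/\lambda_L}\) by \(n_T\sigma_\phi\Delta\).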
Since mediators with strong couplings tend to decay rapidly, the ceiling of a sensitivity contour sets an upper bound on the coupling strength for which at least 3 of them make it to the detector. This limit depends on the mass and momentum of the mediator and on the distance between the source and detector [60]. The lower edge, on the other hand, depends on the statistics of mediators produced at the target, which approximately scales with the square of the coupling [60]. It also depends on overall scalings such as the number of POT, the branching ratio of the production process, etc. If we increase the required statistics by looking for more than 3 events, the minimum coupling required would increase as compared to the case of 3 events, therefore pushing up the lower edge of the sensitivity. Figure 8(b) is an example that shows the sensitivity contours of the muonphilic scalar model at DUNE ND for 10 events and 100 events. It clearly shows that the contours do not shrink much as we increase the number of required events. The change only affects the lower edge of the sensitivity plot. Since we would look for more statistics in the presence of backgrounds, they would affect the lower limits rather than the ceiling. This also implies that the presence of backgrounds would not compromise the ability of these experiments to probe parameters in the \(g-2\) band, as seen in the example plot.

Figure 8: Sensitivities where experimental and detection constraints are considered.

The minimum threshold kinetic energy (\(E_{t}\)) required to identify signals at the detector plays a vital role in arriving at sensitivity plots of certain models at certain detectors. We notice that this constraint does not reduce the extent of the sensitivity curves of the muonphilic scalar model, as the majority of the mediators are produced from processes that favor high-energy mediators.
However, we see that this plays a role in the sensitivity of the electrophilic scalar model, especially for masses in the keV range, since scalars produced from Compton-like scattering, the dominant channel there, are lower in energy, and so are the final states. Figure 8(c) demonstrates the effects of this by showing the contours without cuts versus those with 10 MeV and 30 MeV cuts. As expected, we see a reduction of the sensitivity coverage by a few factors toward the lower-mass regime. In summary, Fig. 8 demonstrates multiple ways in which the sensitivities are altered by experimental constraints. A more detailed analysis of detector responses and backgrounds for LArTPC-type detectors will certainly allow for more precise sensitivity estimates.

## VII Vector mediators

As discussed earlier, the above analysis is not limited to scalar mediators. It can be extended to spin-1 mediator models (gauge bosons) as well, which can be copiously produced from charged mesons present at the target. As an example, let us consider the \(U(1)_{T_{3R}}\) model that appears in the context of left-right symmetric models [61, 62]. This model contains a gauge boson, \(Z^{\prime}\), that couples to the right-handed fermions of one particular generation. Although this anomaly-free model, which is spontaneously broken at a low energy scale, contains additional new fields, e.g., low-mass scalars and dark-matter candidates, we consider only the effects of the gauge boson with visible decay modes, where the gauge boson couples to the right-handed muon, charm, and strange quarks. These gauge bosons can be heavily sourced from three-body decays of charged mesons, \(K^{+}\to\mu^{+}\nu_{\mu}Z^{\prime}\), with the \(Z^{\prime}\) emitted from the muon leg \(\mu^{+}\). Since these gauge bosons couple only to the right-handed component of the muon, the muon's helicity must be flipped, thereby suppressing this three-body decay by the muon mass.
Despite this condition, we observe an enhancement effect similar to the scalar three-body decay case. This is due to the existence of the longitudinal polarization mode of vector gauge bosons, which allows for an enhancement proportional to \(1/m_{Z^{\prime}}^{2}\)[9]. Since these gauge bosons can mix kinetically with the SM photon, they can couple to electrons with strength \(\epsilon e\), where \[\epsilon=g_{T3R}\sqrt{\frac{\alpha_{\rm em}}{4\pi^{3}}}. \tag{23}\] This expands the horizon of gauge boson production to include neutral meson decays, \(\pi^{0}/\eta\to\gamma Z^{\prime}\). Similarly, gauge bosons can also be produced via Compton-like scattering, \(\gamma e^{-}\to Z^{\prime}e^{-}\)[63], when photons hit the target electrons. The production mechanisms of electrophilic scalars can be applied to \(Z^{\prime}\) gauge bosons as well, i.e., pair annihilation \(e^{+}e^{-}\to\gamma Z^{\prime}\)[64] and resonance production \(e^{+}e^{-}\to Z^{\prime}\)[64]. Additionally, \(Z^{\prime}\)s can appear from electron/positron bremsstrahlung \(e^{\pm}N\to e^{\pm}NZ^{\prime}\)[64, 65]. However, it is important to note that these processes occur through the \(\gamma-Z^{\prime}\) mixing, which is of order \(O(10^{-2})\). Therefore, the flux of gauge bosons from three-body decays of charged mesons is much larger, as they are produced via the direct coupling to the lepton. It is important to note that the three-body decay must satisfy the upper limit of the charged kaon/pion branching ratio. We probe the sensitivity of the \(U(1)_{T_{3R}}\) gauge boson through visible decays into electrons and positrons via the kinetic mixing loop. This decay width is given by \[\Gamma_{Z^{\prime},e^{+}e^{-}}=\frac{\epsilon^{2}\alpha_{\rm em}m_{Z^{\prime}}}{3}\bigg{(}1+\frac{2m_{e}^{2}}{m_{Z^{\prime}}^{2}}\bigg{)}\sqrt{1-\frac{4m_{e}^{2}}{m_{Z^{\prime}}^{2}}}.
\tag{24}\] We see that the production of these gauge bosons from charged mesons dominates over that from neutral mesons despite the constraints on the three-body branching ratio. The magnetic focusing horn system facilitates this production mechanism, and hence we obtain substantial sensitivity to weak couplings. Due to the upper limit on the three-body branching fraction, we are unable to probe higher couplings through this production mechanism; these are mostly constrained by experiments such as NA64, E774, etc. Thus, in this \(U(1)_{T3R}\) model, we see that the production of these gauge bosons from charged mesons can give us insights into a lower, so far unexplored coupling range, as supported by our sensitivity study under the assumption of negligible backgrounds (see Fig. 9). Finally, we attempt to apply these production and detection mechanisms to \(U(1)_{L_{i}-L_{j}}\) gauge bosons. Since these gauge bosons couple to neutrinos as well, their lifetimes at MeV-to-sub-GeV facilities are not long enough to provide considerable additional insight into unconstrained parameters.

Figure 9: 95% C.L. sensitivity plot (with no backgrounds) of the \(U(1)_{T_{3R}}\) gauge boson with only visible decays.

## VIII Conclusions

In this study, we explore three dark-sector models, Higgs Portal Scalars, muonphilic scalars, and electrophilic scalars, at neutrino experiments by utilizing the vast flux of mesons, photons, and electrons produced at the neutrino target. The neutrino experiments considered in this study are the finished ArgoNeuT, the ongoing MicroBooNE, SBND, and ICARUS, and the upcoming DUNE experiment. We have also demonstrated an example of a vector muonphilic mediator, the gauge boson of the \(U(1)_{T3R}\) model. The magnetic horns present near the targets of these experiments have been utilized to maximize the production of mediators from charged mesons along the beam direction.
The choice of models and mediators enables us to understand the dominance of one production mode over the others. While three-body decays of mesons dominate for models with muonic couplings and two-body meson decays dominate for the HPS model, Compton-like scattering and Primakoff production from photons are dominant for models with electron couplings. Through these production mechanisms, the potential for high-energy mediators to reach the detectors increases, especially at forward detectors. This allows us to explore the \(g-2\) regions, especially for the muon, along with large regions of unexplored parameter space in laboratory experiments at the ongoing/upcoming facilities. Since these mediators produce visible signals in the detector through multiple scattering and decay mechanisms, we were able to obtain an all-inclusive sensitivity plot demonstrating the potential of probing regions of parameter space that are still unexplored by experiments. The inclusion of the energy-dependent Bethe-Heitler scattering process and the decay processes allowed us to access unique lepton-antilepton signatures for flavor-specific mediators. A rigorous background analysis would allow us to estimate more realistic exclusion limits; a better understanding of the background would include angular-separation capabilities along with exact energy thresholds. For example, Ref. [29] has performed a study of electron-positron events at MicroBooNE, from which we find an estimate of around 30 background events. However, it is important to note that the ceiling of the sensitivities does not change appreciably, as demonstrated in Fig. 8, since it is set by extremely high-energy mediators. Our studies with the example models can straightforwardly be extended to many more scenarios with bosonic mediators.

## Acknowledgements

We thank Wooyoung Jang for his work on the GEANT4 simulations.
We would also like to thank Joshua Berger, Nityasa Mishra, Ornella Palamara, and Adrian Thompson for useful discussions. The work of BD, AK, and DK is supported by the U.S. Department of Energy Grant DE-SC0010813.
2304.08006
The quantum skyrmion Hall effect in f electron systems
The flow of electric current through a two-dimensional material in a magnetic field gives rise to the family of Hall effects. The quantum versions of these effects accommodate robust electronic edge channels and fractional charges. Recently, the Hall effect of skyrmions, classical magnetic quasiparticles with a quantized topological charge, has been theoretically and experimentally reported, igniting ideas on a quantum version of this effect. To this end, we perform dynamical mean field theory calculations on localized $f$ electrons coupled to itinerant $c$ electrons in the presence of spin-orbit interaction and a magnetic field. Our calculations reveal localized nano quantum skyrmions that start moving transversally when a charge current in the itinerant electrons is applied. The results show the time-transient build-up of the quantum skyrmion Hall effect, accompanied by an Edelstein effect and a magnetoelectric effect that rotate the spins. This work motivates studies about the steady state of the quantum skyrmion Hall effect, looking for eventual quantum skyrmion edge channels and their transport properties.
Robert Peters, Jannis Neuhaus-Steinmetz, Thore Posske
2023-04-17T06:13:37Z
http://arxiv.org/abs/2304.08006v2
# The quantum skyrmion Hall effect in \(f\) electron systems ###### Abstract The flow of electric current through a two-dimensional material in a magnetic field gives rise to the family of Hall effects. The quantum versions of these effects accommodate robust electronic edge channels and fractional charges. Recently, the Hall effect of skyrmions, classical magnetic quasiparticles with a quantized topological charge, has been theoretically and experimentally reported, igniting ideas on a quantum version of this effect. To this end, we perform dynamical mean field theory calculations on localized \(f\) electrons coupled to itinerant \(c\) electrons in the presence of spin-orbit interaction and a magnetic field. Our calculations reveal localized nano quantum skyrmions that start moving transversely when a charge current in the itinerant electrons is applied. The results show the time-transient build-up of the quantum skyrmion Hall effect, accompanied by an Edelstein effect and a magnetoelectric effect that rotate the spins. This work motivates studies about the steady state of the quantum skyrmion Hall effect, looking for eventual quantum skyrmion edge channels and their transport properties. ## I Introduction From fundamental physical processes to application-oriented information storage and processing, the stability of a physical effect is paramount. Some physical effects, especially quantum Hall effects, accommodate observables that are topologically protected, i.e., they are robust against a continuous deformation of selected parameters. Recently, experimental and theoretical studies have found topologically protected classical magnetic structures in thin films or effectively two-dimensional systems, which have been coined magnetic skyrmions [1; 2; 3; 4; 5], connected to earlier ideas in particle physics [6]. 
The stability of these objects and the possibility of creating them by electrical currents or time-controlled magnetic boundary conditions [7; 8; 9; 10] promote the idea of using them in spintronics and as information carriers [11; 12; 13]. Furthermore, in view of the ongoing miniaturization of magnetic skyrmions, they have also been proposed as ingredients in quantum computing [14]. Classical magnetic skyrmions experience an additional drag transverse to the direction of an applied electric current, which leads to an accumulation of skyrmions on one side [15; 16; 17; 18; 19; 20; 21; 22; 23]. The angle between the direction of motion and the direction of the current is called the Magnus angle of the skyrmion Hall effect. The question arises whether there is a quantum version of the skyrmion Hall effect and, if so, which characteristics of the electronic Hall effects transfer, including hypothetical skyrmion edge channels with their quantized conductance. A previous study, treating the quantum skyrmion as a product state and calculating an effective action for it, demonstrated the existence of the Magnus force [24]. Yet, the challenge in describing general quantum skyrmions comes with the large Hilbert spaces that need to be considered in two-dimensional spin systems carrying quantum skyrmions of a size of minimally \(3\times 3\) spins [25; 26; 27], which demand special theoretical techniques like the density matrix renormalization group [28] or artificial neural networks [29] for their numerical investigation. Another method to analyze quantum skyrmions is the use of localized spins from interacting electrons, like \(f\) electrons, to represent the skyrmions in a correlated electronic system [30].
Representing the skyrmions as electronic degrees of freedom has the advantage that we can treat considerably large quantum spin systems with established advanced numerical techniques for correlated electronic systems and that we can apply an electric current within the model without further assumptions. Interestingly, skyrmions in \(f\)-electron systems have been experimentally detected in EuAl\({}_{4}\), in which the skyrmions have been treated classically [31]. Yet, the quantum nature of skyrmions in strongly correlated electronic systems is not well studied. Ultimately, a quantum skyrmion Hall effect could have direct practical applications extending the manifold of suggested technical applications of magnetic skyrmions [32] to the quantum world. Moreover, fundamental questions about the topological nature of quantum skyrmions, which, strictly speaking, gets lost because of quantum spin slip processes [33; 34; 27; 35], could be answered when quantum skyrmions are connected to quantum Hall effects and their unambiguous topological origin. In this paper, we numerically study a square lattice of localized \(f\) electrons that are coupled to two-dimensional itinerant conduction (\(c\)) electrons in the presence of spin-orbit coupling and a small magnetic field perpendicular to the two-dimensional plane. Using dynamical mean-field theory, we reliably identify regions in parameter space that host quantum nano skyrmions. We subsequently study the effect of a current in the itinerant electrons on the quantum skyrmion in linear response theory and find a strong initial drag into the direction perpendicular to the current, marking the onset of the quantum skyrmion Hall effect. The shift of the skyrmion is accompanied by an Edelstein effect and a magnetoelectric effect,[36; 37; 38; 39; 40; 41; 42] which leads to a rotation of the localized \(f\)-electron spins. 
Our study stimulates further investigation of the quantum skyrmion Hall effect, especially its steady state, and possibly quantized skyrmion edge channels. The remainder of this paper is structured as follows: In Sec. II, we introduce the model and the method. In Sec. III, we analyze the stability and structure of the quantum skyrmions for different model parameters. This is followed by Sec. IV, where we demonstrate the onset of the skyrmion Hall effect using linear response theory. Finally, in Sec. V, we discuss our results and conclude the paper. ## II Model and method Motivated by the discovery of magnetic skyrmions in Eu-compounds [31], including partially filled \(f\) electrons, we focus here on magnetically ordered ground states and low-energy metastable states in \(f\)-electron systems on a square lattice with a lattice constant of \(a\) on the order of half a nanometer. In particular, we study the ground states of a noncentrosymmetric \(f\)-electron system described by a periodic Anderson model [43; 44; 41]. It is important to note that we explicitly start with an electronic Hamiltonian instead of a quantum spin model. Thus, charge fluctuations and other effective interactions besides the effective Heisenberg and Dzyaloshinskii-Moriya (DM) interaction generally affect the ground state. Furthermore, due to the hybridization between the itinerant conduction (\(c\)) electrons and the \(f\) electrons, the magnetic moments generated by the \(f\) electrons are intrinsically coupled to the \(c\) electrons. Such a coupling, which is necessary to observe skyrmion Hall and skyrmion drag effects, does hence not need to be inserted manually but is naturally included. Our model Hamiltonian can be split into a single-particle part, \(H_{0}\), and an interaction part, \(H_{U}\). 
The single-particle Hamiltonian is \[\begin{split}H_{0}(\mathbf{k})&=\left(t\left[\cos(k_{x})+\cos(k_{y})\right]+\left[\mu_{c}+\mu_{f}\right]/2\right)\mathbf{c}_{\mathbf{k}}^{\dagger}\mathbf{c}_{\mathbf{k}}\\ &+\left(t\left[\cos(k_{x})+\cos(k_{y})\right]+\left[\mu_{c}-\mu_{f}\right]/2\right)\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{0}\tau^{z}\mathbf{c}_{\mathbf{k}}\\ &-2\alpha_{c}\sin(k_{y})\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{x}\tau^{x}\mathbf{c}_{\mathbf{k}}+2\alpha_{c}\sin(k_{x})\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{y}\tau^{x}\mathbf{c}_{\mathbf{k}}\\ &+V\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{0}\tau^{x}\mathbf{c}_{\mathbf{k}}+B\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{z}\tau^{0}\mathbf{c}_{\mathbf{k}},\end{split} \tag{1}\] where \(\mathbf{c}_{\mathbf{k}}=\left(c_{k_{x},k_{y},\uparrow},f_{k_{x},k_{y},\uparrow},c_{k_{x},k_{y},\downarrow},f_{k_{x},k_{y},\downarrow}\right)\) is the spinor containing the momentum-space annihilation operators of the itinerant electrons and \(f\) electrons, respectively, corresponding to the real-space operators \(c_{i,j,\sigma}\) and \(f_{i,j,\sigma}\) at site \((i,j)\) of a square lattice with spin \(\sigma\). The matrices \(\sigma^{\lambda}=s^{\lambda}\otimes s^{0}\) and \(\tau^{\lambda}=s^{0}\otimes s^{\lambda}\) denote the Pauli matrices on the spin and sublattice space, respectively, where \(s^{\lambda}\) are the bare Pauli matrices. The particle number operators are \(n_{i,j,\sigma}^{c}=c_{i,j,\sigma}^{\dagger}c_{i,j,\sigma}\) and \(n_{i,j,\sigma}^{f}=f_{i,j,\sigma}^{\dagger}f_{i,j,\sigma}\). The strength of the nearest-neighbor hopping of the \(c\) electrons on the square lattice is denoted by \(t\). Throughout this paper, we use \(t\) as the unit of energy. \(\mu_{c}\) and \(\mu_{f}\) are the local energies of the \(c\) and \(f\) electrons, respectively. \(V\) is a local hybridization between the \(c\) and \(f\) electrons as commonly used in the periodic Anderson model.
\(B\) corresponds to a small magnetic field applied in the \(z\) direction. Finally, we include a spin-orbit coupling between the \(c\) and \(f\) electrons with hopping amplitude \(\alpha_{c}\). This spin-orbit coupling corresponds to a Rashba-type spin-orbit interaction as present in noncentrosymmetric \(f\)-electron systems [43]. The interaction part of the Hamiltonian is \[H_{U}=U\sum_{i,j}n_{i,j,\uparrow}^{f}n_{i,j,\downarrow}^{f}, \tag{2}\] corresponding to a density-density interaction between \(f\) electrons on the same lattice site. The full Hamiltonian is \[H=H_{0}+H_{U}. \tag{3}\] The calculations are performed on a finite lattice \(L_{x}\times L_{y}=11\times 11\) with periodic boundary conditions. To find the ground state of this quantum model, we use the real-space dynamical mean-field theory (RDMFT)[45; 46; 47; 48; 49]. RDMFT maps each atom of a unit cell (finite lattice) on its own quantum impurity model by calculating the local Green's function \[G_{n,m}(z)=\left(z-\tilde{h}_{0}-\Sigma(z)\right)_{n,m}^{-1}, \tag{4}\] where \(\tilde{h}_{0}\) is the single-particle matrix of the Fourier transform of \(H_{0}\) in Eq. (1) into real space, i.e., \(\tilde{H}_{0}=\sum_{n,m}c_{n}^{\dagger}\tilde{h}_{n,m}c_{m}\). Here, \(n\) and \(m\) are super indices including the lattice positions, the \(f\)-\(c\) sublattice, and the spin. Furthermore, \(\Sigma(z)\) is a matrix including the local self-energies of each lattice site in the finite lattice, where, by the defining approximation of RDMFT, \(\Sigma_{n,m}(z)\) vanishes when the spatial components of \(n\) and \(m\) differ. Writing the local Green's function as \[G_{n,m}=\left(z-\Delta_{n,m}(z)-\Sigma_{n,m}(z)\right)^{-1}, \tag{5}\] we can map each lattice site on a quantum impurity model, where \(\Delta_{n,m}(z)\) is the local hybridization of the impurity model. This hybridization function describes the environment for one lattice site created by the rest of the lattice. 
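The single-particle Bloch matrix of Eq. (1) can be assembled directly from the Kronecker structure \(\sigma^{\lambda}=s^{\lambda}\otimes s^{0}\), \(\tau^{\lambda}=s^{0}\otimes s^{\lambda}\). A minimal numpy sketch in the basis \((c_{\uparrow},f_{\uparrow},c_{\downarrow},f_{\downarrow})\); here \(t=1\), \(V=t\), \(\mu_{c}=3.6\,t\), and \(\mu_{f}=-3\,t\) follow values quoted in the text, while the defaults for \(\alpha_{c}\) and \(B\) are illustrative placeholders.

```python
import numpy as np

# Bare Pauli matrices; sigma acts on spin, tau on the c/f sublattice space.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def sigma(m): return np.kron(m, s0)   # sigma^lambda = s^lambda (x) s^0
def tau(m):   return np.kron(s0, m)   # tau^lambda   = s^0 (x) s^lambda

def h0(kx, ky, t=1.0, mu_c=3.6, mu_f=-3.0, alpha_c=0.3, V=1.0, B=0.05):
    """4x4 Bloch matrix of Eq. (1) in the basis (c_up, f_up, c_dn, f_dn)."""
    eps = t * (np.cos(kx) + np.cos(ky))
    H = (eps + (mu_c + mu_f) / 2) * np.eye(4, dtype=complex)
    H += (eps + (mu_c - mu_f) / 2) * tau(sz)           # c/f level splitting
    H += -2 * alpha_c * np.sin(ky) * sigma(sx) @ tau(sx)  # Rashba c-f coupling
    H += 2 * alpha_c * np.sin(kx) * sigma(sy) @ tau(sx)
    H += V * tau(sx)                                    # local hybridization
    H += B * sigma(sz)                                  # Zeeman field
    return H

H = h0(0.3, -1.1)
assert np.allclose(H, H.conj().T)  # Hermitian, hence a real spectrum
print(np.linalg.eigvalsh(H))
```

The Hermiticity check is a cheap sanity test that all six terms were entered with consistent signs and matrix structure.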
Here, the self-energy differs for each lattice site, and hence this hybridization function is different for each lattice site. Summarizing the numerical procedure, the local hybridization functions define quantum impurity models, which are solved to obtain the local self-energy for each lattice site. These updated self-energies are then used in Eq. (4), which defines a self-consistency problem. To calculate the self-energy of each lattice site, we use the numerical renormalization group (NRG) [50; 51; 52], which can calculate accurate Green's functions and self-energies at low temperatures. The magnetic properties of the periodic Anderson model without Rashba spin-orbit interaction are well understood within the DMFT approximation [45]. At half-filling, \(\langle n_{i,j}^{c}\rangle=\langle n_{i,j}^{f}\rangle=1\), on a square lattice, the periodic Anderson model orders antiferromagnetically for weak hybridization strengths \(V\)[48]. For large hybridization strengths, the periodic Anderson model at half-filling becomes a Kondo insulator. On the other hand, when the number of \(c\) electrons is small and the \(f\) electrons are nearly half-filled, the system orders ferromagnetically [53]. This paper aims to study the existence and properties of magnetic skyrmions in a ferromagnetic periodic Anderson model, including Rashba spin-orbit interaction. We thus look for parameters where the \(f\) electrons are nearly half-filled, and the \(c\)-electron filling is about \(\langle n^{c}\rangle\approx 0.2\). An exhaustive search of the parameter space of the periodic Anderson model for stable quantum skyrmions in the ground state is numerically unfeasible. In advance to our fully quantum-mechanical calculations, we therefore first identify candidate parameter regions where the ground state or low-energy metastable states accommodate magnetic skyrmions. We do so by mapping Eq. 
(1) to a classical Heisenberg spin model with nearest-neighbor coupling by integrating out the \(c\) electrons using second-order perturbation theory, which obtains the RKKY spin-spin interactions [54; 55; 56]. We then use classical Monte Carlo methods to find the ground states of these spin models [57]. In particular, we have varied in this procedure the local hybridization \(V\), the spin-orbit coupling \(\alpha_{c}\), and the \(c\)-electron level position, \(\mu_{c}\). We subsequently transfer parameter configurations where we find a classical skyrmion to the quantum model and conduct RDMFT calculations to obtain the system's ground state. Here, the presence of magnetic skyrmions in the corresponding classical model generally is a good indicator for quantum skyrmions in the quantum model. Setting \(U/t=6\) and \(\mu_{f}/t=-3\), corresponding to half-filling of the \(f\) electrons, we find quantum skyrmions in a ferromagnetic background for \(V=t\), \(\mu_{c}/t\approx 3.6\), and a finite spin-orbit coupling in combination with a magnetic field, in agreement with previous results on classical and quantum magnetic skyrmions [25; 26; 27; 28]. In the RDMFT calculations, we vary the strength of the spin-orbit interaction, \(\alpha_{c}\), and the strength of the magnetic field, \(B\), in the region according to the results of the classical calculations. ## III Structure and stability of magnetic skyrmions in the periodic Anderson model To unambiguously identify a magnetic skyrmion, we break the spin translation symmetry of the model in the first DMFT iteration. By this, we select a specific state of the translationally invariant space of ground states. We use two different strategies in our calculations. We either start with a ferromagnetic solution where all \(f\) electrons point downwards and flip a single \(f\) electron upwards. Alternatively, we directly start with a magnetic skyrmion solution obtained for a different set of parameters. 
Then, by iterating the DMFT self-consistency equation, we find possible, stable magnetic skyrmion solutions when the algorithm converges. To verify the existence of a magnetic skyrmion, we calculate the spin expectation values of the \(c\) and \(f\) electrons for each lattice site, \[\mathbf{S}_{\mathbf{r}}^{c} =\langle c_{\mathbf{r},\rho_{1}}^{\dagger}\mathbf{\sigma}_{\rho_{1},\rho_ {2}}c_{\mathbf{r},\rho_{2}}\rangle, \tag{6}\] \[\mathbf{S}_{\mathbf{r}}^{f} =\langle f_{\mathbf{r},\rho_{1}}^{\dagger}\mathbf{\sigma}_{\rho_{1},\rho_ {2}}f_{\mathbf{r},\rho_{2}}\rangle, \tag{7}\] where \(\mathbf{r}=(i,j)\) corresponds to the coordinates of a lattice site, and \(\mathbf{\sigma}=(\sigma^{x},\sigma^{y},\sigma^{z})\) is the vector containing the spin space Pauli matrices. Using these spin expectation values, we calculate the local lattice skyrmion density for the \(f\) and \(c\) electrons based on the solid angle spanned by three vectors as \[N^{d}_{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}}= \tag{8}\] \[\frac{1}{2\pi}\tan^{-1}\left(\frac{\mathbf{S}^{d}_{\mathbf{r}_{1}}\cdot(\bm {S}^{d}_{\mathbf{r}_{2}}\times\mathbf{S}^{d}_{\mathbf{r}_{3}})}{\left(\frac{\hbar}{2}\right)^ {3}+\frac{\hbar}{2}\left(\mathbf{S}^{d}_{\mathbf{r}_{1}}\mathbf{S}^{d}_{\mathbf{r}_{2}}+\mathbf{S} ^{d}_{\mathbf{r}_{1}}\mathbf{S}^{d}_{\mathbf{r}_{3}}+\mathbf{S}^{d}_{\mathbf{r}_{2}}\mathbf{S}^{d}_{ \mathbf{r}_{3}}\right)}\right),\] where \(d\) either stands for \(f\) or \(c\) electrons, \(\mathbf{r}_{1}\), \(\mathbf{r}_{2}\), and \(\mathbf{r}_{3}\) are nearest-neighbor lattice sites spanning an elemental triangle \(\langle\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}\rangle\) in the densest triangular tessellation of the lattice. The sum of this skyrmion density over all triangles spanning the square lattice yields the skyrmion number \[N^{c/f}=\sum_{\langle\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}\rangle}N^{c/f}_{\mathbf{r}_ {1},\mathbf{r}_{2},\mathbf{r}_{3}}. 
\tag{9}\] Unlike in a classical calculation, the spin expectation values in a quantum model do not need to be \(\hbar/2\) in magnitude. In fact, these expectation values are usually smaller due to quantum fluctuations, \(|\mathbf{S}|<\hbar/2\). We thus calculate two types of skyrmion densities: One is the quantum skyrmion density/number using unnormalized spin expectation values. The second type is a classical skyrmion density, where we normalize all spin expectation values to \(\hbar/2\) before using them in Eq. (8). The skyrmion number is an integer when using normalized spin expectation values. When using unnormalized spin expectation values, the skyrmion number is not quantized, and instead, its magnitude is an indicator of the skyrmion stability [27], similar to the scalar chirality defined in Ref. [25]. A representative magnetic skyrmion solution is shown in Fig. 1 calculated for a spin-orbit coupling \(\alpha_{c}/t=0.3\) and a magnetic field \(B/t=0.002\). We note that within the accuracy of our calculations, we cannot find discernible energy differences between the ferromagnetic configuration and the magnetic skyrmion. The described skyrmions can, therefore, be metastable excitations on a ferromagnetic background with almost vanishing excitation energy or present in the ground state itself. Such an almost degenerate situation is favorable for applications in racetrack devices. If skyrmions were energetically strongly favorable, a skyrmion lattice would form instead of individual skyrmions. Figure 1 shows the spin texture of the \(f\) and \(c\) electrons underlaid with the local skyrmion density for normalized spin expectation values as 2D color plot, see Eq. (8). 
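In code, the triangle density of Eq. (8) and the sum of Eq. (9) take only a few lines. The sketch below is a minimal illustration (not the paper's implementation): spins are 3-vectors in units of \(\hbar/2\), `arctan2` resolves the branch of the inverse tangent, and a consistently oriented triangular tessellation is assumed to be supplied by the caller.

```python
import numpy as np

def triangle_density(s1, s2, s3):
    """Skyrmion density of one elemental triangle, Eq. (8), with hbar/2 = 1."""
    num = np.dot(s1, np.cross(s2, s3))
    den = 1.0 + np.dot(s1, s2) + np.dot(s1, s3) + np.dot(s2, s3)
    # arctan2 picks the correct branch of the inverse tangent
    return np.arctan2(num, den) / (2.0 * np.pi)

def skyrmion_number(spins, triangles):
    """Sum the density of Eq. (8) over all elemental triangles, Eq. (9).

    spins     : mapping site -> spin expectation value <S> (3-vector)
    triangles : iterable of consistently oriented site triples (r1, r2, r3)
    """
    return sum(triangle_density(spins[a], spins[b], spins[c])
               for a, b, c in triangles)
```

With normalized spins and a tessellation covering the whole lattice, the result is an integer; with the unnormalized quantum expectation values it is not quantized, and its magnitude indicates the skyrmion stability.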
Due to the local hybridization, \(V\), which leads to an effective antiferromagnetic interaction between the \(c\) and \(f\) electrons, the spins of the \(c\) and \(f\) electrons mostly point in opposite directions, with deviations in the skyrmion's center, where the Rashba interaction and the itinerant character of the \(c\) electrons play a stronger role. The combined state corresponds to a bound skyrmion-antiskyrmion pair where the \(f\) electrons form a magnetic skyrmion with skyrmion number \(N^{f}=1\) and the \(c\) electrons form a magnetic antiskyrmion with skyrmion number \(N^{c}=-1\). This configuration, in principle, leads to the cancellation of the topological protection for each individual skyrmion structure because both can annihilate to form a topologically trivial spin texture. Yet, in this system, the spin expectation values of the \(c\) and \(f\) electrons are of very distinct origin, such that annihilation is suppressed. Because the \(f\) electrons are strongly interacting, they form localized magnetic moments, and their spin expectation values in this calculation are approximately \(|\langle S^{f}\rangle|_{a}\approx 0.75\frac{\hbar}{2}\). They are not perfectly polarized due to the entanglement between the \(c\) and the \(f\) electrons. On the other hand, the \(c\) electrons are noninteracting, and their spin expectation values vary around \(|\langle S^{c}\rangle|_{a}\approx 0.03\frac{\hbar}{2}\). The \(c\) electrons' polarization is a direct consequence of the hybridization with the magnetized \(f\) electrons, a secondary effect, and therefore the skyrmion of the \(f\) electrons and the antiskyrmion of the \(c\) electrons do not annihilate. This is revealed by the finite total skyrmion number calculated with unnormalized spin expectation values. This is indeed a difference from classical skyrmions, where the magnitude of the spin vectors is normalized. The reduced polarization decreases the topological protection of the magnetic skyrmion. The smaller the spin expectation value, the easier the spin can be flipped and the magnetic skyrmion destroyed [27]. On the other hand, this facilitates manipulating them as necessary for technical applications.

Figure 1: Magnetic skyrmion in an \(f\)-electron system. A skyrmion forms in the \(f\) electrons, depicted by their spin expectation values (a). The color code corresponds to the local skyrmion density in Eq. (8). As a result, an antiskyrmion forms in the itinerant \(c\) electrons (b). The antiskyrmion is considerably less polarized, \(|\langle S^{c}\rangle|_{a}\approx 0.03\frac{\hbar}{2}\). The spin expectation values are shown normalized for better visualization. Parameters: spin-orbit coupling \(\alpha_{c}/t=0.3\) and magnetic field \(B/t=0.002\).

Next, we analyze the stability of the magnetic skyrmion for different magnetic field strengths, as shown in Fig. 2. As stated above, we generally apply a small magnetic field, which helps to stabilize the magnetic skyrmion against spin spiral solutions [25, 28]. In Fig. 2(a), we show the skyrmion number using normalized and unnormalized spins, respectively. We observe that, while the skyrmion number (normalized) is constantly one for \(B/t\lesssim 0.0056=B_{c}\), the skyrmion number using unnormalized spin expectation values is \(N^{f}\approx 0.4\) and gradually drops for an increased magnetic field until \(B_{c}\) is reached. The difference between these numbers demonstrates the relevance of quantum effects to the system at hand. We furthermore show the average spin expectation values of the \(c\) and \(f\) electrons, indicating that the \(f\) electrons are considerably more strongly polarized than the \(c\) electrons. Furthermore, for magnetic fields stronger than \(B_{c}\), we only find ferromagnetic solutions. Figures 2(b) and (c) give representative spin textures of the \(f\) electrons for the corresponding parameter regimes, i.e., small and large magnetic fields.
We next analyze the structure of the skyrmion depending on the strength of the Rashba spin-orbit coupling \(\alpha_{c}\). We show the skyrmion number using normalized spin expectation values, the skyrmion number using unnormalized spin expectation values, and the average spin expectation values (\(\langle S^{f}\rangle\) and \(\langle S^{c}\rangle\)) in Fig. 3(a). Increasing the Rashba interaction, the \(f\)-electron spin expectation value is slightly suppressed, while the \(c\)-electron spin expectation value slightly increases. This increase in the \(c\)-electron magnetization can be explained by the stronger coupling between the \(c\) and \(f\) electrons. While we need \(\alpha_{c}/t>0\) to create a finite DM interaction that stabilizes the magnetic skyrmion, we see that for increasing \(\alpha_{c}\) the skyrmion gets destabilized, and for \(\alpha_{c}/t>0.4\), magnetic skyrmions become unstable, indicated by the vanishing skyrmion number.

Figure 2: Magnetic field dependence of magnetic skyrmions for spin-orbit coupling \(\alpha_{c}=0.3t\). Panel (a) shows the normalized and unnormalized skyrmion number and average spin expectation values of the \(c\) and \(f\) electrons. The skyrmion number drops to zero at \(B/t\approx 0.0056\), and the spins align ferromagnetically, consistent with studies on quantum skyrmions in spin lattices [25, 28]. The average spin expectation value \(|\langle S^{f/c}\rangle|_{a}\) of the \(c\) and \(f\) electrons alone does not indicate this phase transition. Panels (b) and (c) show typical \(f\)-spin configurations for small (skyrmionic configuration at \(B/t=0.002\)) and large magnetic fields (ferromagnetic configuration at \(B/t=0.006\)), respectively.

Figure 3: Dependence of magnetic skyrmions on the spin-orbit coupling for \(B/t=0.002\). Panel (a) shows the skyrmion number (normalized), skyrmion number (unnormalized), and averaged spin expectation values (\(c\)- and \(f\)-electrons), \(\langle\mathbf{S}^{f}\rangle_{a}\) and \(\langle\mathbf{S}^{c}\rangle_{a}\), for different strengths of the Rashba interaction. The skyrmion changes to a spin density wave at \(\alpha_{c}/t=0.4\). Panel (b) shows the extension of the skyrmion in the \(x\) and \(y\) direction; see Eq. (11). Panels (c)-(e) show representative spin configurations for small (\(\alpha_{c}=0.2t\)), medium (\(\alpha_{c}=0.35t\)), and large (\(\alpha_{c}=0.5t\)) spin-orbit interaction.

To analyze this transition further, we calculate the average size of the skyrmion. First, the center of the skyrmion created by the \(f\) electrons is \[\mathbf{R}_{S}=\sum_{\langle\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}\rangle}N^{f}_{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}}\frac{\mathbf{r}_{1}+\mathbf{r}_{2}+\mathbf{r}_{3}}{3}, \tag{10}\] where \(\mathbf{r}_{1}\), \(\mathbf{r}_{2}\), and \(\mathbf{r}_{3}\) are the coordinates of the lattice sites spanning the elemental triangle as explained below Eq. (8). The extension of the skyrmion in the \(x\) and the \(y\)-direction \(\mathbf{L}=(L_{x},L_{y})\) is then given as \[L_{x}^{2} = \sum_{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}}N^{f}_{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}}\left(\frac{x_{1}+x_{2}+x_{3}}{3}-x_{S}\right)^{2}, \tag{11}\] \[L_{y}^{2} = \sum_{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}}N^{f}_{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}}\left(\frac{y_{1}+y_{2}+y_{3}}{3}-y_{S}\right)^{2}, \tag{12}\] where \(x_{i}\) (\(y_{i}\)) is the \(x\) (\(y\)) component of the position \(\mathbf{r}_{i}\) and the center of the skyrmion is \(\mathbf{R}_{S}=(x_{S},y_{S})\). In Fig. 3(b), we show the extension of the skyrmion in the \(x\) and \(y\)-direction depending on the Rashba spin-orbit interaction.
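A minimal sketch of Eqs. (10)-(12), reusing the triangle densities of Eq. (8) (an illustration only; it assumes the densities sum to the skyrmion number \(N^{f}=1\) and neglects periodic boundary conditions, which is adequate for a skyrmion away from the lattice edge):

```python
import numpy as np

def skyrmion_center_and_size(densities, centroids):
    """Density-weighted center and extension of a skyrmion, Eqs. (10)-(12).

    densities : (T,) skyrmion density N^f of each elemental triangle, Eq. (8)
    centroids : (T, 2) centroid (r1 + r2 + r3)/3 of each triangle

    Assumes the densities sum to the skyrmion number N^f = 1 and ignores
    periodic boundaries (valid for a skyrmion away from the lattice edge).
    """
    densities = np.asarray(densities)
    centroids = np.asarray(centroids)
    center = densities @ centroids                   # R_S, Eq. (10)
    spread2 = densities @ (centroids - center) ** 2  # L^2, Eqs. (11)-(12)
    return center, np.sqrt(spread2)                  # R_S, (L_x, L_y)
```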
We see that while the average extension of the skyrmion in the \(x\) direction remains unchanged when increasing \(\alpha_{c}\), the magnetic skyrmion is strongly elongated in the \(y\) direction. At \(\alpha_{c}/t\approx 0.4\), the magnetic skyrmion changes into a spiral phase, again consistent with findings for quantum skyrmions on nonelectronic spin lattices [25; 28]. Representative spin textures of the \(f\) electrons are shown in Fig. 3(c)-(e), depicting a skyrmion for small \(\alpha_{c}\) (c), an elongated skyrmion close to the phase transition (d), and a spiral wave for large \(\alpha_{c}\) (e). For larger values of the spin-orbit coupling, we do not find stable skyrmion solutions. ## IV Charge-Driven Quantum Skyrmions -- the Onset of the Quantum Skyrmion Hall Effect Finally, we study the response of the identified stable skyrmion textures to an applied charge current in the itinerant \(c\) electrons. To do this, we calculate the change in the spin expectation values of all lattice sites in linear response theory. We focus on describing the time-transient behavior of the system. A description of the nonequilibrium steady state poses considerable numerical challenges, as discussed in the concluding remarks. In linear response, the change in an expectation value of operator \(A\) resulting from a perturbation \(B\) is given by \[\langle A\rangle(\tau) = \langle A\rangle(0)+\!\int_{0}^{\tau}\!d\tau^{\prime}\,X_{AB}( \tau-\tau^{\prime}) \tag{13}\] \[X_{AB}(\tau-\tau^{\prime}) = i\Theta(\tau-\tau^{\prime})\langle[A(\tau),B(\tau^{\prime})]\rangle, \tag{14}\] where \(\Theta(\tau)\) is the Heaviside step function. 
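For a perturbation switched on at \(\tau^{\prime}=0\) with its amplitude absorbed into \(B\), the convolution in Eq. (13) reduces to a running integral of the response function over the elapsed time. A minimal numerical sketch (the sampled \(X_{AB}\) array is a hypothetical input; in the calculation described below it is built from the convolution of two single-particle Green's functions):

```python
import numpy as np

def delta_expectation(chi, dt):
    """Delta<A>(tau) = int_0^tau dtau' X_AB(tau - tau'), as in Eq. (13).

    chi : (T,) samples of the response function X_AB at tau = 0, dt, 2*dt, ...
    dt  : time-grid spacing

    For a static perturbation the convolution equals the running integral
    of X_AB, evaluated here with the trapezoidal rule.
    """
    increments = (chi[1:] + chi[:-1]) * dt / 2.0
    return np.concatenate(([0.0], np.cumsum(increments)))
```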
Because we are interested in the linear response of the spin expectation values of the \(f\) electrons to a charge current in the itinerant electrons, we use \[A=f_{\mathbf{r},\rho_{1}}^{\dagger}\sigma_{\rho_{1}\rho_{2}}^{x/y/z}f_{\mathbf{r},\rho_{2}}=S^{x/y/z}, \tag{15}\] \[B=J^{c}=-iJ\sum_{i,j,\sigma}\left(c_{i+1,j,\sigma}^{\dagger}c_{i,j,\sigma}-c_{i-1,j,\sigma}^{\dagger}c_{i,j,\sigma}\right), \tag{16}\] where \(A\) corresponds exactly to the local spin of an \(f\) electron, and \(B\) is the charge current operator in the \(c\) electrons. For these operators, Eq. (14) corresponds to a nonlocal two-particle Green's function. Using the DMFT approximation, where vertex corrections in nonlocal Green's functions vanish, we write these two-particle Green's functions as the convolution of two single-particle Green's functions.

Figure 4: Time-resolved change of the spin expectation values \(\Delta\langle S^{x}\rangle(\tau)\) (a), \(\Delta\langle S^{y}\rangle(\tau)\) (b), and \(\Delta\langle S^{z}\rangle(\tau)\) (c) scanned in the \(x\) direction across the center of the magnetic skyrmion at lattice sites \((x_{S}+x,y_{S})\), calculated by linear response theory for \(\alpha_{c}/t=0.3\). Here, \(\mathbf{R}_{S}=(x_{S},y_{S})\) is the center of the skyrmion, see Eq. (10). The change of the spin expectation values is normalized by the strength of the current, \(J\). The expectation values start oscillating when the validity regime of the linear response theory is left.

Then, we calculate the time evolution of the spin expectation values of all spins. Because the self-energy depends on the lattice site, the time evolution of the spin expectation value also depends on the lattice site. This is shown in Fig.
4, where we show the change of the \(x\), \(y\), and \(z\) component of the spin expectation values \(\Delta\langle S^{x}\rangle(\tau)\), \(\Delta\langle S^{y}\rangle(\tau)\), and \(\Delta\langle S^{z}\rangle(\tau)\) along the \(x\)-direction of the lattice across the center of the skyrmion solution for \(\alpha_{c}/t=0.3\) (shown in Fig. 1). Specifically, the spin expectation values are shown for lattice sites \((x_{S}+x,y_{S})\), where \(\mathbf{R}_{S}=(x_{S},y_{S})\) is the center of the skyrmion, see Eq. (10). \(\Delta\langle S_{x}\rangle\) and \(\Delta\langle S_{z}\rangle\) show a strong dependence on the position close to the center of the skyrmion. \(\Delta\langle S_{z}\rangle\) changes even its sign when changing the position from the left of the center to the right of the center. On the other hand, \(\Delta\langle S_{y}\rangle\) is nearly independent of the lattice site. Also, while \(\Delta\langle S_{z}\rangle\) becomes small for spins far away from the skyrmion center, \(\Delta\langle S_{x}\rangle\) and \(\Delta\langle S_{y}\rangle\) are nonzero. Thus, even in the ferromagnetic region away from the skyrmion center, \(\langle S_{x}\rangle\) and \(\langle S_{y}\rangle\) change. This rotation of the spin in the ferromagnetic state when a charge current is applied is explained by the Edelstein and the magnetoelectric effect [41]; in a system where the Fermi surface is split due to the Rashba spin-orbit coupling, a charge current results in an accumulation of spin. This can be seen here as a rotation of the spin expectation values in the \(x\) and the \(y\) direction, even far away from the magnetic skyrmion. Furthermore, we emphasize that the linear-response results only remain valid within sufficiently small times \(\tau\). In Fig. 4, we see that the initial linear trend in \(\tau\) is reduced, and, as an expected artifact from linear response theory, all spin expectation values start oscillating after a certain individual time. 
Finally, we take the time evolution of each spin on the lattice and calculate the skyrmion density and the time-dependent size and position of the skyrmion according to Eqs. (10)-(12). By Eq. (14), we find that the center of the skyrmion moves almost perpendicularly to the applied current, as shown in Fig. 5(a). While the current is applied in the \(x\) direction, the skyrmion moves in the positive \(y\) direction. Thus, our results demonstrate the onset of a quantum skyrmion Hall effect with a Magnus angle close to 90 degrees. Notably, the size of the skyrmion effectively remains constant during the motion, shown in Fig. 5(b). Figure 5(c) shows the spin texture at \(\tau=0\), and Fig. 5(d) shows the site-dependent change of the spin expectation values at \(\tau=4\hbar/t\). We clearly see that even in the ferromagnetic state away from the magnetic skyrmions, the spins are rotated into the \(xy\) direction. The combination of the Edelstein effect and the magnetoelectric effect is the origin of this site-dependent spin rotation.

Figure 5: Onset of the quantum skyrmion Hall effect for \(\alpha_{c}/t=0.3\) and \(B/t=0.002\): Shown are the center (a) and the size (b) of the quantum magnetic skyrmion depending on time, calculated by linear response theory. The quantum skyrmion starts moving almost perpendicularly to the applied current, indicating a Magnus angle close to \(90^{\circ}\). When the validity of the linear response calculations is left, the skyrmion slows down. The size of the skyrmion stays constant over time, indicating a negligible smearing of the structure compared to its motion. Panel (c) shows the initial spin configuration. The direction (normalized vectors) of the change \(\Delta\langle S\rangle(\tau)\) of the spin expectation values between \(\tau_{e}=4\hbar/t\) and the initial state shows an Edelstein and a magnetoelectric effect (d) with \(|\Delta\left\langle\mathbf{S}\right\rangle(\tau_{e})|\approx 0.07\frac{\hbar}{2}\).

## V Discussion

In conclusion, we show that noncentrosymmetric \(f\)-electron systems with spin-orbit coupling in the presence of a small external magnetic field can host nano quantum skyrmions in the ground state, and we demonstrate the onset of the quantum skyrmion Hall effect upon applying a charge current, which is accompanied by an Edelstein and magnetoelectric effect. The reason for the stability of the quantum skyrmion is an effective DM interaction generated by the spin-orbit interaction and a local density-density interaction. Despite the itinerant \(c\) electrons being magnetized like an antiskyrmion, the quantum skyrmions of the \(f\) electrons remain stable and dominate the physical behavior of the system because of their considerably stronger polarization due to strong correlations. Concerning the quantum skyrmion Hall effect, we observe a Magnus angle close to \(90^{\circ}\). This is consistent with the behavior of classical skyrmions, where the Magnus angle increases when the size of the skyrmions is smaller or when dissipative effects are small [22; 23]. Both are the case for the observed nano quantum skyrmions. Furthermore, no quantum skyrmion pinning is visible in our study. We note that our method can only describe the onset of the skyrmion motion. In particular, in a full nonequilibrium calculation, time-dependent spin expectation values would lead to time-dependent self-energies. The system would adapt to the changed spin expectation values, and backaction effects would alter our conclusions when the linear-response regime is left. For example, linear response theory can permanently decrease the polarization locally, ultimately resulting in a site with vanishing spin polarization. However, this situation is energetically unfavorable due to the strong density-density interaction.
Thus, in a full nonequilibrium calculation, the self-energies will change in a way that prevents a site from reaching vanishing spin polarization, rendering the quantum magnetic skyrmion stable and letting it continue its motion perpendicular to an applied current. Yet, a full nonequilibrium calculation, as well as a steady-state analysis, goes beyond the scope of the current paper and is left for future work. We note that other forms of spin-orbit interaction also lead to stable nano quantum skyrmions in the \(f\)-electron system at hand. We show the results for a different form of the spin-orbit interaction, where the momenta couple to the same spin direction, in Appendix A. Also in these systems, the spin-orbit interaction results in a spin accumulation when a current is applied, which leads to a site-dependent change of the spin expectation values and to a skyrmion Hall effect. These results emphasize that the existence of magnetic skyrmions in strongly correlated \(f\)-electron systems with spin-orbit coupling, and the skyrmion Hall effect they exhibit, are generic effects.

###### Acknowledgements.
All authors acknowledge funding by the Kyoto University - Hamburg University (KU-UHH) international partnership funding program for 2021 and 2022. R.P. is supported by JSPS KAKENHI No. JP18K03511 and JP23K03300. Parts of the numerical simulations in this work have been done using the facilities of the Supercomputer Center at the Institute for Solid State Physics, the University of Tokyo. J. N.-S. acknowledges support by the Cluster of Excellence "CUI: Advanced Imaging of Matter" of the Deutsche Forschungsgemeinschaft (DFG) - EXC 2056 - project ID 390715994 and the Universität Hamburg's Next Generation Partnership funded under the Excellence Strategy of the Federal Government and the Länder. T. P. acknowledges funding by the DFG (project no. 420120155) and the European Union (ERC, QUANTWIST, project number 101039098).
Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.

## Appendix A Different form of spin-orbit interaction

To demonstrate that our results are robust for different types of spin-orbit coupling, we repeat our analysis using a spin-orbit interaction of the form \[H_{SOI}(\mathbf{k})=2\alpha_{c}\left(\sin(k_{x})\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{x}\tau^{x}\mathbf{c}_{\mathbf{k}}+\sin(k_{y})\mathbf{c}_{\mathbf{k}}^{\dagger}\sigma^{y}\tau^{x}\mathbf{c}_{\mathbf{k}}\right). \tag{10}\] The rest of the Hamiltonian, including the two-particle interaction, is unchanged compared to the main text. In Fig. 6, we show two RDMFT solutions, including magnetic skyrmions, for \(\alpha_{c}=\pm 0.3t\), where we again use a small magnetic field, \(B/t=0.002\), to stabilize the magnetic skyrmion [25; 26; 27; 28]. The change in the sign of the spin-orbit interaction leads to a change in the rotation direction of the spin texture. Furthermore, we apply a charge current in the \(x\) direction for both solutions and find that the center of the quantum magnetic skyrmion dominantly moves into the positive \(y\)-direction. This is explained as follows: The change in the sign of the spin-orbit interaction leads not only to a reversal of the spin rotation inside the skyrmion but also changes the sign of the Edelstein and magnetoelectric effect. Thus, spins in these two examples are rotated in opposite directions when current is applied. As a result, both magnetic skyrmions move in the same direction, the positive \(y\) direction.
2307.07665
Catastrogenesis with unstable ALPs as the origin of the NANOGrav 15 yr gravitational wave signal
In post-inflation axion-like particle (ALP) models, a stable domain wall network forms if the model's potential has multiple minima. This system must annihilate before dominating the Universe's energy density, producing ALPs and gravitational waves (a process we dub "catastrogenesis," or "creation via annihilation"). We examine the possibility that the gravitational wave background recently reported by NANOGrav is due to catastrogenesis. For the case of ALP decay into two photons, we identify the region of ALP mass and coupling, just outside current limits, compatible with the NANOGrav signal.
Graciela B. Gelmini, Jonah Hyman
2023-07-15T00:26:26Z
http://arxiv.org/abs/2307.07665v3
# Catastrogenesis with unstable ALPs as the origin of the NANOGrav 15 yr gravitational wave signal ###### Abstract In post-inflation axion-like particle (ALP) models, a stable domain wall network forms if the model's potential has multiple minima. This system must annihilate before dominating the Universe's energy density, producing ALPs and gravitational waves (a process we dub "catastrogenesis," or "creation via annihilation"). We examine the possibility that the gravitational wave background recently reported by NANOGrav is due to catastrogenesis. For the case of ALP decay into two photons, we identify the region of ALP mass and coupling, just outside current limits, compatible with the NANOGrav signal. ## I Introduction The fundamental importance of gravitational waves (GWs) as messengers of the pre-Big Bang Nucleosynthesis (BBN) era, a yet unknown epoch of the Universe from which we do not yet have any other remnants, cannot be overestimated. The NANOGrav pulsar timing array collaboration has recently reported the observation of a stochastic gravitational wave background [1] in 15 years of data, and has examined its possible origin in terms of new physics [2]. They showed that the pre-BBN annihilation of cosmic walls provides a good fit to their signal, both as the sole source and in combination with a background from a population of inspiraling supermassive black hole binaries (SMBHBs), which is expected to be its primary conventional physics origin [2]. The annihilation produces a peaked spectrum, whose peak frequency \(f_{\rm peak}\) is given by the inverse of the cosmic horizon \(\simeq t_{\rm ann}\) at annihilation redshifted to the present. In their fit to the wall annihilation model NANOGrav finds [2] a peak frequency \[f_{\rm peak}=c_{f}10^{-8}\ {\rm Hz}\, \tag{1}\] and a peak energy density \[\Omega_{\rm GW}h^{2}\big{|}_{\rm peak}=c_{\Omega}10^{-8}\, \tag{2}\] with coefficients \(c_{f}\) and \(c_{\Omega}\) of order 1. 
In particular \(c_{\Omega}\simeq 1\), while \(c_{f}\) can have larger values. Here we consider the annihilation of a \(U(1)\) pseudo Nambu-Goldstone boson stable string-wall system as the origin of the NANOGrav signal, based on our previous recent work [3; 4; 5], to which we refer often in the following. Many extensions of the Standard Model (SM) of elementary particles assume an approximate global \(U(1)\) symmetry spontaneously broken at an energy scale \(V\). The symmetry is not exact, but explicitly broken at another scale \(v\ll V\). Thus the model has a pseudo Nambu-Goldstone boson we denote with \(a\), with mass \(m_{a}\simeq v^{2}/V\). These models include the original axion [6; 7; 8], invisible axions (also called "QCD axions") [9; 10; 11; 12], majoron models [13; 14; 15; 16; 17; 18; 19], familon models [21; 22; 23], and axion-like particles (ALPs) (e.g. [24; 25; 26; 27; 28]). Many models predict a large mass for the QCD axion [29; 30; 31; 32], including the "high-quality QCD axion" [33] and previous models (see e.g. Section 6.7 of Ref. [34]). Heavy majorons, which could get a mass from soft breaking terms or from gravitational effects (see e.g. [15; 16; 17; 18; 19]), have been considered as well (see e.g. [16; 19]), even of mass in the TeV range. Since we need a specific type of model to take into account existing experimental bounds, we concentrate on ALPs coupled to photons. ALPs are one of the most studied types of dark matter candidates. They are extensively searched for in a variety of laboratory experiments and astrophysical observations, their coupling to photons being one of the most studied as well. We assume that the spontaneous symmetry breaking happens after inflation, in which case cosmic strings appear during the spontaneous breaking transition, and a system of cosmic walls bounded by strings is produced when the explicit breaking becomes dynamically relevant, when \(t\simeq m_{a}^{-1}\).
The cosmic strings then enter into a "scaling" regime, in which the number of strings per Hubble volume remains of order 1 (see e.g. Ref. [35] and references therein). The subsequent evolution of the string-wall system depends crucially on the number of minima of the potential after the explicit breaking, which may be just one minimum, \(N=1\), or several, \(N>1\). With \(N=1\), "ribbons" of walls bounded by strings surrounded by true vacuum form, which shrink very fast due to the pull of the walls on the strings, leading to the immediate annihilation of the string-wall system (see e.g. [36]). We concentrate on the \(N>1\) case, where the \(U(1)\) symmetry is broken into a discrete \(Z_{N}\) symmetry, in which each string connects to \(N\) walls forming a stable string-wall system. A short time after walls form, when friction of the walls with the surrounding medium is negligible, the string-wall system enters into another scaling regime in which the linear size of the walls is the cosmic horizon size \(\simeq t\). Thus its energy density is \(\rho_{\rm walls}\simeq\sigma/t\) where \(\sigma\) is the energy density per unit area of the walls. The energy density in this system grows faster with time than the radiation density, and would come to dominate the energy density of the Universe, leading to an unacceptable cosmology [37], unless it annihilates earlier. If the \(Z_{N}\) is also an approximate symmetry, then there is a "bias," a small energy difference between the \(N\) minima, which chooses one of them to be that with minimum energy. The energy difference between two vacua at both sides of each wall accelerates each wall toward its adjacent higher-energy vacuum, which drives the domain walls to their annihilation [37] (see also e.g. Ref. [38]). 
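Under radiation domination, the peak frequency fixes the annihilation temperature through the redshifted horizon scale, as quantified below in Eqs. (3)-(4). A minimal numeric sketch of this inversion with \(g_{\star}=g_{s\star}=16.5\) (prefactor and values taken from the text):

```python
# Annihilation temperature implied by the GW peak frequency under radiation
# domination, inverting f_peak ≈ 0.76e-7 Hz (T_ann/GeV) g_*^(1/2) / g_s*^(1/3).
gstar = 16.5      # relativistic degrees of freedom at T_ann (value from the text)
f_peak = 1.0e-8   # Hz: the NANOGrav peak frequency, Eq. (1) with c_f = 1

T_ann = f_peak / 0.76e-7 * gstar ** (1.0 / 3.0) / gstar ** 0.5  # in GeV
print(f"T_ann ≈ {1e3 * T_ann:.1f} MeV")
```

This reproduces the \(\simeq 82.5\) MeV scale quoted in Eq. (4), placing the annihilation safely before BBN.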
As in our previous recent work [3; 4; 5], we adopt the \(Z_{N}\) explicit breaking term in the scalar potential originally proposed for QCD axions [39; 40], and parameterized as \(V_{\rm bias}\simeq\epsilon_{b}v^{4}\), with a dimensionless positive coefficient \(\epsilon_{b}\ll 1\). For small enough \(\epsilon_{b}\) values, ALPs are dominantly produced when the string-wall system annihilates, together with GWs, a process that we named "catastrogenesis" [4], after the Greek word καταστροφή, for "overturn" or "annihilation." The emission of GWs by the initial system of cosmic strings ends when walls appear. Thus, there is a low-frequency cutoff of 82 \((m_{a}/\text{GeV})^{1/2}\) Hz [36; 41; 42], corresponding to the inverse of the cosmic horizon when walls appear, redshifted to the present. This is much higher than the relevant frequencies for \(m_{a}\simeq\text{GeV}\), so strings do not contribute to the NANOGrav signal in this model. We assume radiation domination during the times of interest. In this case, the present peak GW frequency is related to the temperature at annihilation \(T_{\rm ann}\) by \[f_{\rm peak}\simeq 0.76\times 10^{-7}\text{Hz}\ \frac{T_{\rm ann}}{\text{GeV}} \ \frac{\left[g_{\star}(T_{\rm ann})\right]^{1/2}}{\left[g_{s\star}(T_{\rm ann}) \right]^{1/3}}\, \tag{3}\] where \(g_{\star}\) and \(g_{s\star}\) are the energy and entropy density numbers of degrees of freedom. Thus, Eq. (1) also gives \(T_{\rm ann}\) in terms of \(c_{f}\) \[T_{\rm ann}\simeq 82.5\;c_{f}\;\text{MeV}\left[\frac{16.5}{g_{\star}(T_{\rm ann })}\right]^{1/2}\ \left[\frac{g_{s\star}(T_{\rm ann})}{16.5}\right]^{1/3}, \tag{4}\] while in terms of the parameters of our model it is \[T_{\rm ann}\simeq\frac{2.2\times 10^{9}\ \text{GeV}}{\left[g_{\star}(T_{\rm ann })\right]^{1/4}}\ \sqrt{\frac{\epsilon_{b}\ m_{a}}{f_{\sigma}\ \text{GeV}}}.
\tag{5}\] The peak energy density is \[\Omega_{\rm GW}h^{2}\big{|}_{\rm peak}\simeq\frac{1.2\times 10^{-79} \epsilon_{\rm GW}\ g_{\star}(T_{\rm ann})}{\epsilon_{b}^{2}\ \left[g_{s\star}(T_{\rm ann})\right]^{4/3}}\left(\frac{f_{\sigma}V}{N\text{GeV }}\right)^{4} \tag{6}\] where \(f_{\sigma}\) is a parameter entering into the definition of the energy per unit area of the walls, \(\sigma\simeq f_{\sigma}v^{2}V/N\), and \(f_{\sigma}\simeq 6\) for most assumed potentials. We include in Eq. (6) a dimensionless factor \(\epsilon_{\rm GW}\) found in numerical simulations (e.g. [43]). When we need to fix its value, we use \(\epsilon_{\rm GW}=0.7\) as adopted in the NANOGrav fit [2] following [44] (in our earlier work we took instead \(\epsilon_{\rm GW}=10\), using Fig. 8 of Ref. [43]). Since \(g_{\star}=g_{s\star}\) for \(T>1\) MeV, we set them equal in the following. We refer the reader to Refs. [3; 4] for the derivation of these equations. Our previous results [3; 4] show that the requirement that the ALP density not exceed that of dark matter, \(\Omega_{a}h^{2}\lesssim 0.12\), implies \[\frac{\left.\Omega_{\rm GW}h^{2}\right|_{\rm peak}}{10^{-17}}\left(\frac{f_{ \rm peak}}{10^{-9}\text{Hz}}\right)^{2}<10^{-2}\, \tag{7}\] so the model cannot produce the NANOGrav signal with stable ALPs. Thus we concentrate on ALPs that are unstable and decay into SM products that thermalize early enough to leave no trace by the time of Big Bang Nucleosynthesis (BBN), such as we considered in Ref. [5]. Similar or related models have been studied recently in relation to pulsar timing array data, e.g. Refs. [45; 46; 47; 48; 49; 50; 51]. Ref. [50] considered the same type of models we study here, but with the purpose of excluding parameter regions disfavored by the NANOGrav 15 yr data, which they analyzed independently. Our purpose is instead to try to explain the signal, and thus we stay away from the disfavored region (shown in gray in the lower left panel of Fig. 12 of Ref.
[2] and the right panel of Fig. 2 of Ref. [50]). ## III Unstable ALP models that can produce the NANOGrav signal In Ref. [5] we assumed \(m_{a}\) was sufficiently larger than 1 GeV for ALPs decaying into SM particles to comfortably escape existing experimental limits. However, we need to be more nuanced here and explore the viability of somewhat lighter ALPs. The reason is that requiring the walls to form at least one order of magnitude in temperature after strings appear, combined with upper limits on \(T_{\rm ann}\) determined by NANOGrav to explain the signal, impose \(m_{a}\lesssim 1.8\) GeV, as we are going to show now. Walls appear when the Hubble parameter is \(H(T_{\rm w})=m_{a}/3\), i.e. when the temperature is \[T_{\rm w}\simeq\frac{1.6\times 10^{9}\ \text{GeV}}{\left[g_{\star}(T_{\rm w}) \right]^{1/4}}\left(\frac{m_{a}}{\text{GeV}}\right)^{1/2}. \tag{8}\] Thus \(T_{\rm w}\) depends only on \(m_{a}\) (\(g_{\star}(T_{\rm w})\simeq 105\), since \(T_{\rm w}>100\) GeV). As in our previous papers, we consider \(m_{a}\) to be temperature independent. A temperature dependence would not affect the annihilation process, which happens late enough for \(m_{a}\) to have reached its present constant value in any case, but could affect \(T_{\rm w}\). Combining Eqs. (2) and (6) fixes the ratio \(V^{2}/\epsilon_{b}\), and Eqs. (4) and (5) fix the product \(\epsilon_{b}m_{a}\). Thus, given Eqs. (1) and (2), we obtain \(V\) as a function of \(m_{a}\), \[V\simeq\frac{5.0\times 10^{7}\ \text{GeV}}{\epsilon_{\text{GW}}^{1/4}}\frac{N}{f _{\sigma}^{1/2}}\left(\frac{\text{GeV}}{m_{a}}\right)^{1/2}c_{f}c_{\Omega}^{1/ 4}\left[\frac{g_{\star}(T_{\text{ann}})}{16.5}\right]^{1/6}, \tag{9}\] and consequently \(m_{a}\) in terms of the ratio \(T_{\text{w}}/V\), \[m_{a}\simeq\frac{(T_{\text{w}}/V)}{0.1}c_{f}c_{\Omega}^{1/4}\frac{N}{f_{\sigma }^{1/2}}\frac{10.3\ \text{MeV}}{\epsilon_{\text{GW}}^{1/4}}\left[\frac{g_{\star}(T_{\text{ann}})}{ 16.5}\right]^{1/6}. 
\tag{10}\] We require \(T_{\text{w}}/V\lesssim 0.1\) so walls form at least one order of magnitude in temperature after strings appear. Larger values of \(N\) are favorable to allow larger \(m_{a}\), so we adopt here \(N=20\) (there are no constraints on the number \(N\) for ALP models, but \(N=20\) is possible for QCD axion models [52]). Replacing also \(f_{\sigma}=6\) and \(\epsilon_{\text{GW}}=0.7\), \[m_{a}\simeq\frac{(T_{\text{w}}/V)}{0.1}c_{f}c_{\Omega}^{1/4}\ 92\ \text{MeV} \left[\frac{g_{\star}(T_{\text{ann}})}{16.5}\right]^{1/6}. \tag{11}\] An upper limit on \(c_{f}\) thus provides an upper limit on \(m_{a}\) and vice versa (since the NANOGrav fit prefers \(c_{\Omega}\simeq 1\) [2]). Looking at Fig. 12 of Ref. [2], at the range of annihilation temperatures (called \(T_{\star}\) in that paper) where the NANOGrav signal can be explained by the annihilation of domain walls into SM products (DW-SM, the model most similar to ours), we can see that \(T_{\text{ann}}\lesssim 1\) GeV (close to the upper boundary of the red region in the figure). By Eq. (4), this corresponds to \(c_{f}\lesssim 15\) (taking into account the rapid change of \(g_{\star}\) values for temperatures in the 100 MeV range, \(g_{\star}(1\ \text{GeV})\simeq 70\)). Through Eq. (11), this implies \(m_{a}\lesssim 1.8\) GeV. A more conservative upper limit on the annihilation temperature is the upper boundary of the 95% credible interval including a SMBHB contribution quoted in the text of Ref. [2], 843 MeV. This implies through Eq. (4) (with \(g_{\star}(0.84\ \text{GeV})\simeq 68\)) \(c_{f}\lesssim 13\), and through Eq. (11) \(m_{a}\lesssim 1.5\) GeV.
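As a quick numerical cross-check (a sketch of our own, not part of the original analysis), Eqs. (4) and (11) can be evaluated directly at the two benchmark points just quoted, using the approximate \(g_{\star}\) values given in the text:

```python
# Sanity check of Eqs. (4) and (11): translating the NANOGrav-preferred
# upper limits on T_ann into upper limits on c_f and on m_a.
# g_* values are the approximate ones quoted in the text.

def T_ann_MeV(c_f, g_star):
    """Eq. (4), with g_* = g_s* (valid for T > 1 MeV)."""
    return 82.5 * c_f * (16.5 / g_star) ** 0.5 * (g_star / 16.5) ** (1 / 3)

def m_a_max_MeV(c_f, g_star, c_Omega=1.0, Tw_over_V=0.1):
    """Eq. (11), i.e. Eq. (10) with N = 20, f_sigma = 6, eps_GW = 0.7."""
    return (Tw_over_V / 0.1) * c_f * c_Omega ** 0.25 * 92.0 * (g_star / 16.5) ** (1 / 6)

# c_f = 15 with g_*(1 GeV) ~ 70 gives T_ann close to 1 GeV and m_a close to 1.8 GeV:
print(T_ann_MeV(15, 70), m_a_max_MeV(15, 70))
# c_f = 13 with g_*(0.84 GeV) ~ 68 gives T_ann close to 843 MeV and m_a close to 1.5 GeV:
print(T_ann_MeV(13, 68), m_a_max_MeV(13, 68))
```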
Let us now consider the experimental limits on ALPs coupled to photons through a Lagrangian term \[\mathcal{L}_{a\gamma\gamma}=\frac{c_{\gamma\gamma}}{f_{a}}aF_{\mu\nu}\tilde{F} ^{\mu\nu}\, \tag{12}\] where \(F_{\mu\nu}\) is the electromagnetic field tensor and \(\tilde{F}^{\mu\nu}\) its dual, \(|c_{\gamma\gamma}|\) is a dimensionless coupling constant, and \(f_{a}=V/N\) is given by Eq. (9) divided by \(N\), and is independent of \(N\), thus \[\frac{1}{f_{a}}\simeq\frac{1.4\times 10^{-8}}{c_{f}c_{\Omega}^{1/4}\text{GeV}} \left(\frac{m_{a}}{100\ \text{MeV}}\right)^{1/2}\left[\frac{16.5}{g_{\star}(T_{\text{ann}})}\right]^{1 /6}. \tag{13}\] Alternatively, using Eq. (11) in Eq. (13), \[\frac{1}{f_{a}}\lesssim\frac{1.4\times 10^{-8}}{(c_{f}c_{\Omega}^{1/4})^{1/2} \ \text{GeV}}\left[\frac{16.5}{g_{\star}(T_{\text{ann}})}\right]^{1/12}. \tag{14}\] So a larger \(c_{f}\) (thus also a larger \(T_{\text{ann}}\)) makes the coupling smaller. Requiring the upper limit on the ALP mass in Eq. (11) to reach \(m_{a}=300\) MeV (to avoid experimental limits on lighter ALPs), we obtain \(c_{f}\gtrsim 2.9\) and \(T_{\text{ann}}\gtrsim 200\) MeV (thus \(g_{\star}\simeq 42\)), which implies \(1/f_{a}<7.5\times 10^{-9}/\text{GeV}\). Requiring instead the upper limit to be \(m_{a}\simeq 1.8\) GeV, which, as mentioned above, corresponds to \(T_{\text{ann}}\simeq 1\) GeV and \(c_{f}=15\), with \(g_{\star}(1\ \text{GeV})\simeq 70\), Eq. (13) or (14) implies \(1/f_{a}<3.1\times 10^{-9}/\text{GeV}\). Assuming \(|c_{\gamma\gamma}|\lesssim 1\), these upper limits on \(1/f_{a}\) translate into upper limits on the ALP coupling to photons as a function of \(m_{a}\). These limits constitute the upper boundary of the gray region in Fig. 1. Fig. 1 also shows relevant regions rejected by the most up-to-date limits on ALPs. Notice that if \(|c_{\gamma\gamma}|>1\), the region extends upward (as indicated by the dashed lines), where experimental limits (not only astrophysical limits) become important.
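The two quoted bounds on \(1/f_{a}\) can be reproduced numerically; the following sketch (our own check, using the approximate \(g_{\star}\) values from the text) evaluates Eqs. (13) and (14):

```python
# Numerical check of the quoted upper limits on the inverse decay constant 1/f_a.

def inv_fa_eq13(m_a_MeV, c_f, g_star, c_Omega=1.0):
    """Eq. (13): 1/f_a in 1/GeV as a function of m_a, c_f and g_*(T_ann)."""
    return 1.4e-8 / (c_f * c_Omega ** 0.25) * (m_a_MeV / 100.0) ** 0.5 \
        * (16.5 / g_star) ** (1 / 6)

def inv_fa_eq14(c_f, g_star, c_Omega=1.0):
    """Eq. (14): the same bound expressed through c_f alone."""
    return 1.4e-8 / (c_f * c_Omega ** 0.25) ** 0.5 * (16.5 / g_star) ** (1 / 12)

# m_a = 300 MeV, c_f ~ 2.9, g_* ~ 42: close to the quoted 7.5e-9 / GeV
print(inv_fa_eq14(2.9, 42))
# m_a = 1.8 GeV, c_f = 15, g_* ~ 70: close to the quoted 3.1e-9 / GeV
print(inv_fa_eq13(1800, 15, 70))
```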
The value of \(|c_{\gamma\gamma}|\) depends on the completion of the ALP model. It has been extensively studied only for the QCD axion, where \(|c_{\gamma\gamma}|\simeq\alpha_{\text{EM}}/8\pi\simeq 2.9\times 10^{-4}\) in the simplest models. However, \(|c_{\gamma\gamma}|\) can be many orders of magnitude, even exponentially, larger in some models (see e.g. the "Axions and Other Similar Particles" review in Ref. [53] or Refs. [54; 55; 56; 57; 34]). Notice that with the \(|c_{\gamma\gamma}|\) value in the simplest QCD axion models, the upper boundary of our region of compatibility (gray) in Fig. 1 would move into that excluded by BBN limits (yellow), i.e. the region of compatibility would not exist. We will now check the lifetime and the fraction of the density constituted by ALPs at the time of decay.

Figure 1: Region (in gray) of ALP coupling to two photons versus ALP mass \(m_{a}\) for models which could explain the NANOGrav 15 yr signal, where \(300\ \text{MeV}<m_{a}<1.8\) GeV, together with (colored) relevant regions excluded by: SN 1987A cooling [58; 59], SN 1987A ALP decay (Solar Maximum Mission) [60], SN 1987A ALP decay (Pioneer Venus Orbiter) [61], supernovae (SN) explosion energy [58], GW170817 [62], BBN + \(N_{\text{eff}}\) limits [63], and experimental limits [64; 65]. This figure reproduces a portion of Fig. 9 of Ref. [66] with additions from Ref. [67].

The decay rate (see e.g. Eq. (138) of Ref. [66]) \[\Gamma(a\to\gamma\gamma)=\frac{|c_{\gamma\gamma}|^{2}m_{a}^{3}}{4\pi f_{a}^{2}} \tag{15}\] corresponds to a lifetime (using Eq. (13)) \[\tau=\frac{c_{f}^{2}c_{\Omega}^{1/2}}{|c_{\gamma\gamma}|^{2}}4.2\times 10^{-5} \;{\rm sec}\left(\frac{100\;{\rm MeV}}{m_{a}}\right)^{4}\left[\frac{g_{\star} (T_{\rm ann})}{16.5}\right]^{1/3}\,. \tag{16}\] With \(|c_{\gamma\gamma}|\) in the range 0.1 to 1, we can have \(\tau\simeq t_{\rm ann}\), i.e. the decay can happen at annihilation.
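The lifetime in Eq. (16) can be evaluated at the benchmark points discussed above; this is a sketch of our own, taking the prefactor to scale as \(c_{f}^{2}c_{\Omega}^{1/2}\) (as follows from squaring Eq. (13)) and setting \(c_{\Omega}=1\):

```python
# ALP lifetime from Eq. (16), taking c_Omega = 1.

def tau_sec(m_a_MeV, c_f, c_gamma, g_star, c_Omega=1.0):
    return (c_f ** 2 * c_Omega ** 0.5 / c_gamma ** 2) * 4.2e-5 \
        * (100.0 / m_a_MeV) ** 4 * (g_star / 16.5) ** (1 / 3)

# m_a = 300 MeV, c_f = 2.9, g_* ~ 42: tau ~ 6e-6 sec for |c_gg| = 1
# and ~ 6e-4 sec for |c_gg| = 0.1; both are far below the 0.1 sec BBN bound.
print(tau_sec(300, 2.9, 1.0, 42), tau_sec(300, 2.9, 0.1, 42))
# m_a = 1.8 GeV, c_f = 15, g_* ~ 70:
print(tau_sec(1800, 15, 1.0, 70))
```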
Requiring \(\tau\lesssim 0.1\) sec, so that the decay happens early enough not to affect BBN, translates through Eq. (15) into \(|c_{\gamma\gamma}|/f_{a}>0.9\times 10^{-11}({\rm GeV}/m_{a})^{3/2}/{\rm GeV}\). This constitutes the lower boundary of the gray region shown in Fig. 1. To compute the density of the string-wall system with respect to that of radiation at annihilation, we consider that, had the system not annihilated, its energy density \(\rho_{\rm walls}\simeq\sigma/t\) would have continued to grow until the moment we call wall-domination \(t_{\rm wd}\), at which it becomes as large as the radiation energy, \(\rho_{\rm walls}(t_{\rm wd})\simeq\rho_{\rm rad}(t_{\rm wd})\). The temperature of wall-domination is (see Refs. [4; 5]) \[T_{\rm wd}\simeq\frac{3.4\;{\rm GeV}}{[g_{\star}(T_{\rm wd})]^{1/4}}\frac{f_{ \sigma}^{1/2}}{N}\left(\frac{V}{10^{9}\;{\rm GeV}}\right)\left(\frac{m_{a}}{1 0\;{\rm GeV}}\right)^{1/2}. \tag{17}\] Besides, \(\rho_{\rm walls}(t_{\rm ann})/\rho_{\rm walls}(t_{\rm wd})\simeq t_{\rm wd}/t_ {\rm ann}\), and the ratio of radiation densities at wall-domination and annihilation is given by the ratio of \(g_{\star}T^{4}\) at each temperature. Combining these equations we find \[\frac{\rho_{\rm walls}(T_{\rm ann})}{\rho_{\rm rad}(T_{\rm ann})}\simeq\left( \frac{g_{\star}(T_{\rm wd})}{g_{\star}(T_{\rm ann})}\right)^{1/2}\left(\frac{T _{\rm wd}}{T_{\rm ann}}\right)^{2}\,. \tag{18}\] Using Eq. (4) and Eq. (9) in Eq. (17), we find \[\frac{\rho_{\rm walls}(T_{\rm ann})}{\rho_{\rm rad}(T_{\rm ann})}\simeq 0.13 \;c_{\Omega}^{1/2}\left(\frac{g_{\star}(T_{\rm ann})}{16.5}\right)^{1/6}, \tag{19}\] which shows that this ratio is always \(<1\) for the annihilation temperatures we consider. 
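Eq. (19) is easily checked to stay below unity over the whole range of annihilation temperatures considered here; a quick sketch of our own with \(c_{\Omega}=1\):

```python
# Eq. (19): string-wall density relative to radiation at annihilation.

def walls_to_rad(g_star_ann, c_Omega=1.0):
    return 0.13 * c_Omega ** 0.5 * (g_star_ann / 16.5) ** (1 / 6)

# From T_ann ~ 200 MeV (g_* ~ 42) up to T_ann ~ 1 GeV (g_* ~ 70),
# the ratio stays at the 15% level, well below wall domination:
for g in (42, 70):
    print(walls_to_rad(g))
```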
Since practically all the density in the string-wall system goes into nonrelativistic (or quasi-nonrelativistic) ALPs at annihilation, considering the redshift of the ALP and radiation densities until ALPs decay at temperature \(T_{\rm decay}\), \[\frac{\rho_{\rm ALPs}(T_{\rm decay})}{\rho_{\rm rad}(T_{\rm decay})}\simeq\left( \frac{T_{\rm ann}}{T_{\rm decay}}\right)\frac{\rho_{\rm walls}(T_{\rm ann})}{ \rho_{\rm rad}(T_{\rm ann})}. \tag{20}\] As we mentioned above, \(T_{\rm decay}\) can be very close to \(T_{\rm ann}\), so ALPs do not get to matter dominate in our model and the decays happen early enough for the products to thermalize long before BBN. Otherwise, there would be a period of ALP matter domination before ALPs decay, which is in principle not problematic since the decays happen much before BBN, but would be a scenario deserving further study. The range of \(c_{f}\) values, 2.9 to 15, that we have found above corresponds to a peak frequency range through Eq. (1). In Fig. 2 we indicate two approximate spectra, with the maximum and minimum \(f_{\rm peak}\) in the mentioned range. Frequencies \(f<f_{\rm peak}\) correspond to superhorizon wavelengths at annihilation, so causality requires a \(\sim f^{3}\) dependence [68] for wavelengths that enter into the horizon during radiation domination, see e.g. [69; 70; 71]. For \(f>f_{\rm peak}\) the spectrum depends instead on the particular production model. Ref. [43] finds a roughly \(1/f\) dependence (although the approximate slope slightly depends on \(N\)), which we use for Fig. 2. In Fig. 2 the rough signal region of NANOGrav, as well as the limits and reach of other GW observatories, is shown. 
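The two peak frequencies quoted above follow directly from Eq. (3); as a sketch (our own check, setting \(g_{\star}=g_{s\star}\)):

```python
# Present-day peak frequency from Eq. (3), with g_* = g_s*.

def f_peak_Hz(T_ann_GeV, g_star):
    return 0.76e-7 * T_ann_GeV * g_star ** 0.5 / g_star ** (1 / 3)

# T_ann ~ 200 MeV (g_* ~ 42) gives the minimum f_peak ~ 2.9e-8 Hz,
# and T_ann ~ 1 GeV (g_* ~ 70) the maximum f_peak ~ 1.5e-7 Hz:
print(f_peak_Hz(0.2, 42), f_peak_Hz(1.0, 70))
```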
## IV Possibility of primordial black hole formation

The formation of primordial black holes (PBHs) during the process of annihilation of the string-wall system is an exciting possible aspect of ALP models with \(N>1\). We recently dealt with the possibility of producing "asteroid-mass" PBHs, in the range in which they could constitute all of the dark matter, in Ref. [5]. If formed, the PBH mass in the models in the present paper would be in the range of 0.1 to a few solar masses, but PBH abundance would be too large to be allowed, and this would reject these models. However, the formation of PBHs is uncertain.

Figure 2: Two approximate spectra which could account for the NANOGrav signal in our catastrogenesis model, with \(f_{\rm peak}=2.9\times 10^{-8}\) Hz and \(1.5\times 10^{-7}\) Hz, representing the minimum and maximum values we found (see text), overlapped to the approximate NANOGrav 15 yr signal [1] (in purple) and limits (solid line boundaries) or reach (dashed line boundaries) of other GW detectors: the European Pulsar Timing Array (EPTA) [72] and the Square Kilometre Array (SKA) [73] in purple; the space-based experiments TianQin [74], Taiji [75], and the Laser Interferometer Space Antenna (LISA) [76] in green; the Atom Interferometer Observatory and Network (AION) [77], the Atomic Experiment for Dark Matter and Gravity Exploration in Space (AEDGE) [78], the Deci-hertz Interferometer Gravitational wave Observatory (DECIGO) [79], and the Big Bang Observer (BBO) [80] in blue; the ground-based Einstein Telescope (ET) in red [81]; and the Laser Interferometer Gravitational-wave Observatory (LIGO) in gray [82]. The cyan band corresponds to the 95% C.L. upper limit on the effective number of degrees of freedom during CMB emission from Planck and other data [83], which imposes \(\Omega_{\rm GW}h^{2}<10^{-6}\).

The argument for PBH formation, first presented in Ref.
[84] for QCD axions, is that in the latest stages of wall annihilation in \(N>1\) models (\(t>t_{\rm ann}\)) closed walls could arise and collapse in an approximately spherically symmetric way. In this case, if the characteristic linear size of the walls continues to grow with time after annihilation starts, some fraction of the closed walls could reach their Schwarzschild radius \(R_{\rm Sch}\) and collapse into PBHs. The figure of merit used is \(p(t)=R_{\rm Sch}/t=2M(t)/(t\,M_{\rm P}^{2})\), where \(M_{\rm P}\) is the Planck mass and \(M(t)\) is the mass within the collapsing closed wall at time \(t\). Reaching \(p(t)=1\) would indicate the formation of PBHs. This definition is based on the fact that while walls are in the scaling regime, the linear size of the walls \(L\) is close to the horizon size (\(L\simeq t\)). Annihilation starts when surface tension of the walls, which produces a pressure \(p_{T}\simeq\sigma/t\) that decreases with time (which tends to rapidly straighten out curved walls to the horizon scale \(t\)), is compensated by the volume pressure \(p_{V}\simeq V_{\rm bias}\) (which tends instead to accelerate the walls toward their higher-energy adjacent vacuum). In our model, \(p_{V}\ll p_{T}\) when walls form. At a later time, when \(p_{T}\simeq p_{V}\), the bias drives the walls (and the strings bounding them) to annihilate within a Hubble time. This defines \(t_{\rm ann}\simeq\sigma/V_{\rm bias}\), after which \(V_{\rm bias}\) dominates the energy density. After annihilation starts, \(L\simeq t\) is no longer guaranteed. We have checked that for the models in this paper, we always have \(p(t_{\rm ann})<1\). If \(L\) continues being close to \(t\) for \(t>t_{\rm ann}\), then \(p(t>t_{\rm ann})\simeq V_{\rm bias}L^{3}/L\) grows with time as \(t^{2}\) and eventually reaches 1. However, if \(L\) decreases with time at some point after annihilation starts, the figure of merit may never reach 1. 
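The \(t^{2}\) growth of the figure of merit is immediate from the definitions; a minimal symbolic sketch (our own, taking \(M(t)\simeq V_{\rm bias}L^{3}\) once the bias term dominates):

```python
import sympy as sp

t, V_bias, M_P = sp.symbols('t V_bias M_P', positive=True)

# If the characteristic wall size keeps tracking the horizon, L ~ t, then the
# mass enclosed by a closed wall is M ~ V_bias * L^3 and the figure of merit
# p(t) = 2 M / (t M_P^2) grows as t^2:
L = t
p = 2 * V_bias * L**3 / (t * M_P**2)
assert sp.simplify(p - 2 * V_bias * t**2 / M_P**2) == 0
print(p)
```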
Based on the simple power-law parameterization we used in our previous recent work [4; 5] for the evolution of the energy density after annihilation starts, namely \(\rho_{\rm walls}(T)/\rho_{\rm walls}(T_{\rm ann})\simeq(T/T_{\rm ann})^{\alpha}\) (with a parameter \(\alpha\) that needs to be extracted from simulations), we can make a naive estimate of how the characteristic linear wall size \(L\) within a Hubble volume \(t^{3}\) evolves with time. In Appendix A we show how this naive estimate requires \(\alpha<6\) for \(L\) to ever become larger than \(t_{\rm ann}\). The only simulations of the annihilation process available [85] find \(\alpha\geq 7\)[4; 5]. On the other hand, they also seem to indicate that the evolution of the string-wall system continues being close to that in the scaling regime for some time. Therefore, more detailed simulations of the annihilation process are required to elucidate the appearance of PBHs. In addition, a large enough departure from spherical symmetry due to angular momentum or vacua with different energy on different sides of the collapsing closed wall could prevent the formation of PBHs. Since the formation of PBHs is such an uncertain consequence of ALP models with \(N>1\), we do not use this feature to reject any of these models. ## Conclusions We pointed out that the recently confirmed stochastic gravitational wave background could be due to pseudo Nambu-Goldstone bosons, whose existence could only be revealed through their decays and this background. In particular, we examined unstable ALP models which can produce the recent NANOGrav 15 yr signal. ALP models have a complex cosmology in which a stable system of walls bounded by strings develops (for \(N>1\)), and non-relativistic ALPs and gravitational waves are produced when the cosmic string-wall system annihilates (a process we dubbed "catastrogenesis" in our recent work on these models). 
The annihilation produces a distinctive peaked spectrum, at a frequency corresponding to the inverse of the cosmic horizon at annihilation. Thus, this peak frequency is related to the annihilation temperature. We require ALPs to decay into Standard Model (SM) products which thermalize much before BBN. In particular, we have shown that ALPs decaying into two photons in the region of masses and couplings necessary to explain the signal can evade existing observational limits, the most relevant of which are derived from supernova data (see Fig. 1), for ALP masses from about 300 MeV to 1.8 GeV. The model closest to ours that NANOGrav fitted to their signal is that of domain walls decaying into SM products (DW-SM). Our model is very similar to this one if the ALP decay happens very shortly after string-wall system annihilation, which we showed is possible. Thus we use the NANOGrav fits to this model to select a range of annihilation temperatures and thus peak frequencies. We have found a range of \(c_{f}\) values (as defined in Eq. (1)) which corresponds to the range of peak frequencies from \(f_{\rm peak}=2.9\times 10^{-8}\) Hz to \(f_{\rm peak}=1.5\times 10^{-7}\) Hz. This corresponds to annihilation temperatures in the range 200 MeV to 1 GeV. This temperature range overlaps with the upper portion of the 68% credible interval (which goes to 275 MeV) and the 95% credible interval (which goes to 505 MeV) quoted by NANOGrav [2] if their DW-SM model is the sole origin of the signal. Considering their fit done with the addition of a SMBHB contribution, our temperature range overlaps with a larger portion of both the 68% credible interval (which goes to 309 MeV) and the 95% credible interval (which goes to 843 MeV) quoted in the text, and is included within the red region in the lower left corner of Fig. 12 of Ref. [2] (for its DW-SM + SMBHB fit). ###### Acknowledgements. We thank E. Vitagliano for useful comments. The work of GG was supported in part by the U.S. 
Department of Energy (DOE) Grant No. DE-SC0009937. ## Appendix A Before annihilation starts, the energy density of the walls in the scaling regime is \(\rho_{\rm walls}\simeq\sigma/t\gg V_{\rm bias}\). The annihilation of the string-wall system starts when the bias volume energy density, or magnitude of volume pressure, \(V_{\rm bias}\) becomes of the same order as the energy density, or surface tension, of the walls \(\sigma/t\) (\(t_{\rm ann}\simeq\sigma/V_{\rm bias}\)), after which \(V_{\rm bias}\) dominates and accelerates walls towards the higher-energy vacuum adjacent to each wall. If PBHs do not form at annihilation, i.e. \(p(t_{\rm ann})<1\), the energy contained in a closed wall will need to increase with time for PBHs to form later, at a time \(t_{\star}\) such that \(p(t_{\star})=1\). Since the energy density \(V_{\rm bias}\) is constant, this requires that the dimensions of the closed walls keep growing for \(t>t_{\rm ann}\). In fact, if the characteristic linear dimension \(L\) of walls continues being close to \(t\), \(L\simeq t\), then \(p(t>t_{\rm ann})\sim V_{\rm bias}L^{3}/L\) grows with time as \(t^{2}\) and eventually reaches 1. However, if \(L\) decreases with time and never becomes larger than \(t_{\rm ann}\), the figure of merit \(p(t)\) decreases after annihilation starts and never reaches 1. Based on the simple power-law parameterization we used in our previous recent work [4; 5] for the evolution of the energy density after annihilation starts, for \(T<T_{\rm ann}\), \[\frac{\rho_{\rm walls}(T)}{\rho_{\rm walls}(T_{\rm ann})}\simeq\left(\frac{T} {T_{\rm ann}}\right)^{\alpha}\simeq\left(\frac{t_{\rm ann}}{t}\right)^{\alpha /2}, \tag{21}\] with a real positive power \(\alpha\) that needs to be extracted from simulations of the annihilation process, we can make a naive estimate of how the characteristic linear wall size \(L\) within a Hubble volume \(t^{3}\) evolves with time. 
It is easy to do it either assuming that walls dominate the energy density, or that volume density dominates. In both cases we find the same condition on \(\alpha\) for \(L\) to continue growing with time, i.e. \(L>t_{\rm ann}\) for \(t>t_{\rm ann}\). Therefore it is reasonable to assume that the same condition holds in the transition period, when both volume and walls contribute significantly to the energy density of the annihilating string-wall system. If the energy in walls still dominates \[\left(\frac{t_{\rm ann}}{t}\right)^{\alpha/2}\simeq\frac{\rho_{\rm walls}(T )}{\rho_{\rm walls}(T_{\rm ann})}\simeq\frac{\sigma L^{2}}{t^{3}}\frac{t_{ \rm ann}^{3}}{\sigma t_{\rm ann}^{2}}\simeq\frac{L^{2}}{t^{3}}t_{\rm ann}. \tag{22}\] Thus \[L=t\left(\frac{t_{\rm ann}}{t}\right)^{(\alpha-2)/4} \tag{23}\] and requiring \(L/t_{\rm ann}>1\) for \(t>t_{\rm ann}\) means that \((t/t_{\rm ann})^{(6-\alpha)/4}>1\), i.e. \((6-\alpha)/4>0\), thus \(\alpha<6\). A similar calculation can be done assuming volume energy dominates, \(\rho_{\rm walls}(t)\simeq V_{\rm bias}L^{3}/t^{3}\), to find \[L=t\left(\frac{t_{\rm ann}}{t}\right)^{\alpha/6} \tag{24}\] and requiring \(L/t_{\rm ann}>1\) for \(t>t_{\rm ann}\) means that \((t/t_{\rm ann})^{(6-\alpha)/6}>1\), i.e. \((6-\alpha)/6>0\), thus \(\alpha<6\) again. In both cases we find the condition \(\alpha<6\) for \(L\) to become larger than \(t_{\rm ann}\) after annihilation starts. However, the only simulations available to estimate values of \(\alpha\)[85] lead to \(\alpha\geq 7\) (see Refs. [4; 5] for details). In this case, with our naive estimates the linear size of walls would decrease with time after annihilation starts and PBHs would not form if \(p(t_{\rm ann})<1\).
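The algebra above can be verified symbolically; a small sketch of our own in terms of the ratios \(r=t_{\rm ann}/t\) and \(x=L/t\):

```python
import sympy as sp

# r = t_ann / t (so 0 < r < 1 after annihilation starts), and x = L / t.
r, alpha = sp.symbols('r alpha', positive=True)

# Wall-dominated case: Eq. (22) reads r^(alpha/2) = x^2 * r, solved by
# Eq. (23), x = r^((alpha-2)/4):
x_wall = r ** ((alpha - 2) / 4)
assert sp.simplify(x_wall**2 * r - r ** (alpha / 2)) == 0

# Volume-dominated case: r^(alpha/2) = x^3, solved by Eq. (24), x = r^(alpha/6):
x_vol = r ** (alpha / 6)
assert sp.simplify(x_vol**3 - r ** (alpha / 2)) == 0

# In the wall case L/t_ann = x/r = (1/r)^((6-alpha)/4): for t > t_ann (r < 1)
# this exceeds 1 only when alpha < 6.  E.g. at t = 16 t_ann:
for a, grows in ((5, True), (7, False)):
    assert ((x_wall / r).subs({r: sp.Rational(1, 16), alpha: a}) > 1) == grows
```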
2306.10030
A New Approach in Solving Regular and Singular Conformable Fractional Coupled Burger's Equations
The conformable double ARA decomposition approach is presented in this current study to solve one-dimensional regular and singular conformable functional Burger's equations. We investigate the conformable double ARA transform's definition, existence requirements, and some basic properties. In this study, we introduce a novel interesting method that combines the double ARA transform with Adomian's decomposition method, in order to find the precise solutions of some nonlinear fractional problems. Moreover, we use the new approach to solve Burgers' equations for both regular and singular conformable fractional coupled systems. We also provide several instances to demonstrate the usefulness of the current study. Mathematica software has been used to get numerical results.
Amjad E. Hamza, Abdelilah K. Sedeeg, Rania Saadeh, Ahmad Qazza, Raed Khalil
2023-06-06T20:47:37Z
http://arxiv.org/abs/2306.10030v1
# A New Approach in Solving Regular and Singular Conformable Fractional Coupled Burger's Equations

###### Abstract

The conformable double ARA decomposition approach is presented in this current study to solve one-dimensional regular and singular conformable functional Burger's equations. We investigate the conformable double ARA transform's definition, existence requirements, and some basic properties. In this study, we introduce a novel interesting method that combines the double ARA transform with Adomian's decomposition method, in order to find the precise solutions of some nonlinear fractional problems. Moreover, we use the new approach to solve Burgers' equations for both regular and singular conformable fractional coupled systems. We also provide several instances to demonstrate the usefulness of the current study. Mathematica software has been used to get numerical results.

_Keywords:_ Conformable ARA transform; Conformable double ARA decomposition method; Singular one-dimensional coupled Burgers' equation; Conformable partial fractional derivative.

Received: September 9, 2022. Revised: March 29, 2023. Accepted: April 24, 2023. Published: May 9, 2023.

## 1 Introduction

Fractional partial differential equations have drawn significant interest from a wide range of specialists in applied sciences and engineering, including acoustics, control, and viscoelasticity. In many areas of mathematics and physics, partial differential equations are crucial. To examine several time-fractional partial differential equations, the authors of [1], [2] use a novel strategy termed the "simplest equation method". In the context of applied sciences like mathematical modeling and fluid mechanics, this work focuses on Burger's equation. Burger's equation was in fact initially introduced in connection with steady-state solutions, [3]. Burger later adapted the equation to characterize the viscosity of certain fluid types, [4].
The conformable double Laplace transform approach, which was first proposed in [5], was improved and used to solve fractional partial differential equations. This method has been used by a number of academics to obtain precise and numerical solutions to this type of equation. To precisely solve time-fractional Burger's equations, other scholars used the first integral technique, [6]. Another set of researchers developed the coupled Burger's equation solution using the generalized two-dimensional differential transform approach, [7, 8, 9, 10]. The ARA transform is a revolutionary integral transform that Saadeh and others introduced in [11]. ARA is a new transform; it is not an acronym. It has novel properties, including the ability to generate numerous transforms by varying the value of the index \(n\), a duality with the Laplace transform, and the capacity to get around the singularity at time zero. Many researchers have studied the new approach and implemented it to solve many problems by merging it with other numerical methods or other transforms, such as the ARA-Sumudu transform, [12, 13], the Laplace-ARA transform, [14], the double ARA transform, [15, 16], and the ARA residual power series method, [17, 18]. In this article, we build a combination of Adomian's decomposition method and the double ARA transform, so as to obtain the advantages of both and fully utilize these two potent techniques. The conformable double ARA transform method will be introduced in this research in combination with Adomian's decomposition method, [19], to solve systems of conformable fractional partial differential equations. With the help of the conformable double ARA decomposition method (CDARADM), this study aims to provide analytical solutions for the coupled, one-dimensional, singular and regular conformable fractional Burger's equations.
The following space-time fractional-order coupled Burger's equations were described in [20], and are given below: \[\begin{split}\frac{\partial^{q}u}{\partial\,t^{q}}-\frac{\partial ^{2p}u}{\partial\,x^{2p}}+\lambda&\,u\,\frac{\partial^{p}u}{ \partial\,x^{p}}+\,\alpha\,\,\frac{\partial^{p}}{\partial\,x^{p}}(u\nu)\\ &=k\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right),\\ \frac{\partial^{q}v}{\partial\,t^{q}}-\frac{\partial^{2p}v}{ \partial\,x^{2p}}+\lambda&\,v\,\frac{\partial^{p}v}{\partial\,x^ {p}}+\,\beta\,\,\frac{\partial^{p}}{\partial\,x^{p}}(u\nu)\\ &=\,l\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right).\end{split} \tag{1}\] This article is organized as follows. In the next section, we present the ARA transform with its main characteristics. In Section 3, we introduce some preliminaries about conformable fractional derivatives. The conformable ARA transform and some related results are presented in Section 4. In Section 5, we introduce some numerical experiments to prove the efficiency and applicability of the new method.
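The conformable operators in Eq. (1) act on the variables \(x^{p}/p\) and \(t^{q}/q\). As an illustrative sketch (assuming the standard conformable derivative of Khalil et al., \(\frac{\partial^{p}f}{\partial x^{p}}=x^{1-p}\frac{\partial f}{\partial x}\), which this excerpt does not restate), a symbolic computation confirms its chain-rule-like behavior:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p, lam = sp.symbols('p lambda', positive=True)

def conf_d(f, var, order):
    """Conformable derivative of order 0 < order <= 1 (Khalil et al.):
    D^order f = var^(1 - order) * df/dvar."""
    return var ** (1 - order) * sp.diff(f, var)

# The conformable derivative of x^p/p is 1, mirroring d/dx of x:
assert sp.simplify(conf_d(x**p / p, x, p)) == 1

# e^{lam x^p / p} is an eigenfunction with eigenvalue lam, the behavior
# exploited for exponential solutions of systems like Eq. (1):
f = sp.exp(lam * x**p / p)
assert sp.simplify(conf_d(f, x, p) - lam * f) == 0
```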
## 2 ARA Integral Transforms, [11]

**Definition 1.** If \(h(x)\) is a continuous function on \((0,\infty)\), then the ARA transform of order \(n\) is
\[\mathcal{G}_{n}[h(x)](r)=Q(n,r)=r\int_{0}^{\infty}x^{n-1}e^{-rx}h(x)\,dx,\qquad r>0, \tag{2}\]
and the inverse ARA transform of order one is defined, via the duality with the Laplace transform, by
\[\mathcal{G}_{1}^{-1}[Q(1,r)](x)=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}e^{rx}\,\frac{Q(1,r)}{r}\,dr.\]
Now, we present some properties of the ARA transform.
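Before listing the properties, Definition 1 can be sanity-checked symbolically. A sympy sketch (the test functions \(e^{-ax}\) and \(1\) are illustrative) confirms that the order-one transform is \(r\) times the Laplace transform, the duality mentioned above:

```python
import sympy as sp

x, r, a = sp.symbols("x r a", positive=True)

def ara(h, n):
    """ARA transform of order n: G_n[h](r) = r * Integral(x**(n-1) * exp(-r*x) * h, x=0..oo)."""
    return sp.simplify(r * sp.integrate(x**(n - 1) * sp.exp(-r * x) * h, (x, 0, sp.oo)))

# Order 1 on exp(-a*x): r times the Laplace transform 1/(r+a).
G1 = ara(sp.exp(-a * x), 1)
# Order 2 on the constant 1: r * Gamma(2)/r**2 = 1/r.
G2 = ara(sp.S(1), 2)
```

Here `G1` simplifies to \(r/(r+a)\) and `G2` to \(1/r\), matching the closed forms obtained from Definition 1.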
If \(H(n,r)=\mathcal{G}_{n}[h(x)]\) and \(G(n,r)=\mathcal{G}_{n}[g(x)]\) and \(a,b\in\mathbb{R}\), then the transform is linear:
\[\mathcal{G}_{n}[a\,h(x)+b\,g(x)]=a\,\mathcal{G}_{n}[h(x)]+b\,\mathcal{G}_{n}[g(x)]=a\,H(n,r)+b\,G(n,r).\]

## 3 Conformable Fractional Derivatives

v. Let \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\). Then
\[\frac{\partial^{p}}{\partial x^{p}}\left(e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\right)=\lambda e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}},\qquad \frac{\partial^{q}}{\partial t^{q}}\left(e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\right)=\beta e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}.\]

**Property 1.** Let \(h\) be a conformably differentiable function of orders \(p\) and \(q\) at the points \(x\) and \(t>0\), where \(0<p,q\leq 1\).
Then
\[\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=x^{1-p}\,\frac{\partial}{\partial x}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right),\qquad \frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=t^{1-q}\,\frac{\partial}{\partial t}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right).\]

**Proof.** Using Definition 3 and putting \(k=\delta\,x^{1-p}\) in Equation (12), we get
\[\begin{split}\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)&=\lim_{\delta\to 0}\frac{h\left(\frac{x^{p}}{p}+\delta x^{1-p},\frac{t^{q}}{q}\right)-h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)}{\delta}\\&=\lim_{k\to 0}\frac{h\left(\frac{x^{p}}{p}+k,\frac{t^{q}}{q}\right)-h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)}{k\,x^{p-1}}\\&=x^{1-p}\lim_{k\to 0}\frac{h\left(\frac{x^{p}}{p}+k,\frac{t^{q}}{q}\right)-h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)}{k}\\&=x^{1-p}\frac{\partial}{\partial x}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right).\end{split}\]
Similarly, we can easily prove that
\[\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=t^{1-q}\,\frac{\partial}{\partial t}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right).\]

## 4 Conformable Double ARA Transform (CDARAT)

In this part of the study, we present the conformable double ARA transform using the following definitions.

**Definition 5.** Assume that \(h\) is a real-valued function defined on \([0,\infty)\); then the conformable ARA transform of \(h\left(\frac{x^{p}}{p}\right)\) is given by
\[\mathcal{G}_{x}^{p}\left[h\left(\frac{x^{p}}{p}\right)\right](r)=r\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}\,h\left(\frac{x^{p}}{p}\right)x^{p-1}\,dx.\]

**Definition 6.** The conformable double ARA transform of \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) is
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=G_{p,q}(r,s)=rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}-s\frac{t^{q}}{q}}\,h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}t^{q-1}\,dx\,dt.\]

iii. Let \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\).
Then
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\right]=rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}-s\frac{t^{q}}{q}}\,e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\,x^{p-1}t^{q-1}\,dx\,dt.\]
From Property 2 and Equation (7), we get
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[e^{\lambda\frac{x^{p}}{p}+\beta\frac{t^{q}}{q}}\right]=\mathcal{G}_{x}\left[e^{\lambda x}\right]\mathcal{G}_{t}\left[e^{\beta t}\right]=\frac{rs}{(r-\lambda)(s-\beta)}.\]

iv. Let \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=\sin\lambda\left(\frac{x^{p}}{p}\right)\sin\beta\left(\frac{t^{q}}{q}\right)\). Then
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\sin\lambda\left(\frac{x^{p}}{p}\right)\sin\beta\left(\frac{t^{q}}{q}\right)\right]=rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}-s\frac{t^{q}}{q}}\sin\lambda\left(\frac{x^{p}}{p}\right)\sin\beta\left(\frac{t^{q}}{q}\right)x^{p-1}t^{q-1}\,dx\,dt.\]
From Property 2 and Equation (8), we get
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\sin\lambda\left(\frac{x^{p}}{p}\right)\sin\beta\left(\frac{t^{q}}{q}\right)\right]=\mathcal{G}_{x}\left[\sin\lambda x\right]\mathcal{G}_{t}\left[\sin\beta t\right]=\frac{rs\,\lambda\beta}{(r^{2}+\lambda^{2})(s^{2}+\beta^{2})}.\]

### Existence Condition for the Conformable Double ARA Transform

The function \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) is said to be of exponential orders \(a\) and \(b\) as \(\frac{x^{p}}{p}\rightarrow\infty\), \(\frac{t^{q}}{q}\rightarrow\infty\) if there exists a constant \(K>0\) such that, for all \(x>X\) and \(t>T\),
\[\left|h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right|\leq K\,e^{a\frac{x^{p}}{p}+b\frac{t^{q}}{q}},\]
that is, \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=O\left(e^{a\frac{x^{p}}{p}+b\frac{t^{q}}{q}}\right)\) as \(\frac{x^{p}}{p}\rightarrow\infty\), \(\frac{t^{q}}{q}\rightarrow\infty\).
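The exponential pair above can be checked in the reduced \(p=q=1\) setting (the conformable case follows by the substitutions \(u=x^{p}/p\), \(w=t^{q}/q\)). A sympy sketch with concrete decay rates \(\lambda=-1\), \(\beta=-2\), chosen only to keep the improper integrals convergent:

```python
import sympy as sp

x, t, r, s = sp.symbols("x t r s", positive=True)
lam, beta = -1, -2  # illustrative decay rates; any lam < r, beta < s would do

# Double ARA transform at p = q = 1: r*s * iterated Laplace-type integral.
h = sp.exp(lam * x + beta * t)
inner = sp.integrate(sp.exp(-r * x - s * t) * h, (x, 0, sp.oo))
double_ara = sp.simplify(r * s * sp.integrate(inner, (t, 0, sp.oo)))

# Closed form claimed above: rs / ((r - lam)(s - beta)).
formula = r * s / ((r - lam) * (s - beta))
```

`double_ara` and `formula` agree, confirming the product structure \(\mathcal{G}_{x}[e^{\lambda x}]\mathcal{G}_{t}[e^{\beta t}]\).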
**Theorem 2.** Let the function \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) be continuous on the region \((0,X)\times(0,T)\) and of exponential orders \(\gamma\) and \(\tau\). Then the conformable double ARA transform of \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) exists for all \(\mathrm{Re}(r)>\gamma\), \(\mathrm{Re}(s)>\tau\).

**Proof.** Using the definition of the CDARAT of \(h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\), we have
\[\begin{split}|G_{p,q}(r,s)|&=\left|rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}-s\frac{t^{q}}{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}t^{q-1}\,dx\,dt\right|\\&\leq rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}-s\frac{t^{q}}{q}}\left|h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right|x^{p-1}t^{q-1}\,dx\,dt\\&\leq rsK\int_{0}^{\infty}\int_{0}^{\infty}e^{-(r-\gamma)\frac{x^{p}}{p}-(s-\tau)\frac{t^{q}}{q}}x^{p-1}t^{q-1}\,dx\,dt\\&=\frac{rsK}{(r-\gamma)(s-\tau)},\end{split}\]
for \(\mathrm{Re}(r)>\gamma\), \(\mathrm{Re}(s)>\tau\).
**Theorem 3.** Let \(G_{p,q}(r,s)=\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\). Then

i. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-r\frac{\partial}{\partial r}\left(\frac{1}{r}G_{p,q}(r,s)\right)\).

ii. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{t^{q}}{q}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-s\frac{\partial}{\partial s}\left(\frac{1}{s}G_{p,q}(r,s)\right)\).

iii. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\left(\frac{x^{p}}{p}\right)^{2}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial r^{2}}+\frac{2}{r^{2}}G_{p,q}(r,s)-\frac{2}{r}\frac{\partial G_{p,q}(r,s)}{\partial r}\).

iv. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\left(\frac{t^{q}}{q}\right)^{2}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial s^{2}}+\frac{2}{s^{2}}G_{p,q}(r,s)-\frac{2}{s}\frac{\partial G_{p,q}(r,s)}{\partial s}\).

v. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}\frac{t^{q}}{q}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial r\,\partial s}+\frac{1}{rs}G_{p,q}(r,s)-\frac{1}{s}\frac{\partial G_{p,q}(r,s)}{\partial r}-\frac{1}{r}\frac{\partial G_{p,q}(r,s)}{\partial s}\).

**Proof of (i).** Differentiating the double transform with respect to \(r\) gives
\[\frac{\partial G_{p,q}(r,s)}{\partial r}=s\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}t^{q-1}\left(\int_{0}^{\infty}\left(1-r\frac{x^{p}}{p}\right)e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx\right)dt, \tag{19}\]
so that
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-\frac{\partial G_{p,q}(r,s)}{\partial r}+\frac{1}{r}G_{p,q}(r,s)=-r\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right). \tag{20}\]

**Proof of (iii).**
Differentiating both sides of Equation (19) with respect to \(r\), we have
\[\begin{split}\frac{\partial^{2}G_{p,q}(r,s)}{\partial r^{2}}&=s\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}t^{q-1}\,dt\int_{0}^{\infty}\frac{\partial}{\partial r}\left[\left(1-r\frac{x^{p}}{p}\right)e^{-r\frac{x^{p}}{p}}\right]h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx\\&=s\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}t^{q-1}\,dt\int_{0}^{\infty}\left(r\left(\frac{x^{p}}{p}\right)^{2}-2\frac{x^{p}}{p}\right)e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx.\end{split}\]
Thus,
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\left(\frac{x^{p}}{p}\right)^{2}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial r^{2}}+\frac{2}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]. \tag{21}\]
From Equation (20), we have
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\left(\frac{x^{p}}{p}\right)^{2}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial r^{2}}+\frac{2}{r^{2}}G_{p,q}(r,s)-\frac{2}{r}\frac{\partial G_{p,q}(r,s)}{\partial r}.\]
Similarly, we can easily prove that
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{t^{q}}{q}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-s\frac{\partial}{\partial s}\left(\frac{1}{s}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right),\]
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\left(\frac{t^{q}}{q}\right)^{2}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial s^{2}}+\frac{2}{s^{2}}G_{p,q}(r,s)-\frac{2}{s}\frac{\partial G_{p,q}(r,s)}{\partial s}.\]

**Proof of (v).**
Differentiating both sides of Equation (19) with respect to \(s\), we have
\[\frac{\partial^{2}G_{p,q}(r,s)}{\partial r\,\partial s}=\int_{0}^{\infty}\left(1-s\frac{t^{q}}{q}\right)e^{-s\frac{t^{q}}{q}}\left(\int_{0}^{\infty}\left(1-r\frac{x^{p}}{p}\right)e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx\right)t^{q-1}\,dt.\]
Therefore,
\[\begin{split}\frac{\partial^{2}G_{p,q}(r,s)}{\partial r\,\partial s}&=\int_{0}^{\infty}s\frac{t^{q}}{q}e^{-s\frac{t^{q}}{q}}\left(\int_{0}^{\infty}r\frac{x^{p}}{p}e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}dx\right)t^{q-1}dt\\&\quad-\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}\left(\int_{0}^{\infty}r\frac{x^{p}}{p}e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}dx\right)t^{q-1}dt\\&\quad-\int_{0}^{\infty}s\frac{t^{q}}{q}e^{-s\frac{t^{q}}{q}}\left(\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}dx\right)t^{q-1}dt\\&\quad+\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}\left(\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}dx\right)t^{q-1}dt.\end{split}\]
From (i) and (ii), we have
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}\frac{t^{q}}{q}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=\frac{\partial^{2}G_{p,q}(r,s)}{\partial r\,\partial s}+\frac{1}{rs}G_{p,q}(r,s)-\frac{1}{s}\frac{\partial G_{p,q}(r,s)}{\partial r}-\frac{1}{r}\frac{\partial G_{p,q}(r,s)}{\partial s}.\]
The proofs of (ii) and (iv) follow by arguments similar to those of (i) and (iii).
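Rule (i) of Theorem 3 can be checked in the reduced single-variable, \(p=1\) setting, where the \(s\)-dependence plays no role. A sympy sketch with the illustrative choice \(h=e^{-ax}\):

```python
import sympy as sp

x, r, a = sp.symbols("x r a", positive=True)

def ara1(h):
    """Order-one ARA transform in x."""
    return sp.simplify(r * sp.integrate(sp.exp(-r * x) * h, (x, 0, sp.oo)))

h = sp.exp(-a * x)
lhs = ara1(x * h)                                  # G[x h]
rhs = sp.simplify(-r * sp.diff(ara1(h) / r, r))    # -r d/dr (G[h] / r)
```

Both sides evaluate to \(r/(r+a)^{2}\), mirroring the identity \(\mathcal{G}\left[\frac{x^{p}}{p}h\right]=-r\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}[h]\right)\).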
**Theorem 4.** Let \(G_{p,q}(r,s)=\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\). Then

i. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=r\,G_{p,q}(r,s)-r\,\mathcal{G}_{t}^{q}\left[h\left(0,\frac{t^{q}}{q}\right)\right]\).

ii. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{2p}}{\partial x^{2p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=r^{2}\,G_{p,q}(r,s)-r^{2}\,\mathcal{G}_{t}^{q}\left[h\left(0,\frac{t^{q}}{q}\right)\right]-r\,\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}h\left(0,\frac{t^{q}}{q}\right)\right]\).

iii. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=s\,G_{p,q}(r,s)-s\,\mathcal{G}_{x}^{p}\left[h\left(\frac{x^{p}}{p},0\right)\right]\).

iv. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{2q}}{\partial t^{2q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=s^{2}\,G_{p,q}(r,s)-s^{2}\,\mathcal{G}_{x}^{p}\left[h\left(\frac{x^{p}}{p},0\right)\right]-s\,\mathcal{G}_{x}^{p}\left[\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},0\right)\right]\).

**Proof of (i).**
Using the definition of the CDARAT for \(\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\), we have
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}-s\frac{t^{q}}{q}}\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}t^{q-1}\,dx\,dt=s\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}t^{q-1}\left(r\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx\right)dt. \tag{22}\]
Applying Property 1, \(\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=x^{1-p}\frac{\partial}{\partial x}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\), Equation (22) becomes
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=s\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}t^{q-1}\left(r\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}\frac{\partial}{\partial x}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)dx\right)dt. \tag{23}\]
Integrating by parts, the inner integral is
\[r\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}\frac{\partial}{\partial x}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)dx=r\left(-h\left(0,\frac{t^{q}}{q}\right)+r\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx\right). \tag{24}\]
Substituting Equation (24) into Equation (23), we obtain
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=r\,G_{p,q}(r,s)-r\,\mathcal{G}_{t}^{q}\left[h\left(0,\frac{t^{q}}{q}\right)\right]. \tag{25}\]
In the same manner, the CDARAT of \(\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\), \(\frac{\partial^{2p}}{\partial x^{2p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) and
\(\frac{\partial^{2q}}{\partial t^{2q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) can be obtained.

**Theorem 5.** Let \(G_{p,q}(r,s)=\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\). Then

i. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-rs\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)+rs\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\left[h\left(\frac{x^{p}}{p},0\right)\right]\right)\).

ii. \(\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{t^{q}}{q}\frac{\partial^{p}}{\partial x^{p}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-rs\frac{\partial}{\partial s}\left(\frac{1}{s}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)+rs\frac{d}{ds}\left(\frac{1}{s}\mathcal{G}_{t}^{q}\left[h\left(0,\frac{t^{q}}{q}\right)\right]\right)\).

**Proof of (i).** Differentiating the conformable double ARA transform of \(\frac{\partial^{q}}{\partial t^{q}}h\) with respect to \(r\) gives
\[\frac{\partial}{\partial r}\left[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left(\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right)\right]=s\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}t^{q-1}\left(\int_{0}^{\infty}\frac{\partial}{\partial r}\left[r\,e^{-r\frac{x^{p}}{p}}\right]\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}\,dx\right)dt, \tag{26}\]
and we calculate the inner \(r\)-derivative as
\[\int_{0}^{\infty}\frac{\partial}{\partial r}\left(r\,e^{-r\frac{x^{p}}{p}}\right)x^{p-1}\,dx=\int_{0}^{\infty}e^{-r\frac{x^{p}}{p}}x^{p-1}\,dx-r\int_{0}^{\infty}\frac{x^{p}}{p}\,e^{-r\frac{x^{p}}{p}}x^{p-1}\,dx. \tag{27}\]
Substituting Equation (27) into Equation (26), we get
\[\begin{split}\frac{\partial}{\partial r}\left[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left(\frac{\partial^{q}}{\partial t^{q}}h\right)\right]&=s\int_{0}^{\infty}\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}e^{-r\frac{x^{p}}{p}}\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}t^{q-1}\,dx\,dt\\&\quad-rs\int_{0}^{\infty}\int_{0}^{\infty}e^{-s\frac{t^{q}}{q}}e^{-r\frac{x^{p}}{p}}\frac{x^{p}}{p}\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)x^{p-1}t^{q-1}\,dx\,dt.\end{split}\]
Thus,
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-\frac{\partial}{\partial r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]+\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right].\]
Using Theorem 4, we obtain
\[\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{x^{p}}{p}\frac{\partial^{q}}{\partial t^{q}}h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]=-rs\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[h\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)+rs\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\left[h\left(\frac{x^{p}}{p},0\right)\right]\right).\]
The proof of (ii) is similar.

## 5 Applications

In this section, the CDARADM is used to solve regular and singular one-dimensional conformable fractional coupled Burgers' equations. When \(p=1\) and \(q=1\), the target problem coincides with the problem examined in [1], which we recall here.

**Example 1.** Consider the one-dimensional conformable fractional coupled Burgers' equation of the form
\[\begin{split}\frac{\partial^{q}u}{\partial t^{q}}-\frac{\partial^{2p}u}{\partial x^{2p}}+\lambda\,u\frac{\partial^{p}u}{\partial x^{p}}+\alpha\,\frac{\partial^{p}}{\partial x^{p}}(uv)&=k\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right),\\ \frac{\partial^{q}v}{\partial t^{q}}-\frac{\partial^{2p}v}{\partial x^{2p}}+\lambda\,v\frac{\partial^{p}v}{\partial x^{p}}+\beta\,\frac{\partial^{p}}{\partial x^{p}}(uv)&=l\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right),\end{split} \tag{28}\]
subject to
\[u\left(\frac{x^{p}}{p},0\right)=k_{1}\left(\frac{x^{p}}{p}\right),\qquad v\left(\frac{x^{p}}{p},0\right)=l_{1}\left(\frac{x^{p}}{p}\right), \tag{29}\]
for \(t>0\). Here, \(k\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\), \(l\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\), \(k_{1}\left(\frac{x^{p}}{p}\right)\) and \(l_{1}\left(\frac{x^{p}}{p}\right)\) are given functions, and \(\lambda\), \(\alpha\), \(\beta\) are arbitrary parameters depending on the Peclet number, the Stokes velocity of particles due to gravity and the Brownian diffusivity; see [9].
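The nonlinear terms \(uu_{x}\), \(vv_{x}\) and \(uv\) in (28) are handled below through Adomian polynomials. A small sympy sketch (the component names \(u_{0},u_{1},\dots\) are illustrative symbols) generates the Cauchy-product polynomials for \(N(u)=uu_{x}\) and cross-checks the first of them:

```python
import sympy as sp

x = sp.symbols("x")
# Illustrative solution components u_0(x), u_1(x), ...
u = [sp.Function(f"u{n}")(x) for n in range(5)]

def A(n):
    """n-th Adomian polynomial of N(u) = u * u_x in Cauchy-product form."""
    return sum(u[k] * sp.diff(u[n - k], x) for k in range(n + 1))

# Cross-check against the listed expansions, e.g. A_1 = u0*u1x + u1*u0x.
expected_A1 = u[0] * sp.diff(u[1], x) + u[1] * sp.diff(u[0], x)
ok = sp.simplify(A(1) - expected_A1) == 0
```

The polynomials \(B_{n}\) and \(C_{n}\) are obtained in the same way by replacing \(uu_{x}\) with \(vv_{x}\) and \(uv\), respectively.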
Now, applying the conformable double ARA transform to Equation (28) and the single conformable ARA transform to Equation (29), we get
\[U(r,s)=K_{1}(r)+\frac{K(r,s)}{s}+\frac{1}{s}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{2p}u}{\partial x^{2p}}-\lambda\,u\frac{\partial^{p}u}{\partial x^{p}}-\alpha\,\frac{\partial^{p}}{\partial x^{p}}(uv)\right], \tag{30}\]
\[V(r,s)=L_{1}(r)+\frac{L(r,s)}{s}+\frac{1}{s}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{2p}v}{\partial x^{2p}}-\lambda\,v\frac{\partial^{p}v}{\partial x^{p}}-\beta\,\frac{\partial^{p}}{\partial x^{p}}(uv)\right]. \tag{31}\]
The CDARADM represents the solutions \(u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) and \(v\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) of the target problem as infinite series
\[u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=\sum_{n=0}^{\infty}u_{n}\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right),\qquad v\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=\sum_{n=0}^{\infty}v_{n}\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right). \tag{32}\]
Define the Adomian polynomials \(A_{n}\), \(B_{n}\) and \(C_{n}\) through
\[uu_{x}=\sum_{n=0}^{\infty}A_{n},\qquad vv_{x}=\sum_{n=0}^{\infty}B_{n},\qquad uv=\sum_{n=0}^{\infty}C_{n}. \tag{33}\]
We can compute the Adomian polynomials of the nonlinear terms \(uu_{x}\), \(vv_{x}\) and \(uv\) by the formulas
\[\begin{array}{l}A_{0}=u_{0}u_{0x},\\ A_{1}=u_{0}u_{1x}+u_{1}u_{0x},\\ A_{2}=u_{0}u_{2x}+u_{1}u_{1x}+u_{2}u_{0x},\\ A_{3}=u_{0}u_{3x}+u_{1}u_{2x}+u_{2}u_{1x}+u_{3}u_{0x},\\ A_{4}=u_{0}u_{4x}+u_{1}u_{3x}+u_{2}u_{2x}+u_{3}u_{1x}+u_{4}u_{0x},\\ \vdots\end{array}\]
\[\begin{array}{l}B_{0}=v_{0}v_{0x},\\ B_{1}=v_{0}v_{1x}+v_{1}v_{0x},\\ B_{2}=v_{0}v_{2x}+v_{1}v_{1x}+v_{2}v_{0x},\\ B_{3}=v_{0}v_{3x}+v_{1}v_{2x}+v_{2}v_{1x}+v_{3}v_{0x},\\
B_{4}=v_{0}v_{4x}+v_{1}v_{3x}+v_{2}v_{2x}+v_{3}v_{1x}+v_{4}v_{0x},\\ \vdots\end{array}\]
\[\begin{array}{l}C_{0}=u_{0}v_{0},\\ C_{1}=u_{0}v_{1}+u_{1}v_{0},\\ C_{2}=u_{0}v_{2}+u_{1}v_{1}+u_{2}v_{0},\\ C_{3}=u_{0}v_{3}+u_{1}v_{2}+u_{2}v_{1}+u_{3}v_{0},\\ C_{4}=u_{0}v_{4}+u_{1}v_{3}+u_{2}v_{2}+u_{3}v_{1}+u_{4}v_{0},\\ \vdots\end{array}\]
Operating the inverse double ARA transform on Equation (30) and Equation (31), utilizing Equation (33), we get the recursive relations
\[u_{0}\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=k_{1}(x)+\mathcal{G}_{r}^{-1}\mathcal{G}_{s}^{-1}\left[\frac{K(r,s)}{s}\right],\qquad v_{0}\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=l_{1}(x)+\mathcal{G}_{r}^{-1}\mathcal{G}_{s}^{-1}\left[\frac{L(r,s)}{s}\right],\]
and, for \(n\geq 0\),
\[u_{n+1}=\mathcal{G}_{r}^{-1}\mathcal{G}_{s}^{-1}\left[\frac{1}{s}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{2p}u_{n}}{\partial x^{2p}}-\lambda A_{n}-\alpha C_{n}\right]\right],\qquad v_{n+1}=\mathcal{G}_{r}^{-1}\mathcal{G}_{s}^{-1}\left[\frac{1}{s}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{2p}v_{n}}{\partial x^{2p}}-\lambda B_{n}-\beta C_{n}\right]\right].\]
For the data of Example 1, the successive components are
\[u_{0}=v_{0}=\sin\left(\frac{x^{p}}{p}\right),\quad u_{1}=v_{1}=-\left(\frac{t^{q}}{q}\right)\sin\left(\frac{x^{p}}{p}\right),\quad u_{2}=v_{2}=\frac{\left(\frac{t^{q}}{q}\right)^{2}}{2!}\sin\left(\frac{x^{p}}{p}\right),\ \ldots\]
Hence
\[u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=u_{0}+u_{1}+u_{2}+\cdots=\left(1-\left(\frac{t^{q}}{q}\right)+\frac{\left(\frac{t^{q}}{q}\right)^{2}}{2!}-\frac{\left(\frac{t^{q}}{q}\right)^{3}}{3!}+\cdots\right)\sin\left(\frac{x^{p}}{p}\right),\]
\[v\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=v_{0}+v_{1}+v_{2}+\cdots=\left(1-\left(\frac{t^{q}}{q}\right)+\frac{\left(\frac{t^{q}}{q}\right)^{2}}{2!}-\frac{\left(\frac{t^{q}}{q}\right)^{3}}{3!}+\cdots\right)\sin\left(\frac{x^{p}}{p}\right),\]
and hence the exact solutions become
\[u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=e^{-\frac{t^{q}}{q}}\sin\left(\frac{x^{p}}{p}\right),\qquad v\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=e^{-\frac{t^{q}}{q}}\sin\left(\frac{x^{p}}{p}\right).\]
By taking \(p=1\) and \(q=1\), the fractional solution of Equation (39) reduces to the classical one,
\[u(x,t)=e^{-t}\sin x,\qquad v(x,t)=e^{-t}\sin x.\]
The behavior of the velocity field of system (28)-(29) is depicted in Figure 1 for (a) the approximate and exact solutions of
\(u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) for Example 1, when \(p=q\), at \(p=0.8,0.9,1\), and (b) the approximate and exact solutions of \(u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) for Example 1 for various values of the fractional order \(q\) (\(q=0.8,0.9,1\)) with \(p=1\).
\[\begin{split}\frac{\partial}{\partial r}\left(\frac{1}{r}U(r,s)\right)&=\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}[k_{1}(x)]\right)-\frac{1}{rs}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}\left(\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}u\right)-\lambda\,\frac{x^{p}}{p}u\frac{\partial^{p}u}{\partial x^{p}}-\alpha\,\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}(uv)\right]\\&\quad+\frac{1}{s}\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[k\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right),\\ \frac{\partial}{\partial r}\left(\frac{1}{r}V(r,s)\right)&=\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}[l_{1}(x)]\right)-\frac{1}{rs}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}\left(\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}v\right)-\lambda\,\frac{x^{p}}{p}v\frac{\partial^{p}v}{\partial x^{p}}-\beta\,\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}(uv)\right]\\&\quad+\frac{1}{s}\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[l\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right).\end{split} \tag{46}\]
Applying the definite integral \(\int_{0}^{r}(\cdot)\,dr\) to both sides of Equation (46), we get
\[\begin{split}\frac{1}{r}U(r,s)&=\int_{0}^{r}\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}[k_{1}(x)]\right)dr-\frac{1}{s}\int_{0}^{r}\frac{1}{r}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}\left(\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}u\right)-\lambda\,\frac{x^{p}}{p}N_{1}-\alpha\,\frac{x^{p}}{p}N_{2}\right]dr\\&\quad+\frac{1}{s}\int_{0}^{r}\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[k\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)dr,\\ \frac{1}{r}V(r,s)&=\int_{0}^{r}\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}[l_{1}(x)]\right)dr-\frac{1}{s}\int_{0}^{r}\frac{1}{r}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}\left(\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}v\right)-\lambda\,\frac{x^{p}}{p}N_{3}-\beta\,\frac{x^{p}}{p}N_{2}\right]dr\\&\quad+\frac{1}{s}\int_{0}^{r}\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[l\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)dr.\end{split} \tag{47}\]
Multiplying both sides of the equations by \(r\), we get
\[\begin{split}U(r,s)&=r\int_{0}^{r}\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}[k_{1}(x)]\right)dr-\frac{r}{s}\int_{0}^{r}\frac{1}{r}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}\left(\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}u\right)-\lambda\,\frac{x^{p}}{p}N_{1}-\alpha\,\frac{x^{p}}{p}N_{2}\right]dr\\&\quad+\frac{r}{s}\int_{0}^{r}\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[k\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)dr,\\ V(r,s)&=r\int_{0}^{r}\frac{d}{dr}\left(\frac{1}{r}\mathcal{G}_{x}^{p}[l_{1}(x)]\right)dr-\frac{r}{s}\int_{0}^{r}\frac{1}{r}\,\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[\frac{\partial^{p}}{\partial x^{p}}\left(\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}v\right)-\lambda\,\frac{x^{p}}{p}N_{3}-\beta\,\frac{x^{p}}{p}N_{2}\right]dr\\&\quad+\frac{r}{s}\int_{0}^{r}\frac{\partial}{\partial r}\left(\frac{1}{r}\mathcal{G}_{x}^{p}\mathcal{G}_{t}^{q}\left[l\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\right]\right)dr.\end{split}\]
Utilizing the CDARADM, we present the solutions
\(u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) and \(v\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)\) by infinite series as \[\begin{split}& u\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=\sum_{n=0}^{\infty}u_{n}\bigg{(}\frac{x^{p}}{p},\frac{t^{q}}{q}\bigg{)},\\ & v\left(\frac{x^{p}}{p},\frac{t^{q}}{q}\right)=\sum_{n=0}^{\infty}v_{n}\bigg{(}\frac{x^{p}}{p},\frac{t^{q}}{q}\bigg{)}.\end{split} \tag{48}\] Define the nonlinear operators as \[N_{1}=\sum_{n=0}^{\infty}A_{n}\,,\qquad N_{2}=\sum_{n=0}^{\infty}C_{n},\qquad N_{3}=\sum_{n=0}^{\infty}B_{n}. \tag{49}\] Operating the double inverse transform on Equation (47) and making use of Equation (48) and Equation (49), we have \[\begin{split}&\sum_{n=0}^{\infty}u_{n}\bigg{(}\frac{x^{p}}{p},\frac{t^{q}}{q}\bigg{)}\\ &=k_{1}(x)+G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{1}{s}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}k\bigg{(}\frac{x^{p}}{p},\frac{t^{q}}{q}\bigg{)}\bigg{]}\bigg{)}\bigg{]}\\ &\quad-G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{r}{s}\int_{0}^{r}\frac{1}{r}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}\frac{\partial^{p}}{\partial x^{p}}\bigg{(}\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}\bigg{(}\sum_{n=0}^{\infty}u_{n}\bigg{)}\bigg{)}\bigg{]}\bigg{)}\,dr\bigg{]}\\ &\quad+G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{r}{s}\int_{0}^{r}\frac{1}{r}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}\lambda\,\frac{x^{p}}{p}\sum_{n=0}^{\infty}A_{n}\bigg{]}\bigg{)}\,dr\bigg{]}\\ &\quad+G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{r}{s}\int_{0}^{r}\frac{1}{r}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}\alpha\,\frac{x^{p}}{p}\sum_{n=0}^{\infty}C_{n}\bigg{]}\bigg{)}\,dr\bigg{]},\end{split} \tag{50}\] and \[\begin{split}&\sum_{n=0}^{\infty}v_{n}\bigg{(}\frac{x^{p}}{p},\frac{t^{q}}{q}\bigg{)}\\ &=l_{1}(x)+G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{1}{s}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}l\bigg{(}\frac{x^{p}}{p},\frac{t^{q}}{q}\bigg{)}\bigg{]}\bigg{)}\bigg{]}\\ &\quad-G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{r}{s}\int_{0}^{r}\frac{1}{r}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}\frac{\partial^{p}}{\partial x^{p}}\bigg{(}\frac{x^{p}}{p}\frac{\partial^{p}}{\partial x^{p}}\bigg{(}\sum_{n=0}^{\infty}v_{n}\bigg{)}\bigg{)}\bigg{]}\bigg{)}\,dr\bigg{]}\\ &\quad+G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{r}{s}\int_{0}^{r}\frac{1}{r}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}\lambda\,\frac{x^{p}}{p}\sum_{n=0}^{\infty}B_{n}\bigg{]}\bigg{)}\,dr\bigg{]}\\ &\quad+G_{r}^{-1}G_{s}^{-1}\bigg{[}\frac{r}{s}\int_{0}^{r}\frac{1}{r}\bigg{(}G_{x}^{p}G_{t}^{q}\bigg{[}\beta\,\frac{x^{p}}{p}\sum_{n=0}^{\infty}C_{n}\bigg{]}\bigg{)}\,dr\bigg{]}.\end{split} \tag{51}\] Now, we can express the first few components as \[\begin{split}u_{0}&=\left(\frac{x^{p}}{p}\right)^{2}+G_{r}^{-1}G_{s}^{-1}\left[\frac{1}{s}\left(\frac{2}{r^{2}}\left(\frac{s^{2}}{s-1}-s\right)+4s\left(1-\frac{s}{s-1}\right)\right)\right]\\ &=\left(\frac{x^{p}}{p}\right)^{2}+G_{r}^{-1}G_{s}^{-1}\left[\frac{1}{s}\left(\frac{2}{r^{2}}\left(\frac{s^{2}}{s-1}\right)-\frac{4s^{2}}{s-1}-\frac{2s}{r^{2}}+4s\right)\right]\\ &=\left(\frac{x^{p}}{p}\right)^{2}+\left(\frac{x^{p}}{p}\right)^{2}e^{\frac{t}{q}}-4e^{\frac{t}{q}}-\left(\frac{x^{p}}{p}\right)^{2}+4\\ &=\left(\frac{x^{p}}{p}\right)^{2}e^{\frac{t}{q}}-4e^{\frac{t}{q}}+4,\end{split}\] and the remaining components are obtained from Equation (50) and Equation (51) in the same manner. ## 6 Conclusion In the current study, we defined and reviewed some of the characteristics of the conformable double ARA transform. We presented the conformable double ARA decomposition method, a novel approach for the solution of nonlinear conformable partial differential equations. Using the proposed approach, a novel combination of the conformable double ARA transform and the Adomian decomposition method, we presented solutions of the one-dimensional regular and singular conformable fractional coupled Burgers' equations. Additionally, two illustrative examples were given to demonstrate the applicability of the novel approach. Different types of nonlinear time-fractional differential equations with conformable derivatives can be solved using this technique.
In future work, we intend to address further classes of fractional integral equations and nonlinear fractional problems. ### Acknowledgement The authors express their gratitude to the anonymous referees and the editor for their helpful suggestions.
2304.09318
AIRCADE: an Anechoic and IR Convolution-based Auralization Data-compilation Ensemble
In this paper, we introduce a data-compilation ensemble, primarily intended to serve as a resource for researchers in the field of dereverberation, particularly for data-driven approaches. It comprises speech and song samples, together with acoustic guitar sounds, with original annotations pertinent to emotion recognition and Music Information Retrieval (MIR). Moreover, it includes a selection of impulse response (IR) samples with varying Reverberation Time (RT) values, providing a wide range of conditions for evaluation. This data-compilation can be used together with provided Python scripts, for generating auralized data ensembles in different sizes: tiny, small, medium and large. Additionally, the provided metadata annotations also allow for further analysis and investigation of the performance of dereverberation algorithms under different conditions. All data is licensed under Creative Commons Attribution 4.0 International License.
TΓΊlio Chiodi, Arthur dos Santos, Pedro Martins, Bruno Masiero
2023-04-18T22:07:50Z
http://arxiv.org/abs/2304.09318v2
# AIRCADE: an Anechoic and IR Convolution-based Auralization Data-compilation Ensemble ###### Abstract In this paper1, we introduce a data-compilation ensemble, primarily intended to serve as a resource for researchers in the field of dereverberation, particularly for data-driven approaches. It comprises speech and song samples, together with acoustic guitar sounds, with original annotations pertinent to emotion recognition and Music Information Retrieval (MIR). Moreover, it includes a selection of impulse response (IR) samples with varying Reverberation Time (RT) values, providing a wide range of conditions for evaluation. This data-compilation can be used together with the provided Python scripts for generating auralized data ensembles in different sizes: _tiny_, _small_, _medium_ and _large_. Additionally, the provided metadata annotations also allow for further analysis and investigation of the performance of dereverberation algorithms under different conditions. All data is licensed under Creative Commons Attribution 4.0 International License. Footnote 1: This work was partially supported by the São Paulo Research Foundation (FAPESP), grants #2017/08120-6 and #2019/22795-1. Speech Emotion Recognition Song Emotion Recognition Music Information Retrieval Auralization Dereverberation ## 1 Introduction Reverberation is the persistence of sound in a space after the source ceases to emit it. It is mainly caused by reflections of the sound waves off the surfaces within the space and gradually dissipates over time. Since the characteristics of the space in which reverberation occurs can influence the duration and intensity of this phenomenon, the Reverberation Time (RT) is considered an important aspect of room acoustics (Beranek [2004], Lapointe [2010]). In the context of entertainment, reverberation can affect the Human Auditory System (HAS) in positive ways, since some degree of it can help to enhance the perceived loudness and richness of sounds.
This occurs because the multiple reflections can create a sensation of spaciousness and immersion, which can be aesthetically pleasing, particularly in music. On the other hand, in the context of communications, excessive reverberation can affect the quality and intelligibility of speech because the multiple reflections can create a "_smearing_" effect that can make it difficult to distinguish individual sounds and syllables. Moreover, the reduction of the overall Signal-to-Noise Ratio (SNR) of a given sound source can also mask quiet sounds, making it harder to understand them (Gelfand [2017], Lyon [2017]). Therefore, the effects of reverberation on the HAS can vary widely, depending on the duration and intensity of the phenomenon, the frequency content of the sound source, the individual characteristics of the listener's hearing, etc. In general, it is desirable to optimize the amount of reverberation in a given acoustic scenario to ensure the best possible listening experience for the intended audience. In this situation, dereverberation is a process that can be used to reduce or remove the effects of excessive reverberation from an audio signal. This is typically used in situations where the audio signal has already been recorded in a reverberant environment, and the resulting reverberation is unwanted or detrimental to the quality of the recorded audio (Naylor et al. (2010)). In recent years, data-driven dereverberation methods have become increasingly popular due to their ability to learn complex mappings between the anechoic and reverberant signals. While showing promising results, these methods also have a wide range of applicability, including speech recognition, speaker verification, music production, etc (Xu et al. (2014), Hershey et al. (2016)). 
Still, an important challenge for these methods is the need for large amounts of paired clean and reverberant audio signals for training, which, ideally, should cover a wide range of acoustic environments, microphone types, and speaker characteristics, to ensure that a model will generalize well to new scenarios. Examples of dereverberation datasets include the REVERB challenge dataset (Kinoshita et al. (2013)), BUT Speech@FIT Reverb Database (Szoke et al. (2019)), and VoiceBank-SLR (Fu et al. (2022)), among others. However, most of these datasets carry some limitations, such as narrow-band anechoic data, i.e., only speech is considered as a signal of interest, and absence of Impulse Response (IR) data, i.e., usually only anechoic and reverberant data are paired for supervised training. Hence, in this paper, we introduce a new data-compilation ensemble, primarily intended for training data-driven dereverberation models capable of dealing with full bandwidth audio signals, e.g., speech, song, music etc. We offer pairs of natural anechoic and IR data, compiled from datasets licensed under Creative Commons Attribution 4.0 International License, together with Python scripts for convolution-based auralization, under the hypothesis that these ensembles could serve as better training and evaluation tools for such algorithms. The remainder of this paper is organized as follows: Section 2 details the methodology used for selecting the anechoic and IR data, and then synthesizing the auralized data. Section 3 describes the resulting auralized data and their annotations. Finally, Section 4 presents an overall discussion of our obtained results, together with some pertinent considerations to conclude our study.
It involves measurements or simulations of the space's physical properties to mimic its acoustic characteristics. To proceed accordingly, there are various techniques available, such as acoustic ray tracing and convolution-based auralization (Kleiner et al. (1993)). From a signal processing point of view, the convolution is a mathematical operation that describes the interaction between two signals. In the context of auralization, convolution can be used to simulate the effects of an acoustic environment on an audio signal. The basic idea is to convolve an audio signal with an IR that describes the acoustic characteristics of a room or space. The resultant output produces a new signal that represents the original audio signal after it has been modified by the acoustic characteristics of the room (Allen and Berkley (1979), Oppenheim et al. (1997)). In the context of data-driven dereverberation methods, the choice for either natural, synthetic or _joint_ datasets, i.e., those obtained by processing combinations of natural recordings with synthetic sounds, directly impacts _in-the-wild_ applicability. This occurs because, when models are trained on mostly synthetic data, they usually don't generalize well for real-world scenarios. However, the majority of recent studies use joint combinations, e.g., by convolving anechoic data with naturally recorded or synthesized IR data etc, perhaps because it would be too cumbersome to naturally acquire all the necessary data (dos Santos et al. (2022)). Hence our dataset attempts to reach a balance between these features, compiling real anechoic and IR data, and then synthetically producing auralized data ensembles by means of convolution. ### Anechoic data Trying to cover a wide range of full-bandwidth audio signals, we chose to compile signals of interest from three different categories: speech, song, and musical instruments. 
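In code, the convolution-based auralization described in this section reduces to a single FFT-based convolution of a dry signal with an IR. The following is a minimal sketch, not the exact AIRCADE pipeline; the function name and the peak-normalization policy are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def auralize(anechoic: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) mono signal with a room impulse
    response and peak-normalize the result to prevent clipping.
    Both inputs are assumed to share the same sample rate."""
    wet = fftconvolve(anechoic, ir, mode="full")
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy check: a unit impulse "played" through a two-tap room
# reproduces the impulse response (up to numerical precision).
dry = np.array([1.0, 0.0, 0.0])
room = np.array([1.0, 0.5])
wet = auralize(dry, room)
```

Note that full convolution lengthens the signal by `len(ir) - 1` samples, which is why the reverberant tails of long IRs noticeably extend the auralized files.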
Since RAVDESS (Livingstone and Russo (2018)) is a well-known dataset with subsets of emotional speech and song, it was chosen to cover the first two aforementioned categories. Its speech-only portion comprises \(1,440\) samples performed by \(24\) actors (\(12\) male and \(12\) female), vocalizing \(2\) lexically-matched statements ("_kids are talking by the door_" and "_dogs are sitting by the door_") in a "neutral" North American accent. Each expression is pronounced at \(2\) different levels of emotional intensity (normal and strong), with an additional neutral expression, resulting in a total amount of \(60\) trials per actor. Speech emotions include calm, happy, sad, angry, fearful, surprise, and disgust. Its song-only portion is quite similar and comprises \(1,012\) samples performed by \(23\) actors, singing the same \(2\) lexically-matched statements. Song emotions include only neutral, calm, happy, sad, angry, and fearful expressions. The original sample rate is fixed at 48 kHz, and Figures 1 (a) and (b) illustrate histograms with the original duration of files in each RAVDESS subset that we used. Since the acoustic guitar is a popular musical instrument, for a variety of reasons, including its ability to produce polyphonic sound and its musical versatility, we chose to use a subset from GuitarSet (Xi et al. (2018)), referred to as _audio_mono-mic_. It comprises \(360\) samples performed by \(6\) musicians, playing \(30\) twelve to sixteen bar excerpts from lead-sheets in a variety of keys, tempos, and musical genres. Recording was performed using a Neumann U87 condenser microphone, placed at approximately 30 cm in front of the 18th fret of the guitar. The original sample rate is fixed at 44.1 kHz, and Figure 1 (c) illustrates a histogram with the original duration of files in this GuitarSet subset. 
#### 2.1.1 Anechoic data processing Considering the great difference between the duration of signals in RAVDESS and GuitarSet, we chose to split the samples in GuitarSet into segments of smaller duration, fixed at 5 s, resulting in \(2,004\) different samples. Another reason behind this decision is that, when choosing the length of anechoic data, it is important to strike a balance between the computational cost of the convolution operation and the length of the segments. If the segments are too short, the resulting audio signals may not capture the full extent of the room's acoustics, and if they are too long, the convolution operation may become too computationally intensive. Moreover, since the sample rates of RAVDESS and GuitarSet are different, we also chose to up-sample the GuitarSet segments to 48 kHz, thus standardizing this value for all files in our dataset. ### IR data IR data was curated from the Open Acoustic Impulse Response (Open AIR) Library, which is an online database of Acoustic Impulse Response (AIR) data. Since the original metadata in this database provides information about space category, IR duration, etc., the samples were chosen in order to have a balance between a selected variety of open and enclosed spaces, with IRs in the range of a few milliseconds up to 5 s. Figure 1 (d) illustrates a histogram with the original duration of the selected IR data. Altogether, \(65\) IRs were chosen. Figure 1: Original duration of files in the (a) speech, (b) song, (c) guitar, and (d) IR subsets of our compilation. #### 2.2.1 IR data processing The selected IR data was found in a variety of formats, e.g., B-format, MS Stereo, mono, etc., and at sample rates varying from 44.1 kHz to 96 kHz. Since all the anechoic data was compiled in mono format at 48 kHz, the selected IR data was first converted to mono, then normalized to prevent files from clipping, and finally, each IR was either down- or up-sampled to 48 kHz.
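The segmentation and resampling steps just described can be sketched as follows. This is a simplified illustration under our own assumptions (function names, segment policy for trailing remainders); the actual AIRCADE scripts live in the project's repository and may differ:

```python
import numpy as np
from scipy.signal import resample_poly

TARGET_SR = 48_000       # common sample rate for all files
SEGMENT_SECONDS = 5      # fixed segment duration

def to_target_sr(x: np.ndarray, sr: int) -> np.ndarray:
    """Polyphase-resample a mono signal to the common 48 kHz rate."""
    if sr == TARGET_SR:
        return x
    g = np.gcd(sr, TARGET_SR)
    return resample_poly(x, TARGET_SR // g, sr // g)

def split_segments(x: np.ndarray, sr: int = TARGET_SR) -> list:
    """Split a signal into non-overlapping 5 s segments,
    dropping a trailing remainder shorter than 5 s."""
    n = SEGMENT_SECONDS * sr
    return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]

# A 12 s recording at 44.1 kHz yields two full 5 s segments at 48 kHz.
sig = np.zeros(12 * 44_100)
segments = split_segments(to_target_sr(sig, 44_100))
print(len(segments), len(segments[0]))  # → 2 240000
```

`resample_poly` with the reduced ratio 160/147 performs the exact 44.1 kHz to 48 kHz conversion with built-in anti-aliasing filtering.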
## 3 Results The processed anechoic and IR data is hosted at Zenodo, with an approximate total file size of 1.3 GB. For simplicity, all samples in our data-compilation were renamed, e.g., \({\it guitar\_0000}\), \({\it rir\_0000}\), \({\it song\_0000}\), \({\it speech\_0000}\), and so on. To synthesize different versions of the auralized data ensemble, the reader is referred to GitHub, where Python scripts are available for downloading the base data-compilation and synthesizing a chosen version of the auralized data ensemble. Table 1 illustrates the differences between all versions, detailing the number of song, speech, guitar, IR and auralized samples in each one, together with their respective total file size and duration. ### Anechoic data annotation Since the anechoic data in our compilation comprises speech and song samples with original annotations pertinent to emotion recognition, together with acoustic guitar sounds with original annotations pertinent to Music Information Retrieval (MIR), we provide metadata which can be used to trace back each sample to its original annotations. This is done because we do not intend for this dataset to be limited to dereverberation tasks only, but also to be used for applications such as emotion recognition and MIR in more challenging scenarios, i.e., in the presence of convolutive noise. ### IR data annotation Since the computational effort in dereverberation tasks is highly intertwined with the RT values of IR data, \(RT_{20}\) values were extracted from each IR sample using the ITA Toolbox. Figure 2 illustrates a histogram with the extracted \(RT_{20}\) values from the selected IR data.
\begin{table} \begin{tabular}{c|c|c|c|c} & **Tiny** & **Small** & **Medium** & **Large** \\ \hline **Song samples** & 100 & 500 & 1,012 & 1,012 \\ \hline **Speech samples** & 100 & 500 & 1,012 & 1,440 \\ \hline **Guitar samples** & 100 & 500 & 1,012 & 2,004 \\ \hline **IR samples** & 5 & 9 & 33 & 65 \\ \hline **Auralized samples** & 1,500 & 13,500 & 100,188 & 289,640 \\ \hline **Total duration** & 3.2 h & 30.41 h & 221.77 h & 658.08 h \\ **Total file size** & 1.1 GB & 10.5 GB & 76.6 GB & 227.5 GB \\ \end{tabular} \end{table} Table 1: Number of anechoic, IR and resultant auralized data samples, together with their respective total duration and file size for each ensemble version. Figure 2: Extracted \(RT_{20}\) values from the selected IR data. ## 4 Discussion and Conclusion Overall, the data-compilation ensemble presented in this work provides a diverse and comprehensive set of acoustic scenes for use in dereverberation tasks, as well as some other audio signal processing applications, such as emotion recognition and MIR. By combining different types of signals of interest, including speech, song, and acoustic guitar sounds, with a variety of IRs, we provide a challenging dataset for researchers working on dereverberation and related fields. The dataset is available in different sizes, from a tiny version with limited data, to a large version with almost \(300,000\) samples, allowing users to choose the most suitable version for their specific research needs. The dataset also includes metadata that can be used to trace back each sample to its original annotations, facilitating the use of the dataset for tasks such as emotion recognition and MIR. We hope that this dataset will be useful for researchers working on dereverberation and related fields, and we encourage its use in future research.
We also believe that the diversity and variability of the dataset can facilitate the development of more robust and generalizable algorithms for dereverberation and other audio signal processing tasks.
2307.10319
Frequency-dependent electron power absorption mode transitions in capacitively coupled argon-oxygen plasmas
Phase Resolved Optical Emission Spectroscopy (PROES) measurements combined with 1d3v Particle-in-Cell/Monte Carlo Collision (PIC/MCC) simulations are performed to investigate the excitation dynamics in low-pressure capacitively coupled plasmas (CCPs) in argon-oxygen mixtures. The system used for this study is a geometrically symmetric CCP reactor operated in a fixed mixture gas composition, at fixed pressure and voltage amplitude, with a wide range of driving RF frequencies (2$~$MHz$~\le f \le~15~$MHz). The measured and calculated spatio-temporal distributions of the electron impact excitation rates from the Ar ground state to the Ar$~\rm{2p_1}$ state (with a wavelength of 750.4~nm) show good qualitative agreement. The distributions show significant frequency dependence, which is generally considered to be predictive of transitions in the dominant discharge operating mode. Three frequency ranges can be distinguished, showing distinctly different excitation characteristics: (i) in the low frequency range ($f \le~3~$MHz), excitation is strong at the sheaths and weak in the bulk region; (ii) at intermediate frequencies (3.5$~$MHz$~\le f \le~5~$MHz), the excitation rate in the bulk region is enhanced and shows striation formation; (iii) above 6$~$MHz, excitation in the bulk gradually decreases with increasing frequency. Boltzmann term analysis was performed to quantify the frequency dependent contributions of the Ohmic and ambipolar terms to the electron power absorption.
Aranka Derzsi, Mate Vass, Ranna Masheyeva, Benedek Horvath, Zoltan Donko, Peter Hartmann
2023-07-19T06:40:16Z
http://arxiv.org/abs/2307.10319v1
Frequency-dependent electron power absorption mode transitions in capacitively coupled argon-oxygen plasmas ###### Abstract Phase Resolved Optical Emission Spectroscopy (PROES) measurements combined with 1d3v Particle-in-Cell/Monte Carlo Collision (PIC/MCC) simulations are performed to investigate the excitation dynamics in low-pressure capacitively coupled plasmas (CCPs) in argon-oxygen mixtures. The system used for this study is a geometrically symmetric CCP reactor operated in a fixed mixture gas composition, at fixed pressure and voltage amplitude, with a wide range of driving RF frequencies (2 MHz \(\leq\)\(f\)\(\leq\) 15 MHz). The measured and calculated spatio-temporal distributions of the electron impact excitation rates from the Ar ground state to the Ar 2p\({}_{1}\) state (with a wavelength of 750.4 nm) show good qualitative agreement. The distributions show significant frequency dependence, which is generally considered to be predictive of transitions in the dominant discharge operating mode. Three frequency ranges can be distinguished, showing distinctly different excitation characteristics: (i) in the low frequency range (\(f\)\(\leq\) 3 MHz), excitation is strong at the sheaths and weak in the bulk region; (ii) at intermediate frequencies (3.5 MHz \(\leq\)\(f\)\(\leq\) 5 MHz), the excitation rate in the bulk region is enhanced and shows striation formation; (iii) above 6 MHz, excitation in the bulk gradually decreases with increasing frequency. Boltzmann term analysis was performed to quantify the frequency dependent contributions of the Ohmic and ambipolar terms to the electron power absorption. ## 1 Introduction Capacitively coupled plasmas (CCPs) driven by radio frequency (RF) waveforms have a wide range of applications in the semiconductor industry and are basic tools in biomedical applications [1, 2, 3, 4, 5]. 
The study of processing plasmas is driven by the goal of understanding the complex physical and chemical interactions, as well as by the aim of improving performance and control in such systems [6, 7, 8]. In plasma processing applications, electronegative gases are frequently diluted with electropositive gases. Argon mixed with reactive gases is particularly important as it can enhance the etching process [9]. Gas discharges in argon-oxygen mixtures are often used for sputtering deposition, to grow SiO\({}_{2}\) dielectric films on silicon, to etch photoresist and polymer films, as well as for sterilization of medical instruments and surface activation [10; 11; 12; 13; 14]. Over the years, the plasma properties of low-pressure argon-oxygen mixture CCPs have been studied extensively via modeling and numerical simulations as well as experimentally [15; 16; 17; 18; 19; 20; 21; 22]. A reaction set for modeling argon-oxygen CCPs was introduced in [23] and the role of ionization, resonant and nonresonant charge-exchange collisions in the formation of the ion energy distribution at the electrodes was studied by Particle-in-Cell/Monte Carlo Collisions (PIC/MCC) simulations. PIC/MCC and fluid simulations of argon-oxygen CCPs were performed in [24] to study the effects of the gas pressure and the ratio of argon and oxygen in the mixture on the plasma density, space potential, electron temperature and ion energy distribution. A global (volume averaged) model for argon-oxygen discharges was proposed in [25] to determine which reactions are important in the discharge model. An extensive set of processes was also provided in [26] in the frame of a hybrid Monte Carlo-fluid model.
The mechanisms of Ar metastable generation were investigated in argon-oxygen CCPs by two-dimensional computer simulations in [27], showing that a small amount of oxygen in the mixture decreases the Ar metastable atom density due to quenching by O and O\({}_{2}\) and changes their spatial density profile due to a transition to an electronegative plasma. In dual-frequency argon-oxygen CCPs, the effects of the external discharge parameters, namely the effects of the frequency and power of the low frequency source and the gas pressure on the energy distributions of ions bombarding the electrodes, were studied both experimentally and by simulations [28]. Recently, the use of tailored voltage waveforms in geometrically asymmetric CCPs sustained in argon-oxygen mixtures was investigated computationally with the goal of shaping the energy and angular distributions of electrons incident onto the substrate to address positive surface charging inside nanometer scale high-aspect-ratio features [29]. The characterization of CCPs in terms of operation modes is linked to the electron power absorption in such systems. In low-pressure CCPs, various discharge operation modes can be observed. The most common ones are the \(\alpha\)-mode and the \(\gamma\)-mode [30] in electropositive gases, while the drift-ambipolar (DA) mode [31] and the striation (STR) mode [32; 33; 34; 35; 36] are characteristic of electronegative gases. In the \(\alpha\)-mode, the ionization is caused by energetic electrons accelerated in the vicinity of the edges of the expanding sheaths, resulting in ionization peaks (\(\alpha\)-peaks) at the expanding sheath edges. In the \(\gamma\)-mode, the ionization peaks within the sheaths (\(\gamma\)-peaks) and the ionization is primarily caused by secondary electrons emitted from the electrodes accelerated in the sheaths. In the DA-mode, the ionization is mainly concentrated in the central bulk region and at the collapsing sheath edges.
Here, the ionization is generated by electrons accelerated by the drift electric field in the bulk (caused by the low conductivity of the plasma bulk), and by the ambipolar electric fields at the sheath edges (caused by the strong electron density gradients). In the STR-mode (which develops when both positive and negative ions can react to the fast variation of the RF electric field), the ionization, concentrated within the bulk, exhibits features called "striations". These structures are due to the modulation of the electric field and the electrons' power absorption in the bulk. By varying the external control parameters, transitions between the different operation modes of low-pressure CCPs can be observed. In pure oxygen CCPs, by changing the gas pressure a transition between the \(\alpha\)-mode and the DA-mode was found [37, 38]. Similar mode transitions were found to be induced by changing the gap distance [38, 39], the driving frequency [37, 40], the driving voltage waveform [37, 41, 42, 43, 44, 45], as well as the external magnetic field [46]. By increasing the driving frequency at a constant pressure [40], as well as by increasing the pressure or the electrode gap [38], a transition from a hybrid DA-\(\alpha\) mode to a pure \(\alpha\)-mode was observed. A transition from the STR-mode to \(\gamma\)-mode due to enhanced secondary electron emission was also found in oxygen CCPs [34], as well as a transition between the \(\alpha\)-mode and \(\gamma\)-mode [47]. In argon CCPs, \(\alpha\)-\(\gamma\) mode transitions were found under a wide range of discharge conditions [30, 48, 49]. Here, we study the electron power absorption and excitation dynamics in CCPs operated in mixtures of 70% argon and 30% oxygen (volumetric ratio). 
Phase Resolved Optical Emission Spectroscopy (PROES) measurements combined with PIC/MCC simulations are performed in a geometrically symmetric CCP reactor in a wide frequency range (\(2~{}\mathrm{MHz}\leq f\leq 15~{}\mathrm{MHz}\)) at constant pressure (120 Pa) and peak-to-peak voltage (350 V). PROES [50, 51, 52] is considered to be an experimental tool which can reveal the electron power absorption and discharge operation mode in CCPs. PROES images are frequently used to formulate statements on the operation mode of low-pressure CCP discharges despite the fact that PROES provides information on the spatio-temporal distribution of the electron-impact excitation dynamics from the ground state into the selected excited atomic state in the discharge, while the discharge operation mode is determined by the spatio-temporal distribution of the ionization dynamics. Recently, the applicability of PROES to probe the discharge operation mode was tested in low-pressure CCPs in pure neon [53] and in neon-oxygen mixtures [54], and limitations were revealed, especially around the transition regime between the \(\alpha\)-mode and the \(\gamma\)-mode. The computational investigation of the spatio-temporal dynamics of the electron power absorption is based on the Boltzmann term analysis, a computational diagnostic tool which is capable of providing a complete description of the electron power absorption in CCPs [55, 56]. This method, first proposed in [57] and later revisited in [58], has recently been applied to CCPs under various conditions: in inert gases [55, 56, 59, 60, 61, 62, 63, 64], in electronegative gases [36, 65, 66], in CCPs at atmospheric pressure [67], and in magnetized CCPs as well [46, 68, 69, 70]. The paper is structured in the following way. In section 2, the experimental setup, the simulation method and the argon-oxygen discharge model are described, as well as the Boltzmann term method. The results are discussed in section 3.
The conclusions are drawn in section 4.

## 2 Methods and discharge conditions

### Experimental method

A geometrically symmetric plasma reactor (our "Budapest v.3" cell) is used for the measurements [54]. In the discharge cell, the plane parallel electrodes, made of stainless steel, with identical diameters of 14 cm, are situated within a quartz cylinder. One electrode is driven by an RF voltage, while the other electrode is grounded. The distance between the electrodes is set to \(L=2.5\) cm. The background gas is an argon-oxygen mixture (70% Ar-30% O\({}_{2}\)). The gas pressure is kept constant at \(p=120\) Pa. The driving frequency, \(f\), is varied between 2 MHz and 15 MHz at a constant peak-to-peak voltage of \(V_{\rm pp}=350\) V. PROES measurements [50, 51, 52] are performed by using a fast-gateable ICCD camera (4 QuickE, Stanford Computer Optics). The optical emission from the Ar \({}^{2}\)P\({}_{1}\) excited state (also denoted as 2p\({}_{1}\) in the simplified Paschen notation) with a wavelength of 750.4 nm is measured (by applying an interference filter with a central wavelength of 750 nm and a spectral full width at half maximum of \(\sim\)10 nm), from which the electron impact excitation rate from the ground state into the observed state is calculated as introduced in [52]. A more detailed description of the experimental setup and diagnostics is given in [54].

### Simulation method

The simulations are based on a one dimensional in space and three dimensional in velocity space (1d3v) Particle-in-Cell/Monte Carlo Collisions (PIC/MCC) simulation code [71, 72, 73, 74, 75, 76]. The particles traced in the simulations of the argon-oxygen gas mixture are electrons, Ar\({}^{+}\) ions, fast Ar atoms (Ar\({}^{\rm f}\)), O\({}_{2}^{+}\) ions, O\({}^{-}\) ions and fast O\({}_{2}\) molecules (O\({}_{2}^{\rm f}\)). O\({}_{2}\)(a\({}^{1}\Delta_{\rm g}\)) metastable molecules are also considered in the model as continuum species.
In total, 61 collision processes are included in the argon-oxygen discharge model. For e\({}^{-}+\)Ar collisions, the Hayashi cross section set (comprising 27 collision processes, including 25 excitation channels) [77] is used (available at the LxCat database [78, 79, 80]). For Ar\({}^{+}+\)Ar collisions (including isotropic and backward elastic scattering processes) the cross section data from Phelps [81] are used. The cross sections adopted for the collisions of electrons with O\({}_{2}\) molecules, O\({}_{2}^{+}\) ions and O\({}^{-}\) ions, as well as for the collisions of oxygen species with these targets and O\({}_{2}\)(a\({}^{1}\Delta_{\rm g}\)) metastable molecules (23 collision processes in total) are the same as those introduced in [41] and used previously in simulation studies of CCPs operated in pure oxygen [37, 39, 41, 45, 65, 82] and neon-oxygen mixtures [54]. More details as well as plots of these cross sections can be found in the papers quoted above and are not repeated here. The above collision processes are complemented with "cross processes" between oxygen and argon species (5 processes) and collision processes for fast neutrals (4 processes), shown in table 1. The cross sections of these processes are plotted in figure 1. The collision processes listed in table 1, which are specific to the argon-oxygen discharge model, are discussed below. Process 1 is a nonresonant charge transfer reaction caused by two different atomic states of the projectile: Ar\({}^{+}\)(\({}^{2}\)P\({}_{3/2}\)) and Ar\({}^{+}\)(\({}^{2}\)P\({}_{1/2}\)). These states have different charge exchange cross sections, within the same order of magnitude and with similar trends, according to Flesch _et al._[83]. As the \({}^{2}\)P\({}_{3/2}\) and \({}^{2}\)P\({}_{1/2}\) states are created in a 2:1 ratio in ionization, a correspondingly weighted average of the two cross sections is used.
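As a small illustration of this statistical-weight averaging, the 2:1 combination of the two cross sections can be written as a one-line function; this is only a sketch, and the numerical values below are placeholders, not the data of [83]:

```python
# 2:1 statistical-weight average of two charge-exchange cross sections,
# as used for the Ar+(2P_3/2) and Ar+(2P_1/2) projectile states.
# The sample values are placeholders for illustration only.

def weighted_cross_section(sigma_32, sigma_12):
    """Return the 2:1 weighted average of two cross sections (any units)."""
    return (2.0 * sigma_32 + sigma_12) / 3.0

# Applied point-wise on a common energy grid (placeholder numbers):
s32 = [6.0, 5.5, 5.0]
s12 = [3.0, 2.8, 2.5]
s_avg = [weighted_cross_section(a, b) for a, b in zip(s32, s12)]
print(s_avg)
```

The same weighting is applied at every energy point of the tabulated cross sections.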
Processes 2, 3, and 4 (isotropic elastic scattering between Ar\({}^{+}\) ions and O\({}_{2}\) molecules, between O\({}_{2}^{+}\) ions and Ar atoms, and between O\({}^{-}\) ions and Ar atoms, respectively) are treated with their Langevin cross section: \(\sigma_{\rm L}=\sqrt{\frac{\alpha^{*}\pi e^{2}}{\epsilon_{0}\mu}}\frac{1}{g}\), where \(\alpha^{*}\) is the polarizability of the target, \(\mu\) is the reduced mass, and \(g\) is the relative velocity. The polarizability values are: \(\alpha^{*}\)(O\({}_{2}\)) = 1.562 \(\times\) 10\({}^{-30}\) m\({}^{3}\) and \(\alpha^{*}\)(Ar) = 1.664 \(\times\)10\({}^{-30}\) m\({}^{3}\)[86]. The cross sections of processes 2, 3, and 4 are nearly equal to each other. Due to the lack of data, process 5 (mutual neutralisation of O\({}^{-}\) with Ar\({}^{+}\) ions) is treated with the same cross section as the mutual neutralisation between O\({}^{-}\) and O\({}_{2}^{+}\)[25]. The cross sections for fast neutral collisions in argon-oxygen mixtures (processes 6-9) are calculated based on the pair-potential between the particles, for which the Lennard-Jones type is assumed (similarly to the case of fast neutrals in neon-oxygen mixtures, discussed in detail in [54]). The cross sections of these processes are also nearly equal to each other. The heating of the background gas due to elastic collisions of fast neutrals and ions with thermal atoms/molecules of the background gas, and the heating of the electrodes due to inelastic collisions of plasma particles with the electrodes, are taken into account in the discharge model (see details in [54]). As surface processes, secondary electron emission due to Ar\({}^{+}\) and O\({}_{2}^{+}\) ions, elastic reflection of electrons and surface quenching of O\({}_{2}\)(a\({}^{1}\Delta_{\rm g}\)) metastable molecules are considered in the model by constant surface coefficients. The elastic electron reflection coefficient is set to \(\eta_{\rm e}=0.7\)[87].
The secondary electron emission coefficient is set to 0.1 for Ar\({}^{+}\) ions and 0.01 for O\({}_{2}^{+}\) ions. These values resulted in a good agreement between the experimental and simulation results in the wide parameter regime covered in this study.

\begin{table}
\begin{tabular}{l l l l}
\hline
\# & Reaction & Process & References \\
\hline
1 & Ar\({}^{+}\) + O\({}_{2}\)\(\longrightarrow\) Ar + O\({}_{2}^{+}\) & Charge transfer & [83] \\
2 & Ar\({}^{+}\) + O\({}_{2}\)\(\longrightarrow\) Ar\({}^{+}\) + O\({}_{2}\) & Isotropic elastic scattering & [84] \\
3 & O\({}_{2}^{+}\) + Ar \(\longrightarrow\) O\({}_{2}^{+}\) + Ar & Isotropic elastic scattering & [84] \\
4 & O\({}^{-}\) + Ar \(\longrightarrow\) O\({}^{-}\) + Ar & Isotropic elastic scattering & [84] \\
5 & O\({}^{-}\) + Ar\({}^{+}\)\(\longrightarrow\) O + Ar & Mutual neutralisation & [85] \\
\hline
6 & Ar\({}^{\rm f}\) + Ar \(\longrightarrow\) Ar\({}^{\rm f}\) + Ar & Isotropic elastic scattering & \\
7 & O\({}_{2}^{\rm f}\) + O\({}_{2}\)\(\longrightarrow\) O\({}_{2}^{\rm f}\) + O\({}_{2}\) & Isotropic elastic scattering & \\
8 & Ar\({}^{\rm f}\) + O\({}_{2}\)\(\longrightarrow\) Ar\({}^{\rm f}\) + O\({}_{2}\) & Isotropic elastic scattering & \\
9 & O\({}_{2}^{\rm f}\) + Ar \(\longrightarrow\) O\({}_{2}^{\rm f}\) + Ar & Isotropic elastic scattering & \\
\hline
\end{tabular}
\end{table}

Table 1: The list of "cross processes" between oxygen and argon species and collision processes for fast neutrals (Ar\({}^{\rm f}\) and O\({}_{2}^{\rm f}\)). The cross sections of processes 6–9 are calculated as described in [54].

The density of the O\({}_{2}\)(a\({}^{1}\Delta_{\rm g}\)) molecules is determined self-consistently in the simulations, based on their balance of creation, transport and de-excitation at the surfaces (see details in [54]). The value of the surface quenching probability is set to \(\alpha=8\cdot 10^{-4}\)[54].
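For illustration, the Langevin cross section defined above can be evaluated directly; a minimal sketch for the Ar\({}^{+}\) + O\({}_{2}\) pair (process 2), where the conversion from center-of-mass energy to relative velocity and the chosen energy value are illustrative assumptions:

```python
import math

# Langevin cross section sigma_L = sqrt(alpha* pi e^2 / (eps0 mu)) / g,
# evaluated here for Ar+ + O2 (process 2); SI units throughout.
E_CHARGE = 1.602176634e-19      # elementary charge [C]
EPS0 = 8.8541878128e-12         # vacuum permittivity [F/m]
AMU = 1.66053906660e-27         # atomic mass unit [kg]

def langevin_cross_section(alpha_star, mu, energy_ev):
    """Langevin cross section [m^2] for target polarizability alpha_star [m^3],
    reduced mass mu [kg], and center-of-mass kinetic energy [eV]."""
    g = math.sqrt(2.0 * energy_ev * E_CHARGE / mu)   # relative velocity [m/s]
    return math.sqrt(alpha_star * math.pi * E_CHARGE**2 / (EPS0 * mu)) / g

# Ar+ projectile on an O2 target (alpha*(O2) = 1.562e-30 m^3, see text):
m_ar, m_o2 = 39.948 * AMU, 31.998 * AMU
mu = m_ar * m_o2 / (m_ar + m_o2)
sigma = langevin_cross_section(1.562e-30, mu, 1.0)   # evaluated at 1 eV
print(f"sigma_L(1 eV) = {sigma:.3e} m^2")
```

Since \(\sigma_{\rm L}\propto 1/g\), the cross section scales as \(\varepsilon^{-1/2}\) with the center-of-mass energy, consistent with the common low-energy trend of processes 2-4 in figure 1.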
PIC/MCC simulations have been performed for the whole parameter regime covered by the PROES measurements: for driving frequencies varied between 2 MHz and 15 MHz at a peak-to-peak voltage of 350 V, for a discharge gap of 2.5 cm and a pressure of 120 Pa of the background gas, which is a mixture of 70% Ar and 30% O\({}_{2}\). The numerical parameters of the simulations were set to ensure the fulfillment of the usual PIC/MCC stability and accuracy requirements [88].

Figure 1: Cross sections of the collision processes listed in table 1 (processes 1–9), as a function of the kinetic energy (considered in the center-of-mass frame). "Cross processes" between oxygen and argon species: charge transfer and isotropic elastic scattering between Ar\({}^{+}\) ions and O\({}_{2}\) molecules (processes 1 and 2, respectively), isotropic elastic scattering between O\({}_{2}^{+}\) ions and Ar atoms (process 3), and O\({}^{-}\) ions and Ar atoms (process 4), and mutual neutralisation of O\({}^{-}\) with Ar\({}^{+}\) (process 5). Cross sections of the collision processes for fast Ar atoms (Ar\({}^{\rm f}\)) and fast O\({}_{2}\) molecules (O\({}_{2}^{\rm f}\)) (processes 6–9): elastic scattering between Ar\({}^{\rm f}\) and Ar atoms/O\({}_{2}\) molecules (process 6/8) and elastic scattering between O\({}_{2}^{\rm f}\) and O\({}_{2}\) molecules/Ar atoms (process 7/9). Note that the cross sections overlap in case of processes 2–4 and processes 6–9.

### The Boltzmann term method

For the investigation of the electron power absorption dynamics, the Boltzmann term method is applied [56, 57, 58]. This method provides a self-consistent, spatio-temporally resolved description of the electron power absorption by splitting the electric field into various, physically distinct terms based on the momentum balance equation, according to [65, 82]:
\[E_{\rm tot}=E_{\rm in}+E_{\nabla p}+E_{\rm Ohm},\]
where
\[E_{\rm in}=-\frac{m_{\rm e}}{n_{\rm e}e}\left[\frac{\partial}{\partial t}(n_{\rm e}u_{\rm e})+\frac{\partial}{\partial x}(n_{\rm e}u_{\rm e}^{2})\right],\qquad E_{\nabla p}=-\frac{1}{n_{\rm e}e}\frac{\partial}{\partial x}p_{\parallel},\qquad E_{\rm Ohm}=-\frac{\Pi_{\rm c}}{n_{\rm e}e},\tag{1}\]
with \(m_{\rm e}\) and \(e\) being the electron mass and the elementary charge, respectively, \(n_{\rm e}\) the electron density, \(u_{\rm e}\) the mean velocity, \(\Pi_{\rm c}\) the electron momentum loss (as a result of collisions between electrons and the background gas particles), and \(p_{\parallel}\) the diagonal element of the pressure tensor. \(E_{\rm in}\) is the electric field term originating from inertial effects, \(E_{\rm Ohm}\), the Ohmic electric field, is a consequence of electron collisions, while \(E_{\nabla p}\) describes (kinetic) pressure effects. This electric field term can be split into two additional terms:
\[E_{\nabla p}=E_{\nabla n}+E_{\nabla T},\]
where
\[E_{\nabla n}=-\frac{T_{\parallel}}{n_{\rm e}e}\frac{\partial n_{\rm e}}{\partial x},\qquad E_{\nabla T}=-\frac{1}{e}\frac{\partial T_{\parallel}}{\partial x}.\tag{2}\]
Here \(T_{\parallel}=p_{\parallel}/n_{\rm e}\) is the parallel electron temperature (where "parallel" refers to the direction of the electric field, which is perpendicular to the electrode surface). The corresponding power absorption terms can be calculated from these electric field terms by multiplying each of them with the electron conduction current density, \(j_{\rm c}\).

## 3 Results and discussion

Figure 2 shows the spatio-temporal distribution of the electron impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state measured by PROES in CCPs operated in a 70% Ar-30% O\({}_{2}\) gas mixture at different driving frequencies, \(f\), between 2 MHz and 15 MHz.
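As a quick numerical check of the splitting in equation (2), the identity \(E_{\nabla p}=E_{\nabla n}+E_{\nabla T}\) can be verified on gridded profiles with finite differences; a minimal sketch, in which the Gaussian density and sinusoidal temperature profiles are illustrative assumptions rather than simulation output:

```python
import numpy as np

# Verify E_grad_p = E_grad_n + E_grad_T (equation (2)) by finite differences.
E_CHARGE = 1.602176634e-19          # elementary charge [C]
L = 0.025                           # electrode gap [m]
x = np.linspace(0.0, L, 2001)

# Illustrative profiles (NOT simulation data): Gaussian electron density
# with a small floor, and a smoothly varying parallel temperature.
n_e = 1.0e15 * np.exp(-((x - L / 2) / 0.005) ** 2) + 1.0e13     # [m^-3]
T_par = E_CHARGE * (2.0 + np.sin(2.0 * np.pi * x / L))          # [J]

p_par = n_e * T_par                                             # [Pa]
E_grad_p = -np.gradient(p_par, x) / (n_e * E_CHARGE)            # [V/m]
E_grad_n = -(T_par / (n_e * E_CHARGE)) * np.gradient(n_e, x)    # [V/m]
E_grad_T = -np.gradient(T_par, x) / E_CHARGE                    # [V/m]

err = np.max(np.abs(E_grad_p - (E_grad_n + E_grad_T)))
print(f"max splitting error: {err:.3e} V/m "
      f"(field scale ~ {np.max(np.abs(E_grad_p)):.3e} V/m)")
```

Multiplying each field term by the conduction current density \(j_{\rm c}\) would yield the corresponding power absorption terms in the same way.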
In all panels of figure 2, the vertical axes show the distance from the powered electrode, and the horizontal axes cover one RF period (\(T_{\rm RF}=1/f\)). At the lowest frequency of 2 MHz (figure 2(a)), strong excitation at the bulk side of the collapsing sheath edge is found at both electrodes, as well as significant excitation in the central bulk region, indicating discharge operation in the DA-mode. By increasing the frequency to 2.5 MHz, the spatio-temporal distribution of the excitation rate exhibits a spatially weakly modulated structure (figure 2(b)). As the driving frequency is further increased, these excitation structures become well separated and the gap between the excitation rate maxima decreases (the number of striations increases). At frequencies between 3 MHz and 4.5 MHz (figure 2(c)-(f)), the relative intensity of the excitation in the bulk region (in the striated excitation patterns) is enhanced by increasing the frequency. At these driving frequencies, excitation at the expanding sheath edge can also be observed (\(\alpha\)-peak), as well as the formation of an excitation peak at the bulk side of the expanding sheath edge. These excitation features are also enhanced as the driving frequency is increased (figure 2(c)-(f)). At 5 MHz (figure 2(g)), strong excitation at both the expanding and the collapsing sheath edges is found, as well as in the central bulk region. At this frequency, the gaps between the excitation rate maxima in the bulk become narrow, and the striations cannot be clearly resolved spatially. At frequencies higher than 5 MHz (6 MHz \(\leq f\leq\) 15 MHz), the striations in the measured spatio-temporal excitation rate vanish completely in the bulk (figure 2(h)-(q)).
The excitation at the expanding sheath edge (the \(\alpha\)-peak) is enhanced, while the excitation at the collapsing sheath edge and that in the bulk region get weaker by increasing the frequency, exhibiting a discharge operation mode transition from a hybrid \(\alpha\)-DA mode to the \(\alpha\)-mode.

Figure 2: Spatio-temporal plots of the electron impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state measured by PROES [a.u.] for different driving frequencies (2 MHz \(\leq f\leq\) 15 MHz), for a 70% Ar-30% O\({}_{2}\) background gas mixture. The horizontal axes correspond to one RF period, \(T_{\rm RF}=1/f\). The vertical axes show the distance from the powered electrode, which is located at \(x/L=0\), while the grounded electrode is at \(x/L=1\). The color scales of the plots are individually normalized to a maximum of 1. Discharge conditions: \(L=2.5\) cm, \(p=120\) Pa, \(V_{\rm pp}=350\) V.

In figure 3, time-averaged results for the electron impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state obtained by PROES for different driving frequencies (2 MHz \(\leq f\leq\) 15 MHz) are presented, corresponding to the spatially and temporally resolved experimental results shown in figure 2. The vertical axis shows the distance from the powered electrode, while the frequency values are shown on the horizontal axis. Here, columns of the time-averaged results obtained for the different frequencies are put next to each other, separated by thin white vertical dashed lines. This figure facilitates comparison of the locations of the main excitation features revealed by PROES at different driving frequencies, as well as the analysis of the transitions between the different discharge operation modes as a function of frequency. At all frequencies, the excitation rate peaks at the sheath-bulk boundary at both electrodes.
However (as discussed later), different mechanisms are responsible for these excitation maxima at low and high frequencies. Based on this figure, three frequency ranges can be defined with different characteristic excitation features. These are separated by thick vertical dashed black lines and labeled as I., II. and III. in the figure. Between 2 MHz and 3 MHz, the excitation is strong at the sheaths and weak (but enhanced with the frequency) in the middle of the bulk region. The 2 MHz \(\leq f\leq\) 3 MHz frequency range is defined as range I. or low frequency range. At all frequencies in range I., the intensity of the excitation in the bulk is significantly lower than that observed at the sheath edges. As the frequency is increased from about 3.5 MHz up to about 5 MHz, the central bulk region exhibits an intensifying excitation rate/light emission, including the development of striations.

Figure 3: Time averaged results for the electron impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state measured by PROES [a.u.] for different driving frequencies (2 MHz \(\leq f\leq\) 15 MHz), corresponding to the spatio-temporal results shown in panels of figure 2. The horizontal axis shows the driving frequency. The vertical axis shows the distance from the powered electrode. The results obtained for the different frequencies are separated by thin white vertical dashed lines. The thick vertical dashed black lines indicate the frequency values around which significant changes in the spatial distribution of the excitation rate take place (suggesting transitions of the discharge operation mode), defining three frequency regimes labeled as I., II. and III. for ranges 2 MHz \(\leq f\leq\) 3 MHz, 3.5 MHz \(\leq f\leq\) 5 MHz, and 6 MHz \(\leq f\leq\) 15 MHz, respectively. Discharge conditions: 70% Ar-30% O\({}_{2}\) background gas mixture, \(L=2.5\) cm, \(p=120\) Pa, \(V_{\rm pp}=350\) V.

At
these frequencies, the intensity of the excitation in the bulk is comparable to that at the sheath edges. The 3.5 MHz \(\leq f\leq 6\) MHz frequency range is defined as range II. or intermediate frequency range. At higher frequencies (\(f\geq 6\) MHz), the excitation in the bulk decreases gradually with increasing frequency. The 6 MHz \(\leq f\leq 15\) MHz frequency range is referred to as frequency range III. or high frequency range. Figure 3 also clearly illustrates the effect of the driving frequency on the lengths of the sheath and bulk regions. The sheath length slightly increases with the frequency up to about 3.5 MHz, and decreases as the frequency is further increased. In the following, the results of PIC/MCC simulations performed for discharge conditions identical to those of the PROES measurements (see figure 2 and figure 3) are presented. In figure 4, the spatio-temporal plots of the calculated electron impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state are shown for different driving frequencies (2 MHz \(\leq f\leq 15\) MHz).

Figure 4: Spatio-temporal plots of the electron-impact excitation rate [a.u.] from the ground state into the Ar 2p\({}_{1}\) state obtained from PIC/MCC simulations for different driving frequencies (2 MHz \(\leq f\leq 15\) MHz), for a 70% Ar-30% O\({}_{2}\) background gas mixture. The horizontal axes correspond to one RF period, \(T_{\rm RF}=1/f\). The vertical axes show the distance from the powered electrode, which is located at \(x/L=0\), while the grounded electrode is at \(x/L=1\). The color scales of the plots are individually normalized to a maximum of 1. Discharge conditions: \(L=2.5\) cm, \(p=120\) Pa, \(V_{\rm pp}=350\) V.

At all frequencies, the main excitation patterns revealed by PROES are well reproduced by the simulations. At low frequencies, in agreement with the PROES measurements, the excitation rate in the bulk region is spatially modulated.
The variation of the number of striations as a function of the driving frequency is also well reproduced by the simulations. At 2.5 MHz, besides the strong excitation at the bulk side of the collapsing sheath edge at both electrodes, 2 weak excitation rate maxima can be observed in the central bulk region during each half of the RF period. The number of striations increases with the frequency, exhibiting 5 peaks in the excitation rate in the central bulk region at 4.5 MHz, in perfect agreement with the experimental results. The enhancement of the excitation at the expanding sheath edge, as well as the formation and enhancement of an excitation peak at the bulk side of the expanding sheath edge with increasing frequency are also captured in the simulations.

Figure 5: Time averaged results for the electron impact excitation rate from the ground state into the Ar 2p\({}_{\mathrm{1}}\) state, \(S_{\mathrm{exc,Ar}}\) (a), the ambipolar power absorption, \(P_{\nabla n}\) (b), the Ohmic power absorption, \(P_{\mathrm{Ohm}}\) (c), the ambipolar electric field, \(E_{\nabla n}\) (d), and the electron density, \(n_{\mathrm{e}}\) (e), obtained from PIC/MCC simulations for different driving frequencies (2 MHz \(\leq f\leq\) 15 MHz). The horizontal axes show the driving frequencies. The vertical axes show the distance from the powered electrode. At each frequency value, the corresponding quantities are normalized to their respective maxima. Discharge conditions: 70% Ar-30% O\({}_{2}\) background gas mixture, \(L=2.5\) cm, \(p=120\) Pa, \(V_{\mathrm{pp}}=350\) V. The vertical dashed black lines in all panels indicate the frequency values around which transitions in the discharge operation mode appear to take place based on the excitation rates shown in panel (a).

In agreement with the experimental results, at frequencies above 5 MHz, the excitation in the bulk region is reduced and strong excitation is found at the expanding sheath edges.
In figure 5, time averaged results for several discharge characteristics obtained from PIC/MCC simulations for different driving frequencies (2 MHz \(\leq f\leq\) 15 MHz) are presented: panel (a) shows the electron impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state, \(S_{\rm exc,Ar}\); the ambipolar and Ohmic power absorptions, \(P_{\nabla n}\) and \(P_{\rm Ohm}\), respectively, are shown in panels (b) and (c); panel (d) shows the ambipolar electric field, \(E_{\nabla n}\), while in panel (e) the electron density, \(n_{\rm e}\), is plotted for different driving frequencies. The format of the panels is the same as that used in the case of figure 3. Here (unlike in figure 3), the results obtained for the different driving frequencies are presented by using a color map in which the intermediate values are smoothed (resulting in continuous transitions between the results corresponding to the different driving frequencies, contrary to the step-like transitions one can see in figure 3). Thin white vertical dashed lines are used here again to separate the slabs (columns) of time-averaged results corresponding to the different driving frequencies. The plot in panel (a) looks similar to the time-averaged PROES results shown in figure 3. The locations of the main excitation patterns in the gap observed in the experimental results at different frequencies are well reflected by the simulations. This way of representing the simulation results on the electron impact excitation rate clearly exposes the differences between the characteristic excitation features at distinct frequencies and reveals the frequencies (frequency regimes) where significant changes in the spatial distribution of the excitation rate take place.
Such pronounced changes in the excitation or the ionization rates (judged by simply looking at the spatio-temporal distribution of the excitation/ionization rate) are generally associated with changes in the dominant electron power absorption mechanisms and transitions in the discharge operation mode. Based on the results shown in panel (a), two transitions in the electron power absorption and excitation happen as the frequency is varied. The transitions seem to occur at frequencies around 3 MHz and 5 MHz, in accordance with the PROES results (figure 3). These frequencies are marked by vertical black dashed lines in all panels of figure 5. Similarly to the experimental results, one can identify a frequency range where strong excitation is found at the sheath edges and no excitation/only weak excitation in the middle of the bulk (\(f\leq\) 3.5 MHz, range I.), a range where strong excitation can be observed both at the sheath edges and in the bulk region (3.5 MHz \(\leq f\leq\) 5 MHz, range II.), and one range where strong excitation is found at the sheath edges again (\(f\geq\) 6 MHz, range III.). In panel (b), which shows the ambipolar power absorption term, striations can be observed in the bulk region at low frequencies (range I.). As the frequency is increased, the striations branch into further striations (the number of striations increases up to about 5 MHz, i.e. striations are present in both frequency ranges I. and II.). These apparent bifurcations arise from the interpolation used to represent the simulation data; however, a finer resolution of the low frequency range studied here would also reveal these structures. At high frequencies (range III.), the ambipolar power absorption is concentrated near the sheaths, characteristic of the \(\alpha\)-mode. The Ohmic power absorption (panel (c)) is relatively high in the bulk in frequency range II.
(characteristic of the DA-mode, resulting in strong excitation in the central bulk region) and decreases in the middle of the bulk by decreasing/increasing the driving frequency in range I./range III. (the excitation in the bulk region disappears by decreasing/increasing the frequency towards range I./III.). The Ohmic power absorption is high whenever the electron density decreases locally (see the time-averaged electron densities in panel (e)). By increasing the frequency (in ranges I. and II.), more and more striations develop (at positions of local density minima). The density decrease is sharper at a lower number of striations (at lower frequencies), where local maxima in the Ohmic power absorption are also resolved (at \(f=3.5\) MHz in panel (c)). Whenever the number of striations is high, the density decrease will not be as large as in the case of a lower number of striations; therefore, striations in the Ohmic power absorption are not visible in the intermediate frequency range II. for \(f\geq 4\) MHz. The change in the number of striations is also the reason why there is a region of negative/positive ambipolar electric field, \(E_{\nabla n}\), near the powered/grounded electrode, respectively, as seen in panel (d): since a smaller number of striations leads to a steeper increase/decrease of the electron density, the corresponding density gradients will be high, which leads to an increase in the magnitude of the ambipolar electric field. Based on these findings, there is no change in the dominant power absorption mechanisms in frequency ranges I. and II. (at the two sides of the black vertical dashed line at 3 MHz), despite the significant differences that can clearly be observed in the excitation rates. Based on the excitation rate (panel (a)) one could identify mode transitions from the \(\alpha\)-mode to the DA-mode and the striation mode as the frequency is decreased from 15 MHz to 2 MHz.
However, regarding the power absorption, it is not straightforward to infer the power absorption mode transitions based on the excitation rate alone. The present results clearly show that the same electron power absorption mechanisms could be associated with excitation patterns of significantly different characteristics. In the following, the PIC/MCC simulation results are analysed in detail for three different driving frequencies: (i) 2 MHz, (ii) 4.5 MHz, and (iii) 10 MHz. These frequency values are in the frequency ranges I. (low frequency range), II. (intermediate frequency range), and III. (high frequency range), respectively. At these driving frequencies, the corresponding excitation rates, obtained both from PROES (see figure 2 and figure 3) and PIC/MCC simulations (see figure 4 and figure 5), exhibit substantially different characteristics, suggesting different discharge operation modes. For these three cases, the time averaged particle density distributions and the electronegativity are plotted in figure 6, while various discharge characteristics, including the spatio-temporal distribution of the electron-impact excitation rate, the electron density, the electric field, the ambipolar and Ohmic power absorption terms, and temporal snapshots of particle densities, are shown in figures 7 (2 MHz), 8 (4.5 MHz), and 9 (10 MHz), respectively. At 2 MHz, local maxima at the edges of the bulk region can be observed in the time averaged electron density distribution (figure 6(a)). The electron density as well as the density of heavy particles exhibit spatial modulation in the discharge center. The electron density is significantly lower than the density of negative O\({}^{-}\) ions in the bulk, resulting in high electronegativity in the bulk (figure 6(d)). Due to the spatial modulation of the particle densities in the bulk, the electronegativity also shows this feature.
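The local and global electronegativities quoted here can be computed directly from time-averaged density profiles; a minimal sketch, where \(\beta(x)=n_{{\rm O}^{-}}(x)/n_{\rm e}(x)\) and the global value is taken as the ratio of the gap-averaged densities (the profiles below are illustrative placeholders, not the simulation data of figure 6):

```python
import numpy as np

# Local electronegativity beta(x) = n_O-(x) / n_e(x) and a global value
# taken as the ratio of the gap-averaged densities. The profiles are
# illustrative placeholders, NOT the simulation data of figure 6.
L = 0.025                                        # electrode gap [m]
x = np.linspace(0.0, L, 501)

n_e = 1.0e15 * (0.2 + np.exp(-((x - L / 2) / 0.006) ** 2))      # [m^-3]
n_neg = 3.0e16 * np.exp(-((x - L / 2) / 0.004) ** 2) + 1.0e13   # [m^-3]

beta_local = n_neg / n_e                         # dimensionless profile
beta_global = n_neg.mean() / n_e.mean()          # uniform-grid gap average

print(f"max local electronegativity: {beta_local.max():.1f}")
print(f"global electronegativity:    {beta_global:.1f}")
```

Since the global value is an \(n_{\rm e}\)-weighted average of \(\beta(x)\), it is always bounded from above by the local maximum, as in the 2 MHz case (local values up to about 160 versus a global value of about 30).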
The ratio of the negative ion density and electron density is above 30 in the middle of the gap, reaching high values of about 160 at the positions of electron density minima in the bulk. Under these conditions, the global electronegativity of the discharge (the ratio of the density of negative ions and electrons averaged over the electrode gap) is about 30. The local minima/maxima of the fast neutrals are due to the presence of an oscillating electric field, as a result of the presence of the striations: since the charged heavy particle density has a gradient, so does the electron density, which leads to an ambipolar electric field. This field accelerates charged particles, which can create fast neutrals through collisions with the background gas. The maxima of the fast neutral density correspond to maxima of the electric field (see figure 7). The local maxima of the Ar\({}^{+}\) density in the center of the discharge are due to electrons being accelerated by the ambipolar electric field, as will be discussed later.

Figure 6: Time averaged particle density distributions obtained from PIC/MCC simulations for different driving frequencies: (a) 2 MHz, (b) 4.5 MHz, and (c) 10 MHz and ratio of the time averaged negative ion (O\({}^{-}\)) density and electron density (local electronegativity, \(\beta\)(x)): (d) 2 MHz, (e) 4.5 MHz, and (f) 10 MHz. Discharge conditions: 70% Ar-30% O\({}_{2}\) background gas mixture, \(L=2.5\) cm, \(p=120\) Pa, \(V_{\rm pp}=350\) V.

At the higher frequency of 4.5 MHz (figure 6(b)), similarly to the case of 2 MHz, local maxima in the time averaged electron density distribution at the edges of the bulk and high negative ion density in the bulk can be seen, as well as spatial modulation of the particle densities in the discharge center. However, the electron density and the density of Ar\({}^{+}\) ions are
enhanced in the discharge center at this frequency, while the density of fast neutrals decreases in the bulk, in accordance with the higher number of striations present in this situation. The electronegativity is modulated in space, with maximum values of about 90 in the center of the bulk (figure 6(e)). The global electronegativity is about 16 at this frequency. At 10 MHz (figure 6(c)), the bulk region is wider compared to the lower frequency cases. The time averaged electron density peaks in the discharge center, while the density of Ar\({}^{+}\) ions as well as that of fast neutrals drop in the bulk. The O\({}^{-}\) and O\({}_{2}^{+}\) densities are high in the bulk, the corresponding time averaged density profiles exhibit local minima in the discharge center and local maxima near the bulk-sheath boundary. As a result of this, the electronegativity exhibits local maxima of about 7 at the bulk edges and a local minimum of about 5 in the center of the bulk (figure 6(f)). The global electronegativity is about 3 at this frequency. The electronegativity shows similar spatial distribution also at higher frequencies. The global electronegativity of the discharge decreases with further increase of the frequency, and is about 1.2 at the highest frequency of 15 MHz studied here. Figure 7 shows PIC/MCC simulation results for various discharge characteristics obtained for the 70% Ar-30% O\({}_{2}\) background gas mixture at \(p=120\) Pa pressure and \(f=2\) MHz driving frequency. 
Figure 7: Spatio-temporal plots of the electron-impact excitation rate, \(S_{\rm{exc,Ar}}\) (a), the electron density, \(n_{\rm{e}}\) (b), temporal snapshots of the electron density (solid color curves) as well as the temporally averaged negative ion (O\({}^{-}\)) density (dashed black curve) divided by a factor of 50 (c), spatio-temporal plot of the electric field, \(E_{\rm{tot}}\) (d), the Ohmic power absorption, \(P_{\rm{Ohm}}\) (e), and the ambipolar power absorption, \(P_{\nabla n}\) (f) for a driving frequency of \(f=2\) MHz. The colors of the curves in panel (c) denote the respective time instances in panel (b). The powered electrode is located at \(x/L=0\), while the grounded electrode is at \(x/L=1\). Discharge conditions: 70% Ar–30% O\({}_{2}\) background gas mixture, \(L=2.5\) cm, \(p=120\) Pa, \(V_{\rm{pp}}=350\) V.

The spatio-temporal distribution of the electron-impact excitation rate from the ground state into the Ar 2p\({}_{1}\) state, \(S_{\rm exc}\), is shown in panel (a) (not normalized, otherwise the same as panel (a) of figure 4), while that of the electron density, \(n_{\rm e}\), is shown in panel (b). In panel (b), 4 time instances within the RF period (at values of \(t/T_{\rm RF}\) of 0, 0.25, 0.5, and 0.75) are marked with vertical dashed lines in different colours. The electron densities in the discharge gap corresponding to these time instances are shown in panel (c), where the colors of the solid curves identify the respective time instances in panel (b). In panel (c) the temporally averaged density of negative O\({}^{-}\) ions divided by a factor of 50 is also included (dashed black line), to illustrate that the density of O\({}^{-}\) ions is much higher than the electron density in the bulk region. The spatio-temporal plots of the electric field, \(E_{\rm tot}\), the Ohmic power absorption, \(P_{\rm Ohm}\), and the ambipolar power absorption, \(P_{\nabla n}\), are shown in panels (d), (e) and (f), respectively.
At all selected \(t/T_{\rm RF}\) time instances, strong electron density peaks can be observed at the edges of the bulk region: one peak at the powered/grounded electrode side at \(t/T_{\rm RF}\) values of 0 and 0.5 and peaks at both sides at \(t/T_{\rm RF}\) values of 0.25 and 0.75. The electron density is low in the discharge center, where additional local electron density peaks can be observed: one local peak at \(t/T_{\rm RF}\) values of 0 and 0.5 and two local peaks at \(t/T_{\rm RF}\) values of 0.25 and 0.75. Under these conditions, the ratio of the O\({}^{-}\) ion density and the electron density, i.e., the electronegativity, is high (between about 30 and 160) in the discharge center (see figure 6(d)). Due to the high electronegativity in the discharge center, the conductivity of the plasma is low in the bulk. As a consequence of this, the Ohmic power absorption, \(P_{\rm Ohm}\), is high in the discharge center at the times of electron density minima within the RF period (panel (e)). The ambipolar power absorption, \(P_{\nabla n}\), peaks at the edges of the bulk region at the time of sheath collapse at both electrodes (panel (f)) due to the local electron density maxima there (panels (b) and (c)). These ambipolar power absorption maxima near the sheath edges produce an electric field which does not change sign during the RF-cycle (e.g., near the powered electrode, the ambipolar electric field is always negative). This is a well-known feature of electronegative discharges [65], and is due to the fact that the "electropositive edge", i.e. the local electron density maximum near the sheath, is not modulated in time (cf. also panel (b)). It is very important to note that the ambipolar electric field near the striations is of a different nature. Since, based on panels (b) and (c), the electron density is temporally modulated, so is the density gradient.
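The proportionality between the density gradient and the resulting ambipolar field can be illustrated with the textbook Boltzmann-relation estimate \(E_{\rm amb}\approx-(T_{\rm e}/e)\,(\partial_{x}n_{\rm e})/n_{\rm e}\). The sketch below is a generic illustration of this relation with a hypothetical striated density profile; it is not the Boltzmann term analysis performed in the paper.

```python
import numpy as np

def ambipolar_field(n_e, x, T_e_volts):
    """Textbook estimate E_amb = -(T_e/e) * (dn_e/dx) / n_e,
    with the electron temperature expressed in volts."""
    return -T_e_volts * np.gradient(n_e, x) / n_e

# Hypothetical striated electron density profile (illustrative, not simulation output):
x = np.linspace(0.0, 0.025, 501)                            # position [m], 2.5 cm gap
n_e = 1e15 * (1.0 + 0.5 * np.cos(2.0 * np.pi * x / 5e-3))   # 5 mm modulation period
E_amb = ambipolar_field(n_e, x, T_e_volts=3.0)
# E_amb vanishes at the density extrema and changes sign across each striation.
```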
Thus, at a given position in the bulk, the ambipolar electric field will change sign depending on the sign of the electron conduction current. This means that in the bulk region, the ambipolar electron power absorption has the same sign in the whole RF-cycle at a given position. This difference between the behaviour of the ambipolar power absorption near the sheath edge and in the bulk is what leads to the spatio-temporal excitation patterns seen in panel (a): the maxima near the sheath edges are a consequence of electrons accelerated by the local ambipolar electric field, as a result of the density gradient due to the presence of the "electropositive edge" (cf. panel (f)). The second peak near \(x/L=0.5\) is due to the ambipolar electric field resulting from the presence of the striations: since the electron power absorption at this position is positive in the entire RF-cycle, the local excitation maxima are much closer to each other: the one in the first half of the RF-cycle is closer to the grounded electrode, since electrons move towards the grounded electrode, while the situation is the opposite in the second half of the RF-cycle. This fact is the reason for the symmetric local maxima in the Ar\({}^{+}\)-density seen in fig. 6 (a). Under the present conditions, both the ambipolar and the Ohmic power absorption mechanisms contribute to the power absorption, their magnitudes being similar. Similarly to the 2 MHz case (figure 7), some discharge characteristics obtained for 4.5 MHz are shown in figure 8. In this case, the spatio-temporal distribution of the excitation rate shows strong excitation at the collapsing sheath edge, spatially modulated structure (with 6 excitation peaks) in the central bulk region and excitation at the expanding sheath edge as well (panel (a)). 
Figure 8: Various discharge characteristics – same as those presented in figure 7 – obtained for a driving frequency of \(f=4.5\) MHz. Here, the temporally averaged negative ion (O\({}^{-}\)) density (dashed black curve in panel (c)) is divided by a factor of 100. Discharge conditions: 70% Ar–30% O\({}_{2}\) background gas mixture, \(L=2.5\) cm, \(p=120\) Pa, \(V_{\mathrm{pp}}=350\) V. The horizontal dashed black lines in panels (a), (d) and (f) indicate the region where striations develop in the bulk.

The electron density profiles plotted at different time instances (panel (c)) show strong peaks at the edges of the bulk region (one peak at the powered/grounded electrode side at \(t/T_{\mathrm{RF}}\) values of 0 and 0.5 and peaks at both sides at \(t/T_{\mathrm{RF}}\) values of 0.25 and 0.75). The electron density is low and spatially modulated in the bulk (panels (b) and (c)). This is similar to the 2 MHz case. However, the electron density is enhanced in the discharge center and the number of the local electron density peaks is higher compared to the results obtained for 2 MHz: here, 6 local electron density peaks can be observed at all selected \(t/T_{\mathrm{RF}}\) time instances, which is due to the increased driving frequency and the correspondingly smaller amplitude of ion charge separation, which causes the striations [32]. The ratio of the O\({}^{-}\) ion density and the electron density is high in the discharge center (see figure 6), exhibiting spatial modulation with values above 30 in the bulk and reaching maximum values of about 90 in the center of the bulk. Similarly to the previous case, the Ohmic power absorption is high in the discharge center (panel (e)) and shows spatial modulation in the bulk. The ambipolar power absorption peaks at the edges of the bulk region and exhibits striations in the bulk (panel (f)).
The six peaks in panel (a) in the bulk are the result of striations, as in the previous case: the local maxima are due to the positive ambipolar power absorption near each striation as a result of the electron density gradient. The superposition of the Ohmic and ambipolar electric fields results in a strong electric field at both sides of the bulk as well as in the bulk region (panel (d)).

Figure 9: Various discharge characteristics – same as those presented in figure 7 – obtained for a driving frequency of \(f=10\) MHz. Here, the temporally averaged negative ion (O\({}^{-}\)) density (dashed black curve in panel (c)) is divided by a factor of 100. Discharge conditions: 70% Ar–30% O\({}_{2}\) background gas mixture, \(L=2.5\) cm, \(p=120\) Pa, \(V_{\rm pp}=350\) V.

For 10 MHz, the simulation results on various discharge characteristics are shown in figure 9. At this frequency, strong excitation is found only at the expanding sheath edges (panel (a)). The electron density profiles for the different time instances show only small local minima at the edges of the bulk region (panel (c)). The time averaged O\({}^{-}\) density has a local minimum in the center and local maxima at the sheath-bulk boundary. The ratio of the O\({}^{-}\) ion density and the electron density is low in the discharge center (see figure 6(f)), with a local minimum of about 5. The Ohmic power absorption (panel (e)) shows peaks at the expanding sheath edges. The ambipolar power absorption (panel (f)) is also concentrated at the sheath edges at both electrodes, which is characteristic of the \(\alpha\)-mode. Since in this case there are no striations, and due to the low electronegativity the "electropositive edge" is not pronounced, the only peak visible in panel (a) is the \(\alpha\)-peak, although there is a small spatio-temporal region of ambipolar power absorption according to panel (f) near the collapsing phase of each sheath.
However, this is too small to lead to any excitation that would be visible in panel (a).

## 4 Conclusions

Phase Resolved Optical Emission Spectroscopy (PROES) measurements combined with 1d3v Particle-in-Cell/Monte Carlo Collisions (PIC/MCC) simulations have been performed in low-pressure capacitively coupled argon-oxygen plasmas. The discharge conditions covered a wide frequency range between 2 MHz and 15 MHz in a geometrically symmetric plasma reactor with a gap length of 2.5 cm, operated in a mixture of 70% Ar and 30% O\({}_{2}\) (volumetric ratio) at 120 Pa, applying a peak-to-peak voltage of 350 V. The measured electron impact excitation rates from the Ar ground state into the Ar 2p\({}_{1}\) state were compared to the PIC/MCC simulation results on the Ar excitation rate, showing a good qualitative agreement for all discharge conditions. At the lowest frequency of 2 MHz, strong excitation at the bulk side of the collapsing sheath edge was found at both electrodes, as well as weak excitation in the central bulk region, indicating discharge operation in the DA-mode. By increasing the frequency, the spatio-temporal distribution of the excitation rate was found to exhibit spatially modulated excitation patterns, with increasing number of these features with the frequency, as well as enhancement of the excitation in the bulk and at the expanding sheath edges. At frequencies higher than 5 MHz, it was found that the striations vanish in the bulk, while the excitation at the expanding sheath edge is enhanced, and the excitation in the bulk and at the collapsing sheath edge becomes weaker.
Visualization of the time-averaged results for the electron impact excitation rate from the Ar ground state into the Ar 2p\({}_{1}\) state (obtained both by PROES and PIC/MCC simulations) for different driving frequencies in a single plot clearly showed the main differences in the characteristic excitation features at different frequencies and revealed the frequency values around which significant changes in the excitation rate take place. Such changes are generally considered to be predictive of transitions of the dominant electron power absorption mode and discharge operation mode. Three frequency ranges could be defined with profoundly different characteristic excitation features. Up to 3 MHz (frequency range I., low frequency range), the excitation was found to be strong at the sheaths and weak in the middle of the bulk region. At frequencies between 3.5 MHz and about 5 MHz (frequency range II., intermediate frequency range), the excitation was found to be strong at the sheaths, while the central bulk region was found to exhibit an intensifying excitation rate, including the formation of spatially modulated patterns. At frequencies higher than 5 MHz (frequency range III., high frequency range), the excitation in the bulk was found to decrease gradually, while the excitation was found to remain strong at the sheath edges. This suggested two transitions in the dominant power absorption mode and the discharge operation mode as the frequency was tuned between 2 MHz and 15 MHz: one transition at about 3 MHz and another one at about 5 MHz. Based on Boltzmann term analysis, the mechanisms behind the excitation characteristics at different frequencies were analyzed. At low frequencies (range I.), striations could be observed in the ambipolar power absorption in the bulk region. As the frequency was increased, the striations were found to branch to other striations at intermediate frequencies (range II.).
At high frequencies (range III.), the ambipolar power absorption was found to be concentrated near the sheaths. The Ohmic power absorption was found to be high in the bulk in frequency range II. and to decrease in this region by decreasing/increasing the driving frequency towards range I./range III. The variation of the number of striations with the frequency was found to have important effects on the contributions of the Ohmic and ambipolar terms to the power absorption. It was found that despite the significantly different excitation maps seen in the different frequency regimes, the dominant power absorption mechanisms are basically the same in frequency ranges I. and II. The present results clearly showed that it is not straightforward to infer the power absorption mode transitions based on the excitation rate alone. It was found that the same electron power absorption mechanisms could be associated with excitation patterns of significantly different characteristics. The agreement between the experimental data and the results of the simulations over a wide range of operation frequencies confirms that the discharge model properly captures the main physical phenomena in the CCP operated in the Ar-O\({}_{2}\) mixture and verifies the implementation of this model in the simulation code.

The authors thank Ihor Korolov for his invaluable advice on the cross sections used in the simulations. This work was supported by the Hungarian National Research, Development and Innovation Office via grants K-134462 and FK-128924, by the German Research Foundation (DFG) within the frame of the collaborative research centre SFB-CRC 1316 (project A4) and by the UNKP-22-3 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
2304.11557
FAN-Net: Fourier-Based Adaptive Normalization For Cross-Domain Stroke Lesion Segmentation
Since stroke is the main cause of various cerebrovascular diseases, deep learning-based stroke lesion segmentation on magnetic resonance (MR) images has attracted considerable attention. However, the existing methods often neglect the domain shift among MR images collected from different sites, which has limited performance improvement. To address this problem, we intend to change style information without affecting high-level semantics via adaptively changing the low-frequency amplitude components of the Fourier transform so as to enhance model robustness to varying domains. Thus, we propose a novel FAN-Net, a U-Net--based segmentation network incorporated with a Fourier-based adaptive normalization (FAN) and a domain classifier with a gradient reversal layer. The FAN module is tailored for learning adaptive affine parameters for the amplitude components of different domains, which can dynamically normalize the style information of source images. Then, the domain classifier provides domain-agnostic knowledge to endow FAN with strong domain generalizability. The experimental results on the ATLAS dataset, which consists of MR images from 9 sites, show the superior performance of the proposed FAN-Net compared with baseline methods.
Weiyi Yu, Yiming Lei, Hongming Shan
2023-04-23T06:58:21Z
http://arxiv.org/abs/2304.11557v1
# FAN-Net: Fourier-Based Adaptive Normalization for Cross-Domain Stroke Lesion Segmentation

Weiyi Yu\({}^{1}\), Yiming Lei\({}^{2,\dagger}\), Hongming Shan\({}^{1,3}\)

\({}^{1}\) Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
\({}^{2}\) Shanghai Key Lab of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
\({}^{3}\) Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China

###### Abstract

Since stroke is the main cause of various cerebrovascular diseases, deep learning-based stroke lesion segmentation on magnetic resonance (MR) images has attracted considerable attention. However, the existing methods often neglect the domain shift among MR images collected from different sites, which has limited performance improvement. To address this problem, we intend to change style information without affecting high-level semantics via adaptively changing the low-frequency amplitude components of the Fourier transform so as to enhance model robustness to varying domains. Thus, we propose a novel FAN-Net, a U-Net-based segmentation network incorporated with a Fourier-based adaptive normalization (FAN) and a domain classifier with a gradient reversal layer. The FAN module is tailored for learning adaptive affine parameters for the amplitude components of different domains, which can dynamically normalize the style information of source images. Then, the domain classifier provides domain-agnostic knowledge to endow FAN with strong domain generalizability. The experimental results on the ATLAS dataset, which consists of MR images from 9 sites, show the superior performance of the proposed FAN-Net compared with baseline methods.

_Index terms_: Stroke lesion segmentation, domain generalization, Fourier transform, convolutional neural network.

## 1 Introduction

Stroke is the main cause of death worldwide [1].
Traditionally, to examine the patients after a stroke, medical experts manually segment stroke lesions on T1-weighted magnetic resonance (MR) images, and this requires a costly workload. Therefore, it has triggered a popular research topic over the past decade--automatic stroke lesion segmentation. Convolutional neural networks (CNNs) are commonly used models in medical image segmentation [2]. The current works related to stroke lesion segmentation can be roughly categorized into two categories, i.e., 2D CNNs and 3D CNNs. The 2D CNN-based methods can generate a large number of independent samples for training but hardly capture the inter-slice relationships. Therefore, X-Net with a non-local attention module [3] and MSDF-Net with a multiscale fusion module [4] were proposed to learn more diverse relationships among slices. By contrast, 3D CNNs are able to extract inter-slice and intra-slice relationships simultaneously; nevertheless, they require more 3D samples to prevent overfitting. Thus, D-UNet [5] and related methods [6, 7, 8] combined the 2D and 3D CNNs to achieve a trade-off between them. In addition, efforts have also been made on the training strategy. For instance, the MI-UNet [9] introduced brain parcellations as prior knowledge, which is useful but time-consuming. The methods mentioned above have neglected the domain diversity inherited in MR images from different sites. More specifically, the cross-domain differences involve MR scanners, imaging protocols, and patient populations, which greatly affect model generalization.

Figure 1: Example that proves the low-frequency amplitude component contains the style information.

Fortunately, domain generalization is an effective technique that helps machine learning models perform well on medical images of _unseen_ domains [10], and the gradient reversal layer (GRL) [11] has
Generally, GRL is combined with a domain classifier and tends to confuse this classifier by directly maximizing the classification loss using gradient reversing. Hence, we need to model the domain knowledge or domain style information explicitly. It is well-known that the phase component of the Fourier spectrum preserves high-level semantic structures, and the amplitude component contains style information statistics [13, 14]. Since one pixel in the image space simultaneously relates to the phase and amplitude components of the Fourier spectrum, the methods directly operating on the image space cannot change the global style information without affecting the semantic structures. However, processing frequency space can easily preserve the semantic features when changing style information. For example in Fig. 1, an MR image from site A can be converted to a style that is similar to site B by substituting the low-frequency component of the amplitude spectrum. Thus, the style information can be represented by the low-frequency amplitude components [15, 16, 17]. Motivated by this perspective in the frequency space, we can change style information while preserving high-level semantic structures through a domain-agnostic amplitude. Hence, we propose FAN-Net to tackle the problem of _unseen_ domain generalization in stroke lesion segmentation. Specifically, we propose a Fourier-based adaptive normalization (FAN) module to dynamically learn adaptive affine parameters for the amplitude components, which could convert the input MR images into a domain-unrelated style. Then, the style-transferred source image also acts as the input of a domain classifier, which further guarantees FAN to minimize cross-domain diversity through GRL. Finally, it is fed into U-Net for segmentation. 
The experiments conducted on the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset [18] have illustrated the superior results obtained by FAN-Net; note that FAN-Net outperforms baseline methods without large memory and computational overheads.

## 2 Methodology

Let \(\mathbf{x}\in\mathbb{R}^{H\times W}\) be a source image with domain label \(y^{\text{D}}\in\{1,2,\dots,K\}\) where \(K\) is the number of source classes, and the corresponding segmentation ground-truth is \(\mathbf{y}^{\text{S}}\in\mathbb{R}^{H\times W}\). Our aim is to predict the segmentation result of an input from any target domain. The superscripts "D" and "S" denote domain and segmentation, respectively.

### Fourier-based Adaptive Normalization

Since the amplitude component of the Fourier transform contains low-level statistics, swapping and linear decomposition have been proposed to change the style information of MR images without affecting high-level semantics. However, the existing Fourier-based methods do not consider how to enable the model to be robust to an unseen domain. This motivates us to propose FAN-Net, a dynamic neural network that can learn how to normalize the input images to reduce cross-domain discrepancies. FAN leverages two affine transformation parameters learned from the amplitude components of MR images to convert them into a domain-agnostic style. For a 2D image \(x\), its Fourier transform \(\mathcal{F}(x)\) is formulated as: \[\mathcal{F}(x)(u,v)=\sum_{h=0}^{H-1}\sum_{w=0}^{W-1}x(h,w)e^{-j2\pi\left(\frac {h}{H}u+\frac{w}{W}v\right)}, \tag{1}\] where \(u\) and \(v\) are the coordinates in the frequency space.
Let \(\mathcal{F}^{-1}(x)\) denote the inverse Fourier transform; both \(\mathcal{F}(x)\) and \(\mathcal{F}^{-1}(x)\) can be calculated with the Fast Fourier Transform (FFT) algorithm [19], and the amplitude and phase components can be represented as: \[\mathcal{A}(x)(u,v)=\left[R^{2}(x)(u,v)+I^{2}(x)(u,v)\right]^{1/2}, \tag{2}\] \[\mathcal{P}(x)(u,v)=\arctan\left[\frac{I(x)(u,v)}{R(x)(u,v)}\right], \tag{3}\] where \(R(x)\) and \(I(x)\) represent the real and imaginary parts of \(\mathcal{F}(x)\), respectively. Further, we denote \(M_{\alpha}\) as a binary mask: \[M_{\alpha}(h,w)\!=\!\begin{cases}1,\text{if }(h,w)\!\in\![-\alpha H\!:\!\alpha H,-\alpha W\!:\!\alpha W]\\ 0,\text{otherwise}\end{cases}, \tag{4}\] where we treat the center point of the image as \((0,0)\), and \(\alpha\in(0,1)\) is set manually. We assume the affine transformation parameters \(\gamma\) and \(\beta\) are learned from the amplitude component of the MR image. The adjusted amplitude component can be represented as: \[\mathcal{A}^{\prime}(x)=\gamma\times\mathcal{A}(x)+\beta, \tag{5}\] where \(\beta=g_{\beta}(\mathcal{A}(x);\mathbf{\theta}_{\beta})\), and \(\gamma=g_{\gamma}(\mathcal{A}(x);\mathbf{\theta}_{\gamma})\), \(g_{\beta}\) and \(g_{\gamma}\) are two lightweight networks for learning \(\beta\) and \(\gamma\), respectively. The structures of \(g_{\beta}\) and \(g_{\gamma}\) are the same but their parameters are independent. As illustrated in Fig. 2(a), \(\beta\) and \(\gamma\) are the outputs of a global average pooling layer. Then the output of FAN can be formalized as: \[x^{\prime}=\mathcal{F}^{-1}\left(\left[M_{\alpha}\circ\mathcal{A}^{\prime}(x) +(1-M_{\alpha})\circ\mathcal{A}(x)\right],\mathcal{P}(x)\right), \tag{6}\] where \(x^{\prime}\) represents the image dynamically converted from the original \(x\), \(\circ\) denotes element-wise multiplication.
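Putting Eqs. (2)-(6) together, the FAN forward pass can be sketched in numpy as below. Here `gamma` and `beta` are plain scalars standing in for the outputs of the learned networks \(g_{\gamma}\) and \(g_{\beta}\), which are not reproduced; this is a schematic of the transform, not the trained module.

```python
import numpy as np

def fan_forward(x, gamma, beta, alpha=0.1):
    """Sketch of Eqs. (2)-(6): affine-transform the low-frequency amplitude
    A' = gamma * A + beta inside the mask M_alpha, keep the phase, and invert."""
    F = np.fft.fft2(x)
    A, P = np.abs(F), np.angle(F)          # Eqs. (2) and (3)
    A_prime = gamma * A + beta             # Eq. (5), with scalar stand-ins
    # Binary mask M_alpha (Eq. (4)), built around the centered zero frequency:
    H, W = x.shape
    M = np.zeros((H, W))
    h, w = int(alpha * H), int(alpha * W)
    M[H // 2 - h:H // 2 + h, W // 2 - w:W // 2 + w] = 1.0
    M = np.fft.ifftshift(M)                # align with the unshifted FFT layout
    A_mixed = M * A_prime + (1.0 - M) * A  # amplitude part of Eq. (6)
    return np.real(np.fft.ifft2(A_mixed * np.exp(1j * P)))
```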
Note that the parameters of FAN, _i.e._, \(\mathbf{\theta}_{\beta}\) and \(\mathbf{\theta}_{\gamma}\), can be updated through gradient back-propagation.

### Domain-agnostic Knowledge Learning

Although we have designed the learning scheme for the adaptive affine parameters \(\beta\) and \(\gamma\), the FAN is still sensitive to different domain knowledge. In order to endow \(\beta\) and \(\gamma\) with domain-unrelated knowledge, we further identify \(\mathcal{A}^{\prime}(x)\) using a domain classifier (DC) with a GRL. Here, we use the cross-entropy (CE) loss to train the domain classifier: \(\mathcal{L}_{\text{CE}}=-\sum_{k=1}^{K}y_{k}^{\text{D}}\log\hat{y}_{k}^{\text{D}}\), where \(y_{k}^{\text{D}}\) represents the \(k\)-th domain class, and \(\hat{y}_{k}^{\text{D}}\) denotes the predicted probability of the \(k\)-th domain class. Due to the gradient reversing by the GRL, the DC practically maximizes the CE loss so that the gradients \(\frac{\partial\mathcal{L}_{\text{CE}}}{\partial\mathbf{\theta}_{\beta}}\) and \(\frac{\partial\mathcal{L}_{\text{CE}}}{\partial\mathbf{\theta}_{\gamma}}\) are toward the same direction as the gradients of the DC, _i.e._, \(\beta\) and \(\gamma\) are learned to be insensitive to different domain knowledge. Hence, \(x^{\prime}\) in Eq. (6) will have no specific domain information. Consequently, after training with multiple domains of source images, the FAN can convert the style information of any domain to a unified form without affecting semantic structures.

### Loss Functions

When we guarantee that \(x^{\prime}\) is domain-agnostic, it can be used as an input of a U-Net [20] segmentation network. Dice loss is used for segmentation, \(\mathcal{L}_{\text{Dice}}(\hat{y}^{\text{S}},y^{\text{S}})\), where \(\hat{y}^{\text{S}}\) is the output of U-Net.
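The sign flip performed by the GRL can be demonstrated with a hand-rolled toy backward pass; this is a pure-Python illustration of the mechanism (in practice it would be a custom autograd function), with all numbers hypothetical.

```python
def grl_forward(feat):
    """Identity in the forward pass."""
    return feat

def grl_backward(grad_output, lam=1.0):
    """Negate (and scale by lam) the gradient flowing back through the layer."""
    return -lam * grad_output

# Toy chain: feature extractor -> GRL -> linear domain classifier, squared loss.
w, theta, x_in, y_dom = 0.5, 2.0, 1.0, 0.0
feat = w * x_in
pred = theta * grl_forward(feat)
loss = 0.5 * (pred - y_dom) ** 2

grad_pred = pred - y_dom                       # standard gradient above the GRL
grad_feat_above_grl = grad_pred * theta        # the classifier descends its loss
grad_feat = grl_backward(grad_feat_above_grl)  # sign flipped: the extractor ascends it
```

The classifier parameters thus receive ordinary gradients, while everything below the GRL is pushed to make the domains indistinguishable.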
Then the total loss function is defined as: \[\mathcal{L}=\mathcal{L}_{\text{Dice}}+\lambda\mathcal{L}_{\text{CE}}, \tag{7}\] where \(\lambda\) decides the weight of \(\mathcal{L}_{\text{CE}}\) and is empirically set as \(1\). Minimizing \(\mathcal{L}_{\text{Dice}}\) triggers domain-agnostic segmentation results, which is attributed to FAN and the domain classifier.

## 3 Experiments

### Dataset and Implementation Details

ATLAS [18] is a unique and high-quality open-source dataset of 229 patients' T1-weighted MR images (version: v1.2). For each subject, \(189\) T1-weighted images were acquired and normalized into the standard space (MNI-152 template). The original size of the images is \(233\times 197\), and all the images were cropped into \(224\times 192\). The size of the lesions ranges from \(10\) to \(2.838\times 10^{5}\) mm\({}^{3}\), and the ground-truth segmentation masks were manually segmented by specialists. Moreover, this dataset was collected from nine sites, and more specific details are listed in the supplementary materials. Population variation leads to different vascular territories; meanwhile, the MR images were acquired by various types of 3T MR scanners and imaging protocols. These factors result in substantial challenges in cross-domain learning. The FAN-Net is implemented with PyTorch and trained on an NVIDIA V100 Tensor Core GPU. In our experiments, we use the SGD optimizer, a mini-batch size of 8, and a total of 50 epochs. The learning rate is set to 0.001 initially, with a 0.04 weight decay after each epoch. The parameter \(\alpha\) of the binary mask \(M_{\alpha}\) is set to 0.1. More details are provided in the supplementary material ([https://github.com/ymLeiFDU/Supplementary-for-FAN-Net/blob/main/Supplementary-for-FAN-Net.pdf](https://github.com/ymLeiFDU/Supplementary-for-FAN-Net/blob/main/Supplementary-for-FAN-Net.pdf)).
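The two terms entering Eq. (7) can be sketched in numpy; this is a schematic soft-Dice and cross-entropy on hypothetical arrays, not the exact training code.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), with eps for empty masks."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def domain_ce_loss(probs, k):
    """Cross-entropy for a one-hot domain label k (Sec. 2.2)."""
    return -np.log(probs[k])

def total_loss(pred, target, probs, k, lam=1.0):
    """Eq. (7): L = L_Dice + lambda * L_CE, with lambda set to 1 in the paper."""
    return dice_loss(pred, target) + lam * domain_ce_loss(probs, k)
```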
Figure 2: Illustration of the proposed FAN-Net for brain stroke lesion segmentation. (**a**) Fourier-based adaptive normalization (FAN) standardizes the MR images into a domain-unrelated style. (**b**) Domain classifier with gradient reversal layer (\(-\mathbf{\Delta}\)).

### Performance Comparisons

All the experiments were conducted under the "leave-one-site-out" setting, which regards one site as the test/target set and the other sites as the training/source set. We compare FAN-Net with baseline methods including U-Net [20], ResUNet [21], PSPNet [22], DeepLabv3+ [23], X-Net [3], U-Net3+ [24], nnU-Net [25], and Unlearning [26]. Note that all the images were preprocessed by z-score normalization. Table 1 reports the quantitative results including the mean and standard deviation calculated across 9 independent experiments on 9 split test sets. Obviously, our method has relatively better performance than the other methods. Moreover, the FAN-Net can improve the performance without large memory usage or computational costs _w.r.t._ memory, FLOPs, number of parameters, and MACC. Specifically, FAN-Net outperforms Unlearning [26], which is only based on the image space. For the qualitative comparisons, some segmentation results are shown in Fig. 3. The third row shows that our method predicts the presence of two small lesions similar to the ground truth, but the other methods can only predict one or none of them. Even in the first row, UNet3+ incorrectly predicts one more lesion.

### Ablation Studies

**Effects of \(\alpha\) values.** Here, we investigate the effects of the adjusted range of the amplitude component \(\alpha\) in Eq. (5). We selected site 5 as the testing set. According to Table 2, the suitable setting for the binary mask \(M_{\alpha}\) is \(\alpha=0.1\). If the \(\alpha\) value is smaller, the style information to be adjusted is not enough for FAN.
If the value of \(\alpha\) is higher, FAN would also adjust the high-frequency amplitude components, changing texture information such as tissue boundaries. **Effects of FAN and DC.** In this experiment, we take U-Net with z-score-normalized MR images as the baseline, and again select site 5 as the testing set. Table 3 shows that each component consistently improves all metrics. It also demonstrates that the model benefits from the domain-agnostic knowledge obtained by DC. Moreover, in Fig. 4 we show the training curves of the Dice loss and the domain accuracy to understand the behavior of DC. Both decrease over training, which demonstrates that less domain-specific knowledge is preserved, so the model becomes less sensitive to domain diversity and performs more stably. In particular, the domain accuracy finally approaches 0.125 (approximately 1/8, the chance level for the eight source sites). On the other hand, this also validates that the adaptive affine parameters learned by FAN indeed capture accurate domain knowledge. ## 4 Conclusion In this paper, we proposed FAN-Net, which can dynamically change the cross-domain style information of MR images without affecting high-level semantic structures. Experimental results on the ATLAS dataset validated the effectiveness of FAN-Net, suggesting its potential generalizability to unseen domains in clinical practice.
\begin{table} \begin{tabular}{r c c c c c c c} \hline \hline **Method** & **Dice** & **Recall** & **F1-score** & **\#Par.** [M] & **Mem.** [MB] & **MACC** [G] & **FLOPs** [G] \\ \hline U-Net [20] & 0.4712 \(\pm\) 0.1952 & 0.4315 \(\pm\) 0.1931 & 0.4864 \(\pm\) 0.2161 & 28.94 & 260.20 & 63.21 & 31.63 \\ ResUNet [21] & 0.4780 \(\pm\) 0.1952 & 0.4693 \(\pm\) 0.1931 & 0.5322 \(\pm\) 0.1846 & 28.94 & 260.20 & 63.20 & 31.63 \\ PSPNet [22] & 0.4318 \(\pm\) 0.2054 & 0.3813 \(\pm\) 0.1792 & 0.3921 \(\pm\) 0.1948 & 38.28 & 261.91 & 65.07 & 32.56 \\ DeepLabv3+ [23] & 0.4639 \(\pm\) 0.2077 & 0.4594 \(\pm\) 0.2181 & 0.4714 \(\pm\) 0.1840 & 59.33 & 171.63 & 28.98 & 14.50 \\ X-Net [3] & 0.5083 \(\pm\) 0.1926 & 0.4954 \(\pm\) 0.1844 & 0.5179 \(\pm\) 0.1896 & **15.05** & 915.67 & 40.49 & 20.33 \\ U-Net3+ [24] & 0.5210 \(\pm\) 0.2077 & 0.4851 \(\pm\) 0.1849 & 0.4972 \(\pm\) 0.1930 & 26.97 & 961.57 & 259.57 & 129.87 \\ nnU-Net [25] & 0.5047 \(\pm\) 0.2002 & 0.4916 \(\pm\) 0.1990 & 0.5268 \(\pm\) 0.2026 & 18.67 & **155.01** & **21.22** & **10.18** \\ Unlearning [26] & 0.5415 \(\pm\) 0.1881 & 0.5632 \(\pm\) 0.1721 & 0.5365 \(\pm\) 0.1881 & 27.90 & 205.73 & 50.50 & 23.86 \\ FAN-Net (ours) & **0.5591 \(\pm\) 0.1801** & **0.5762 \(\pm\) 0.1624** & **0.5455 \(\pm\) 0.1624** & 28.94 & 261.59 & 65.77 & 33.09 \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative comparisons under the leave-one-site-out setting. #Par.: the number of model parameters; Mem.: total GPU memory of the model; MACC: multiply-accumulate operations; and FLOPs: floating-point operations. \begin{table} \begin{tabular}{r c c c} \hline \hline \(\alpha\) & **Dice** & **Recall** & **F1-score** \\ \hline 0.05 & 0.4856 & 0.4948 & 0.5303 \\ 0.10 & **0.5098** & **0.5117** & **0.5484** \\ 0.15 & 0.4917 & 0.4832 & 0.5406 \\ 0.20 & 0.4601 & 0.4586 & 0.4829 \\ \hline \hline \end{tabular} \end{table} Table 2: Performances of using different values of \(\alpha\).
Figure 4: Curves of the Dice loss (Left) and domain accuracy (Right) during training. Figure 3: Examples of segmentation results on the ATLAS dataset. Cyan arrows indicate discriminative regions.
2309.01960
Topological quantum synchronization of fractionalized spins
The gapped symmetric phase of the Affleck-Kennedy-Lieb-Tasaki (AKLT) model exhibits fractionalized spins at the ends of an open chain. We show that breaking SU(2) symmetry and applying a global spin-lowering dissipator achieves synchronization of these fractionalized spins. Additional local dissipators ensure convergence to the ground state manifold. In order to understand which aspects of this synchronization are robust within the entire Haldane-gap phase, we reduce the biquadratic term which eliminates the need for an external field but destabilizes synchronization. Within the ground state subspace, stability is regained using only the global lowering dissipator. These results demonstrate that fractionalized degrees of freedom can be synchronized in extended systems with a significant degree of robustness arising from topological protection. A direct consequence is that permutation symmetries are not required for the dynamics to be synchronized, representing a clear advantage of topological synchronization compared to synchronization induced by permutation symmetries.
Christopher W. WΓ€chtler, Joel E. Moore
2023-09-05T05:30:15Z
http://arxiv.org/abs/2309.01960v3
# Topological synchronization of fractionalized spins ###### Abstract The gapped symmetric phase of the Affleck-Kennedy-Lieb-Tasaki (AKLT) model exhibits fractionalized spins at the ends of an open chain. We show that breaking SU(2) symmetry and applying a global spin-lowering dissipator achieves synchronization of these fractionalized spins. Additional local dissipators ensure convergence to the ground state manifold. In order to understand which aspects of this synchronization are robust within the entire Haldane-gap phase, we reduce the bi-quadratic term which eliminates the need for an external field but destabilizes synchronization. Within the ground state subspace, stability is regained using only the global lowering dissipator. These results demonstrate that fractionalized degrees of freedom can be synchronized in extended systems with a significant degree of robustness arising from topological protection. _Introduction.--_From neuroscience to chemical reactions, synchronization emerges in an impressively vast variety of seemingly unrelated systems [1; 2; 3; 4; 5] and despite its long history continues to be crucial for the development of modern technology [6; 7; 8; 9; 10; 11; 12; 13]. In the past decade, the concept of synchronization has been generalized to the quantum regime with studies ranging from classically inspired systems like nonlinear oscillators [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31] to systems without any classical counterpart like spins [32; 33; 34; 35; 36]. Mutual synchronization and forced synchronization have been examined with surprising effects that are absent in the classical regime, such as for example the phenomenon of synchronization blockade of two identical systems [37; 38], which has recently also been verified experimentally [39]. Promising applications of quantum synchronization range from quantum information [40; 41; 42; 43] to quantum thermodynamics [44; 45; 46]. 
One approach to synchronization - followed in particular in the study of quantum many-body systems - is in terms of persistently oscillating eigenmodes of time-independent quantum master equations [47; 35; 48]. The existence of such eigenmodes is intimately related to dynamical symmetries [47; 48; 49], which together with permutation symmetries allow for synchronized dynamics of local observables. An illustrative example investigated in Ref. [47] is a 3-site Hamiltonian which non-trivially couples three spin-1/2 particles, is reflection symmetric about the central site, and conserves total magnetization. Dissipation acting locally on the central spin forces it to be in the spin-down state. As a consequence, there are two steady states of the master equation, where the two remaining spins are both spin-down or form a singlet. These two pure states form a decoherence-free subspace [50] such that coherent oscillations between these two states are possible even in this dissipative setup. Starting in an initial state that has non-vanishing overlap with both the singlet and the both-down state results (after a short transient time) in perfectly anti-synchronized oscillations of the local transverse spin of the non-central sites 2 and 3, i.e., \(\langle\sigma_{2}^{\rm x}(t)\rangle=-\langle\sigma_{3}^{\rm x}(t)\rangle\). They are anti-synchronized because the singlet state is anti-symmetric upon reflection while the spin-down state is symmetric. In the corresponding Bloch sphere representation, the central spin rapidly decays to the south pole, while the other two spins reach the same limit cycle within the Bloch sphere (parallel to the x-y-plane) which they orbit perfectly out of phase. In this Letter we investigate whether a similar strategy can be exploited to synchronize the fractionalized spin-1/2 degrees of freedom localized at the open ends of a spin-1 AKLT chain [51; 52; 53].
By applying dissipation that acts globally on all sites, we show that lifting the ground state degeneracy through a small external magnetic field leads to stable synchronization of the fractionalized spins. In that case, local spin-1 observables at the ends are perfectly anti-synchronized with amplitudes that reflect the topological edge states, i.e. they are exponentially localized at the boundaries. In addition we show that quasi-local dissipators acting on two neighboring sites that dissipate the energetically lowest state of the total spin \(S=2\) subspace are sufficient to depopulate the whole excited subspace and remove unwanted additional oscillations. The observed synchronization is of topological nature as the underlying mechanism relies on the fractionalization of the spin degrees of freedom and is thus topologically protected. Lastly, we show that even if the fractionalized spins are allowed to interact by decreasing the biquadratic term of the AKLT Hamiltonian, stable synchronization within the ground state manifold is still possible if one only considers a global spin lowering dissipator and no magnetic field. _Synchronization model.--_We consider the open spin-1 AKLT chain of size \(N\) with an additional external magnetic field \(B\) yielding the Hamiltonian \[H=\sum_{j=1}^{N-1}\left[\frac{1}{2}\vec{S}_{j}\cdot\vec{S}_{j+1}+\frac{1}{6}\left(\vec{S}_{j}\cdot\vec{S}_{j+1}\right)^{2}+\frac{1}{3}\right]+\frac{B}{N}S^{x}, \tag{1}\] where \(S^{x}=\sum_{j=1}^{N}S_{j}^{x}\) is the total magnetization. For sufficiently small values of \(B\), the Hamiltonian remains gapped even if the chain size is increased, yet breaks SU(2) symmetry (which will become important for synchronization as we explain later). For \(B=0\) the ground state is fourfold degenerate as a consequence of effective spin-1/2 degrees of freedom that are localized at both ends of the chain.
The ground states of (1) can be constructed explicitly, e.g., in terms of Schwinger bosons [54] or matrix product states [55; 56; 57]. As the spin-1/2 degrees of freedom at the ends are exactly decoupled, there are three ground states with total spin \(S=1\), where the two dangling spin-1/2's form a triplet state with \(S_{\mathrm{z}}=1,0,-1\) and one with \(S=0\), where the dangling spin-1/2's form a singlet with \(S_{\mathrm{z}}=0\). Thus, we may label the ground states accordingly as \(|G_{1,1}\rangle\), \(|G_{1,0}\rangle\), \(|G_{1,-1}\rangle\) and \(|G_{0,0}\rangle\). Note, that while a finite value of \(B\) partially lifts the ground state degeneracy, the corresponding manifold is still spanned by \(\{|G_{S,S_{\mathrm{z}}}\rangle\}\) as the total magnetization is preserved; \([H,S^{\mathrm{z}}]=0\). Synchronization is inherently connected to open system dynamics because it requires dissipation in order to reduce all potential dynamics to only the desired, synchronized ones. To this end, we describe the system by a time dependent density operator \(\varrho(t)\) acting on the Hilbert space of the system \(\mathcal{H}\). We consider Markovian dynamics such that the evolution may be described via a Lindblad master equation [58; 59], \[\dot{\varrho}=-\mathrm{i}\left[H,\varrho\right]+\sum_{\mu}\left(2L_{\mu} \varrho L_{\mu}^{\dagger}-\left\{L_{\mu}^{\dagger}L_{\mu},\varrho\right\} \right)=\mathcal{L}\left[\varrho\right], \tag{2}\] where \(L_{\mu}\) denotes (for now unspecified) Lindblad operators. The Liouvillian superoperator \(\mathcal{L}\) is the generator of a smooth, time-homogeneous, completely positive and trace-preserving (CPTP) map (or quantum channel), which obeys the semi-group property. The system dynamics described by Eq. (2) is guaranteed to have at least one steady state \(\varrho_{\mathrm{ss}}\) such that \(\mathcal{L}\left[\varrho_{\mathrm{ss}}\right]=0\)[58]. 
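For small systems, the generator \(\mathcal{L}\) of Eq. (2) can be written as an explicit matrix acting on the column-stacked density matrix, which makes its spectrum and steady states directly accessible. A numpy sketch (the spin-1/2 toy model below is purely illustrative and is not the AKLT chain):

```python
import numpy as np

def liouvillian(H, Ls):
    # Matrix form of Eq. (2) acting on vec(rho) (column stacking),
    # using vec(A @ rho @ B) = kron(B.T, A) @ vec(rho).
    d = H.shape[0]
    Id = np.eye(d, dtype=complex)
    Liou = -1j * (np.kron(Id, H) - np.kron(H.T, Id))
    for L in Ls:
        LdL = L.conj().T @ L
        Liou += 2.0 * np.kron(L.conj(), L) - np.kron(Id, LdL) - np.kron(LdL.T, Id)
    return Liou

# toy example: a spin-1/2 with H = (1/2) sigma_z and dissipator sqrt(gamma) sigma^-
sigma_minus = np.array([[0, 0], [1, 0]], dtype=complex)
H = 0.5 * np.diag([1.0, -1.0]).astype(complex)
Liou = liouvillian(H, [np.sqrt(0.2) * sigma_minus])
```

Trace preservation corresponds to \(\mathrm{vec}(\mathbb{1})^{\dagger}\mathcal{L}=0\), and the spin-down state is the unique steady state of this toy model.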
A sufficient and necessary condition for the existence of an eigenstate \(\varrho=A\varrho_{\mathrm{ss}}\) of \(\mathcal{L}\) with purely imaginary eigenvalue \(\lambda=-\mathrm{i}\omega\), i.e., \(\mathcal{L}[\varrho]=-\mathrm{i}\omega\varrho\) with \(\omega\in\mathbb{R}\), is given by [47] \[\left[L_{\mu},A\right]\varrho_{\mathrm{ss}} =0, \tag{3}\] \[\left(-\mathrm{i}\left[H,A\right]-\sum_{\mu}\left[L_{\mu}^{ \dagger},A\right]L_{\mu}\right)\varrho_{\mathrm{ss}} =-\mathrm{i}\omega A\varrho_{\mathrm{ss}}. \tag{4}\] While Eqs. (3) and (4) guarantee the existence of persistent oscillations in the long time limit, another condition is required for (anti-)synchronization [47]. Let \(P_{jk}\) be an operator that exchanges subsystem \(j\) with \(k\) and let \(\mathcal{P}_{jk}[x]=P_{jk}xP_{jk}\). Then, if \(P_{jk}\) is a weak symmetry of the Liouvillian, i.e \(\left[\mathcal{L},\mathcal{P}_{jk}\right]=0\), and (anti-)commutes with the operator A, \(P_{jk}AP_{jk}=\pm A\), then we find _stable_ synchronization (\(+\)) or anti-synchronization (\(-\)) of the two local operators \(O_{j}\) and \(O_{k}\) if \(\mathrm{Tr}\left[O_{j}A\varrho_{\mathrm{ss}}\right]\neq 0\) and conditions (3) and (4) are fulfilled, that is after some transient time \(\tau\) \[\left\langle O_{j}(t)\right\rangle=\pm\left\langle O_{k}(t)\right\rangle\ \forall t\geq\tau \tag{5}\] up to exponentially small corrections. In the example referred to in the introduction the local transverse spin of the non-central sites 2 and 3 will be perfectly anti-synchronized, i.e., \(\left\langle\sigma_{2}^{\mathrm{x}}(t)\right\rangle=-\left\langle\sigma_{3}^ {\mathrm{x}}(t)\right\rangle\propto\cos(\omega t)\), where the oscillation frequency \(\omega\) depends on the specific choice of Hamiltonian [47]. 
In the following we first focus on the ground state manifold and show how a single, globally acting dissipator \(L_{\mathrm{G}}\) leads to the fulfilment of conditions (3)-(5) within the ground state manifold and thus to stable synchronization. In a second step we will then show that additional, locally acting dissipators force the dynamics into the ground state manifold. In order to find adequate dissipators such that Eqs. (3) and (4) are fulfilled, we utilize the fractionalized spins of the AKLT ground states: Since the triplet and singlet states have different respective total spin \(S=1\) and \(S=0\), a global lowering operator \(S^{-}=\sum_{j=1}^{N}S_{j}^{-}\) leaves the singlet state \(|G_{0,0}\rangle\) invariant while lowering the magnetization \(S_{z}\) of the triplet states. Repeated application of \(S^{-}\) will then force the population into the state with the lowest weight, i.e., \(|G_{1,-1}\rangle\), which is also invariant upon acting with \(S^{-}\). Hence, a globally acting Lindblad dissipator \(L_{\mathrm{G}}=\sqrt{\gamma}S^{-}\), where \(\gamma\) denotes a dissipation rate, establishes two steady states of the master Eq. (2) given by the pure states \(\varrho_{0}=|G_{0,0}\rangle\left\langle G_{0,0}\right|\) and \(\varrho_{1}=|G_{1,-1}\rangle\left\langle G_{1,-1}\right|\). Together with the operator \(A=|G_{1,-1}\rangle\left\langle G_{0,0}|\) conditions (3) and (4) are fulfilled; in particular it holds that \[\mathcal{L}\left[\varrho_{10}=A\varrho_{0}\right]=\mathrm{i}\frac{B}{N}\varrho _{10},\ \mathcal{L}\left[\varrho_{01}=\varrho_{0}A^{\dagger}\right]=-\mathrm{i}\frac{B} {N}\varrho_{01}. \tag{6}\] Note, that \(\varrho_{1}=A\varrho_{0}A^{\dagger}\). We now also recognize that lifting the ground state degeneracy is necessary to observe synchronization, i.e., without the external magnetic field in Eq. (1) the oscillation frequency would be zero. 
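The structure of Eq. (6) — a dissipation-free coherence between two dark states that oscillates at their energy splitting — can be reproduced in a three-level toy model, where a state \(|d\rangle\) untouched by the dissipator plays the role of \(|G_{0,0}\rangle\). This is only a schematic numerical check of the mechanism, not the AKLT system itself:

```python
import numpy as np

def liou_matrix(H, L):
    # Column-stacked Liouvillian of Eq. (2) for a single Lindblad operator L.
    d = H.shape[0]
    Id = np.eye(d, dtype=complex)
    LdL = L.conj().T @ L
    return (-1j * (np.kron(Id, H) - np.kron(H.T, Id))
            + 2.0 * np.kron(L.conj(), L)
            - np.kron(Id, LdL) - np.kron(LdL.T, Id))

# levels |g>, |e>, |d>: the dissipator |g><e| empties |e> but leaves |g>, |d> alone
delta = 0.3                                     # splitting between |d> and |g>
H = np.diag([0.0, 1.0, delta]).astype(complex)
L = np.sqrt(0.4) * np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], dtype=complex)

# the coherence A = |d><g| is an eigenstate of the Liouvillian with eigenvalue -i*delta
A = np.zeros((3, 3), dtype=complex)
A[2, 0] = 1.0
lhs = liou_matrix(H, L) @ A.reshape(-1, order="F")
```

Exactly as in Eq. (6), the coherence oscillates forever at the energy splitting because the dissipator annihilates both states it connects.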
_Depopulating the excited states._--So far we have only discussed how synchronization may arise within the ground state manifold with the help of a dissipative channel in terms of \(L_{\mathrm{G}}\). However, similar to synchronization of classical systems, it is desirable to observe synchronized dynamics not only for particular initial states (here within the ground state manifold) but for (almost) all initial states. Thus, the goal is to depopulate the excited states of the system as well during the dissipative evolution. In order to construct the simplest possible operators, we exploit that the Hamiltonian (1) preserves total angular momentum. In particular, for \(B=0\), each term in \(H\) can be written as \(P_{j,j+1}^{(2)}\), where \(P_{j,j+1}^{(2)}\) denotes the projector of two spin-1's on sites \(j\) and \(j+1\) onto total spin-2. Hence, the ground states are reached by driving two adjacent spin-1 particles out of the \(S=2\) subspace. This may be achieved by constructing an operator that connects all states belonging to the local \(S=2\) subspace, which are five-fold degenerate, with the fourfold degenerate local ground states (with total angular momentum \(S=0,1\)) [60], or via additional global coherent manipulation [61]. However, the previously introduced dissipative channel (\(L_{\mathrm{G}}=\sqrt{\gamma}S^{-}\)) forces all population within the \(S=2\) subspace to eventually reach the \(S_{z}=-2\) state. Thus, we only need to depopulate these states to dissipatively reach the ground state manifold. An exemplary choice is given by the Lindblad dissipators \(L_{j,j+1}=\sqrt{\kappa}\ \ket{00}\bra{{-}{-}}_{j,j+1}\) written in the \(S_{x}\) basis \(\{\ket{+},\ket{0},\ket{-}\}\).
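The projector structure invoked above is easy to verify numerically: for two spin-1's, the two-site term of Eq. (1) at \(B=0\) squares to itself and has trace 5, i.e. it is the projector \(P^{(2)}_{j,j+1}\) onto the five-dimensional total-spin-2 subspace. A quick numpy check (a sketch, not the authors' code):

```python
import numpy as np

s = 1.0 / np.sqrt(2.0)
Sx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

# Heisenberg exchange S_j . S_{j+1} on two neighboring spin-1 sites
SS = sum(np.kron(S, S) for S in (Sx, Sy, Sz))

# two-site AKLT term of Eq. (1) at B = 0: (1/2) S.S + (1/6) (S.S)^2 + 1/3
h = 0.5 * SS + (SS @ SS) / 6.0 + np.eye(9) / 3.0
```

The assertions below confirm the projector property and the dimension of its image.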
_Synchronized dynamics.--_ Combining all Lindblad operators, the dissipative evolution of the density matrix which eventually leads to the synchronization of the fractionalized spins is given by \[\dot{\varrho}=-\mathrm{i}\left[H,\varrho\right]+\mathcal{D}\left[L_{\mathrm{G}}\right]\varrho+\sum_{j=1}^{N-1}\mathcal{D}\left[L_{j,j+1}\right]\varrho=\mathcal{L}\left[\varrho\right], \tag{7}\] where \(\mathcal{D}\left[L\right]\varrho=2L\varrho L^{\dagger}-\{L^{\dagger}L,\varrho\}\). Its solution, given that the system is initialized in the state \(\varrho(0)\), may be expressed using the spectral decomposition of the Liouvillian superoperator as \[\varrho(t)=\sum_{k}C_{k}\exp\left(\lambda_{k}t\right)\varrho_{k}, \tag{8}\] where \(\varrho_{k}\) is the right eigenstate of \(\mathcal{L}\) with corresponding eigenvalue \(\lambda_{k}\), i.e., \(\mathcal{L}\left[\varrho_{k}\right]=\lambda_{k}\varrho_{k}\). As \(\mathcal{L}\) is non-Hermitian, the left eigenstates defined by \(\mathcal{L}^{\dagger}\left[\sigma_{k}\right]=\lambda_{k}^{*}\sigma_{k}\) may differ from the right ones. However, it holds that \(\mathrm{Tr}(\sigma_{k}^{\dagger}\varrho_{k^{\prime}})=\delta_{kk^{\prime}}\). The constant \(C_{k}\) in Eq. (8) denotes the overlap of the eigenstates with the initial state \(\varrho(0)\), i.e., \(C_{k}=\mathrm{Tr}\!\left[\sigma_{k}^{\dagger}\varrho(0)\right]\). Note that because \(\mathcal{L}\) generates a CPTP map, the eigenvalues \(\lambda_{k}\) can lie only in the left half of the complex plane with \(\mathrm{Re}\!\left[\lambda_{k}\right]\!\leq\!0\), and they always come in pairs, i.e., if \(\lambda_{k}\) is an eigenvalue, so is \(\lambda_{k}^{*}\). All eigenstates of \(\mathcal{L}\) with negative real part of the corresponding eigenvalues will experience selective decay, and only the ones which lie on the imaginary axis contribute to the dynamics in the long time limit. As discussed previously, the dynamics given by Eq.
(7) will eventually terminate in the decoherence-free subspace [50] spanned by \(\{\varrho_{0},\varrho_{1}\}\). Thus, the expectation value of some observable \(O\) is given by \[\begin{split}\lim_{t\to\infty}\langle O\rangle(t)=& C_{0}\mathrm{Tr}\left(O\varrho_{0}\right)+C_{1}\mathrm{Tr}\left(O\varrho_{1}\right)\\ &+\left[\mathrm{e}^{\mathrm{i}Bt/N}C_{01}\left\langle G_{0,0}|O|G_{1,-1}\right\rangle+\mathrm{c.c.}\right].\end{split} \tag{9}\] Because the subspace is decoherence-free, \(C_{i}=\mathrm{Tr}[\sigma_{i}^{\dagger}\varrho(0)]=\mathrm{Tr}[\varrho_{i}^{\dagger}\varrho(0)]\). In order to observe stable synchronization, not only does the initial state need to have non-vanishing overlap with the eigenstate \(\varrho_{01}\), but the observable must also be non-zero in that state, i.e., \(\mathrm{Tr}(OA\varrho_{0})=\left\langle G_{0,0}|O|G_{1,-1}\right\rangle\neq 0\). A suitable choice of local operators that may be used as witnesses of the fractionalized spin synchronization is given by the transverse spin \(S_{j}^{x}\), for which the first two terms in Eq. (9) are identically zero, and only \(C_{01}=\left\langle G_{0,0}|\varrho(0)|G_{1,-1}\right\rangle\) and \(\left\langle G_{1,-1}|S_{j}^{x}|G_{0,0}\right\rangle\) contribute to the long-time dynamics. As the dynamical symmetry operator \(A=\left|G_{1,-1}\right\rangle\left\langle G_{0,0}\right|\) is anti-symmetric upon inversion of the chain, an operator acting locally on site \(j\) will be anti-synchronized with the corresponding site at the other end of the chain located at \((N+1)-j\). Figure 1(a) and (b) show the time evolution of the transverse spin \(\langle S_{j}^{x}\rangle\) for a chain of length \(N=6\) with random initial condition (bold lines correspond to the left half of the chain \(j=1,2,3\), light colors to the right half \(j=4,5,6\)). The oscillations are perfectly anti-synchronized upon inversion of the chain. As a consequence of the fractionalized spin, the amplitudes decay exponentially into the bulk. As seen in Fig.
1(b) for short times there is no synchronization. However, the transient time is short compared to the oscillation period set by \(\omega=B/N\). For random initial conditions the oscillation amplitudes even at the boundaries are small. The reason is that the overwhelming majority of states has no overlap with the ground state coherences such that \(C_{01}=\langle G_{0,0}|\varrho(0)|G_{1,-1}\rangle\ll 1\). However, one may maximize this overlap by choosing \(\varrho(0)=\left|\psi\right\rangle\left\langle\psi\right|\) with \(\left|\psi\right\rangle=(|G_{0,0}\rangle+|G_{1,-1}\rangle)/\sqrt{2}\) as the initial state. Fig. 1(c) shows the corresponding dynamics. As this state is decoherence free, the amplitudes are unaffected by the dissipation and anti-synchronization is stable.

Figure 1: Evolution of the local transverse spin \(\langle S_{j}^{x}\rangle\) of the synchronized AKLT model for an open chain of length \(N=6\) (sites \(j=1,2,3\) in bold colors, sites \(j=4,5,6\) in light colors). (a) Starting from random initial conditions, the two halves of the chain are perfectly anti-synchronized with each other after a transient time because the dynamical symmetry operator \(A=\left|G_{1,-1}\right\rangle\left\langle G_{0,0}\right|\) is anti-symmetric upon inversion of the chain. The (anti-)synchronized amplitudes after the transient time decay exponentially into the bulk. (b) Same plot as in (a) but focusing on the early time dynamics: The random initial conditions result in transient random spin dynamics. (c) The balanced superposition of \(\left|G_{0,0}\right\rangle\) and \(\left|G_{1,-1}\right\rangle\) as initial state is immune to dissipation and maximizes the observed (anti-)synchronization amplitudes. The oscillation frequency is \(\omega=B/N\). Parameters: \(B=0.2\), \(\gamma=\kappa=0.2\).

_Haldane chain.--_The AKLT Hamiltonian (1) exhibits spin-\(1/2\) degrees of freedom that are perfectly localized at the boundaries and do not interact.
In the following we investigate the impact of interactions by decreasing the value of the biquadratic term in Eq. (1), i.e., we consider the Hamiltonian \[H_{\varepsilon}=H-\varepsilon\sum_{j=1}^{N-1}\left(\vec{S}_{j}\cdot\vec{S}_{j+1}\right)^{2}. \tag{10}\] For finite values of \(\varepsilon\), the states within the \(S=1\) subspace are not simply connected through the spin lowering operator \(S^{-}\), such that \(L_{\mathrm{G}}\) induces additional dissipation. Fig. 2(a) shows the complex eigenvalues of \(\mathcal{L}\) close to the imaginary axis for different values of \(\varepsilon\) in the range \([0,1/6]\), where \(\varepsilon=1/6\) removes the biquadratic term completely and corresponds to the spin-\(1\) Heisenberg chain (with additional magnetic field). Upon increasing \(\varepsilon\) the initially purely imaginary eigenvalues move away from both the real and the imaginary axis, i.e. the oscillation frequency increases, yet the synchronization is damped. However, the real part remains small, and in particular the eigenvalues with the second smallest real part also move away from the imaginary axis. Thus, there exists a time range for which all eigenstates but the synchronized ones are damped. Such damped synchronized dynamics has also been termed metastable synchronization [47]. Stable synchronization may however be restored under certain conditions even for the Heisenberg chain (\(\varepsilon=1/6\)). To this end we consider the case of \(B=0\), such that for \(\varepsilon=0\) the ground state is fourfold degenerate. Perturbations to the biquadratic term of the AKLT Hamiltonian partially lift the ground state degeneracy such that the \(S=0\) state is energetically distinct from the states within the \(S=1\) subspace. In the following, we will refer to both the fourfold degenerate ground state for \(\varepsilon=0\) as well as the threefold degenerate ground state together with the first excited state for \(\varepsilon\neq 0\) as the ground state subspace.
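The spectral picture behind Fig. 2(a) — eigenvalues confined to the closed left half-plane, appearing in complex-conjugate pairs, with the least-negative real parts setting the damping of metastable synchronization — can be checked on any small Liouvillian. A numpy sketch on a decaying spin-1/2 stand-in (illustrative assumptions only, not the AKLT chain):

```python
import numpy as np

gamma, omega = 0.2, 1.0
sm = np.array([[0, 0], [1, 0]], dtype=complex)        # sigma^-
H = 0.5 * omega * np.diag([1.0, -1.0]).astype(complex)
Id = np.eye(2, dtype=complex)
LdL = gamma * (sm.conj().T @ sm)

# column-stacked Liouvillian of Eq. (2) with L = sqrt(gamma) sigma^-
Liou = (-1j * (np.kron(Id, H) - np.kron(H.T, Id))
        + 2.0 * gamma * np.kron(sm.conj(), sm)
        - np.kron(Id, LdL) - np.kron(LdL.T, Id))

lam = np.linalg.eigvals(Liou)
lam = lam[np.argsort(-lam.real)]                      # slowest-decaying modes first
```

Here the spectrum is \(\{0,\,-\gamma\pm\mathrm{i}\omega,\,-2\gamma\}\): one steady state, a conjugate pair of damped oscillating coherences, and a purely decaying population mode — the same qualitative layout as in Fig. 2(a), where decreasing the biquadratic term pushes the would-be synchronized pair off the imaginary axis.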
As \(H_{\varepsilon}\) and the dissipator \(L_{\mathrm{G}}\) preserve the total angular momentum, the dynamics is confined to the respective total angular momentum subspaces for \(\kappa=0\). Then, there exists again a dynamical symmetry operator connecting the threefold degenerate subspace (\(S=1\)) with the \(S=0\) state. Similar to the previous discussion, this results in perfect anti-synchronization if the initial state is chosen to be within the ground state subspace. Figs. 2(b) and (c) show the time evolution of the transverse spin \(\langle S^{x}_{j}\rangle(t)\) for a chain of length \(N=6\) for \(\varepsilon=0.1\) and \(\varepsilon=1/6\), respectively. The dynamics show perfect anti-synchronization for arbitrary initial conditions within the ground state subspace. The oscillation frequency in Fig. 2(c) is larger compared to (b) as the energy gap between the \(S=1\) and \(S=0\) subspaces opens. Two remarks are in order: first, perfect anti-synchronization in the Haldane chain (\(\varepsilon\neq 0\)) is possible without an additional magnetic field (\(B=0\)). In fact, a finite external magnetic field destroys the synchronization away from the AKLT limit because it introduces additional oscillation frequencies by lifting the threefold degeneracy of the \(S=1\) subspace. Second, the synchronization observed in Figs. 2(b) and (c) is distinct from regular coherent dynamics: while without dissipation coherent (anti-phase) oscillations are still present, there also exists an additional constant shift depending on the initial conditions: different locally acting observables may not exhibit the same shift, so in the strict definition of Eq. (5) they are not synchronized. Open system dynamics are thus necessary for perfect (anti-)synchronization even within the ground states and the first excited state of the Haldane chain.

Figure 2: (a) Eigenvalues of the Liouvillian superoperator \(\mathcal{L}\) close to the imaginary axis for an open chain of length \(N=6\) and different values of \(\varepsilon\), considering both global and local dissipators. The purely imaginary eigenvalues for \(\varepsilon=0\) move away from the imaginary axis as the biquadratic term is decreased. Simultaneously the oscillation frequency increases, resulting in fast but damped (anti-)synchronization. Parameters: \(B=0.2\), \(\kappa=\gamma=0.2\). (b, c) Stable synchronization may be recovered for finite values of \(\varepsilon\) within the ground state subspace for \(B=0\) if one considers only the global dissipator \(L_{\mathrm{G}}\) (i.e. \(\kappa=0\)). Sites \(j=1,2,3\) are in bold colors, sites \(j=4,5,6\) in light colors. The oscillation frequency in panel (c) with \(\varepsilon=1/6\) is larger compared to panel (b) with \(\varepsilon=0.1\) because of the increased gap above the threefold degenerate ground state.

_Conclusions.--_We have shown that it is possible to synchronize the _fractionalized_ spin degrees of freedom in the spin-1 AKLT chain via engineered dissipation and an external magnetic field. The observed synchronization is stable and topologically protected. While perturbations to the biquadratic term result in an additional dissipation channel, stable synchronization may be restored between the threefold degenerate ground state and the first excited state of the chain without a magnetic field via a single global spin lowering operator. Our results illuminate the possibility of utilizing dissipation to control the dynamics of fractionalized degrees of freedom, and not only to prepare them. The simple form of the dissipation scheme we propose opens the opportunity to observe topological synchronization experimentally, for example in superconducting qutrit arrays [62]. _Acknowledgements.--_The authors acknowledge useful discussions with Samuel J. Garratt. C. W. W. was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Project No.
496502542 (WA 5170/1-1), and J.E.M. by the Quantum Materials program under the Director, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division, of the U.S. Department of Energy, Contract No. DE-AC02-05CH11231. Both authors received additional support from a Simons Investigator award.
2303.05585
Chemical kinetic theory of aging
A theory of aging based on the principles of the kinetics of chemical reactions and the rules of natural selection of organisms is proposed. The theory is based on the hypothesis that the biochemical processes in the organism can be described in terms of chemical reaction kinetics. The evolutionary process of organisms is determined by the goal of continuing life, and natural selection forced organisms to develop in a way optimized for survival and reproduction, after which any further development of the organisms did not matter for natural selection and therefore was not regulated by it. Accordingly, the biochemical processes ongoing in the organism after reproduction were not stabilized and proceeded in an uncontrolled manner, which, in the case of complex systems of biochemical processes, led to imbalances and biochemical processes that did not contribute to health, resulting in aging. Based on this view, it is necessary to look for the key biochemical processes that regulate the vital activity of the organism. Balancing these processes in the period after reproduction could artificially stabilize the kinetics of biochemical reactions and, consequently, extend life almost indefinitely.
Alexey Kondyurin
2023-03-09T21:24:18Z
http://arxiv.org/abs/2303.05585v2
## Abstract A theory of aging based on the principles of the kinetics of chemical reactions and the rules of natural selection of organisms is proposed. The theory is based on the hypothesis that the biochemical processes in the organism can be described in terms of chemical reaction kinetics. The evolutionary process of organisms is determined by the goal of continuing life, and natural selection forced organisms to develop in a way optimized for survival and reproduction, after which any further development of the organisms did not matter for natural selection and therefore was not regulated by it. Accordingly, the biochemical processes ongoing in the organism after reproduction were not stabilized and proceeded in an uncontrolled manner, which, in the case of complex systems of biochemical processes, led to imbalances and biochemical processes that did not contribute to health, resulting in aging. Based on this view, it is necessary to look for the key biochemical processes that regulate the vital activity of the organism. Balancing these processes in the period after reproduction could artificially stabilize the kinetics of biochemical reactions and, consequently, extend life almost indefinitely. **Keywords**: theory of ageing, kinetics of chemical reactions ## Introduction Today, there are a number of theories of human aging [1]. All of these theories can be divided into two large groups. The first group consists of theories of programmed death, and the second group consists of theories of damage or errors. Programming theories claim that aging is programmed and follows a biological schedule. This schedule can be implemented as a sequence of turning genes on or off. Aging is then observed as an age-related deficit [2]. It has been shown in a number of experiments that there is a "biological clock" in the cell that limits the cell's ability to continuously proliferate.
Based on these experiments, the limit of human life has been estimated at no more than 100-120 years [3, 4]. But other experiments have shown that a cell can proliferate indefinitely; the experiments on limited cell proliferation and cell death rather reflect an unnatural proliferation environment. Another theory of programmed aging is based on the body's use of hormones to regulate the aging process [5]. Indeed, all organs of the body are regulated by hormones secreted by the organs of the endocrine system. Together, the neurologic and endocrine systems determine many processes of the body's metabolic activity. One function of the hormone system is the reproductive activity of the body: the concentrations of the hormones responsible for reproductive function are directly related to the period in which women and men can produce a child. Other hormones regulate biological rhythms and are responsible for activating or suppressing the immune system, and a number of hormones govern nutrient absorption. With age, the neurologic and endocrine systems change and the concentrations of hormones in the body change. However, the artificial intake of additional doses of hormones does not help to increase lifespan, although in some cases it can temporarily resolve complications of aging, in particular those of reproductive function. It also remains unclear why hormone concentrations change over a lifetime. Another view of the problem of aging is associated with the immune system: its activity decreases with time and it ceases to protect the body, which leads to aging and death [6, 7]. Indeed, the decline of immune activity lowers the body's resistance to disease and increases the risk of cancer. This is seen in a number of cases, for example in HIV patients and in transplant patients whose immune system was suppressed. 
However, the mechanism by which the activity of the immune system declines in the elderly remains unclear. On the other hand, methods of increasing the activity of the immune system do not lead to an increase in lifespan. The theory of aging based on telomere shortening can be attributed to the same group: over time, with each cell division, the telomeres shorten, the cell stops proliferating, and this causes the death of the whole organism [8-10]. However, artificial methods of telomere lengthening also do not increase the lifespan of the whole organism. Theories of wear and tear are based on how everyday wear affects the human body's ability to maintain itself. For example, the rate-of-living theory is based on basal oxygen metabolism, which, if accelerated, shortens the lifespan of an organism [11]. The cells and organs of the body wear out and stop functioning, the organ tissues can no longer be renewed, and the organism dies. This theory intersects with theories of programmed death. It should be noted, however, that physical exercise can increase bodily functionality but neither shortens nor prolongs life. As a variant of this theory, it is assumed that the processes of growth and maintenance compete for bodily resources [12]. One route for the accumulation of errors in bodily function is described by the theory of protein cross-links, which cause disturbances in the biochemical processes of the body [13, 14]. Cross-linking of the collagen protein is mainly considered; it hinders the transport of nutrients and reduces tissue elasticity. The result can be seen by comparing the soft, supple skin of a child with the hard, wrinkled skin of an older person. The cross-linking reactions can be accelerated by unsaturated fats, metal ions, and radiation. A special diet that excludes the intake of such cross-linking agents can reduce the rate of these cross-linking processes. 
Using antioxidants to inhibit cross-links caused by radiation damage also helps; however, the problem of aging and death has not yet been solved by diet and vitamin intake. The theory of the accumulation of defects such as free radicals, including in cell DNA, belongs to the same class of theories [15-17]. Replication errors can occur at all stages of DNA and RNA replication, and the accumulation of errors above a certain level can lead to improper synthesis of cell proteins and to death. However, the accumulation of significant amounts of DNA and protein-synthesis errors in old cells - which could cause disruption of cell and organism functionality - has not been detected experimentally [4, 18-19]. This theory is also contradicted by data on people who received large doses of radiation during catastrophes and yet went on to long, healthy lives, despite the extremely high number of lesions in their DNA and other active molecules. A similar theory is based on the accumulation of free radicals and damage in the membranes of the mitochondria, lysosomes, and nucleus, which leads to lipid oxidation, impairing the transport of substances into cells and the functionality of the cell. The body has its own mechanisms to resist oxidation, but over time they become insufficient. Animal experiments have shown that the administration of antioxidants reduced the occurrence of heart disease and prevented the occurrence of cancer. Yet even though antioxidants slow the degeneration of the nervous and immune systems, their intake does not give a significant increase in life expectancy. In addition, it is unclear why the body's ability to fight lipid oxidation declines over time. There are also a number of theories of aging based on social and psychological factors associated with higher nervous activity. 
These theories are not considered here because they do not apply to organisms without higher nervous activity; at the same time, aging processes in such animals also exist and are similar to those in the human body. Thus, so far none of these theories fully explains the aging process or shows ways to slow it down or stop it [20]. More importantly, we still do not understand why we age, what the aging process is, how it appeared, and for what purpose. ## Hypothesis To understand the aging process of an organism, we must first consider the origin and development of life. Modern theories of the origin of life on Earth are based on chemical reactions in the synthesis of organic molecules. The synthesis of complex organic molecules from a mixture of ammonia, carbon dioxide and other molecules dissolved in water is possible, as shown in a number of experiments. However, such organic molecules can both form and decompose under the influence of environmental factors. We should assume that at some point certain special organic molecules appeared. The key characteristic of such molecules as the precursors of life is the ability to replicate themselves exactly; that is, they could gather the necessary molecules from their environment to clone themselves. RNA molecules have this ability under certain conditions. This does not mean, however, that exactly the RNA molecules we observe now appeared in the proto-soup; perhaps these were simplified precursors of today's known RNA molecules, which nevertheless had the main property of precise replication. Only the descendants of these self-replicating molecules remain, making them our ancestors. The other organic molecules formed in the proto-soup, which could not clone themselves, eventually broke up and disappeared, so little information about them is left. 
The further development of such molecules made possible the synthesis of proteins that assist the synthesis of their clones. Next, such molecules could be protected from the external environment by a lipid membrane. The result was a protocell that could replicate. Further development of the protocell led to more complex mechanisms of protection from the external environment and of obtaining the substances needed to maintain this protection. But the main property of such cells - the ability to replicate - was preserved. Later, cells began to unite into conglomerates, within which they found ways of communication and regulation. The ability of cells to divide functional responsibilities improved the preservation of these cells as part of a conglomerate in the environment. At each stage of development the cell conglomerate grew more complex, and its probability of survival - and in turn reproduction - in the environment increased. This was determined by the rules of natural selection: biochemical processes were optimized so as to survive most reliably compared with other cell conglomerates, and the cells' ability to replicate remained the key factor. Such conglomerates then formed multicellular organisms, in which different cells carry a certain functionality. The specialization of cells within a multicellular organism allows an efficient division of functionality and a greater potential for adaptation and survival in changing environments. Thus, the main parameter of the survival of a species is its ability to replicate itself; this is the main property underlying the rules of natural selection and the main factor driving further genetic improvement. And how are these processes related to aging? For a unicellular organism, aging as such does not exist. Unicellular organisms are able to divide endlessly. 
There is no aging mechanism for a single cell. Even a cell taken out of an animal or a human can, as experiments have shown, divide endlessly under suitable conditions of nutrients, gas, medium, temperature, and sterility. This is also borne out by the evolution of life on Earth, which has a history of more than 3 billion years. But at some stage of the evolutionary process of cell conglomerates, a problem arose: the difficulty of simultaneously maintaining endless replication and the protection of the conglomerate from the external environment. During this process a new path of biochemical reactions spontaneously solved the problem, providing successful protection of the conglomerate up until it could produce viable offspring. But with this new path, whether such a conglomerate could continue to exist after replication became unimportant. After replication the conglomerate degraded, and nothing could stop its degradation, since there was no mechanism of biochemical reactions to maintain it. Any processes linked to survival after replication were no longer deemed important by the rules of natural selection: the offspring of the conglomerate had already been produced and the replication repeated, and the structure of RNA (and later DNA) was preserved in nature. The former conglomerate was removed from evolution and became waste. At this stage, the infinite life cycle of the cell conglomerate was sacrificed to obtain optimal parameters for the survival of the conglomerate in the external environment and the production of offspring. From this point of view, the process of aging and death is not obligatory for multicellular organisms but arose as a result of evolution. 
So far, no one knows whether it is possible to slow down the aging process, whether aging is strictly tied to the body's ability to develop and produce offspring, and whether excluding one property would exclude the other. On the other hand, starting from the replication of the first RNA molecules, the kinetics of these transformations can be described in terms of chemical reactions. The later complication of the structure of replicating RNA molecules, and still later of cells, does not change the laws of substance transformation. Therefore, the biochemical transformations in the cell and the organism must obey the laws of chemical reaction kinetics. This means that the concentration of a certain substance B must be determined by the chemical kinetic equations, with reaction constants, for obtaining that substance from a substance A, together with the kinetic equations for the transformation of substance B into a substance C, with appropriate coefficients depending on the reaction conditions, including the presence of catalysts. Of course, one should not reduce the real kinetics of biochemical reactions in a cell or an organism to a few equations, but there is no reason to assert that the principles of chemical kinetics in a cell and an organism differ significantly from the principles of chemical kinetics in organic chemistry. This simplification can help us understand the principles of aging as a biochemical process determined by the concentrations of certain substances in the cells and the body. ## Model Let us consider a model of the kinetics of chemical reactions. Suppose there is a substance, which we will call "A", at a certain concentration in a certain volume. This substance can be converted by a chemical reaction into a substance we will call "B" at a certain rate. Suppose the rate of this reaction is determined solely by the concentration of substance A. 
Then the differential reaction equation corresponds to a first-order reaction: \[\frac{\partial[A]}{\partial t}=-k_{1}[A] \tag{1}\] where k\({}_{1}\) is the reaction rate constant. This equation describes most chemical reactions. Let us assume that substance B can in turn be transformed into a substance we will call "C", and let us say that the kinetics again correspond to a first-order reaction with rate constant k\({}_{2}\). Then the equation for substance B is: \[\frac{\partial[B]}{\partial t}=k_{1}[A]-k_{2}[B] \tag{2}\] Next, consider the chemical reactions involving substances C, D, E, F, and so on. For ease of consideration, let us assume that all the reactions are first order. The system of equations is then: \[\frac{\partial[A]}{\partial t}=-k_{1}[A]\] \[\frac{\partial[B]}{\partial t}=k_{1}[A]-k_{2}[B]\] \[\frac{\partial[C]}{\partial t}=k_{2}[B]-k_{3}[C] \tag{3}\] \[\frac{\partial[D]}{\partial t}=k_{3}[C]-k_{4}[D]\] ... \[\frac{\partial[X_{n}]}{\partial t}=k_{n-1}[X_{n-1}]-k_{n}[X_{n}]\] Let us limit the calculation to substance I, which we place last in the chain of reactions. The solution of the equations can be represented as a graph (Fig. 1). Fig.1. Kinetics of chemical transformations of substances in the chain of reactions according to the solution of the system of equations (3). Explanation in the text. Assume that these reactions occur in the human body from birth or conception. If we assume that substance G is a regulator of the hormones responsible for reproductive functions - a regulator of testosterone and estradiol, for example - then the ability of a person to produce offspring appears at a certain age in youth and disappears in old age. To produce offspring, the hormonal level must be above a certain limit, shown in the graph by the dotted line. 
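The behaviour shown in Fig. 1 can be reproduced by integrating system (3) directly. The sketch below uses a simple forward-Euler scheme in pure Python; the rate constants and the initial concentration of A are illustrative assumptions, not values taken from the text or fitted to the figures.

```python
# A minimal numerical sketch of system (3): successive first-order
# reactions A -> B -> C -> ... integrated with a forward-Euler scheme.
# The rate constants `k` and initial concentration `a0` are
# illustrative assumptions chosen by the caller.

def simulate_chain(k, a0=1.0, t_max=100.0, dt=0.01):
    """Return (times, history); history[s] lists the concentrations
    [A, B, ..., final product] at step s."""
    n = len(k)                          # number of decaying species
    conc = [a0] + [0.0] * n             # the last entry only accumulates
    times, history = [0.0], [conc[:]]
    for s in range(1, int(t_max / dt) + 1):
        new = conc[:]
        for i in range(n):
            flow = k[i] * conc[i] * dt  # amount converted i -> i+1 this step
            new[i] -= flow
            new[i + 1] += flow
        conc = new
        times.append(s * dt)
        history.append(conc[:])
    return times, history
```

With, say, k = [0.5, 0.2, 0.1], the first species decays monotonically while each intermediate rises to a maximum and then falls - the bell-shaped curves sketched in Fig. 1 - and the total mass is conserved along the chain.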
Thus, the ability to produce offspring appears in a person some time after birth, is determined by a chain of chemical reactions, and disappears due to subsequent chemical reactions. For example, in this model calculation, the age of fertility starts at 17 years and ends at 53 years. Let us assume that substance H is responsible for the vital functions of the body in old age; lowering this substance below a certain limit then causes the death of the person. A decrease in the concentration of substance H can likewise be caused by the chemical reactions ongoing in the body. In this calculation, death occurs at the age of 84 years. At this "end point" the organism is withdrawn from the process of evolution: beyond this age it does not produce offspring and does not participate in natural selection, so its further development is not influenced by natural selection. Because the chemical processes in the body proceed beyond the "end point" according to the theory of chemical reaction kinetics, there is no reason to expect these processes to be aimed at maintaining bodily function. Chemical reactions occurring after the "end point" may unbalance the vital processes in the body, and as a result the body dies. Let us consider some cases. Assume that the rate constant of the transformation of substance F is 30% less than in the previous case (Fig.1), all other parameters being kept the same. Then the fertile age begins at 20 years and lasts until 60 years (Fig.2), and death occurs at the age of 96 years. Fig.2. Kinetics of chemical transformations of substances in a chain reaction according to the solution of the system of equations (3). 
The rate constant of the transformation of substance F is reduced by 30% in comparison with the calculation in Fig.1. Explanation in the text. Now consider a situation where a person receives an injection of substance F periodically, every 10 years starting from the age of 50. The solution of the system of equations (3) then looks as in Fig.3. Fig.3. Kinetics of chemical transformations of substances in a chain reaction according to the solution of the system of equations (3) under the condition of additional introduction of substance F into the system every 10 years. Explanation in the text. The introduction of substance F changes the course of the chemical reactions so that substances G and H never cross the limits of fertility loss or death. Accordingly, the body does not age and remains able to produce offspring. Of course, this does not solve the problem of diseases, catastrophes and other events and their impact on human life, but the aging process, according to this model, can be stopped. ## Discussion What is known today about the concentrations of biochemical regulators of vital activity in relation to such a model of chemical kinetics in the body? Unfortunately, there is practically no monitoring of biochemically important regulatory compounds in the same healthy person from birth to death. The most complete data on the dynamics of biochemically important compounds in the body concern hormones. It is possible that hormones are only regulators of the functionality of the body's organs and do not themselves represent components of the chain biochemical reactions that determine the development of the body. However, the release of hormones during the life of the organism is regulated by certain compounds: the release of hormones and their concentrations are provided by the regulators of the organs of the endocrine system, which can determine the development of the body. 
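The injection scenario can be sketched with the same kind of forward-Euler integration, adding a bolus of one species at fixed intervals. The function name, the injected species and dose, and the mapping of "every 10 years" onto the model time axis are all illustrative assumptions, not values from the text.

```python
# Sketch of the Fig. 3 scenario: the first-order chain A -> B -> ...
# with a periodic bolus of one chosen species added during the run.

def simulate_with_injection(k, inject_idx, dose, period,
                            a0=1.0, t_max=100.0, dt=0.01):
    """Forward-Euler integration of the chain; `dose` units of species
    `inject_idx` are added every `period` time units (illustrative)."""
    n = len(k)
    conc = [a0] + [0.0] * n
    history = [conc[:]]
    next_injection = period
    for s in range(1, int(t_max / dt) + 1):
        t = s * dt
        new = conc[:]
        for i in range(n):
            flow = k[i] * conc[i] * dt
            new[i] -= flow
            new[i + 1] += flow
        if t >= next_injection:        # periodic bolus of the chosen species
            new[inject_idx] += dose
            next_injection += period
        conc = new
        history.append(conc[:])
    return history
```

Compared with a run without injections, the replenished species and everything downstream of it stay at higher concentrations, which is the mechanism by which, in the model, substances G and H can be kept above their critical limits.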
Therefore, hormones can be considered indicators of the activity and concentration of the regulators of biochemical processes in the body. Let us consider several examples of well-known measurements of hormone concentrations over the lifetime of a healthy organism. One example is the concentration of the growth hormone somatotropin in mice as a function of age (Fig.4a). This growth hormone is responsible for bone growth and organ development, but it also has distinct catabolic and anabolic functions in many tissue types [21, 22]. Its concentration falls within 8 weeks after birth, and similar results can be found for hormone concentrations in other animals. The dependence of the concentration of somatotropin in human blood soon after birth is similar (Fig.4b): the highest concentration of this hormone is observed in blood from the umbilical vein at birth; the concentration then falls and remains constant until the teenage years, increases during adolescence, and falls again after the growth period. The concentration of the well-known hormone testosterone is high in the womb and falls shortly after birth (Fig.4c). In adolescence, during maturation, the concentration of testosterone rises sharply and reaches a maximum at the age of 20-25 years; it then declines continuously into old age. This hormone is responsible for reproductive functions, for the growth and development of bone and muscle tissue, and for mood. The concentrations of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) increase throughout a person's life (Fig.4d). These hormones regulate the human reproductive system, and a certain concentration of these and other hormones is required for male and female fertility; deviations of their concentrations from the optimal values (whether lower or higher) lead to infertility. 
The concentration of the hormone DHEAS rises during adolescence, reaches a maximum around 18-25 years of age, and then falls (Fig.4e). This hormone belongs to the neurosteroids and is associated with memory; there is evidence that it is also implicated in diseases of the cardiovascular system, diabetes, Parkinson's, Alzheimer's, and others. The presented curves show that hormone concentrations change with age, and the curves have the shape characteristic of substance concentrations in chain reactions: curves with sharp changes tend to occur at a young age, curves with a maximum at fertile age, and monotonically decreasing or increasing curves in old age. Essentially, sharp changes in concentration are observed in childhood and at the age of maturation, while changes in old age are more gradual; most of the curves show falling hormone concentrations after the age of fertility. Note that none of the above theories of aging explains this behaviour of the concentrations of hormones and other biologically important substances in the body. On the other hand, the changes in the concentrations of biologically important regulatory substances appear to correlate with the phases of change in bodily functionality, including the reproductive phase and the old-age phase. This supports a correlation between the organism's functionality and the concentrations of the biologically important regulators of its vital activity. Of course, the real scheme of the biochemical reactions in the organism is much more complicated, but this very simple model demonstrates the main cause of aging. The model of aging based on the kinetics of chemical reactions provides answers to the fundamental questions posed in [4, 25]: 1. Why do organisms undergo a progressive decline in physiological functions in the last part of their lives? 2. 
Why does the rate of aging vary within species and between species? 3. Why do experimental restrictions (reducing the calorie content of food, for example) slow down aging and lengthen life? 1. According to the general conclusions of the theory of the kinetics of chemical processes in a closed system, the concentrations of reactants decrease with the reaction time while the concentrations of the final reaction products increase. Apparently, the same applies to the regulators of vital processes in the human body and other living organisms. The initial concentrations of these substances are set at conception or at the birth of an individual; a subsequent decrease in the concentrations of the regulators leads to a decrease in critically important hormones and, as a result, to a decline in the physiological functions of the body. 2. The course of a chemical reaction depends on its rate constant. The differences in the rate constants of chemical reactions in the body are apparently determined by the genome of the organism and may differ between organisms of the same and of different species. The evidence is the difference in the absolute values of the concentrations of hormones and other biologically important compounds in the organisms of different individuals within the same and different species. The difference in genome within a species is small, and a correspondingly small difference is observed in the duration of life within that species; the difference in genome between species can be significant, and so can the difference in life expectancy. The model shows how a decrease in the rate constant of only one of the regulators by 30% shifts the age of fertility and the date of death of the organism. 3. An artificial change in the metabolic rate (for example, through the calorie content of food) apparently also affects the rates of the chemical reactions of the regulators responsible for the development of the organism. 
In the presented model, the example with a decreased rate constant for the reaction of substance F lengthened the life of the organism. These results should not be taken directly as a reason to introduce some hormone into the body. The body is a rather complex system, and the introduction of one or more hormones will create an imbalance and can disrupt its normal functionality. It is necessary to find the key substances and/or regulators of the vital activity of the body whose artificial regulation would change the dynamics of hormone concentrations. Currently, the accumulated results of studies of the human body are not sufficient to justify introducing any drug or hormone to increase life expectancy. Indeed, at present there are no detailed studies of the kinetics of reactions in the body of a healthy person over an entire life, starting from birth or even from the moment of conception. Without knowledge of the kinetics of the body's biochemical reactions, the search for ways to regulate or stop the aging process is a random one. What is required is knowledge of the kinetics of the regulators in the body and of the biochemical processes, including the patterns and chains of reactions of the regulators of the body's vital activity, such as cytokines, hormones, and others. Thus, the theory of organism aging based on the kinetics of chemical reactions of developmental regulators can explain the dynamics of aging in principle. Further detailed study of the kinetics of biochemical processes with age, starting from the moment of conception, may make it possible to regulate biochemical processes for life extension or even immortality. ## Conclusion Thus, it is proposed that the aging of an organism is not a planned and obligatory result of life, but rather the result of one path of the genealogical development of multicellular organisms, chosen by chance in the course of evolution. 
If so, it may be possible to adjust the biochemical reactions of the body so that the aging process is slowed down or even stopped. ## Statements and Declarations The author declares no financial support and no conflict of interests.
2308.07786
Box dimension of generalized affine fractal interpolation functions (II)
Let $f$ be a generalized affine fractal interpolation function with vertical scaling functions. In this paper, we prove the monotonicity of spectral radii of vertical scaling matrices without additional assumptions. We also obtain the irreducibility of these matrices under certain conditions. By these results, we estimate $\mathrm{dim}_B \Gamma f$, the box dimension of the graph of $f$, by the limits of spectral radii of vertical scaling matrices. We also estimate $\mathrm{dim}_B \Gamma f$ directly by the sum function of vertical scaling functions. As an application, we study the box dimension of the graph of a generalized Weierstrass-type function.
Lai Jiang, Huo-Jun Ruan
2023-08-15T14:07:24Z
http://arxiv.org/abs/2308.07786v2
# Box dimension of generalized affine fractal interpolation functions (II) ###### Abstract. Let \(f\) be a generalized affine fractal interpolation function with vertical scaling functions. In this paper, we first estimate \(\dim_{B}\Gamma f\), the box dimension of the graph of \(f\), by the sum function of vertical scaling functions. Then we estimate \(\dim_{B}\Gamma f\) by the limits of spectral radii of vertical scaling matrices under certain conditions. As an application, we study the box dimension of the graph of a generalized Weierstrass-type function. Key words and phrases: Fractal interpolation functions, box dimension, iterated function systems, vertical scaling functions, spectral radius, vertical scaling matrices. 2010 Mathematics Subject Classification: Primary 28A80; Secondary 41A30. The research was supported in part by NSFC grant 11771391, ZJNSF grant LY22A010023 and the Fundamental Research Funds for the Central Universities of China grant 2021FZZX001-01. Corresponding author: Huo-Jun Ruan. ## 2. Preliminaries and main results ### The definition of generalized affine FIFs Let \(N\geq 2\) be a positive integer. Given a data set \(\{(x_{i},y_{i})\}_{i=0}^{N}\subset\mathbb{R}^{2}\) with \(x_{0}<x_{1}<\ldots<x_{N}\), Barnsley [2] introduced fractal functions to interpolate the data set. Let \(L_{i}:\,[x_{0},x_{N}]\to[x_{i-1},x_{i}],1\leq i\leq N\) be contractive homeomorphisms with \[L_{i}(x_{0})=x_{i-1},\quad L_{i}(x_{N})=x_{i}. \tag{2.1}\] Let \(F_{i}:\,[x_{0},x_{N}]\times\mathbb{R}\to\mathbb{R},1\leq i\leq N\) be continuous maps satisfying \[F_{i}(x_{0},y_{0})=y_{i-1},\quad F_{i}(x_{N},y_{N})=y_{i}, \tag{2.2}\] and \(F_{i}\) is contractive in the second variable, i.e., there exists a constant \(\beta_{i}\in(0,1)\) such that for all \(x\in[x_{0},x_{N}]\) and all \(y^{\prime},y^{\prime\prime}\in\mathbb{R}\), \[|F_{i}(x,y^{\prime})-F_{i}(x,y^{\prime\prime})|\leq\beta_{i}|y^{\prime}-y^{\prime\prime}|. 
\tag{2.3}\] Then we can define maps \(W_{i}:\,[x_{0},x_{N}]\times\mathbb{R}\to[x_{i-1},x_{i}]\times\mathbb{R}\), \(1\leq i\leq N\) by \[W_{i}(x,y)=(L_{i}(x),F_{i}(x,y)). \tag{2.4}\] From the above conditions, it is easy to check that \(W_{i}(x_{0},y_{0})=(x_{i-1},y_{i-1})\) and \(W_{i}(x_{N},y_{N})=(x_{i},y_{i})\) for each \(i\). Notice that for each \(1\leq i\leq N\), \(W_{i}\) is continuous and it maps \([x_{0},x_{N}]\times\mathbb{R}\) to itself. Hence \(\{W_{i}:\,1\leq i\leq N\}\) is an _iterated function system_ (IFS for short) on \([x_{0},x_{N}]\times\mathbb{R}\). Barnsley [2] proved that there exists a unique continuous function \(f\) on \([x_{0},x_{N}]\) such that its graph \(\Gamma f:=\{(x,f(x)):\,x\in[x_{0},x_{N}]\}\) is the invariant set of the IFS \(\{W_{i}:\,1\leq i\leq N\}\), i.e., \[\Gamma f=\bigcup_{i=1}^{N}W_{i}(\Gamma f). \tag{2.5}\] Furthermore, the function \(f\) always interpolates the data set, i.e., \(f(x_{i})=y_{i}\) for all \(0\leq i\leq N\). The function \(f\) is called the _fractal interpolation function_ (FIF for short) determined by the IFS \(\{W_{i}\}_{i=1}^{N}\). In the case that every \(W_{i}\) is an affine map, we call \(f\) a _self-affine FIF_. In this case, for each \(i\), there exist real numbers \(a_{i},b_{i},c_{i},d_{i}\) and \(e_{i}\) such that \[W_{i}(x,y)=(a_{i}x+b_{i},c_{i}x+d_{i}y+e_{i}).\] The \(d_{i}\)'s are called _vertical scaling factors_. According to (2.3), \(|d_{i}|<1\) for each \(i\). In [3], Barnsley, Elton, Hardin and Massopust obtained the box dimension formula for self-affine FIFs. This result was generalized by Ruan, Yao and Su to linear FIFs in [18], where \(W_{i}(x,y)=(a_{i}x+b_{i},d_{i}y+q_{i}(x))\). It is natural to study the more general FIFs with \[W_{i}(x,y)=(a_{i}x+b_{i},S_{i}(x)y+q_{i}(x)),\quad 1\leq i\leq N,\] where both \(S_{i}\) and \(q_{i}\) are continuous on \([x_{0},x_{N}]\) and \(|S_{i}(x)|<1\) for all \(x\in[x_{0},x_{N}]\). 
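As a concrete illustration of the construction above, the invariant set \(\Gamma f\) of a self-affine FIF can be approximated by the chaos game: starting from the interpolation point \((x_{0},y_{0})\), which lies on the graph, one repeatedly applies a randomly chosen map \(W_{i}\). The sketch below, in pure Python, assumes uniform knots \(x_{i}=i/N\) on \([0,1]\); the data values and the vertical scaling factors \(d_{i}\) passed in are illustrative choices, not data from the paper.

```python
import random

def affine_fif_points(ys, d, n_pts=20000, seed=0):
    """Approximate the graph of the self-affine FIF interpolating the
    data (i/N, ys[i]) via the chaos game.  `d` holds the vertical
    scaling factors d_i with |d_i| < 1 (chosen by the caller)."""
    N = len(ys) - 1
    assert len(d) == N and all(abs(di) < 1 for di in d)
    # W_i(x, y) = ((x + i - 1)/N, c_i x + d_i y + e_i), with c_i, e_i
    # fixed by the endpoint conditions (2.2):
    #   F_i(0, y_0) = y_{i-1}  and  F_i(1, y_N) = y_i.
    e = [ys[i - 1] - d[i - 1] * ys[0] for i in range(1, N + 1)]
    c = [ys[i] - d[i - 1] * ys[N] - e[i - 1] for i in range(1, N + 1)]
    rng = random.Random(seed)
    x, y = 0.0, ys[0]                   # (x_0, y_0) lies on the attractor
    pts = []
    for _ in range(n_pts):
        i = rng.randrange(N)            # pick map W_{i+1} uniformly
        x, y = (x + i) / N, c[i] * x + d[i] * y + e[i]
        pts.append((x, y))
    return pts
```

Plotting the returned points shows a curve through the chosen data points whose roughness grows with the \(|d_{i}|\), in line with the box dimension results discussed in this paper.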
In this case, the corresponding FIF \(f\) is called a _generalized affine FIF_. In [5], Barnsley and Massopust studied the box dimension of a special class of generalized FIFs, called bilinear FIFs, with \[W_{i}(x,y)=\big{(}L_{i}(x),S(L_{i}(x))(y-b(x))+h(L_{i}(x))\big{)},\] where \(x_{i}-x_{i-1}=(x_{N}-x_{0})/N\) for \(1\leq i\leq N\), \(L_{i}\) is an affine function for each \(1\leq i\leq N\), both \(S\) and \(h\) are affine on \([x_{i-1},x_{i}]\) for every \(1\leq i\leq N\) and \(h(x_{i})=y_{i}\) for all \(0\leq i\leq N\), and \(b\) is an affine function satisfying \(b(x_{0})=y_{0}\) and \(b(x_{N})=y_{N}\). In [13], the authors generalized the setting of Barnsley and Massopust in [5], assuming only that \(S\) is Lipschitz and positive on \([x_{0},x_{N}]\). In the present paper, we will study the box dimension of generalized affine FIFs with \[W_{i}(x,y)=(a_{i}x+b_{i},S_{i}(x)y+q_{i}(x)),\quad 1\leq i\leq N,\] where the following conditions are satisfied for each \(i\): * (A1) \(x_{i}-x_{i-1}=(x_{N}-x_{0})/N\), * (A2) \(S_{i}\) is Lipschitz on \([x_{0},x_{N}]\) and \(|S_{i}(x)|<1\) for all \(x\in[x_{0},x_{N}]\), * (A3) \(q_{i}\) is of bounded variation on \([x_{0},x_{N}]\). It is easy to see that under these assumptions, \[L_{i}(x)=(x-x_{0})/N+x_{i-1},\quad F_{i}(x,y)=S_{i}(x)y+q_{i}(x).\] From (2.5), \(W_{i}(x,f(x))=(L_{i}(x),f(L_{i}(x)))\). Thus, we have the following useful equality: \[f(L_{i}(x))=S_{i}(x)f(x)+q_{i}(x),\quad x\in[x_{0},x_{N}],\;i=1,2,\ldots,N. \tag{2.6}\] ### Box dimension estimate of the graph of continuous functions For any \(k_{1},k_{2}\in\mathbb{Z}\) and \(\varepsilon>0\), we call \([k_{1}\varepsilon,(k_{1}+1)\varepsilon]\times[k_{2}\varepsilon,(k_{2}+1) \varepsilon]\) an \(\varepsilon\)-coordinate square in \(\mathbb{R}^{2}\). Let \(E\) be a bounded set in \(\mathbb{R}^{2}\) and \(\mathcal{N}_{E}(\varepsilon)\) the number of \(\varepsilon\)-coordinate squares intersecting \(E\).
We define \[\overline{\dim}_{B}E=\varlimsup_{\varepsilon\to 0+}\frac{\log\mathcal{N}_{E}( \varepsilon)}{\log 1/\varepsilon}\quad\text{and}\quad\underline{\dim}_{B}E= \varliminf_{\varepsilon\to 0+}\frac{\log\mathcal{N}_{E}(\varepsilon)}{\log 1/ \varepsilon}, \tag{2.7}\] and call them the _upper box dimension_ and the _lower box dimension_ of \(E\), respectively. If \(\overline{\dim}_{B}E=\underline{\dim}_{B}E\), then we use \(\dim_{B}E\) to denote the common value and call it the _box dimension_ of \(E\). It is easy to see that in the definition of the upper and lower box dimensions, we may consider only \(\varepsilon_{k}=N^{-k}\), where \(k\in\mathbb{Z}^{+}\). That is, \[\overline{\dim}_{B}E=\varlimsup_{k\to\infty}\frac{\log\mathcal{N}_{E}( \varepsilon_{k})}{k\log N}\quad\text{and}\quad\underline{\dim}_{B}E=\varliminf _{k\to\infty}\frac{\log\mathcal{N}_{E}(\varepsilon_{k})}{k\log N}. \tag{2.8}\] It is also well known that \(\underline{\dim}_{B}E\geq 1\) when \(E\) is the graph of a continuous function on a closed interval of \(\mathbb{R}\). Please see [8] for details. Given a closed interval \(J=[a,b]\), for each \(k\in\mathbb{Z}^{+}\) and \(1\leq j\leq N^{k}\), we write \[J^{k}_{j}=\Big{[}a+\frac{j-1}{N^{k}}(b-a),a+\frac{j}{N^{k}}(b-a)\Big{]}.\] Let \(g\) be a continuous function on \(J\). We define \[O_{k}(g,J)=\sum_{j=1}^{N^{k}}O(g,J^{k}_{j}), \tag{2.9}\] where we use \(O(g,U)\) to denote the oscillation of \(g\) on \(U\subset J\), that is, \[O(g,U)=\sup_{x^{\prime},x^{\prime\prime}\in U}|g(x^{\prime})-g(x^{\prime \prime})|.\] It is clear that \(\{O_{k}(g,J)\}_{k=1}^{\infty}\) is increasing with respect to \(k\). Thus \(\lim_{k\to\infty}O_{k}(g,J)\) always exists. Write \(\operatorname{Var}(g,J)\) for the variation of \(g\) on \(J\). We have the following simple fact. **Lemma 2.1**.: _Let \(g\) be a continuous function on a closed interval \(J=[a,b]\).
Then \(\lim_{k\to\infty}O_{k}(g,J)=\operatorname{Var}(g,J)\)._ Proof.: Clearly, \(O_{k}(g,J)\leq\operatorname{Var}(g,J)\) for all \(k\in\mathbb{Z}^{+}\). Thus \(\lim_{k\to\infty}O_{k}(g,J)\leq\operatorname{Var}(g,J)\). Now we prove the reverse inequality. Arbitrarily pick a partition \(T=\{a=t_{0}<t_{1}<\cdots<t_{n}=b\}\) of \(J\). Fix \(k\in\mathbb{Z}^{+}\) large enough such that \(N^{-k}<\min\{t_{i}-t_{i-1}:\,1\leq i\leq n\}\). Then for every \(0\leq i\leq n\), there exists \(\alpha_{i}\in\{1,\ldots,N^{k}\}\) such that \(t_{i}\in J^{k}_{\alpha_{i}}\). Furthermore, it is easy to see that \(\alpha_{0}<\alpha_{1}<\cdots<\alpha_{n}\). Notice that for any \(1\leq i\leq n\), \[|g(t_{i})-g(t_{i-1})|\leq\sum_{p=\alpha_{i-1}}^{\alpha_{i}}O(g,J^{k}_{p}).\] Thus \[\sum_{i=1}^{n}|g(t_{i})-g(t_{i-1})| \leq\sum_{i=1}^{n}\sum_{p=\alpha_{i-1}}^{\alpha_{i}}O(g,J^{k}_{p})\] \[=O_{k}(g,J)+\sum_{i=1}^{n-1}O(g,J^{k}_{\alpha_{i}})\leq\lim_{k\to \infty}O_{k}(g,J)+\sum_{i=1}^{n-1}O(g,J^{k}_{\alpha_{i}}).\] Since \(g\) is continuous on \(J\), by choosing \(k\) large enough, \(\sum_{i=1}^{n-1}O(g,J^{k}_{\alpha_{i}})\) can be made arbitrarily small. Hence \[\sum_{i=1}^{n}|g(t_{i})-g(t_{i-1})|\leq\lim_{k\to\infty}O_{k}(g,J).\] By the arbitrariness of the partition \(T\), \(\operatorname{Var}(g,J)\leq\lim_{k\to\infty}O_{k}(g,J)\). The following lemma presents a method to estimate the upper and lower box dimensions of the graph of a function by its oscillation. Similar results can be found in [8, 15, 18]. **Lemma 2.2** ([13]).: _Let \(g\) be a continuous function on a closed interval \(J\). Then_ \[\underline{\dim}_{B}(\Gamma g)\geq 1+\varliminf_{k\to\infty}\frac{\log\big{(}O_ {k}(g,J)+1\big{)}}{k\log N},\quad\text{and} \tag{2.10}\] \[\overline{\dim}_{B}(\Gamma g)\leq 1+\varlimsup_{k\to\infty}\frac{\log\big{(}O_ {k}(g,J)+1\big{)}}{k\log N}. \tag{2.11}\] We remark that \(J=[0,1]\) in the original version of the above lemma in [13].
However, it is straightforward to see that the lemma still holds in the present version. ### Main results In the rest of the paper, we write \(I=[x_{0},x_{N}]\) for simplicity. We define a function \(\gamma\) on \(I\) by \[\gamma(x)=\sum_{i=1}^{N}|S_{i}(x)|.\] We call \(\gamma\) the _sum function_ with respect to \(\mathbf{S}=\{S_{i}\}_{i=1}^{N}\). Write \(\gamma^{*}=\max_{x\in I}\gamma(x)\) and \(\gamma_{*}=\min_{x\in I}\gamma(x)\). In section 3, we estimate the upper box dimension and the lower box dimension of generalized FIFs by \(\gamma^{*}\) and \(\gamma_{*}\), respectively. **Theorem 2.3**.: _Let \(f\) be a generalized FIF satisfying conditions (A1)-(A3). Then we have the following results on the box dimension of \(\Gamma f\)._ 1. \(\overline{\dim}_{B}\Gamma f\leq\max\{1,1+\log\gamma^{*}/\log N\}\)_._ 2. _If_ \(\gamma_{*}>1\) _and_ \(\operatorname{Var}(f,I)=\infty\)_, then_ \(\underline{\dim}_{B}\Gamma f\geq 1+\log\gamma_{*}/\log N\)_._ 3. _Assume that_ \(\gamma(x)\equiv\gamma_{0}\) _for all_ \(x\in I\)_. Then in the case that_ \(\gamma_{0}>1\) _and_ \(\operatorname{Var}(f,I)=\infty\)_,_ \[\dim_{B}\Gamma f=1+\frac{\log\gamma_{0}}{\log N};\] _otherwise_ \(\dim_{B}\Gamma f=1\)_._ In section 4, we analyze properties of two sequences of vertical scaling matrices \(\{\overline{M}_{k}\}_{k\geq 1}\) and \(\{\underline{M}_{k}\}_{k\geq 1}\). We prove that the limits of spectral radii of these two sequences of matrices always exist, and denote them by \(\rho^{*}\) and \(\rho_{*}\), respectively. In section 5, we estimate the upper box dimension and the lower box dimension of generalized FIFs by \(\rho^{*}\) and \(\rho_{*}\), respectively. In the case that \(\rho^{*}=\rho_{*}\), we denote the common value by \(\rho_{\mathbf{S}}\). **Theorem 2.4**.: _Let \(f\) be a generalized FIF satisfying conditions (A1)-(A3). Then we have the following results on the box dimension of \(\Gamma f\)._ 1.
_Assume that the function_ \(S_{i}\) _is not identically zero on any subinterval of_ \(I\) _for all_ \(1\leq i\leq N\)_. Then_ \(\overline{\dim}_{B}\Gamma f\leq\max\{1,1+\log\rho^{*}/\log N\}\)_._ 2. _Assume that_ \(\gamma_{*}\geq 1\) _and the function_ \(S_{i}\) _has finitely many zero points on_ \(I\) _for all_ \(1\leq i\leq N\)_. If_ \(\operatorname{Var}(f,I)=\infty\)_, then_ \(\underline{\dim}_{B}\Gamma f\geq 1+\log\rho_{*}/\log N.\)__ 3. _Assume that_ \(\rho_{*}=\rho^{*}\) _and the function_ \(S_{i}\) _has finitely many zero points on_ \(I\) _for all_ \(1\leq i\leq N\)_. Then in the case that_ \(\operatorname{Var}(f,I)=\infty\) _and_ \(\rho_{\mathbf{S}}>1\)_,_ \[\dim_{B}\Gamma f=1+\frac{\log\rho_{\mathbf{S}}}{\log N},\] _otherwise_ \(\dim_{B}\Gamma f=1\)_._ We remark that \(\rho_{*}=\rho^{*}\) if the function \(|S_{i}|\) is positive on \(I\) for all \(1\leq i\leq N\). Please see Corollary 5.5 for details. ## 3. Estimate the box dimension of FIFs by \(\gamma^{*}\) and \(\gamma_{*}\) In the rest of the paper, we always assume that \(f\) is a generalized FIF satisfying conditions (A1)-(A3). In this section, we will estimate the upper box dimension and the lower box dimension of \(f\) by \(\gamma^{*}\) and \(\gamma_{*}\), respectively. By using the same arguments in the proof of [13, Lemma 4.2], we can obtain the following lemma. We present the proof here for completeness. **Lemma 3.1**.: _There exists a constant \(\beta\geq 0\) such that for any \(1\leq i\leq N\), \(D\subset I\) and any \(t\in D\),_ \[\Big{|}O(f,L_{i}(D))-|S_{i}(t)|\cdot O(f,D)\Big{|}\leq O(q_{i},D)+\beta|D|, \tag{3.1}\] _where \(|D|=\sup\{|x^{\prime}-x^{\prime\prime}|:\,x^{\prime},x^{\prime\prime}\in D\}\) is the diameter of \(D\)._ Proof.: From (2.6), \[O(f,L_{i}(D))=\sup_{x^{\prime},x^{\prime\prime}\in D}\big{|}S_{i}(x^{\prime} )f(x^{\prime})-S_{i}(x^{\prime\prime})f(x^{\prime\prime})+q_{i}(x^{\prime})-q _{i}(x^{\prime\prime})\big{|}.\] Write \(M_{f}=\max_{x\in I}|f(x)|\).
Notice that for any \(x^{\prime},x^{\prime\prime}\in D\), \[|q_{i}(x^{\prime})-q_{i}(x^{\prime\prime})|\leq O(q_{i},D),\] and \[|S_{i}(x^{\prime})f(x^{\prime})-S_{i}(x^{\prime\prime})f(x^{\prime \prime})|\] \[\leq |S_{i}(x^{\prime})-S_{i}(t)|\cdot|f(x^{\prime})|+|S_{i}(x^{\prime \prime})-S_{i}(t)|\cdot|f(x^{\prime\prime})|+|S_{i}(t)|\cdot|f(x^{\prime})-f(x^ {\prime\prime})|\] \[\leq 2M_{f}\lambda_{\mathbf{S}}|D|+|S_{i}(t)|\cdot O(f,D).\] Here \(\lambda_{\mathbf{S}}\) denotes a common Lipschitz constant of \(S_{1},\ldots,S_{N}\), which exists by (A2). Let \(\beta=2M_{f}\lambda_{\mathbf{S}}\). Then from the above arguments, \[O(f,L_{i}(D))\leq|S_{i}(t)|\cdot O(f,D)+O(q_{i},D)+\beta|D|.\] Similarly, it is easy to see that \[O(f,L_{i}(D))\geq|S_{i}(t)|\cdot O(f,D)-O(q_{i},D)-\beta|D|.\] Thus, the lemma holds. **Lemma 3.2**.: _Let \(\beta\) be the constant in Lemma 3.1. Then for all \(k\in\mathbb{Z}^{+}\),_ \[O_{k+1}(f,I) \leq\gamma^{*}\cdot O_{k}(f,I)+\sum_{i=1}^{N}\operatorname{Var}( q_{i},I)+\beta N|I|,\quad\text{and} \tag{3.2}\] \[O_{k+1}(f,I) \geq\gamma_{*}\cdot O_{k}(f,I)-\sum_{i=1}^{N}\operatorname{Var}( q_{i},I)-\beta N|I|. \tag{3.3}\] Proof.: Given \(D\subset I\), we know from Lemma 3.1 that for any \(t\in D\), \[\sum_{i=1}^{N}O(f,L_{i}(D)) \leq\gamma(t)\cdot O(f,D)+\sum_{i=1}^{N}O(q_{i},D)+\beta N|D|\] \[\leq\gamma^{*}\cdot O(f,D)+\sum_{i=1}^{N}O(q_{i},D)+\beta N|D|.\] For any \(k\in\mathbb{Z}^{+}\) and \(1\leq j\leq N^{k}\), by letting \(D=I_{j}^{k}\) in the above inequality, we have \[\sum_{i=1}^{N}O(f,L_{i}(I_{j}^{k}))\leq\gamma^{*}\cdot O(f,I_{j}^{k})+\sum_{i= 1}^{N}O(q_{i},I_{j}^{k})+\beta N^{-k+1}|I|.\] Hence \[O_{k+1}(f,I) =\sum_{i=1}^{N}\sum_{j=1}^{N^{k}}O(f,L_{i}(I_{j}^{k}))\] \[\leq\gamma^{*}\cdot O_{k}(f,I)+\sum_{i=1}^{N}O_{k}(q_{i},I)+\beta N |I|\] \[\leq\gamma^{*}\cdot O_{k}(f,I)+\sum_{i=1}^{N}\operatorname{Var}( q_{i},I)+\beta N|I|,\] so that (3.2) holds. Similarly, we can prove that (3.3) holds. From this lemma, we can obtain the upper box dimension estimate by \(\gamma^{*}\) and the lower box dimension estimate by \(\gamma_{*}\).
**Theorem 3.3**.: _We have \(\overline{\dim}_{B}\Gamma f\leq\max\{1,1+\log\gamma^{*}/\log N\}\)._ Proof.: Write \(\eta=N\beta|I|+\sum_{i=1}^{N}\operatorname{Var}(q_{i},I)\). It is clear that \(\eta<\infty\) since \(q_{i}\) is of bounded variation on \(I\) for each \(i\). If \(\gamma^{*}\leq 1\), from Lemma 3.2, \[O_{k+1}(f,I)\leq O_{k}(f,I)+\eta,\quad\forall k\geq 1,\] so that \[O_{k}(f,I)\leq O_{1}(f,I)+(k-1)\eta,\quad\forall k\geq 1.\] Thus from Lemma 2.2, \(\overline{\dim}_{B}\Gamma f\leq 1=\max\{1,1+\log\gamma^{*}/\log N\}\). In the case that \(\gamma^{*}>1\), we know from Lemma 3.2 that \[O_{k+1}(f,I)+\frac{\eta}{\gamma^{*}-1}\leq\gamma^{*}\Big{(}O_{k}(f,I)+\frac{ \eta}{\gamma^{*}-1}\Big{)},\quad\forall k\geq 1,\] so that \[O_{k}(f,I)+\frac{\eta}{\gamma^{*}-1}\leq(\gamma^{*})^{k-1}\Big{(}O_{1}(f,I)+ \frac{\eta}{\gamma^{*}-1}\Big{)},\quad\forall k\geq 1.\] Thus from Lemma 2.2, \(\overline{\dim}_{B}\Gamma f\leq 1+\log\gamma^{*}/\log N=\max\{1,1+\log \gamma^{*}/\log N\}\). Hence, the theorem holds. **Theorem 3.4**.: _If \(\gamma_{*}>1\) and \(\operatorname{Var}(f,I)=\infty\), then \(\underline{\dim}_{B}\Gamma f\geq 1+\frac{\log\gamma_{*}}{\log N}\)._ Proof.: Write \(\eta=\beta N|I|+\sum_{i=1}^{N}\operatorname{Var}(q_{i},I)\). Then \(\eta<\infty\). From Lemma 3.2, \[O_{k+1}(f,I)-\frac{\eta}{\gamma_{*}-1}\geq\gamma_{*}\Big{(}O_{k}(f,I)-\frac{ \eta}{\gamma_{*}-1}\Big{)},\quad\forall k\geq 1. \tag{3.4}\] Since \(\operatorname{Var}(f,I)=\infty\), from Lemma 2.1, there exists \(k_{0}\in\mathbb{Z}^{+}\) such that \(O_{k_{0}}(f,I)>\eta/(\gamma_{*}-1)\). From (3.4), \[O_{k}(f,I)-\frac{\eta}{\gamma_{*}-1}\geq(\gamma_{*})^{k-k_{0}}\Big{(}O_{k_{0} }(f,I)-\frac{\eta}{\gamma_{*}-1}\Big{)},\quad\forall k\geq k_{0}.\] Thus from Lemma 2.2, \(\underline{\dim}_{B}\Gamma f\geq 1+\log\gamma_{*}/\log N\). 
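The recursion of Lemma 3.2 and the sandwich argument of Theorems 3.3 and 3.4 can be illustrated numerically. The following sketch uses an example of our own choosing (not taken from the paper): \(N=2\), nodes \(0,1/2,1\) with values \(0,1,0\), constant scalings \(S_{1}=S_{2}=0.7\), and the affine maps \(q_{1}(x)=x\), \(q_{2}(x)=1-x\) forced by (2.2). Then \(\gamma\equiv 1.4\), \(\beta=0\) (the \(S_{i}\) are constant, so \(\lambda_{\mathbf{S}}=0\)), and \(\sum_{i}\operatorname{Var}(q_{i},I)=2\). The values of \(f\) on the level-\(k\) uniform grids are computed exactly by iterating (2.6).

```python
import numpy as np

# Illustrative choices (ours, not from the paper): N = 2, nodes 0, 1/2, 1 with
# values 0, 1, 0; constant scalings S_1 = S_2 = 0.7; the interpolation
# conditions (2.2) then force q_1(x) = x and q_2(x) = 1 - x.  Hence
# gamma(x) = 1.4, beta = 0, and eta = Var(q_1, I) + Var(q_2, I) = 2.
d, gamma0, eta = 0.7, 1.4, 2.0
K = 20  # f is computed exactly at the 2**K + 1 points of the level-K grid

x = np.array([0.0, 1.0])
fv = np.array([0.0, 0.0])            # f at the endpoints: y_0 = y_N = 0
for _ in range(K):
    left = d * fv + x                # values at L_1(x) = x/2
    right = d * fv + (1.0 - x)       # values at L_2(x) = (x + 1)/2
    fv = np.concatenate([left, right[1:]])   # shared midpoint value coincides
    x = np.linspace(0.0, 1.0, len(fv))

def osc_sum(fv, k):
    """O_k(f, I): sum of oscillations of f over the 2**k level-k subintervals,
    evaluated from the exact grid values (a slight underestimate)."""
    m = (len(fv) - 1) // 2**k
    return sum(fv[j*m:(j+1)*m + 1].max() - fv[j*m:(j+1)*m + 1].min()
               for j in range(2**k))

O = [osc_sum(fv, k) for k in range(1, 8)]
# Two-sided bound of Lemma 3.2 (the extra slack 2.0 absorbs sampling error):
for a, b in zip(O, O[1:]):
    assert gamma0 * a - eta - 2.0 <= b <= gamma0 * a + eta + 2.0
# Rough dimension estimate from the growth rate of O_k; compare with
# 1 + log(1.4)/log(2) = 1.485...
print(1 + np.log(O[-1] / O[-2]) / np.log(2))
```

Here \(f\) is a classical self-affine FIF, for which \(\operatorname{Var}(f,I)=\infty\) is known when the interpolation points are non-collinear [3]; Theorems 3.3 and 3.4 then sandwich the box dimension at \(1+\log 1.4/\log 2\approx 1.485\), in agreement with the formula of [3].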
_Remark 3.5_.: From the proof of Theorem 3.4 and noticing that \(\beta=2M_{f}\lambda_{\mathbf{S}}\), it is easy to see that under the condition \(\gamma_{*}>1\), the following two properties are equivalent: 1. \(\operatorname{Var}(f,I)=\infty\), 2. there exists \(k_{0}\in\mathbb{Z}^{+}\), such that \[O_{k_{0}}(f,I)>\frac{2M_{f}\lambda_{\mathbf{S}}N|I|+\sum_{i=1}^{N} \operatorname{Var}(q_{i},I)}{\gamma_{*}-1}.\] _Remark 3.6_.: Under the condition that the function \(S_{i}\) is nonnegative for each \(i\), from (2.6), \[\sum_{i=1}^{N}f(L_{i}(x))=\sum_{i=1}^{N}\big{(}S_{i}(x)f(x)+q_{i}(x)\big{)}= \gamma(x)f(x)+\sum_{i=1}^{N}q_{i}(x).\] Thus, by using arguments similar to the proof of [13, Theorem 4.10], we have \[O_{k+1}(f,I)\geq\gamma_{*}O_{k}(f,I)-\lambda^{\prime}M_{f}|I|-\operatorname{ Var}\Big{(}\sum_{i=1}^{N}q_{i},I\Big{)},\] where \(\lambda^{\prime}\) is a Lipschitz constant of the function \(\gamma\), i.e., \[|\gamma(x^{\prime})-\gamma(x^{\prime\prime})|\leq\lambda^{\prime}|x^{\prime}- x^{\prime\prime}|,\quad x^{\prime},x^{\prime\prime}\in I.\] Thus, if \(\gamma_{*}>1\) and the function \(S_{i}\) is nonnegative on \(I\) for each \(1\leq i\leq N\), then \(\operatorname{Var}(f,I)=\infty\) if and only if there exists \(k_{0}\in\mathbb{Z}^{+}\) satisfying \[O_{k_{0}}(f,I)>\frac{\lambda^{\prime}M_{f}|I|+\operatorname{Var}(\sum_{i=1}^{N} q_{i},I)}{\gamma_{*}-1}.\] From Theorems 3.3 and 3.4, we can obtain the following result. **Theorem 3.7**.: _Assume that \(\gamma(x)\equiv\gamma_{0}\) for all \(x\in I\). Then in the case that \(\gamma_{0}>1\) and \(\operatorname{Var}(f,I)=\infty\),_ \[\dim_{B}\Gamma f=1+\frac{\log\gamma_{0}}{\log N}, \tag{3.5}\] _otherwise \(\dim_{B}\Gamma f=1\)._ Proof.: Notice that \(\underline{\dim}_{B}\Gamma f\geq 1\) always holds since \(f\) is a continuous function on \(I\). In the case that \(\gamma_{0}\leq 1\), it follows from Theorem 3.3 that \(\overline{\dim}_{B}\Gamma f\leq 1\). 
In the case that \(\operatorname{Var}(f,I)<\infty\), we have \(\lim_{k\to\infty}O_{k}(f,I)<\infty\). Thus, from Lemma 2.2, \(\overline{\dim}_{B}\Gamma f\leq 1\). Hence \(\dim_{B}\Gamma f=1\) if \(\gamma_{0}\leq 1\) or \(\operatorname{Var}(f,I)<\infty\). Now we assume that \(\gamma_{0}>1\) and \(\operatorname{Var}(f,I)=\infty\). From Theorem 3.4, \(\underline{\dim}_{B}\Gamma f\geq 1+\log\gamma_{0}/\log N\). On the other hand, from Theorem 3.3, \(\overline{\dim}_{B}\Gamma f\leq 1+\log\gamma_{0}/\log N\). Thus (3.5) holds. If both \(\gamma\) and \(\sum_{i=1}^{N}q_{i}\) are constant functions on \(I\), then we may take \(\lambda^{\prime}=0\) in Remark 3.6 and \(\operatorname{Var}(\sum_{i=1}^{N}q_{i},I)=0\), so the threshold there vanishes. Thus, from Remark 3.6 and Theorem 3.7, we have the following result. **Corollary 3.8**.: _Assume that both \(\gamma\) and \(\sum_{i=1}^{N}q_{i}\) are constant functions on \(I\). Then in the case that \(\gamma(x_{0})>1\) and \(f\) is not a constant function, \(\dim_{B}\Gamma f=1+\log_{N}\gamma(x_{0})\), otherwise \(\dim_{B}\Gamma f=1\)._ From Theorems 3.3, 3.4 and 3.7, we know that Theorem 2.3 holds. ## 4. Analysis on vertical scaling matrices ### Definition of vertical scaling matrices Given \(k,p\in\mathbb{Z}^{+}\) and \(g\in C(I)\), we define \[V(g,k,p)=\left(O_{p}(g,I_{1}^{k}),O_{p}(g,I_{2}^{k}),\cdots,O_{p}(g,I_{N^{k}} ^{k})\right)^{T}\in\mathbb{R}^{N^{k}},\] and call it an _oscillation vector_ of \(g\) with respect to \((k,p)\). It is obvious that \[O_{k+p}(g,I)=\left\|V(g,k,p)\right\|_{1},\] where \(\|v\|_{1}:=\sum_{i=1}^{n}|v_{i}|\) for any \(v=(v_{1},\ldots,v_{n})\in\mathbb{R}^{n}\). Let \(k\) be a positive integer. Given \(1\leq i\leq N\) and \(1\leq j\leq N^{k}\), we define \[\overline{s}_{i,j}^{k}=\max_{x\in I_{j}^{k}}|S_{i}(x)|,\qquad\underline{s}_{ i,j}^{k}=\min_{x\in I_{j}^{k}}|S_{i}(x)|.\] **Lemma 4.1**.: _Let \(\beta\) be the constant in Lemma 3.1.
Then for any \(1\leq i\leq N\), \(k\in\mathbb{Z}^{+}\), \(1\leq\ell\leq N^{k-1}\) and any \(p\in\mathbb{Z}^{+}\),_ \[O_{p+1}(f,L_{i}(I_{\ell}^{k-1}))\leq\beta|I_{\ell}^{k-1}|+O_{p+1 }(q_{i},I_{\ell}^{k-1})+\sum_{j=(\ell-1)N+1}^{\ell N}\overline{s}_{i,j}^{k}O_ {p}(f,I_{j}^{k}), \tag{4.1}\] \[O_{p+1}(f,L_{i}(I_{\ell}^{k-1}))\geq-\beta|I_{\ell}^{k-1}|-O_{p+1 }(q_{i},I_{\ell}^{k-1})+\sum_{j=(\ell-1)N+1}^{\ell N}\underline{s}_{i,j}^{k}O_ {p}(f,I_{j}^{k}). \tag{4.2}\] Proof.: From Lemma 3.1, for any \(1\leq i\leq N\), \(k\in\mathbb{Z}^{+}\), \(1\leq j\leq N^{k}\) and any \(D\subset I_{j}^{k}\), \[O(f,L_{i}(D))\leq\overline{s}_{i,j}^{k}O(f,D)+O(q_{i},D)+\beta|D|\] so that \[O_{p}(f,L_{i}(I_{j}^{k})) =\sum_{m=1}^{N^{p}}O\Big{(}f,\big{(}L_{i}(I_{j}^{k})\big{)}_{m}^{ p}\Big{)}\] \[\leq\sum_{m=1}^{N^{p}}\Big{(}\overline{s}_{i,j}^{k}O(f,(I_{j}^{k} )_{m}^{p}))+O(q_{i},(I_{j}^{k})_{m}^{p})+\beta|(I_{j}^{k})_{m}^{p}|\Big{)}\] \[=\overline{s}_{i,j}^{k}O_{p}(f,I_{j}^{k})+O_{p}(q_{i},I_{j}^{k}) +\beta|I_{j}^{k}|.\] Hence, from \(I_{\ell}^{k-1}=\bigcup_{j=(\ell-1)N+1}^{\ell N}I_{j}^{k}\), \[O_{p+1}(f,L_{i}(I_{\ell}^{k-1})) =\sum_{j=(\ell-1)N+1}^{\ell N}O_{p}(f,L_{i}(I_{j}^{k}))\] \[\leq\sum_{j=(\ell-1)N+1}^{\ell N}\Big{(}\overline{s}_{i,j}^{k}O_{ p}(f,I_{j}^{k})+O_{p}(q_{i},I_{j}^{k})+\beta|I_{j}^{k}|\Big{)}\] \[\leq\beta|I_{\ell}^{k-1}|+O_{p+1}(q_{i},I_{\ell}^{k-1})+\sum_{j=( \ell-1)N+1}^{\ell N}\overline{s}_{i,j}^{k}O_{p}(f,I_{j}^{k}).\] Thus, (4.1) holds. Similarly, we can prove that (4.2) holds. Define a vector \(u_{\beta,\mathbf{q},k}\) in \(\mathbb{R}^{N^{k}}\) by \[(u_{\beta,\mathbf{q},k})_{(i-1)N^{k-1}+\ell}=\beta N^{-k+1}|I|+\mathrm{Var}(q_ {i},I_{\ell}^{k-1}), \tag{4.3}\] where \(1\leq i\leq N,1\leq\ell\leq N^{k-1}\). We also define an \(N^{k}\times N^{k}\) matrix \(\overline{M}_{k}\) as follows.
\[\begin{pmatrix}\overline{s}_{1,1}^{k}&\cdots&\overline{s}_{1,N}^{k}\\ &&\overline{s}_{1,N+1}^{k}&\cdots&\overline{s}_{1,2N}^{k}\\ &&&&\ddots\\ &&&&\overline{s}_{1,N^{k}-N+1}^{k}&\cdots&\overline{s}_{1,N^{k}}^{k}\\ \overline{s}_{2,1}^{k}&\cdots&\overline{s}_{2,N}^{k}\\ &&\overline{s}_{2,N+1}^{k}&\cdots&\overline{s}_{2,2N}^{k}\\ &&&&\ddots\\ &&&&\overline{s}_{2,N^{k}-N+1}^{k}&\cdots&\overline{s}_{2,N^{k}}^{k}\\ \vdots&\vdots&\vdots&&\vdots&&\vdots\\ \overline{s}_{N,1}^{k}&\cdots&\overline{s}_{N,N}^{k}\\ &&\overline{s}_{N,N+1}^{k}&\cdots&\overline{s}_{N,2N}^{k}\\ &&&&\ddots\\ &&&&\overline{s}_{N,N^{k}-N+1}^{k}&\cdots&\overline{s}_{N,N^{k}}^{k}\end{pmatrix}.\] That is, for \(1\leq i\leq N\), \(1\leq\ell\leq N^{k-1}\) and \(1\leq j\leq N^{k}\), \[(\overline{M}_{k})_{(i-1)N^{k-1}+\ell,j}=\begin{cases}\overline{s}_{i,j}^{k},& \text{if }(\ell-1)N<j\leq\ell N,\\ 0,&\text{otherwise}.\end{cases} \tag{4.4}\] Then we can rewrite (4.1) as \[V(f,k,p+1)\leq u_{\beta,\mathbf{q},k}+\overline{M}_{k}V(f,k,p). \tag{4.5}\] Similarly, we define another \(N^{k}\times N^{k}\) matrix \(\underline{M}_{k}\) by replacing \(\overline{s}_{i,j}^{k}\) with \(\underline{s}_{i,j}^{k}\). We can rewrite (4.2) as \[V(f,k,p+1)\geq-u_{\beta,\mathbf{q},k}+\underline{M}_{k}V(f,k,p). \tag{4.6}\] Both \(\overline{M}_{k}\) and \(\underline{M}_{k}\) are called _vertical scaling matrices_ of level \(k\). We remark that vertical scaling matrices were introduced in [13], and these matrices play a crucial role in estimating the box dimension of \(\Gamma f\). ### Some well known theorems and definitions Now we recall some notations and definitions in matrix analysis [10]. Given a matrix \(A=(a_{ij})_{n\times n}\), we say \(A\) is _nonnegative_ (resp. _positive_), denoted by \(A\geq 0\) (resp. \(A>0\)), if \(a_{ij}\geq 0\) (resp. \(a_{ij}>0\)) for all \(i\) and \(j\). Let \(B=(b_{ij})_{n\times n}\) be another matrix. We write \(A\geq B\) (resp. \(A>B\)) if \(a_{ij}\geq b_{ij}\) (resp. \(a_{ij}>b_{ij}\)) for all \(i\) and \(j\).
Similarly, given \(u=(u_{1},\dots,u_{n}),v=(v_{1},\dots,v_{n})\in\mathbb{R}^{n}\), we write \(u\geq v\) (resp. \(u>v\)) if \(u_{i}\geq v_{i}\) (resp. \(u_{i}>v_{i}\)) for all \(i\). A nonnegative matrix \(A=(a_{ij})_{n\times n}\) is called _irreducible_ if for any \(i,j\in\{1,\dots,n\}\), there exists a finite sequence \(i_{0},\dots,i_{t}\in\{1,\dots,n\}\), such that \(i_{0}=i,i_{t}=j\) and \(a_{i_{\ell-1},i_{\ell}}>0\) for all \(1\leq\ell\leq t\). \(A\) is called _primitive_ if there exists \(k\in\mathbb{Z}^{+}\), such that \(A^{k}>0\). It is clear that a primitive matrix is irreducible. Given an \(n\times n\) real matrix \(A\), we write \(\sigma(A)\) the set of all eigenvalues of \(A\) and define \(\rho(A)=\max\{|\lambda|:\,\lambda\in\sigma(A)\}\). We call \(\rho(A)\) the _spectral radius_ of \(A\). The following two lemmas are well known. Please see [10, Chapter 8] for details. **Lemma 4.2** (Perron-Frobenius Theorem).: _Let \(A=(a_{ij})_{n\times n}\) be an irreducible nonnegative matrix. Then_ 1. \(\rho(A)\) _is positive,_ 2. \(\rho(A)\) _is an eigenvalue of_ \(A\) _and has a positive eigenvector,_ 3. \(\rho(A)\) _increases if any element of_ \(A\) _increases._ **Lemma 4.3**.: _Let \(A=(a_{ij})_{n\times n}\) be a nonnegative matrix. Then \(\rho(A)\) is an eigenvalue of \(A\) and there is a nonnegative nonzero vector \(x\) such that \(Ax=\rho(A)x\)._ ### Monotonicity of spectral radii of vertical scaling matrices **Theorem 4.4**.: _For all \(k\in\mathbb{Z}^{+}\),_ \[\rho(\overline{M}_{k+1})\leq\rho(\overline{M}_{k}).\] _As a result, \(\lim_{k\to\infty}\rho(\overline{M}_{k})\) exists, denoted by \(\rho^{*}\)._ In [13], we proved this theorem under an additional assumption. Essentially, we required that \(\overline{M}_{k}\) are irreducible for all \(k\). 
Proof.: As in [13], we introduce another \(N^{k+1}\times N^{k+1}\) matrix \(\overline{M}_{k}^{*}\) as follows: \[(\overline{M}_{k}^{*})_{(i-1)N^{k}+\ell,j}=\begin{cases}\overline{s}_{i,\ell}^ {k},&\text{if }(\ell-1)N<j\leq\ell N,\\ 0,&\text{otherwise},\end{cases} \tag{4.7}\] for \(1\leq i\leq N\), \(1\leq\ell\leq N^{k}\) and \(1\leq j\leq N^{k+1}\). Using the same arguments in the proof of [13, Theorem 3.3], we have \(\overline{M}_{k+1}\leq\overline{M}_{k}^{*}\) so that \(\rho(\overline{M}_{k+1})\leq\rho(\overline{M}_{k}^{*})\). Now we prove that \(\rho(\overline{M}_{k}^{*})=\rho(\overline{M}_{k})\). The proof is divided into two parts. Firstly we show that \(\rho(\overline{M}_{k})\geq\rho(\overline{M}_{k}^{*})\). Write \(\lambda=\rho(\overline{M}_{k}^{*})\). From Lemma 4.3, \(\lambda\) is an eigenvalue of \(\overline{M}_{k}^{*}\) and there is a nonnegative nonzero vector \(u=(u_{1},\ldots,u_{N^{k+1}})^{T}\) such that \(\overline{M}_{k}^{*}u=\lambda u\). We define a vector \(u^{\prime}=(u_{1}^{\prime},\ldots,u_{N^{k}}^{\prime})^{T}\) by \[u_{j}^{\prime}=\sum_{p=(j-1)N+1}^{jN}u_{p},\quad 1\leq j\leq N^{k}.\] It is clear that \(u^{\prime}\) is also nonnegative and nonzero. By using the same arguments in the proof of [13, Theorem 3.3], we can obtain that \(\overline{M}_{k}u^{\prime}=\lambda u^{\prime}\) so that \(\lambda\) is an eigenvalue of \(\overline{M}_{k}\). Hence, \(\rho(\overline{M}_{k}^{*})=\lambda\leq\rho(\overline{M}_{k})\). Secondly we show that \(\rho(\overline{M}_{k})\leq\rho(\overline{M}_{k}^{*})\). Without loss of generality, we may assume that \(\mu:=\rho(\overline{M}_{k})>0\). From Lemma 4.3, \(\mu\) is an eigenvalue of \(\overline{M}_{k}\) and there is a nonnegative nonzero vector \(v=(v_{1},\ldots,v_{N^{k}})^{T}\) such that \(\overline{M}_{k}v=\mu v\).
We define a vector \(v^{\prime}=(v_{1}^{\prime},\ldots,v_{N^{k+1}}^{\prime})^{T}\) by \[v_{(i-1)N^{k}+\ell}^{\prime}=\overline{s}_{i,\ell}^{k}v_{\ell},\quad 1\leq i \leq N,1\leq\ell\leq N^{k}.\] It is clear that \(v^{\prime}\) is nonnegative. Furthermore, it follows from \(\overline{M}_{k}v=\mu v\) that for all \(1\leq i\leq N\) and \(1\leq j\leq N^{k-1}\), \[\mu v_{(i-1)N^{k-1}+j}=\sum_{t=(j-1)N+1}^{jN}\overline{s}_{i,t}^{k}v_{t}=\sum_ {t=(j-1)N+1}^{jN}v_{(i-1)N^{k}+t}^{\prime}. \tag{4.8}\] Thus \(v^{\prime}\) is a nonzero vector since otherwise, \(v\) is a zero vector which is a contradiction. For any \(1\leq i\leq N\) and \(1\leq\ell\leq N^{k}\), there exist \(1\leq i^{\prime}\leq N\) and \(1\leq j^{\prime}\leq N^{k-1}\) such that \(\ell=(i^{\prime}-1)N^{k-1}+j^{\prime}\). Thus, \[(\overline{M}_{k}^{*}v^{\prime})_{(i-1)N^{k}+\ell} =\overline{s}_{i,\ell}^{k}\sum_{p=(\ell-1)N+1}^{\ell N}v_{p}^{\prime}\] \[=\overline{s}_{i,\ell}^{k}\mu v_{(i^{\prime}-1)N^{k-1}+j^{\prime}} \text{(by (4.8))}\] \[=\mu\overline{s}_{i,\ell}^{k}v_{\ell}=\mu v_{(i-1)N^{k}+\ell}^{ \prime},\] which implies \(\overline{M}_{k}^{*}v^{\prime}=\mu v^{\prime}\) so that \(\mu\) is an eigenvalue of \(\overline{M}_{k}^{*}\). Hence, \(\rho(\overline{M}_{k})=\mu\leq\rho(\overline{M}_{k}^{*})\). From the above arguments, \(\rho(\overline{M}_{k+1})\leq\rho(\overline{M}_{k}^{*})=\rho(\overline{M}_{k})\). Since \(\rho(\overline{M}_{k})\geq 0\) for all \(k\), we know that \(\lim_{k\to\infty}\rho(\overline{M}_{k})\) exists. Similarly, we can obtain the following result. **Theorem 4.5**.: _For all \(k\in\mathbb{Z}^{+}\),_ \[\rho(\underline{M}_{k+1})\geq\rho(\underline{M}_{k}).\] _As a result, \(\lim_{k\to\infty}\rho(\underline{M}_{k})\) exists, denoted by \(\rho_{*}\)._ In the case that \(\rho_{*}=\rho^{*}\), we denote the common value by \(\rho_{\mathbf{S}}\).
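The monotone behavior in Theorems 4.4 and 4.5 can be observed numerically. The sketch below uses vertical scaling functions of our own choosing: \(N=2\), \(I=[0,1]\), \(S_{1}(x)=0.3+0.4x\) and \(S_{2}(x)=0.8-0.3x\). Since both functions are monotone, the extrema \(\overline{s}_{i,j}^{k}\) and \(\underline{s}_{i,j}^{k}\) over each \(I_{j}^{k}\) are attained at endpoints, so the matrices of (4.4) are computed exactly.

```python
import numpy as np

# Hypothetical vertical scaling functions on I = [0, 1] (our own choice);
# both are monotone with values in (0, 1), so (A2) holds.
N = 2
S = [lambda x: 0.3 + 0.4 * x, lambda x: 0.8 - 0.3 * x]

def s_entry(i, j, k, upper):
    # max (resp. min) of |S_i| over I_j^k, read off at the two endpoints
    # (exact here because S_i is monotone)
    ends = np.array([(j - 1) / N**k, j / N**k])
    v = np.abs(S[i - 1](ends))
    return v.max() if upper else v.min()

def scaling_matrix(k, upper):
    # level-k vertical scaling matrix of (4.4): row (i-1)N^{k-1} + l carries
    # the entries s_{i,j}^k in the columns (l-1)N < j <= lN
    M = np.zeros((N**k, N**k))
    for i in range(1, N + 1):
        for l in range(1, N**(k - 1) + 1):
            for j in range((l - 1) * N + 1, l * N + 1):
                M[(i - 1) * N**(k - 1) + l - 1, j - 1] = s_entry(i, j, k, upper)
    return M

rho = lambda M: max(abs(np.linalg.eigvals(M)))
upper_radii = [rho(scaling_matrix(k, True)) for k in range(1, 7)]
lower_radii = [rho(scaling_matrix(k, False)) for k in range(1, 7)]
# Theorem 4.4: upper_radii is nonincreasing (limit rho^*);
# Theorem 4.5: lower_radii is nondecreasing (limit rho_*); rho_* <= rho^*.
```

The two sequences of spectral radii approach \(\rho^{*}\) from above and \(\rho_{*}\) from below; since \(|S_{1}|\) and \(|S_{2}|\) are positive here, the two limits coincide with \(\rho_{\mathbf{S}}\).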
### The irreducibility of vertical scaling matrices Recall that \[\gamma(x)=\sum_{i=1}^{N}|S_{i}(x)|,\quad x\in I,\] and \(\gamma^{*}=\max_{x\in I}\gamma(x)\), \(\gamma_{*}=\min_{x\in I}\gamma(x)\). For any \(k\in\mathbb{Z}^{+}\), we define \[\overline{\gamma}_{k}=\max_{1\leq j\leq N^{k}}\sum_{i=1}^{N}\overline{s}_{i,j} ^{k},\quad\underline{\gamma}_{k}=\min_{1\leq j\leq N^{k}}\sum_{i=1}^{N} \underline{s}_{i,j}^{k}.\] Using the same arguments in [13, Lemma 3.6], we can obtain that \[\gamma^{*}=\lim_{k\to\infty}\overline{\gamma}_{k},\quad\gamma_{*}=\lim_{k\to \infty}\underline{\gamma}_{k}.\] For every \(k\in\mathbb{Z}^{+}\), from [10, Theorem 8.1.22], \(\underline{\gamma}_{k}\leq\rho(\underline{M}_{k})\leq\rho(\overline{M}_{k}) \leq\overline{\gamma}_{k}.\) Hence, \[\gamma_{*}\leq\rho_{*}\leq\rho^{*}\leq\gamma^{*}.\] Thus, if \(\gamma\) is a constant function on \(I\), then \(\gamma(x)=\rho_{\mathbf{S}}\) for all \(x\in I\). Using the same arguments in the proof of [13, Lemma 3.2], we have the following result. **Lemma 4.6**.: _Assume that for each \(1\leq i\leq N\), the vertical scaling function \(S_{i}\) is not identically zero on any subinterval of \(I\). Then \((\overline{M}_{k})^{k}>0\) for all \(k\in\mathbb{Z}^{+}\). As a result, \(\overline{M}_{k}\) is primitive for all \(k\in\mathbb{Z}^{+}\)._ Similarly, we can show that \(\underline{M}_{k}\) is primitive if \(|S_{i}|\) is positive for each \(1\leq i\leq N\). However, it is much more involved to prove the irreducibility of \(\underline{M}_{k}\) in the general setting. In this paper, we will show that \(\underline{M}_{k}\) is irreducible for sufficiently large \(k\) if \(\gamma_{*}\geq 1\) and \(S_{i}\) has finitely many zero points for each \(1\leq i\leq N\). **Lemma 4.7**.: _Suppose that \(N=2\) and \(\gamma_{*}\geq 1\). Then both \(|S_{1}|\) and \(|S_{2}|\) are positive functions on \(I\).
As a result, the matrix \(\underline{M}_{k}\) is primitive for all \(k\in\mathbb{Z}^{+}\)._ Proof.: We prove this lemma by contradiction. Assume that there exists \(x^{*}\in I\) such that \(S_{1}(x^{*})=0\). Then for every \(k\in\mathbb{Z}^{+}\), we have \(x^{*}\in I_{j}^{k}\) for some \(1\leq j\leq 2^{k}\) so that \(\underline{s}_{1,j}^{k}=0\) and \(\underline{\gamma}_{k}\leq\underline{s}_{1,j}^{k}+\underline{s}_{2,j}^{k}= \underline{s}_{2,j}^{k}\leq s^{*},\) where \(s^{*}=\max\{|S_{i}(x)|:\,x\in I,\,i=1,2\}<1\). It follows that \(\gamma_{*}\leq s^{*}<1\), which is a contradiction. Thus, \(|S_{1}|\) is positive on \(I\). Similarly, \(|S_{2}|\) is positive on \(I\). By using the same arguments as in [13, Lemma 3.2], \((\underline{M}_{k})^{k}>0\) for all \(k\in\mathbb{Z}^{+}\), so that \(\underline{M}_{k}\) is primitive for all \(k\in\mathbb{Z}^{+}\). **Lemma 4.8**.: _Suppose that \(N\geq 3\), \(\gamma_{*}\geq 1\) and the function \(S_{i}\) has finitely many zero points for all \(1\leq i\leq N\). Then there exists \(k_{1}\in\mathbb{Z}^{+}\), such that for all \(k>k_{1}\), every row of \(\underline{M}_{k}\) has at least \(N-1\) positive entries, and every column of \(\underline{M}_{k}\) has at least \(2\) positive entries._ Proof.: Let \(s^{*}=\max\{|S_{i}(x)|:\,x\in I,1\leq i\leq N\}\). Then \(s^{*}<1\leq\gamma_{*}\). Thus there exists a positive integer \(k_{2}\), such that \(\underline{\gamma}_{k}>s^{*}\) for all \(k>k_{2}\). By the definition of \(\underline{\gamma}_{k}\), we know that for all \(k>k_{2}\) and all \(1\leq j\leq N^{k}\), there are at least two distinct \(i,i^{\prime}\in\{1,\dots,N\}\) such that \(\underline{s}_{i,j}^{k}>0\) and \(\underline{s}_{i^{\prime},j}^{k}>0\). That is, every column of \(\underline{M}_{k}\) has at least two positive entries. Let \(Z_{i}\) be the set of zero points of \(S_{i}\) for \(1\leq i\leq N\) and write \(Z=\bigcup_{i=1}^{N}Z_{i}\).
Let \(T\) be the set of endpoints of all \(I_{j}^{k}\) for \(1\leq j\leq N^{k}\) and \(k\geq 1\), i.e., \(T=\bigcup_{k\geq 1}\{x_{0}+jN^{-k}(x_{N}-x_{0}):\,0\leq j\leq N^{k}\}\). Since \(Z\) is a finite set, there exists a positive integer \(k_{3}\) satisfying the following two conditions: 1. \(I_{j}^{k_{3}}\) contains at most one element of \(Z\) for all \(1\leq j\leq N^{k_{3}}\), 2. for every point \(x\in Z\cap T\), there exists \(1\leq j\leq N^{k_{3}}\) such that \(x\) is the endpoint of \(I_{j}^{k_{3}}\). Then it is easy to see that for all \(k>k_{3}\), every row of \(\underline{M}_{k}\) has at least \(N-1\) positive entries. Let \(k_{1}=\max\{k_{2},k_{3}\}\). We know that the lemma holds. **Lemma 4.9**.: _Assume that \(N\geq 3\), \(\gamma_{*}\geq 1\) and the function \(S_{i}\) has finitely many zero points for each \(1\leq i\leq N\). Then there exists \(k_{1}\in\mathbb{Z}^{+}\), such that for all \(k>k_{1}\), every row of \((\underline{M}_{k})^{k}\) has at least \((N-1)^{k}\) positive entries and every column of \((\underline{M}_{k})^{k}\) has at least \(2^{k}\) positive entries._ Proof.: Let \(k_{1}\) be the constant in Lemma 4.8. Fix \(k>k_{1}\). For all \(m\geq 1\) and \(1\leq i\leq N^{k}\), we define \[row_{m}(i)=\{j:\left((\underline{M}_{k})^{m}\right)_{ij}>0\}.\] Notice that \(\left((\underline{M}_{k})^{m+1}\right)_{ij}=\sum_{t=1}^{N^{k}}(\underline{M}_{ k})_{it}\big{(}(\underline{M}_{k})^{m}\big{)}_{tj}\) for all \(m\geq 1\) and \(1\leq i,j\leq N^{k}\). Thus for all \(m\geq 1\) and \(1\leq i\leq N^{k}\), \[row_{m+1}(i)=\{j:\text{there exists }t\in row_{1}(i),\text{such that }j\in row_{m}(t)\}.\] It follows from the definition of \(\underline{M}_{k}\) that for all \(1\leq i\leq N\), \(1\leq\ell\leq N^{k-1}\), \[row_{1}((i-1)N^{k-1}+\ell)\subset\{(\ell-1)N+1,(\ell-1)N+2,\ldots,\ell N\}. 
\tag{4.9}\] We claim that for each \(1\leq m\leq k-1\), \[row_{m}((i-1)N^{k-m}+\ell)\subset\{(\ell-1)N^{m}+1,(\ell-1)N^{m}+2,\ldots,\ell N^{m}\}\] for all \(1\leq i\leq N^{m}\) and \(1\leq\ell\leq N^{k-m}\). It follows from (4.9) that the claim holds for \(m=1\). Assume that the claim holds for some \(1\leq m\leq k-2\). Now given \(1\leq i\leq N^{m+1}\) and \(1\leq\ell\leq N^{k-(m+1)}\), we write \(i^{\prime}=(i-1)N^{k-(m+1)}+\ell\). If \(j\in row_{m+1}(i^{\prime})\), then there exists \(t\in row_{1}(i^{\prime})\) such that \(j\in row_{m}(t)\). Notice that there exists a unique integer pair \((i_{1},i_{2})\) with \(1\leq i_{1}\leq N\) and \(1\leq i_{2}\leq N^{m}\) such that \(i=(i_{1}-1)N^{m}+i_{2}\). Thus \[i^{\prime}=(i_{1}-1)N^{k-1}+(i_{2}-1)N^{k-(m+1)}+\ell.\] Hence, from (4.9), \((i_{2}-1)N^{k-m}+(\ell-1)N+1\leq t\leq(i_{2}-1)N^{k-m}+\ell N.\) Combining this with the inductive assumption, we have \((\ell-1)N^{m+1}+1\leq j\leq\ell N^{m+1}\) so that the claim holds for \(m+1\). This completes the proof of the claim. It directly follows from the claim that for all \(1\leq m\leq k-1\) and \(1\leq i\leq N^{k}\), if \(t_{1}\neq t_{2}\in row_{1}(i)\), then \(row_{m}(t_{1})\cap row_{m}(t_{2})=\emptyset\), which implies that \[\operatorname{card}\left(row_{m+1}(i)\right)=\sum_{t\in row_{1}(i)}\operatorname{card}\left(row_{m}(t)\right). \tag{4.10}\] From Lemma 4.8, \(\operatorname{card}\left(row_{1}(i)\right)\geq N-1\) for all \(1\leq i\leq N^{k}\). Combining this with (4.10), we can use inductive arguments to obtain that \(\operatorname{card}\left(row_{m}(i)\right)\geq(N-1)^{m}\) for all \(1\leq m\leq k\) and \(1\leq i\leq N^{k}\). Thus every row of \((\underline{M}_{k})^{k}\) has at least \((N-1)^{k}\) positive entries.
Similarly, for all \(m\geq 1\) and \(1\leq j\leq N^{k}\), we define \[col_{m}(j)=\{i:\left((\underline{M}_{k})^{m}\right)_{ij}>0\}.\] Then for all \(m\geq 1\) and \(1\leq j\leq N^{k}\), \[col_{m+1}(j)=\{i:\text{there exists }t\in col_{1}(j)\text{ such that }i\in col_{m}(t)\}.\] By using similar arguments as above, we can obtain that for each \(1\leq m\leq k-1\), \[col_{m}((j-1)N^{m}+\ell)\subset\{j,j+N^{k-m},\ldots,j+(N^{m}-1)N^{k-m}\}\] for all \(1\leq j\leq N^{k-m}\) and \(1\leq\ell\leq N^{m}\). Hence, for all \(1\leq m\leq k-1\) and \(1\leq j\leq N^{k}\), if \(t_{1}\neq t_{2}\in col_{1}(j)\), then \(col_{m}(t_{1})\cap col_{m}(t_{2})=\emptyset\). As a result, we have \(\operatorname{card}\left(col_{m}(j)\right)\geq 2^{m}\) for all \(1\leq m\leq k\) and \(1\leq j\leq N^{k}\), which implies that every column of \((\underline{M}_{k})^{k}\) has at least \(2^{k}\) positive entries. The following result is part of the statement in [10, 8.5.P5]. We will use it to prove that \(\underline{M}_{k}\) is primitive for sufficiently large \(k\) under certain conditions. **Lemma 4.10** ([10]).: _Let \(A\) be an irreducible nonnegative matrix. Assume that at least one of its main diagonal entries is positive. Then \(A\) is primitive._ **Theorem 4.11**.: _Assume that \(\gamma_{*}\geq 1\) and the function \(S_{i}\) has finitely many zero points for each \(1\leq i\leq N\). Then there exists \(k_{0}\in\mathbb{Z}^{+}\), such that \(\underline{M}_{k}\) is primitive for all \(k>k_{0}\)._ Proof.: From Lemma 4.7, we may assume that \(N\geq 3\). For every \(1\leq i\leq N\) and \(1\leq j\leq N^{k}\), we call \(\underline{s}_{i,j}^{k}\) a _basic entry_ of the matrix \(\underline{M}_{k}\). If all basic entries are positive, then using the same arguments as in the proof of Lemma 4.6, we can obtain that \((\underline{M}_{k})^{k}>0\). Let \(m_{i}\) be the number of zero points of \(S_{i}\) on \(I\). Write \(m=\sum_{i=1}^{N}m_{i}\).
Then for any \(k\geq 1\), there are at most \(2m\) basic entries equal to zero. Notice that for every \(k\geq 1\) and \(1\leq i,j\leq N^{k}\), the \((i,j)\) entry of \((\underline{M}_{k})^{k}\) is \[\bigl{(}(\underline{M}_{k})^{k}\bigr{)}_{i,j}=\sum_{{1\leq t_{2},\ldots,t_{k}\leq N^{k}}\atop{t_{1}=i,\,t_{k+1}=j}}\prod_{\ell=1}^{k}(\underline{M}_{k})_{t_{\ell},t_{\ell+1}}, \tag{4.11}\] and both every row and every column of \(\underline{M}_{k}\) have \(N\) basic entries. Hence, a zero basic entry of \(\underline{M}_{k}\) can force at most \(kN^{k-1}\) entries of \((\underline{M}_{k})^{k}\) to be zero. Thus there are at most \(2mkN^{k-1}\) zero entries in \((\underline{M}_{k})^{k}\). Let \(k_{1}\) be the constant in Lemma 4.9 and \(k_{0}=\max\{2,m,k_{1}\}\). We prove that \((\underline{M}_{k})^{k}\) is irreducible for all \(k>k_{0}\) by contradiction. Assume that \((\underline{M}_{k})^{k}\) is reducible. Then there are nonempty and disjoint subsets \(A,B\) of \(\{1,\ldots,N^{k}\}\) satisfying \(A\cup B=\{1,\ldots,N^{k}\}\), and for all \(i\in A\) and \(j\in B\), the \((i,j)\) entry of \((\underline{M}_{k})^{k}\) is zero. From Lemma 4.9 and \(N-1\geq 2\), there are at least \(2^{k}\) elements in both \(A\) and \(B\). Hence \((\underline{M}_{k})^{k}\) has at least \(2^{k}(N^{k}-2^{k})\) zero entries. From \(N\geq 3\) and \(k>k_{0}\), we have \(2^{k}>(k-1)k\geq mk\) and \(N^{k}-2^{k}>2N^{k-1}\) so that \(2^{k}(N^{k}-2^{k})>2mkN^{k-1}\), which is a contradiction. Hence, \((\underline{M}_{k})^{k}\) is irreducible for all \(k>k_{0}\). Now we will show that for all \(k>k_{0}\), at least one main diagonal entry of \((\underline{M}_{k})^{k}\) is positive, so that \((\underline{M}_{k})^{k}\) is primitive by Lemma 4.10. As a result, \(\underline{M}_{k}\) is primitive for all \(k>k_{0}\).
For any \(j\in\{1,2,\ldots,N^{k}\}\), there exists a unique \(j_{1}\cdots j_{k}\in\{1,\ldots,N\}^{k}\) such that \[j=(j_{1}-1)N^{k-1}+(j_{2}-1)N^{k-2}+\cdots+(j_{k-1}-1)N+j_{k}.\] Write \(t_{j,1}=j\) and \[t_{j,p+1}=(t_{j,p}-(j_{p}-1)N^{k-1}-1)N+j_{p},\quad 1\leq p\leq k.\] Then \(t_{j,k+1}=j\) and \((t_{j,p},t_{j,p+1})\) is a basic entry of \(\underline{M}_{k}\) for all \(1\leq p\leq k\). From (4.11), \(\big{(}(\underline{M}_{k})^{k}\big{)}_{j,j}\geq\prod_{p=1}^{k}(\underline{M}_{k})_{t_{j,p},t_{j,p+1}}\). Hence, a zero basic entry of \(\underline{M}_{k}\) can force at most \(k\) main diagonal entries of \((\underline{M}_{k})^{k}\) to be zero. Thus there are at most \(2mk\) zero main diagonal entries in \((\underline{M}_{k})^{k}\). Notice that \(k_{0}\geq\max\{2,m\}\) and \(N\geq 3\). Hence, for \(k>k_{0}\), we have \(N^{k}\geq 3^{k}>2k^{2}>2mk\) so that \((\underline{M}_{k})^{k}\) contains at least one positive main diagonal entry. ## 5. Calculation of box dimension of generalized affine FIFs In this section, we estimate the upper box dimension and the lower box dimension of \(f\) by \(\rho^{*}\) and \(\rho_{*}\), respectively. Using these results, we obtain the box dimension of \(\Gamma f\) under certain conditions. We remark that the proofs in this section are similar to those in Section 4 of [13]. **Theorem 5.1**.: _Assume that the function \(S_{i}\) is not identically zero on every subinterval of \(I\) for all \(1\leq i\leq N\). Then_ \[\overline{\dim}_{B}\Gamma f\leq\max\Big{\{}1,1+\frac{\log\rho^{*}}{\log N}\Big{\}}. \tag{5.1}\] Proof.: Fix \(k\in\mathbb{Z}^{+}\). Let \(u_{\beta,\mathbf{q},k}\) be the vector in \(\mathbb{R}^{N^{k}}\) defined by (4.3). From Lemma 4.6, \(\overline{M}_{k}\) is primitive so that it is irreducible. By Lemma 4.2, we can choose a positive eigenvector \(w_{k}\) of \(\overline{M}_{k}\) such that \(w_{k}\geq u_{\beta,\mathbf{q},k}\) and \(w_{k}\geq V(f,k,1)\).
Hence, from (4.5), we have \[V(f,k,p+1)\leq w_{k}+\overline{M}_{k}V(f,k,p)\] for all \(p\in\mathbb{Z}^{+}\). Thus, \[V(f,k,p) \leq w_{k}+\overline{M}_{k}w_{k}+\cdots+(\overline{M}_{k})^{p-2}w_{k}+(\overline{M}_{k})^{p-1}V(f,k,1)\] \[\leq\sum_{n=0}^{p-1}(\rho(\overline{M}_{k}))^{n}w_{k}\] for all \(p\in\mathbb{Z}^{+}\). It follows that \[O_{k+p}(f,I)=||V(f,k,p)||_{1}\leq||w_{k}||_{1}\sum_{n=0}^{p-1}(\rho(\overline{M}_{k}))^{n}\leq||w_{k}||_{1}p\big{(}(\rho(\overline{M}_{k}))^{p}+1\big{)}.\] Hence, \[\overline{\lim_{p\to\infty}}\frac{\log(O_{k+p}(f,I)+1)}{p\log N}\leq\max\Big{\{}0,\frac{\log\rho(\overline{M}_{k})}{\log N}\Big{\}}.\] Thus, from Lemma 2.2, \[\overline{\dim}_{B}\Gamma f\leq 1+\overline{\lim_{p\to\infty}}\frac{\log(O_{k+p}(f,I)+1)}{p\log N}\leq\max\Big{\{}1,1+\frac{\log\rho(\overline{M}_{k})}{\log N}\Big{\}}.\] By the arbitrariness of \(k\), we know from Theorem 4.4 that (5.1) holds. **Theorem 5.2**.: _Assume that \(\operatorname{Var}(f,I)=\infty\), \(\gamma_{*}\geq 1\) and the function \(S_{i}\) has finitely many zero points on \(I\) for all \(1\leq i\leq N\). Then_ \[\underline{\dim}_{B}\Gamma f\geq 1+\frac{\log\rho_{*}}{\log N}. \tag{5.2}\] Proof.: Let \(k_{0}\) be the constant in Theorem 4.11. Fix \(k>k_{0}\). Write \(\xi_{k}=u_{\beta,\mathbf{q},k}\). Given \(0<\tau<\rho(\underline{M}_{k})\), from Lemma 4.2, we can find a positive eigenvector \(w_{k}\) of \(\underline{M}_{k}\) satisfying \(w_{k}\geq\xi_{k}/(\rho(\underline{M}_{k})-\tau)\). From Theorem 4.11, \(\underline{M}_{k}\) is primitive so that there exists \(n_{k}\in\mathbb{Z}^{+}\) such that \(\big{(}\underline{M}_{k}\big{)}^{n_{k}}>0\). Let \(\alpha_{k}\) be the minimal entry of the matrix \(\big{(}\underline{M}_{k}\big{)}^{n_{k}}\). Then \(\alpha_{k}>0\). From (4.6), \[V(f,k,p+1)\geq\underline{M}_{k}V(f,k,p)-\xi_{k} \tag{5.3}\] for all \(p\in\mathbb{Z}^{+}\).
Repeatedly using this inequality, we can obtain that for all \(p\in\mathbb{Z}^{+}\), \[V(f,k,p+n_{k})\geq(\underline{M}_{k})^{n_{k}}V(f,k,p)-\sum_{\ell=0}^{n_{k}-1}(\underline{M}_{k})^{\ell}\xi_{k}. \tag{5.4}\] Notice that the maximal entry of \(V(f,k,p)\) is at least \(N^{-k}\|V(f,k,p)\|_{1}\). Thus, \[(\underline{M}_{k})^{n_{k}}V(f,k,p)\geq(\alpha_{k}^{\prime},\cdots,\alpha_{k}^{\prime}),\] where \(\alpha_{k}^{\prime}=\alpha_{k}N^{-k}\|V(f,k,p)\|_{1}\). Notice that \[\lim_{p\to\infty}\|V(f,k,p)\|_{1}=\lim_{p\to\infty}O_{k+p}(f,I)=\operatorname{Var}(f,I)=\infty.\] Hence, we can choose \(p_{*}\) large enough, such that \[(\underline{M}_{k})^{n_{k}}V(f,k,p_{*})\geq w_{k}+\sum_{\ell=0}^{n_{k}-1}(\underline{M}_{k})^{\ell}\xi_{k}.\] Let \(p_{k}=p_{*}+n_{k}\). Then from (5.4), \[V(f,k,p_{k})\geq w_{k}\geq\frac{1}{\rho(\underline{M}_{k})-\tau}\xi_{k}.\] From (5.3), \[V(f,k,p_{k}+1)\geq\rho(\underline{M}_{k})w_{k}-\xi_{k}\geq\rho(\underline{M}_{k})w_{k}-(\rho(\underline{M}_{k})-\tau)w_{k}=\tau w_{k}.\] Notice that for all \(n\in\mathbb{Z}^{+}\), \[\rho(\underline{M}_{k})\tau^{n}w_{k}-\xi_{k} =\rho(\underline{M}_{k})\big{(}\tau^{n}-1\big{)}w_{k}+\rho(\underline{M}_{k})w_{k}-\xi_{k}\] \[\geq\tau\big{(}\tau^{n}-1\big{)}w_{k}+\tau w_{k}=\tau^{n+1}w_{k}.\] Thus, by induction, \(V(f,k,p_{k}+n)\geq\tau^{n}w_{k}\) for all \(n\in\mathbb{Z}^{+}\). Hence \(O_{k+p_{k}+n}(f,I)=\|V(f,k,p_{k}+n)\|_{1}\geq\tau^{n}\|w_{k}\|_{1}\), which implies that \[\varliminf_{n\to\infty}\frac{\log\big{(}O_{n}(f,I)+1\big{)}}{n\log N}=\varliminf_{n\to\infty}\frac{\log\big{(}O_{k+p_{k}+n}(f,I)+1\big{)}}{n\log N}\geq\frac{\log\tau}{\log N}.\] It follows from the arbitrariness of \(\tau\) that \(\log\rho(\underline{M}_{k})/\log N\) is at most the left-hand side of this inequality. Combining this with Lemma 2.2, we have \[\underline{\dim}_{B}\Gamma f\geq 1+\frac{\log\rho(\underline{M}_{k})}{\log N}.\] Since this result holds for all \(k>k_{0}\), we know from Theorem 4.5 that (5.2) holds.
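Both proofs above rest on the same mechanism: the oscillation vector satisfies an affine recursion of the form \(v_{p+1}\approx Mv_{p}\pm\xi\) with \(M\) a primitive nonnegative matrix, so \(\|v_{p}\|_{1}\) grows at the rate \(\rho(M)\), and Lemma 2.2 turns this growth rate into a bound on the box dimension. The following sketch illustrates the growth-rate phenomenon numerically; the matrix \(M\) and the perturbation \(\xi\) below are toy choices of ours, not objects from the paper.

```python
import math

def growth_rate(M, xi, iters=80):
    """Iterate v <- M v + xi and return the final ratio ||v_{p+1}||_1 / ||v_p||_1.

    For a primitive nonnegative M with rho(M) > 1, the additive perturbation xi
    does not affect the asymptotic growth rate, which equals rho(M)."""
    n = len(M)
    v = [1.0] * n
    ratio = 0.0
    for _ in range(iters):
        prev = sum(v)  # ||v||_1 (all entries stay positive here)
        v = [sum(M[i][j] * v[j] for j in range(n)) + xi[i] for i in range(n)]
        ratio = sum(v) / prev
    return ratio

# Toy primitive matrix whose Perron root is the golden ratio (1 + sqrt(5)) / 2.
M = [[1.0, 1.0],
     [1.0, 0.0]]
xi = [0.3, 0.7]
rho = growth_rate(M, xi)  # converges to rho(M) ~ 1.618
```

Mirroring Theorems 5.1 and 5.2, the \(\ell^{1}\)-norm of the iterates grows like \(\rho(M)^{p}\) up to bounded factors, which is exactly what makes \(\log(O_{k+p}(f,I)+1)/(p\log N)\) converge to \(\log\rho/\log N\).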
_Remark 5.3_.: From the proof of the above theorem, it is easy to see that under the assumptions of the theorem, \(\operatorname{Var}(f,I_{j}^{k})=\infty\) for any \(k\in\mathbb{Z}^{+}\) and \(1\leq j\leq N^{k}\). **Theorem 5.4**.: _Assume that \(\rho^{*}=\rho_{*}\) and the function \(S_{i}\) has finitely many zero points on \(I\) for all \(1\leq i\leq N\). Then in the case that \(\operatorname{Var}(f,I)=\infty\) and \(\rho_{\mathbf{S}}>1\),_ \[\dim_{B}\Gamma f=1+\frac{\log\rho_{\mathbf{S}}}{\log N}, \tag{5.5}\] _otherwise \(\dim_{B}\Gamma f=1\)._ Proof.: In the case that \(\operatorname{Var}(f,I)<\infty\), we know from Lemma 2.2 that \(\overline{\dim}_{B}\Gamma f\leq 1\). In the case that \(\rho_{\mathbf{S}}\leq 1\), we know from Theorem 5.1 that \(\overline{\dim}_{B}\Gamma f\leq 1\). Since \(\underline{\dim}_{B}\Gamma f\geq 1\) always holds, \(\dim_{B}\Gamma f=1\) if \(\operatorname{Var}(f,I)<\infty\) or \(\rho_{\mathbf{S}}\leq 1\). In the case that \(\operatorname{Var}(f,I)=\infty\) and \(\rho_{\mathbf{S}}>1\), we know from Theorems 5.1 and 5.2 that (5.5) holds. From Theorems 5.1, 5.2 and 5.4, we know that Theorem 2.4 holds. Using the same arguments as in the proof of [13, Proposition 3.5], we can prove that \(\rho^{*}=\rho_{*}\) if \(|S_{i}|\) is positive on \(I\) for all \(1\leq i\leq N\). Thus we have the following result. **Corollary 5.5**.: _Assume that the function \(|S_{i}|\) is positive on \(I\) for each \(1\leq i\leq N\). Then in the case that \(\operatorname{Var}(f,I)=\infty\) and \(\rho_{\mathbf{S}}>1\), (5.5) holds; otherwise \(\dim_{B}\Gamma f=1\)._ ## 6. An example: generalized Weierstrass-type functions Weierstrass functions are classical fractal functions. There are many works on fractal dimensions of their graphs, including box dimension, Hausdorff dimension, and other dimensions. Please see [11, 14, 17] and the references therein.
For example, Ren and Shen [17] studied the following Weierstrass-type functions \[g_{\lambda,N}^{\phi}(x)=\sum_{k=0}^{\infty}\lambda^{k}\phi(N^{k}x),\quad x\in\mathbb{R},\] where \(N\geq 2\) is an integer, \(1/N<\lambda<1\) and \(\phi:\mathbb{R}\to\mathbb{R}\) is a \(\mathbb{Z}\)-periodic real analytic function. They proved that either such a function is real analytic, or the Hausdorff dimension of its graph is equal to \(2+\log_{N}\lambda\). It is well known that \(f=g_{\lambda,N}^{\phi}\big{|}_{[0,1]}\) is an FIF. In fact, for \(i\in\{1,2,\ldots,N\}\) and \(x\in[0,1]\), we have \[f\Big{(}\frac{x+i-1}{N}\Big{)}=\phi\Big{(}\frac{x+i-1}{N}\Big{)}+\lambda\sum_{k=0}^{\infty}\lambda^{k}\phi(N^{k}x)=\phi\Big{(}\frac{x+i-1}{N}\Big{)}+\lambda f(x).\] Thus, \(\Gamma f=\bigcup_{i=1}^{N}W_{i}(\Gamma f)\), where for \(i=1,2,\ldots,N\), \[W_{i}(x,y)=\Big{(}\frac{x+i-1}{N},\lambda y+\phi\big{(}\frac{x+i-1}{N}\big{)}\Big{)},\quad(x,y)\in[0,1]\times\mathbb{R}.\] Hence, \(f\) is a generalized affine FIF determined by the IFS \(\{W_{i}\}_{i=1}^{N}\). Let \(\phi(x)=\cos(2\pi x)\). Then \(g_{\lambda,N}^{\phi}\) is the classical Weierstrass function. Shen [19] proved that the Hausdorff dimension of its graph is equal to \(2+\log_{N}\lambda\). Let \(q_{i}(x)=\cos(2\pi(x+i-1)/N)\), \(1\leq i\leq N\). It is easy to check that \(\sum_{i=1}^{N}q_{i}(x)=0\) for all \(x\in[0,1]\). Thus, from Corollary 3.8, we obtain the well-known result \(\dim_{B}\Gamma f=2+\log_{N}\lambda\), where \(f=g_{\lambda,N}^{\phi}\big{|}_{[0,1]}\). By Theorem 2.4, we can study the box dimension of generalized Weierstrass-type functions by replacing the vertical scaling factor \(\lambda\) with vertical scaling functions. _Example 6.1_.: Let \(I=[0,1]\) and \(N=3\). Then \(x_{i}=i/3\), \(i=0,1,2,3\).
Let the vertical scaling functions \(S_{i}\), \(1\leq i\leq 3\), on \([0,1]\) be defined by \[S_{1}(x)=S_{2}(x)=\frac{1}{2}+\frac{\sin(2\pi x)}{4},\qquad S_{3}(x)=\frac{1}{2}-\frac{\sin(2\pi x)}{4}.\] Then each function \(S_{i}\) is positive on \(I\) so that \(\rho_{*}=\rho^{*}\). Let \(\phi(x)=\cos(2\pi x)\) and define maps \(W_{i}\), \(1\leq i\leq 3\) by \[W_{i}(x,y)=\Big{(}\frac{x+i-1}{3},S_{i}(x)y+\phi\big{(}\frac{x+i-1}{3}\big{)}\Big{)},\quad(x,y)\in[0,1]\times\mathbb{R}.\] Let \(y_{0}=y_{2}=2\) and \(y_{1}=y_{3}=1/2\). Then it is easy to check that \[W_{i}(x_{0},y_{0})=(x_{i-1},y_{i-1}),\quad W_{i}(x_{3},y_{3})=(x_{i},y_{i})\] for \(i=1,2,3\). Thus \(\{W_{i}\}_{i=1}^{3}\) determines a generalized affine FIF \(f\). Please see Figure 1 for the graph of \(f\). Notice that \(\gamma(x)=\sum_{i=1}^{3}|S_{i}(x)|=3/2+\sin(2\pi x)/4\) for \(x\in[0,1]\). Hence, \(\gamma_{*}=5/4\), \(\gamma^{*}=7/4\) and \(\lambda^{\prime}=\pi/2\) is a Lipschitz constant of \(\gamma(x)\). Let \(q_{i}(x)=\phi((x+i-1)/3)\), \(x\in[0,1]\), \(i=1,2,3\). Then \(\sum_{i=1}^{3}q_{i}(x)=0\) for all \(x\in[0,1]\) so that \(\operatorname{Var}(\sum_{i=1}^{3}q_{i},I)=0\). Now we estimate \(M_{f}=\max\{|f(x)|:\,x\in I\}\). Notice that for any \(x\in I\), there exists \(i_{1}i_{2}\cdots\in\{1,2,3\}^{\infty}\), such that \(x\in\bigcap_{n=1}^{\infty}L_{i_{1}}\circ L_{i_{2}}\circ\cdots\circ L_{i_{n}}(I)\). Thus from (2.6), we have \[f(x) =q_{i_{1}}(L_{i_{1}}^{-1}(x))+S_{i_{1}}(L_{i_{1}}^{-1}(x))f(L_{i_{1}}^{-1}(x))\] \[=q_{i_{1}}(L_{i_{1}}^{-1}(x))+\sum_{n=2}^{\infty}\Big{(}\prod_{t=1}^{n-1}S_{i_{t}}\big{(}L_{i_{t}}^{-1}\circ\cdots\circ L_{i_{1}}^{-1}(x)\big{)}\Big{)}q_{i_{n}}\big{(}L_{i_{n}}^{-1}\circ\cdots\circ L_{i_{1}}^{-1}(x)\big{)}.\] Hence, from \(q^{*}=\max\{|q_{i}(x)|:x\in[0,1],i=1,2,3\}=1\) and \[S^{*}=\max\{S_{i}(x):x\in[0,1],i=1,2,3\}=\frac{3}{4},\] we have \(M_{f}\leq q^{*}\sum_{n=0}^{\infty}(S^{*})^{n}=q^{*}/(1-S^{*})=4\).
Thus, \[\frac{\lambda^{\prime}M_{f}|I|+\operatorname{Var}(\sum_{i=1}^{3}q_{i},I)}{\gamma_{*}-1}\leq\frac{(\pi/2)\times 4\times 1+0}{5/4-1}=8\pi.\]

Figure 1. The FIF in Example 6.1

By calculation, \(O_{6}(f,I)>8\pi\). Thus, from Remark 3.6, \(\operatorname{Var}(f,I)=\infty\). By the definition of the vertical scaling matrices, we have \[\overline{M}_{1}=\begin{pmatrix}\frac{3}{4}&\frac{1}{2}+\frac{\sqrt{3}}{8}&\frac{1}{2}\\ \frac{3}{4}&\frac{1}{2}+\frac{\sqrt{3}}{8}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}+\frac{\sqrt{3}}{8}&\frac{3}{4}\end{pmatrix},\quad\underline{M}_{1}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}-\frac{\sqrt{3}}{8}&\frac{1}{4}\\ \frac{1}{2}&\frac{1}{2}-\frac{\sqrt{3}}{8}&\frac{1}{4}\\ \frac{1}{4}&\frac{1}{2}-\frac{\sqrt{3}}{8}&\frac{1}{2}\end{pmatrix}.\] Similarly, by calculation, we can obtain the spectral radii of the vertical scaling matrices \(\rho(\overline{M}_{k})\) and \(\rho(\underline{M}_{k})\) for \(k=1,2,4,5,7,8\), as in Table 1. Thus, from Theorem 2.4, \[\dim_{B}\Gamma f=1+\log\rho_{\mathbf{S}}/\log N\approx 1+\log 1.516/\log 3\approx 1.379.\]
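The \(k=1\) computation above is easy to reproduce numerically. Reading off the displayed matrices, the \((i,j)\) entry of \(\overline{M}_{1}\) (resp. \(\underline{M}_{1}\)) is the maximum (resp. minimum) of \(|S_{i}|\) over \(I_{j}^{1}\). A short sketch, with the grid sampling and the power iteration as our own implementation choices:

```python
import math

S = [lambda x: 0.5 + math.sin(2 * math.pi * x) / 4,   # S_1
     lambda x: 0.5 + math.sin(2 * math.pi * x) / 4,   # S_2
     lambda x: 0.5 - math.sin(2 * math.pi * x) / 4]   # S_3

def scaling_matrix(extremum, n_grid=3000):
    """(i, j) entry: extremum of |S_i| over I_j^1 = [j/3, (j+1)/3], by sampling."""
    matrix = []
    for Si in S:
        row = []
        for j in range(3):
            a, b = j / 3, (j + 1) / 3
            row.append(extremum(abs(Si(a + (b - a) * t / n_grid))
                                for t in range(n_grid + 1)))
        matrix.append(row)
    return matrix

def spectral_radius(M, iters=200):
    """Power iteration; converges to the Perron root for a primitive matrix."""
    n = len(M)
    v = [1.0] * n
    rho = 1.0
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        rho = max(w)
        v = [x / rho for x in w]
    return rho

rho_up = spectral_radius(scaling_matrix(max))  # 7/4 + sqrt(3)/8 (constant row sums)
rho_lo = spectral_radius(scaling_matrix(min))  # 5/4 - sqrt(3)/8 (constant row sums)
```

Both \(k=1\) matrices happen to have constant row sums, so their Perron roots are exactly \(7/4+\sqrt{3}/8\approx 1.966\) and \(5/4-\sqrt{3}/8\approx 1.034\), consistent with the limiting value \(\rho_{\mathbf{S}}\approx 1.516\) used in the dimension formula above.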
2304.01022
Uncertainty-Based Knowing How Logic
We introduce a novel semantics for a multi-agent epistemic operator of knowing how, based on an indistinguishability relation between plans. Our proposal is, arguably, closer to the standard presentation of knowing that modalities in classical epistemic logic. We study the relationship between this new semantics and previous approaches, showing that our setting is general enough to capture them. We also study the logical properties of the new semantics. First, we define a sound and complete axiomatization. Second, we define a suitable notion of bisimulation and prove correspondence theorems. Finally, we investigate the computational complexity of the model checking and satisfiability problems for the new logic.
Carlos Areces, Raul Fervari, AndrΓ©s R. Saravia, Fernando R. VelΓ‘zquez-Quesada
2023-04-03T14:20:52Z
http://arxiv.org/abs/2304.01022v1
# Uncertainty-Based Knowing How Logic ###### Abstract We introduce a novel semantics for a multi-agent epistemic operator of _knowing how_, based on an indistinguishability relation between plans. Our proposal is, arguably, closer to the standard presentation of _knowing that_ modalities in classical epistemic logic. We study the relationship between this new semantics and previous approaches, showing that our setting is general enough to capture them. We also study the logical properties of the new semantics. First, we define a sound and complete axiomatization. Second, we define a suitable notion of bisimulation and prove correspondence theorems. Finally, we investigate the computational complexity of the model checking and satisfiability problems for the new logic. ## 1 Introduction Epistemic logic (EL; [28, 13]) is a logical formalism tailored for reasoning about the knowledge of abstract autonomous entities commonly called agents (e.g., a human being, a robot, a vehicle). It has contributed to the formal study of complex multi-agent epistemic notions not only in philosophy [25] but also in computer science [13, 41] and economics [47]. Standard epistemic logics deal with an agent's knowledge about the truth-value of propositions (the notion of _knowing that_). Thus, they focus on the study of sentences like _"the agent knows that it is sunny in Paris"_ or _"the robot knows that it is standing next to a wall"_. For doing so, at the semantic level, EL formulas are typically interpreted over relational models [9, 10]: essentially, labeled directed graphs. The elements of the domain (called _states_ or _worlds_) represent different possible situations, and they fix the facts an agent might or might not know. Then, the knowledge of each agent is given by her _epistemic indistinguishability_ relation, used to represent her uncertainty about the truth: related states are considered indistinguishable for the agent.
Finally, an agent is said to know that a proposition \(\varphi\) is true at a given state \(s\) if and only if \(\varphi\) holds in all states she cannot distinguish from \(s\) (i.e., in all states accessible from \(s\)). In order to capture properly the properties of knowledge, it is typically assumed that the indistinguishability relation is an equivalence relation. In spite of its simplicity, this indistinguishability-based representation of knowledge has several advantages. First, it captures the agent's _high-order_ knowledge (knowledge about her own knowledge and that of other agents). Moreover, due to its generality, it opens the way to study other epistemic notions, such as the notion of _belief_[28]. Finally, it allows a very natural representation of actions through which knowledge changes [53, 50]. In recent years, other forms of knowledge have been studied (see the discussion in [58]). Some authors have studied knowledge of propositions using rather the notion of _knowing whether_[23, 14]; some others have focused in the reasons/justifications for propositional knowledge, exploring the notion of _knowing why_[5, 60]; some more have looked at more general scenarios, proposing logics for _knowing the value_[20, 6, 54]. A further and particularly interesting form of knowledge, motivated by different scenarios in philosophy and AI, is one that focuses rather on the agent's abilities: the notion of _knowing how_[15]. Intuitively, an agent knows how to achieve \(\varphi\) given \(\psi\) if she has the _ability_ to guarantee that \(\varphi\) will be the case whenever she is in a situation in which \(\psi\) holds. Arguably, this notion is particularly important as it provides the formal foundations of automated planning and strategic reasoning within AI. Historically, the concept of knowing how has been considered different from knowing that, as posed e.g. in [48]. 
Knowing how is often seen as a reflection of actions or abilities that agents may take, in an intelligent manner, in order to achieve a certain goal. In turn, there is a large literature connecting _knowing how_ with logics of knowledge and action (see, e.g., [39, 42, 31, 52, 27]). However, the way in which these proposals represent _knowing how_ has been the target of criticisms. The main issue is that a simple combination of standard operators expressing _knowing that_ and _ability_ (see, e.g., [51]) does not seem to lead to a natural notion of _knowing how_ (see [29, 26] for a discussion). Taking these considerations into account, [57, 58, 59] introduced a novel framework based on a binary _knowing how_ modality that is not defined in terms of _knowing that_. At the semantic level, this language is also interpreted over relational models -- called in this context labeled transition systems (LTSs). Yet, relations do not represent indistinguishability anymore; they rather describe the actions the agent has at her disposal (similar to what is done in, e.g., propositional dynamic logic [22]). Indeed, an edge labeled \(a\) from state \(w\) to state \(u\) indicates now that the agent can execute action \(a\) to transform state \(w\) into \(u\). In the proposed semantics, the new modality \(\mathsf{Kh}(\psi,\varphi)\) holds if and only if there is a "plan" -- a sequence of actions satisfying a constraint called strong executability (SE) -- leading from \(\psi\)-states to \(\varphi\)-states. Intuitively, SE implies that the plan is "fail-proof" in the LTS; it unerringly leads from every \(\psi\)-state only to \(\varphi\)-states. Other variants of this _knowing how_ operator follow a similar approach (see [32, 34, 16, 56]). Further motivation for these semantics can be found in the referred papers. 
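To make the SE condition concrete, the following is a minimal sketch of checking \(\mathsf{Kh}(\psi,\varphi)\) over a toy LTS (the state names, the set-based execution, and the brute-force plan search are ours, not taken from the cited papers). A plan is fail-proof when, at every step, the next action is executable in every state the partial execution may have reached, and all complete executions end in goal states.

```python
from itertools import product

def step(edges, states, action):
    """One step of a strongly executable plan: `action` must be executable
    (have at least one successor) in every state reached so far."""
    nxt = set()
    for s in states:
        succ = edges.get((s, action), set())
        if not succ:
            return None  # the plan could get stuck here: not strongly executable
        nxt |= succ
    return nxt

def assures(edges, plan, start, goal):
    """Is `plan` strongly executable at `start`, with every complete
    execution ending inside `goal`?"""
    current = {start}
    for a in plan:
        current = step(edges, current, a)
        if current is None:
            return False
    return current <= goal

def kh(edges, actions, psi, phi, max_len=3):
    """Kh(psi, phi): some single plan must work uniformly from every psi-state."""
    return any(
        all(assures(edges, plan, s, phi) for s in psi)
        for n in range(max_len + 1)
        for plan in product(actions, repeat=n)
    )

# Toy LTS: action a is nondeterministic at w1, but b finishes the job from
# both of its possible outcomes, so the two-step plan (a, b) is fail-proof.
edges = {('w1', 'a'): {'w2', 'w3'},
         ('w2', 'b'): {'w4'},
         ('w3', 'b'): {'w4'}}
print(kh(edges, ['a', 'b'], psi={'w1'}, phi={'w4'}))  # True
```

Deleting the edge \((w_{3},b)\) makes the check return `False`: the plan \(ab\) can still reach \(w_{4}\) along some executions, but it may also get stuck at \(w_{3}\), so it is no longer strongly executable.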
It is interesting to notice how LTSs have no epistemic component: their relations are interpreted as actions, and then the abilities of an agent are defined only in terms of what these actions can achieve. This is in sharp contrast with standard EL, where relational models provide two kinds of information: ontic facts about the given situation (the model's evaluation point) and the particular way an agent 'sees' this situation (both the possible states available in the model and the agent's indistinguishability relation among them). In particular, in a multi-agent scenario, all agents share the same ontic information, and differ on their _epistemic interpretation_ of it. If one wants to mirror the situation in EL, it seems natural that _knowing how_ should be defined in terms of some kind of indistinguishability over the actual situation. Such an extended model would then be able to capture both the abilities of an agent as given by her available actions (the ontic information) as well as the knowledge (or lack of it) that arises from her uncertainty (the epistemic information). This paper investigates a new semantics for \(\mathsf{Kh}_{\mathsf{i}}(\psi,\varphi)\), a multi-agent version of the _knowing how_ modality, first presented in [3]. This semantics introduces two ideas. The first, and crucial, one is the use of a notion of _epistemic indistinguishability over plans_, in the spirit of the _strategy indistinguishability_ of, e.g., [30, 8]. The intuition behind it is that, under the original LTS semantics, the only reason why an agent might not know how to achieve a goal is because there is no adequate action available. However, one can think of scenarios in which the lack of knowledge arises for a different reason: the agent might have an adequate plan available (she has the ability to do something), and yet she might not be able to distinguish it from a non-adequate one, in the sense of not being able to tell that, in general, these plans produce different outcomes.
Section 4 provides a deeper discussion on this. In this way, these _uncertainty-based_ LTSs reintroduce the notion of epistemic indistinguishability. Now, although indistinguishability over plans is the main idea behind the new semantics, this proposal incorporates a second insight. One can also think of scenarios in which some of the actions, despite being _ontically_ available, are not _epistemically_ accessible to the agent. There might be several reasons for this, but an appealing one is that the agent might not be _aware of_ all available actions. In such cases, the epistemically inaccessible actions are then not even under consideration when the agent looks for a plan to reach a goal. The idea of awareness is not new in the EL literature: it has been used for dealing with the EL problem of logical omniscience [55, 49, 21] by allowing the agent not to be aware of all involved atoms/formulas, thus bringing it closer to what a 'real' resource-bounded agent is (see [12]). Notice that the ideas discussed above are in line with a reading that has consensus in the literature (see, e.g., [24]): an agent's knowing how entails her ability (i.e., the capacity of actually doing it); but ability does not necessarily entail knowing how. It is equally important to notice that, in the new semantics, the agent does not _need_ to be incapable of distinguishing certain actions, and she does not _need_ to be unaware of some of them. As will be proved, the new semantics is a generalisation of the original one in [57, 59]. Thus, an agent in the new semantics who does not have uncertainty among plans and has full awareness of all of them is, knowledge-wise, exactly like an agent in the original semantics. Our contributions. Our work aims to shed new light on knowing how logics. In particular, we investigate a new multi-agent semantics for capturing the notion of knowing how, generalizing previous proposals [57, 58, 59, 3].
Herein, we establish a distinction between ontic information shared by the agents (or abilities), and epistemic information for each individual agent (or awareness), at the level of models. In our semantics, knowing how is given by the latter, instead of by the former, as in existing approaches [57, 58, 59]. Moreover, we present a thorough study of the metalogical properties of the new logic, and compare it with previous approaches. Our contributions can be summarized as follows: 1. We introduce a new semantics for \(\mathsf{Kh}_{i}(\psi,\varphi)\) (for \(i\) an agent) that reintroduces the notion of epistemic indistinguishability from classical EL. This dimension captures the awareness for each particular agent over the available abilities in the real world. 2. We introduce a suitable notion of bisimulation for the new semantics, based on ideas from [17, 18]. We prove an invariance result, and a Hennessy-Milner style theorem over finite models. 3. We show that the logic obtained is strictly weaker (and this is an advantage, as we will discuss) than the logic from [57, 58, 59]. Still, the new semantics is general enough to capture the original proposal by imposing adequate conditions on the class of models. Apart from the direct correspondence between models of each framework established already in [3], we introduce a new general class of models that also does the job. 4. We present a sound and complete axiomatization for the logic over the class of all models. 5. We study the computational properties of our logic. First, we provide a finite model property via filtrations. I.e., we show how, given an arbitrary model, it is possible to obtain a finite model satisfying the same set of formulas. A more careful selection argument can be used to prove that the satisfiability problem for the new logic is \(\mathsf{NP}\)-complete, whereas model checking is in \(\mathsf{P}\). This paper extends the work presented in [3]. Herein, we provide detailed discussions and motivations, and full proofs.
Moreover, the results about bisimulations, expressive power and finite models via filtrations are novel with respect to [3]. Outline of the article. Section 3 recalls the framework of [57, 58, 59], including its axiom system. Section 4 introduces _uncertainty-based LTSs_, indicating how they can be used for interpreting a multi-agent version of the _knowing how_ language. In Section 5 we introduce a suitable notion of bisimulation, together with correspondence theorems. We provide a sound and complete axiom system in Section 6. Section 7 studies the correspondence between our semantics and the one in the original proposals. In particular, we present two different classes of models that capture the original semantics. In Section 8 we investigate a finite model property via filtrations (Subsection 8.1), and the computational complexity of model checking and the satisfiability problem for our logic (Subsection 8.2). We conclude in Section 9 with final remarks and future lines of research. ## 2 A short review of the literature The ideas discussed in the previous section concerning the notion of _knowing how_ introduced in [57, 58, 59] have been successful, and have led to different works in the literature. An earlier one is [34], which considers a ternary modality \(\mathsf{Kh}(\psi,\chi,\varphi)\) asking for a plan whose intermediate states satisfy \(\chi\). Then, [32] introduces a weaker binary modality \(\mathsf{Kh}^{\mathsf{w}}(\psi,\varphi)\) that allows plans that abort, and in which the states reached by these aborted executions should also satisfy the goal \(\varphi\). Finally, [56] uses a semantics under which intermediate actions in a given plan may be skipped. The respective works introducing these variants also provide an axiom system (interestingly, the logic for the modality with skippable plans is the same as the logic for the original modality).
Regarding computational behaviour, the satisfiability problem has been proved to be decidable (in their respective papers) for the basic system, the one allowing aborted executions, and the one with skippable plans. However, no complexity bounds for any of these systems have been given. Finally, suitable notions of bisimulation for these systems can be found in [17, 18] (for all but the one with skippable plans) and [56] (for the one with skippable plans). These bisimilarity tools have been useful to investigate the systems' relative expressive power. It has been shown that the original binary modality \(\mathsf{Kh}(\psi,\varphi)\) is strictly less expressive than the one with intermediate steps (\(\mathsf{Kh}(\psi,\chi,\varphi)\)), and that they are both incomparable with the modality with aborted executions \(\mathsf{Kh}^{\mathsf{w}}(\psi,\varphi)\). Moreover, in [11] the computational complexity of the model-checking problem for different knowing how logics is characterized. In particular, it is established that model-checking for the basic knowing how logic from [57, 58, 59] is \(\mathsf{PSpace}\)-complete, whereas for a variant with budget constraints it is \(\mathsf{ExpSpace}\)-hard. Other constraints over plans are also studied therein, concretely the variant of [3] (the one studied in this paper) with regularity constraints and budgets, for which model-checking is in \(\mathsf{P}\). More recently, in [2], the framework of knowing how is extended to a deontic setting, formalizing the notion of _knowingly complying_. Further proposals explore new features. For instance, a natural extension is considering the interaction between _knowing how_ and standard _knowing that_ modalities. In [16], a single-agent logic with the two modalities is introduced. The knowing how operator is, unlike previous approaches, a unary local modality \(\mathsf{Kh}(\varphi)\), and its interpretation allows branching plans.
The interaction between both kinds of knowledge is studied via an axiom system, and it is proved that its satisfiability problem is decidable. The decidability result has been recently refined in [33], where \(\mathsf{PSpace}\)-completeness is proved for the satisfiability problem, via a tableau-based procedure. In [37] a neighbourhood semantics is provided for the _knowing how_ modality, as an alternative to the standard relational semantics. Other papers incorporate multi-agent behaviour for _knowing how_ and _knowing that_ modalities. For instance, in [43, 45] this is explored in the context of coalitions, i.e., the logic is used to describe different notions of collective knowledge. It is known that a fragment of this logic is incomparable in expressive power with the logic from [16] (the proof uses bisimulation, and it is presented in [18]). Other variants of this logic have been explored, including those relying on _second-order knowing how_ strategies [44], and _knowing how_ with degrees of uncertainty [46]. Axiom systems are presented for each logic. Finally, a multi-agent knowing how logic describing the behaviour of epistemic planning is investigated in [35]. The main peculiarity is that the execution of an action is represented by an update in the model via epistemic action models [7]. The logic obtained is strictly weaker than the one in [16]. Again, its satisfiability problem is decidable. This work is extended in [38], which provides a unified approach for planning-based knowing how. More remarkably, the work in [36] establishes a connection between planning and knowing how, not just from the perspective of _planning-based_ know how, but also the other way around: a planning problem based on know how goals. To do so, the authors introduce a model checking algorithm running in \(\mathsf{P}\) time. ## 3 A logic of knowing how This section recalls the basics of the _knowing how_ framework from [57, 58, 59]. 
**Syntax and semantics.** Throughout the text, let \(\mathsf{Prop}\) be a countable non-empty set of propositional symbols. **Definition 3.1**: Formulas of the language \(\mathsf{L}_{\mathsf{Kh}}\) are given by the grammar \[\varphi::=p\mid\neg\varphi\mid\varphi\vee\varphi\mid\mathsf{Kh}(\varphi, \varphi),\] with \(p\in\mathsf{Prop}\). Boolean constants and other Boolean connectives are defined as usual. Formulas of the form \(\mathsf{Kh}(\psi,\varphi)\) are read as _"when \(\psi\) holds, the agent knows how to make \(\varphi\) true"_. \(\dashv\) In [57, 58, 59] (and variations like [34, 32]), formulas of \(\mathsf{L}_{\mathsf{Kh}}\) are interpreted over _labeled transition systems_: relational models in which the relations describe the state-transitions available to the agent. Throughout the text, let \(\mathsf{Act}\) be a denumerable set of (basic) action names. **Definition 3.2** (Actions and plans): Let \(\mathsf{Act}^{*}\) be the set of finite sequences over \(\mathsf{Act}\). Elements of \(\mathsf{Act}^{*}\) are called _plans_, with \(\epsilon\) being the _empty plan_. Given \(\sigma\in\mathsf{Act}^{*}\), let \(|\sigma|\) be the length of \(\sigma\) (note: \(|\epsilon|:=0\)). For a plan \(\sigma\) and \(0\leq k\leq|\sigma|\), the _plan_ \(\sigma_{k}\) is \(\sigma\)'s initial segment up to (and including) the \(k\)th position (with \(\sigma_{0}:=\epsilon\)). For \(0<k\leq|\sigma|\), the _action_ \(\sigma[k]\) is the one in \(\sigma\)'s \(k\)th position.
\(\dashv\) **Definition 3.3** (Labeled transition systems): A _labeled transition system_ (LTS) for \(\mathsf{Prop}\) and \(\mathsf{Act}\) is a tuple \(\mathcal{S}=\langle\mathsf{W},\mathsf{R},\mathsf{V}\rangle\) where \(\mathsf{W}\) is a non-empty set of states (also denoted by \(\mathsf{D}_{\mathcal{S}}\)), \(\mathsf{R}=\{\mathsf{R}_{a}\subseteq\mathsf{W}\times\mathsf{W}\mid a\in A,\text{ for some }A\subseteq\mathsf{Act}\}\) is a collection of binary relations on \(\mathsf{W}\),1 and \(\mathsf{V}:\mathsf{W}\to 2^{\mathsf{Prop}}\) is a labelling function. Given an LTS \(\mathcal{S}\) and \(w\in\mathsf{D}_{\mathcal{S}}\), the pair \((\mathcal{S},w)\) is a _pointed_ LTS (parentheses are usually dropped). \(\dashv\) Footnote 1: Thus, \(\mathsf{R}_{a}\) might not be defined for some \(a\in\mathsf{Act}\). An LTS describes the _abilities_ of the agent; thus, sometimes (e.g., [57, 58, 59]) it is also called an _ability map_. Here we introduce some useful definitions. It is worth noticing that, although the signature is infinite (since \(\mathsf{Act}\) is a _denumerable_ set), the relations in the model might be defined only for a (possibly finite) subset of actions. **Definition 3.4**: Let \(\{\mathsf{R}_{a}\subseteq\mathsf{W}\times\mathsf{W}\mid a\in A,\text{ for some }A\subseteq\mathsf{Act}\}\) be a collection of binary relations. Define \(\mathsf{R}_{\epsilon}:=\{(w,w)\mid w\in\mathsf{W}\}\) and, for \(\sigma\in\mathsf{Act}^{*}\) and \(a\in\mathsf{Act}\), \(\mathsf{R}_{\sigma a}:=\{(w,u)\in\mathsf{W}\times\mathsf{W}\mid\exists v\in\mathsf{W}\text{ s.t. }(w,v)\in\mathsf{R}_{\sigma}\text{ and }(v,u)\in\mathsf{R}_{a}\}\). Take a plan \(\sigma\in\mathsf{Act}^{*}\): for \(u\in\mathsf{W}\) define \(\mathsf{R}_{\sigma}(u):=\{v\in\mathsf{W}\mid(u,v)\in\mathsf{R}_{\sigma}\}\), and for \(U\subseteq\mathsf{W}\) define \(\mathsf{R}_{\sigma}(U):=\bigcup_{u\in U}\mathsf{R}_{\sigma}(u)\).
\(\dashv\) The idea in [57, 58, 59] is that an agent knows how to achieve \(\varphi\) given \(\psi\) when she has an appropriate plan that allows her to go from any state in which \(\psi\) holds only to states in which \(\varphi\) holds. A crucial part is, then, what "appropriate" is taken to be. **Definition 3.5** (Strong executability): Let \(\{\mathsf{R}_{a}\subseteq\mathsf{W}\times\mathsf{W}\mid a\in A,\text{ for some }A\subseteq\mathsf{Act}\}\) be a collection of binary relations. A plan \(\sigma\in\mathsf{Act}^{*}\) is _strongly executable_ (SE) at \(u\in\mathsf{W}\) if and only if \(\mathsf{R}_{\sigma}\) is defined and, additionally, \(v\in\mathsf{R}_{\sigma_{k}}(u)\) implies \(\mathsf{R}_{\sigma[k+1]}(v)\neq\varnothing\) for every \(k\in[0\,..\,|\sigma|-1]\). We define the set \(\mathrm{SE}(\sigma):=\{w\in\mathsf{W}\mid\sigma\text{ is SE at }w\}\). \(\dashv\) Thus, strong executability asks for _every_ partial execution of the plan (including \(\epsilon\)) to be completed. With this notion, formulas in \(\mathsf{L}_{\mathsf{Kh}}\) are interpreted over an LTS as follows. Notice that the semantic clause for the \(\mathsf{Kh}\) modality shown here is equivalent to the one found in the original papers.
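On finite models, Definitions 3.4 and 3.5 are directly computable. The following Python sketch (the dictionary encoding of an LTS is ours, not from the paper) composes the basic relations along a plan and checks strong executability:

```python
def successors(R, a, states):
    """One-step successors of `states` under action a; R maps each
    defined action name to a set of (state, state) pairs."""
    return {v for (w, v) in R.get(a, set()) if w in states}

def r_plan(R, sigma, states):
    """R_sigma applied to a set of states (Definition 3.4), obtained by
    folding the composition along sigma. For the empty plan, R_epsilon
    is the identity, so the input set itself is returned."""
    for a in sigma:
        states = successors(R, a, states)
    return states

def strongly_executable(R, sigma, u):
    """Definition 3.5: sigma is SE at u iff every state reached by a
    proper prefix of sigma has a successor for sigma's next action."""
    frontier = {u}  # R_{sigma_k}(u), starting with the empty prefix
    for a in sigma:
        if a not in R:
            return False  # R_a undefined, hence R_sigma is undefined
        if any(not successors(R, a, {w}) for w in frontier):
            return False  # some partial execution cannot be completed
        frontier = successors(R, a, frontier)
    return True

def SE(R, sigma, W):
    """SE(sigma): all states of W at which sigma is strongly executable."""
    return {u for u in W if strongly_executable(R, sigma, u)}
```

For instance, with `R = {'a': {(1, 2), (1, 3)}, 'b': {(2, 4)}}` the plan `('a', 'b')` is not strongly executable at state `1`: the partial execution reaching `3` cannot be completed by `b`. The empty plan is strongly executable everywhere.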
**Definition 3.6** (\(\mathsf{L}_{\mathsf{Kh}}\) over LTSs): The relation \(\models\) between a pointed LTS \(\mathcal{S},w\) (with \(\mathcal{S}=\langle\mathsf{W},\mathsf{R},\mathsf{V}\rangle\) an LTS over \(\mathsf{Act}\) and \(\mathsf{Prop}\)) and formulas in \(\mathsf{L}_{\mathsf{Kh}}\) (over \(\mathsf{Prop}\)) is defined inductively as follows:

\[\begin{array}{lcl}\mathcal{S},w\models p&\text{iff}&p\in\mathsf{V}(w),\\ \mathcal{S},w\models\neg\varphi&\text{iff}&\mathcal{S},w\not\models\varphi,\\ \mathcal{S},w\models\varphi\vee\psi&\text{iff}&\mathcal{S},w\models\varphi\text{ or }\mathcal{S},w\models\psi,\\ \mathcal{S},w\models\mathsf{Kh}(\psi,\varphi)&\text{iff}&\text{there exists }\sigma\in\mathsf{Act}^{*}\text{ such that }\llbracket\psi\rrbracket^{\mathcal{S}}\subseteq\mathrm{SE}(\sigma)\text{ and }\mathsf{R}_{\sigma}(\llbracket\psi\rrbracket^{\mathcal{S}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{S}},\end{array}\]

with \(\llbracket\varphi\rrbracket^{\mathcal{S}}:=\{w\in\mathsf{W}\mid\mathcal{S},w\models\varphi\}\). \(\dashv\) The logic of \(\mathsf{Kh}\) over LTSs can be axiomatized (Table 1). Among the axioms of the system, one states that all global truths about what is achievable in the model yield conditional knowledge how; another, a composition principle, states that if, given \(\psi\), the agent knows how to make \(\varphi\) true, and given \(\varphi\) she knows how to make \(\chi\) true, then given \(\psi\) she knows how to make \(\chi\) true. **Theorem 1** ([57]): _The axiom system \(\mathcal{L}_{\mathsf{Kh}}^{\mathrm{LTS}}\) (Table 1) is sound and strongly complete for \(\mathsf{L}_{\mathsf{Kh}}\) w.r.t. the class of all LTSs._ Axioms in the second block might be questionable.
First, one could argue that, contrary to what the first of them states, not all global truths about what is achievable in the model need to be considered as knowledge (how) of the agent. Second, notice that the second axiom implies also a certain level of omniscience: it might as well be that an agent knows how to make \(\varphi\) true given \(\psi\), and how to make \(\chi\) true given \(\varphi\), but still has not worked out how to put together the two witness plans to ensure \(\chi\) given \(\psi\). These are the two properties that will be lost in the more general semantics introduced in the next section. In Section 7 we will show how these formulas become valid when, in the new semantics, one makes strong idealizations.

## 4 Uncertainty-based semantics

The LTS-based semantics provides a reasonable representation of an agent's abilities: the agent knows how to achieve \(\varphi\) given \(\psi\) if and only if there is a plan that, when executed at any \(\psi\)-state, will always complete every partial execution, ending unerringly in states satisfying \(\varphi\). Still, one could argue that this representation involves a certain level of idealization. Take an agent that _lacks_ a certain ability. In the LTS-based semantics, this can only happen when the environment does not provide the required (sequence of) action(s). Still, there are situations in which an adequate plan exists, and yet the agent lacks the ability for a different reason. Indeed, she might _fail to distinguish_ an adequate plan from a non-adequate one, in the sense of not being able to tell that, in general, those plans produce different outcomes. Consider, for example, an agent baking a cake. She might have the ability to do the nine different mixing methods2 (beating, blending, creaming, cutting, folding, kneading, sifting, stirring, whipping), and she might even recognize them as different actions.
However, she might not be able to perfectly distinguish one from the others: she might not recognize that, sometimes, they produce different results. In such cases, one would say that the agent does not know how to bake a cake: sometimes she gets good outcomes (when she uses the adequate mixing method) and sometimes she does not. Indistinguishability among _basic_ actions can account for the example above (with each mixing method a basic action). Still, one can also think of situations in which a more general form of indistinguishability, one _among plans_, is involved. Consider the baking agent again. It is reasonable to assume that she can tell the difference between "adding milk" and "adding flour", but perhaps she does not realize the effect that _the order_ of these actions might have in the final result. Here, the issue is not that she cannot distinguish between basic actions; rather, two plans are indistinguishable because the order of their actions is being considered irrelevant. For a last possibility, the agent might not know that, while opening the oven once to check whether the baking goods are done is reasonable, this must not be done in excess. In this case, the problem consists in not being able to tell the difference between the effect of executing an action once and executing it multiple times. Thus, plans of _different lengths_ might be considered equivalent for the task at hand, for such an agent. The previous examples suggest that one can devise a more general representation of an agent's abilities. This involves taking into account not only the plans she has available (the LTS structure), but also her skills for telling two different plans apart (a form of _indistinguishability among plans_). As we will see, this (in)ability for distinguishing plans will also let us define a natural model for a multi-agent scenario. 
In this setting, agents share the same set of _affordances_ (provided by the actual environment), but still have different _abilities_ depending on how well they can tell these affordances apart, or even on which of these affordances they have available. To drive this last point home notice that, in principle, an agent does not need to have 'epistemic access' to every available plan. Some might be so foreign to the agent, or so complex, that she might not be _aware of_ them. Such plans are, then, out of the agent's reach, not in the sense that she cannot distinguish them from others, but in that she does not even take them into consideration. This is similar to what [12] proposed for the epistemic notion of _knowing that_: the agent might not be aware of (i.e., she might not entertain) every formula of the language, and thus she does not need to know that these formulas are indeed the case. **Definition 4.1** (Uncertainty-based LTS): Let \(\mathsf{Agt}\) be a finite non-empty set of agents. A _multi-agent uncertainty-based_ LTS (\(\mathrm{LTS}^{U}\)) for \(\mathsf{Prop}\), \(\mathsf{Act}\) and \(\mathsf{Agt}\) is a tuple \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\sim,\mathsf{V}\rangle\) where \(\langle\mathsf{W},\mathsf{R},\mathsf{V}\rangle\) is an LTS and \(\sim\) assigns, to each agent \(i\in\mathsf{Agt}\), an equivalence (_indistinguishability_) relation \(\sim_{i}\) over a non-empty set of plans \(\mathsf{P}_{i}\subseteq\mathsf{Act}^{*}\). Given an \(\mathrm{LTS}^{U}\) \(\mathcal{M}\) and \(w\in\mathsf{D}_{\mathcal{M}}\), the pair \((\mathcal{M},w)\) (parentheses usually dropped) is called a _pointed_ \(\mathrm{LTS}^{U}\). \(\dashv\) Intuitively, \(\mathsf{P}_{i}\) is the set of plans that agent \(i\) has at her disposal; it contains the plans the agent has access to. Then, similarly as in classical epistemic logic, \(\sim_{i}\subseteq\mathsf{P}_{i}\times\mathsf{P}_{i}\) describes agent \(i\)'s indistinguishability over her available plans.
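Definition 4.1 can be prototyped directly on finite models. The following Python sketch is ours, not from the paper: it partitions an agent's available plans \(P_i\) into equivalence classes (the sets \(S_i\) of Remark 1 below), and evaluates \(\mathsf{Kh}_i(\psi,\varphi)\) via the clause of Definition 4.4 below, searching for a class that is strongly executable on \(\llbracket\psi\rrbracket\) and leads only into \(\llbracket\varphi\rrbracket\):

```python
def successors(R, a, states):
    """One-step successors of `states` under action a; R maps action
    names to sets of (state, state) pairs."""
    return {v for (w, v) in R.get(a, set()) if w in states}

def se_plan(R, sigma, u):
    """Strong executability of a single plan sigma at state u."""
    frontier = {u}
    for a in sigma:
        if a not in R or any(not successors(R, a, {w}) for w in frontier):
            return False
        frontier = successors(R, a, frontier)
    return True

def reach(R, sigma, states):
    """R_sigma applied to a set of states."""
    for a in sigma:
        states = successors(R, a, states)
    return states

def classes(plans, indist):
    """The partition S_i of `plans` induced by the equivalence relation
    generated by the pairs in `indist` (a subset of plans x plans);
    computed with a small union-find."""
    parent = {s: s for s in plans}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for (s, t) in indist:
        parent[find(s)] = find(t)
    groups = {}
    for s in plans:
        groups.setdefault(find(s), set()).add(s)
    return list(groups.values())

def kh(R, S_i, psi_states, phi_states):
    """Kh_i(psi, phi): some class pi in S_i is strongly executable at
    every psi-state and leads from psi-states only into phi-states."""
    return any(
        all(se_plan(R, s, u) for s in pi for u in psi_states)
        and all(reach(R, s, psi_states) <= phi_states for s in pi)
        for pi in S_i)
```

In the baking scenario, let `'a'` stand for the adequate mixing method and `'b'` for an inadequate one, with `R = {'a': {(0, 1)}, 'b': {(0, 2)}}`, \(\llbracket p\rrbracket=\{0\}\) and \(\llbracket q\rrbracket=\{1\}\). An agent who cannot tell `('a',)` and `('b',)` apart has the single class `{('a',), ('b',)}` and does not know how to achieve \(q\) given \(p\); an agent who distinguishes them succeeds via the class `{('a',)}`.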
**Remark 1**: The following change in notation will simplify some definitions later on, and will make the comparison with the LTS-based semantics clearer. Let \(\langle W,R,\sim,V\rangle\) be an \(\operatorname{LTS}^{U}\) and take \(i\in\operatorname{\mathsf{Agt}}\); for a plan \(\sigma\in P_{i}\), let \([\sigma]_{i}\) be its equivalence class in \(\sim_{i}\) (i.e., \([\sigma]_{i}:=\{\sigma^{\prime}\in P_{i}\mid\sigma\sim_{i}\sigma^{\prime}\}\)). There is a one-to-one correspondence between each \(\sim_{i}\) and its induced set of equivalence classes \(S_{i}:=\{[\sigma]_{i}\mid\sigma\in P_{i}\}\). Hence, from now on, an \(\operatorname{LTS}^{U}\) will be presented as a tuple \(\langle W,R,\{S_{i}\}_{i\in\operatorname{\mathsf{Agt}}},V\rangle\). Notice the following properties of each \(S_{i}\): _(1)_ \(S_{i}\neq\varnothing\) (as \(P_{i}\neq\varnothing\)), _(2)_ if \(\pi_{1},\pi_{2}\in S_{i}\) and \(\pi_{1}\neq\pi_{2}\), then \(\pi_{1}\cap\pi_{2}=\varnothing\) (equivalence classes are pairwise disjoint), _(3)_ \(P_{i}=\bigcup_{\pi\in S_{i}}\pi\) (their union is exactly \(P_{i}\)), and _(4)_ \(\varnothing\notin S_{i}\) (the empty set is not an equivalence class). \(\dashv\) Given her uncertainty over \(\operatorname{\mathsf{Act}}^{*}\) (or, more precisely, over _her_ 'domain of plans' \(P_{i}\subseteq\operatorname{\mathsf{Act}}^{*}\)), the abilities of an agent \(i\) depend not on what a single plan can achieve, but rather on what a set of them can guarantee. **Definition 4.2**: For \(\pi\subseteq\operatorname{\mathsf{Act}}^{*}\), \(u\in W\) and \(U\subseteq W\), define \[R_{\pi}:=\bigcup_{\sigma\in\pi}R_{\sigma},\qquad R_{\pi}(u):=\bigcup_{\sigma\in\pi}R_{\sigma}(u),\qquad R_{\pi}(U):=\bigcup_{u\in U}R_{\pi}(u).\] We can now generalize the notion of strong executability for sets of plans.
**Definition 4.3** (Strong executability): A _set of plans_ \(\pi\subseteq\operatorname{\mathsf{Act}}^{*}\) is _strongly executable_ at \(u\in W\) if and only if _every_ plan \(\sigma\in\pi\) is strongly executable at \(u\). Thus, \(\operatorname{SE}(\pi):=\bigcap_{\sigma\in\pi}\operatorname{SE}(\sigma)\) is the set of the states in \(W\) where \(\pi\) is strongly executable. \(\dashv\) **Definition 4.4** (\(\mathsf{Kh}_{i}\) over \(\operatorname{LTS}^{U}\)s): Let \(L_{\mathsf{Kh}_{i}}\) be the multi-agent version of the language \(L_{\mathsf{Kh}}\), obtained by replacing \(\mathsf{Kh}\) with \(\mathsf{Kh}_{i}\) (for each \(i\in\operatorname{\mathsf{Agt}}\), with \(\operatorname{\mathsf{Agt}}\) a finite non-empty set of agents). The satisfiability relation \(\models\) between a pointed \(\operatorname{LTS}^{U}\) \(\mathcal{M},w\) (with \(\mathcal{M}=\langle W,R,\{S_{i}\}_{i\in\operatorname{\mathsf{Agt}}},V\rangle\) an \(\operatorname{LTS}^{U}\) over \(\operatorname{\mathsf{Act}}\), \(\operatorname{\mathsf{Prop}}\) and \(\operatorname{\mathsf{Agt}}\)) and formulas in \(L_{\mathsf{Kh}_{i}}\) is defined inductively. The atomic and Boolean cases are as before. For _knowing how_ formulas,

\[\mathcal{M},w\models\mathsf{Kh}_{i}(\psi,\varphi)\quad\text{iff}\quad\text{there exists }\pi\in S_{i}\text{ such that }\textbf{(Kh-1) }\llbracket\psi\rrbracket^{\mathcal{M}}\subseteq\operatorname{SE}(\pi)\text{ and }\textbf{(Kh-2) }\operatorname{R}_{\pi}(\llbracket\psi\rrbracket^{\mathcal{M}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}},\]

with \(\llbracket\varphi\rrbracket^{\mathcal{M}}:=\{w\in W\mid\mathcal{M},w\models\varphi\}\). The set of plans \(\pi\) in the semantic clause for \(\mathsf{Kh}_{i}(\psi,\varphi)\) is often called the witness for \(\mathsf{Kh}_{i}(\psi,\varphi)\) in \(\mathcal{M}\). \(\dashv\) It is worth comparing Definition 3.6 and Definition 4.4. As before, \(\mathsf{Kh}_{i}(\psi,\varphi)\) acts _globally_.
But now we require _agent \(i\)_ to have a _set of plans_ satisfying strong executability in every \(\psi\)-state (condition _(Kh-1)_). Still, the set of plans should work as the single plan did before: when executed at \(\psi\)-states, it should end unerringly in states satisfying \(\varphi\) (condition _(Kh-2)_). It is also important to notice that the global universal modality is definable within \(L_{\mathsf{Kh}_{i}}\) over \(\operatorname{LTS}^{U}\)s. (For this, it is crucial that \(S_{i}\neq\varnothing\) and \(\varnothing\notin S_{i}\), as stated in Remark 1.) **Proposition 2**: _Let \(\mathcal{M},w\) be a pointed \(\operatorname{LTS}^{U}\). Then,_ \[\text{there is }i\in\operatorname{\mathsf{Agt}}\text{ with }\mathcal{M},w\models\mathsf{Kh}_{i}(\neg\varphi,\bot)\qquad\text{iff}\qquad\llbracket\varphi\rrbracket^{\mathcal{M}}=D_{\mathcal{M}}.\] _Proof._ **(\(\Rightarrow\))** Suppose there is \(i\in\operatorname{\mathsf{Agt}}\) with \(\mathcal{M},w\models\mathsf{Kh}_{i}(\neg\varphi,\bot)\). Then, there is \(\pi\in S_{i}\) such that _(Kh-1)_ \(\llbracket\neg\varphi\rrbracket^{\mathcal{M}}\subseteq\operatorname{SE}(\pi)\) and _(Kh-2)_ \(\operatorname{R}_{\pi}(\llbracket\neg\varphi\rrbracket^{\mathcal{M}})\subseteq\llbracket\bot\rrbracket^{\mathcal{M}}\). For a contradiction, suppose \(\llbracket\varphi\rrbracket^{\mathcal{M}}\neq D_{\mathcal{M}}\), so there is \(u\in\llbracket\neg\varphi\rrbracket^{\mathcal{M}}\). Then, _(Kh-1)_ implies \(u\in\operatorname{SE}(\pi)=\bigcap_{\sigma\in\pi}\operatorname{SE}(\sigma)\). But \(\pi\in S_{i}\), so \(\pi\neq\varnothing\), that is, there is \(\sigma\in\pi\) with \(u\in\mathrm{SE}(\sigma)\); thus, \(\mathrm{R}_{\sigma}(u)\neq\varnothing\), so \(\mathrm{R}_{\pi}(u)\neq\varnothing\) and hence \(\mathrm{R}_{\pi}(\llbracket\neg\varphi\rrbracket^{\mathcal{M}})\neq\varnothing\), that is, \(\varnothing\subset\mathrm{R}_{\pi}(\llbracket\neg\varphi\rrbracket^{\mathcal{M}})\).
But then, from _(Kh-2)_, \(\varnothing\subset\mathrm{R}_{\pi}(\llbracket\neg\varphi\rrbracket^{\mathcal{M}})\subseteq\llbracket\bot\rrbracket^{\mathcal{M}}\), i.e., \(\varnothing\subset\llbracket\bot\rrbracket^{\mathcal{M}}\), a contradiction. Therefore, \(\llbracket\varphi\rrbracket^{\mathcal{M}}=\mathrm{D}_{\mathcal{M}}\). **(\(\Leftarrow\))** Suppose \(\llbracket\varphi\rrbracket^{\mathcal{M}}=\mathrm{D}_{\mathcal{M}}\). Then \(\llbracket\neg\varphi\rrbracket^{\mathcal{M}}=\varnothing\) and hence _(Kh-1)_ in the semantic clause of \(\mathsf{Kh}_{i}(\neg\varphi,\bot)\) holds for every \(\pi\in 2^{(\mathsf{Act}^{*})}\). Moreover, \(\mathrm{R}_{\pi}(\llbracket\neg\varphi\rrbracket^{\mathcal{M}})=\bigcup_{u\in\llbracket\neg\varphi\rrbracket^{\mathcal{M}}}\mathrm{R}_{\pi}(u)=\bigcup_{u\in\varnothing}\mathrm{R}_{\pi}(u)=\varnothing\), so _(Kh-2)_ also holds for any such \(\pi\). Finally, \(S_{i}\neq\varnothing\) (so there is \(\pi\in S_{i}\)) and \(\mathsf{Agt}\neq\varnothing\) (so there is \(i\in\mathsf{Agt}\)); therefore, there is \(i\in\mathsf{Agt}\) with \(\mathcal{M},w\models\mathsf{Kh}_{i}(\neg\varphi,\bot)\). Hence, one can take \(\mathsf{A}\varphi:=\bigvee_{i\in\mathsf{Agt}}\mathsf{Kh}_{i}(\neg\varphi,\bot)\) (recall: \(\mathsf{Agt}\) is non-empty and finite) and \(\mathsf{E}\varphi:=\neg\mathsf{A}\neg\varphi\). Now, clearly different agents have different awareness about their own abilities. At the same time, because of the global nature of the modality of knowing how, it holds that \[\mathcal{M},w\models\mathsf{Kh}_{i}(\psi,\varphi)\text{ if and only if }\mathcal{M},w\models\mathsf{A}\,\mathsf{Kh}_{i}(\psi,\varphi),\] or, equivalently, \[\mathcal{M},w\models\mathsf{Kh}_{j}(\neg\mathsf{Kh}_{i}(\psi,\varphi),\bot),\text{ for some agent }j.\] But this does not imply that agents _know that_ "agent \(i\) knows how to achieve \(\varphi\) given \(\psi\)".
It is only the case that \(\mathsf{Kh}_{i}(\psi,\varphi)\) becomes an objective truth, and hence assuming its negation naturally leads to a contradiction. There is no notion of epistemic indistinguishability over states in our models, which could lead to a notion of "knowing that". Lastly, one can argue that, since models are equipped with a notion of epistemic indistinguishability between plans, an agent should know that a certain plan is (or is not) distinguishable from another, or that an agent is aware of the availability of a certain course of action. However, knowing how modalities cannot talk about the relation itself, only about the existence of a set of indistinguishable plans, and the effects of executing those plans.

## 5 Bisimulations

Bisimulation is a crucial tool for understanding the expressive power of a formal language. In [17, 18], bisimulation notions for \(\mathsf{L}_{\mathsf{Kh}}\) over LTSs have been introduced. This section discusses similar ideas for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) over \(\mathrm{LTS}^{U}\)s. First, a useful abbreviation. **Definition 5.1**: Let \(\mathcal{M}=\langle\mathrm{W},\mathrm{R},\{\mathrm{S}_{i}\}_{i\in\mathsf{Agt}},\mathrm{V}\rangle\) be an \(\mathrm{LTS}^{U}\) over \(\mathsf{Prop}\), \(\mathsf{Act}\) and \(\mathsf{Agt}\). Take a set of plans \(\pi\in 2^{(\mathsf{Act}^{*})}\), sets of states \(U,T\subseteq\mathrm{W}\) and an agent \(i\in\mathsf{Agt}\).

* Write \(U\Rightarrow_{\pi}T\) iff \(U\subseteq\mathrm{SE}(\pi)\) and \(\mathrm{R}_{\pi}(U)\subseteq T\).
* Write \(U\stackrel{{i}}{{\Rightarrow}}T\) iff \(U\Rightarrow_{\pi}T\) for some \(\pi\in\mathrm{S}_{i}\). \(\dashv\)

Two quick observations.
First, note how the abbreviation simplifies the semantic clause for _knowing how_ formulas: \(\mathcal{M},w\models\mathsf{Kh}_{i}(\psi,\varphi)\) if and only if \(\llbracket\psi\rrbracket^{\mathcal{M}}\stackrel{{i}}{{\Rightarrow}}\llbracket\varphi\rrbracket^{\mathcal{M}}\). Second, under the \(\mathrm{LTS}^{U}\)-based semantics, \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-definability implies propositional definability. Its proof, analogous to the LTS-based semantics case in [17, 18], relies on the fact that \(\mathsf{Kh}_{i}\) acts globally. **Proposition 3**: _Let \(\mathcal{M}\) be an \(\mathrm{LTS}^{U}\). For all \(U\subseteq\mathrm{D}_{\mathcal{M}}\), if \(U\) is \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-definable, then it is propositionally definable._ We now introduce the notion of bisimulation. Note how, although the collection of binary relations of a model is not explicitly mentioned, it is referred to through the abstract relation "\(\stackrel{{i}}{{\Rightarrow}}\)" (Definition 5.1). **Definition 5.2** (\(\mathsf{L}_{\mathsf{Kh}_{i}}\)-bisimulation): Let \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) be two \(\mathrm{LTS}^{U}\)s, their domains being \(W\) and \(W^{\prime}\), respectively. Take \(Z\subseteq W\times W^{\prime}\).

* For \(u\in W\) and \(U\subseteq W\), define \[Z(u):=\{u^{\prime}\in W^{\prime}\mid uZu^{\prime}\},\qquad Z(U):=\bigcup_{u\in U}Z(u).\]
* For \(u^{\prime}\in W^{\prime}\) and \(U^{\prime}\subseteq W^{\prime}\), define \[Z^{-1}(u^{\prime}):=\{u\in W\mid uZu^{\prime}\},\qquad Z^{-1}(U^{\prime}):=\bigcup_{u^{\prime}\in U^{\prime}}Z^{-1}(u^{\prime}).\]

A non-empty \(Z\subseteq W\times W^{\prime}\) is called an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-bisimulation between \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) if and only if \(wZw^{\prime}\) implies all of the following.
* **Atom**: \(\mathrm{V}(w)=\mathrm{V}^{\prime}(w^{\prime})\).
* \(\mathsf{Kh}_{i}\)**-Zig**: for any _propositionally_ definable \(U\subseteq W\), if \(U\stackrel{{i}}{{\Rightarrow}}T\) for some \(T\subseteq W\), then there is \(T^{\prime}\subseteq W^{\prime}\) satisfying both _(B1)_ \(Z(U)\stackrel{{i}}{{\Rightarrow}}T^{\prime}\), _(B2)_ \(T^{\prime}\subseteq Z(T)\).
* \(\mathsf{Kh}_{i}\)**-Zag**: for any _propositionally_ definable \(U^{\prime}\subseteq W^{\prime}\), if \(U^{\prime}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) for some \(T^{\prime}\subseteq W^{\prime}\), then there is \(T\subseteq W\) satisfying both _(B1)_ \(Z^{-1}(U^{\prime})\stackrel{{i}}{{\Rightarrow}}T\), _(B2)_ \(T\subseteq Z^{-1}(T^{\prime})\).
* \(\mathsf{A}\)**-Zig**: for all \(u\) in \(W\) there is a \(u^{\prime}\) in \(W^{\prime}\) such that \(uZu^{\prime}\).
* \(\mathsf{A}\)**-Zag**: for all \(u^{\prime}\) in \(W^{\prime}\) there is a \(u\) in \(W\) such that \(uZu^{\prime}\).

We write \(\mathcal{M},w\leftrightarrows\mathcal{M}^{\prime},w^{\prime}\) when there is an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-bisimulation \(Z\) between \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) such that \(wZw^{\prime}\). \(\dashv\) The two requirements in \(\mathsf{Kh}_{i}\)-Zig are equivalent to a single one: \(Z(U)\stackrel{{i}}{{\Rightarrow}}Z(T)\). They are split to resemble more closely the definition of a standard bisimulation: if \(U\) has an '\(i\)-successor' \(T\), then its 'bisimulation image' \(U^{\prime}\) also has an '\(i\)-successor', namely \(T^{\prime}\) (clause \(Z(U)\stackrel{{i}}{{\Rightarrow}}T^{\prime}\)), and these successors are a 'bisimilar match' (clause \(T^{\prime}\subseteq Z(T)\)). The case of \(\mathsf{Kh}_{i}\)-Zag is analogous. In order to formalize the crucial properties of a bisimulation, we define the notion of model equivalence with respect to \(\mathsf{L}_{\mathsf{Kh}_{i}}\).
**Definition 5.3** (\(\mathsf{L}_{\mathsf{Kh}_{i}}\)-equivalence): Two pointed \(\mathrm{LTS}^{U}\)s \(\mathcal{M},w\) and \(\mathcal{M}^{\prime},w^{\prime}\) are \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-_equivalent_ (written \(\mathcal{M},w\leftrightarrow\mathcal{M}^{\prime},w^{\prime}\)) if and only if, for every \(\varphi\in\mathsf{L}_{\mathsf{Kh}_{i}}\), \[\mathcal{M},w\models\varphi\quad\text{ iff }\quad\mathcal{M}^{\prime},w^{\prime}\models\varphi.\] Then, we can state the intended correspondence between \(\leftrightarrows\) and \(\leftrightarrow\). **Theorem 2** (\(\mathsf{L}_{\mathsf{Kh}_{i}}\)-bisimilarity implies \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-equivalence): _Let \(\mathcal{M},w\) and \(\mathcal{M}^{\prime},w^{\prime}\) be pointed \(\mathrm{LTS}^{U}\)s. Then,_ \[\mathcal{M},w\leftrightarrows\mathcal{M}^{\prime},w^{\prime}\quad\text{implies}\quad\mathcal{M},w\leftrightarrow\mathcal{M}^{\prime},w^{\prime}.\] Proof.: Take \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\{\mathsf{S}_{i}\}_{i\in\mathsf{Agt}},\mathsf{V}\rangle\) and \(\mathcal{M}^{\prime}=\langle\mathsf{W}^{\prime},\mathsf{R}^{\prime},\{\mathsf{S}^{\prime}_{i}\}_{i\in\mathsf{Agt}},\mathsf{V}^{\prime}\rangle\). From the given \(\mathcal{M},w\leftrightarrows\mathcal{M}^{\prime},w^{\prime}\), there is an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-bisimulation \(Z\subseteq(\mathsf{W}\times\mathsf{W}^{\prime})\) with \(wZw^{\prime}\). The proof of \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-equivalence is by structural induction on \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formulas. The cases for atomic propositions and Boolean operators are standard, and only formulas of the form \(\mathsf{Kh}_{i}(\psi,\varphi)\) are left. Note how, for this case, the inductive hypothesis (IH) states that, for \(u\in\mathsf{W}\), \(u^{\prime}\in\mathsf{W}^{\prime}\) and \(\chi\) a subformula of \(\mathsf{Kh}_{i}(\psi,\varphi)\), if \(uZu^{\prime}\) then \(u\in\llbracket\chi\rrbracket^{\mathcal{M}}\) iff \(u^{\prime}\in\llbracket\chi\rrbracket^{\mathcal{M}^{\prime}}\). Suppose \(w\in\llbracket\mathsf{Kh}_{i}(\psi,\varphi)\rrbracket^{\mathcal{M}}\).
Then, by the semantic clause, \(\llbracket\psi\rrbracket^{\mathcal{M}}\stackrel{{i}}{{\Rightarrow}}\llbracket\varphi\rrbracket^{\mathcal{M}}\). By Proposition 3, \(\llbracket\psi\rrbracket^{\mathcal{M}}\) is propositionally definable, so \(\mathsf{Kh}_{i}\)-Zig yields a \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) with \(Z(\llbracket\psi\rrbracket^{\mathcal{M}})\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) and \(T^{\prime}\subseteq Z(\llbracket\varphi\rrbracket^{\mathcal{M}})\). Using A-Zag and the IH, \(Z(\llbracket\psi\rrbracket^{\mathcal{M}})=\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\); moreover, the IH gives \(Z(\llbracket\varphi\rrbracket^{\mathcal{M}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}^{\prime}}\). Hence \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) with \(T^{\prime}\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}^{\prime}}\), and thus \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}\llbracket\varphi\rrbracket^{\mathcal{M}^{\prime}}\), i.e., \(w^{\prime}\in\llbracket\mathsf{Kh}_{i}(\psi,\varphi)\rrbracket^{\mathcal{M}^{\prime}}\). The other direction is analogous, using \(\mathsf{Kh}_{i}\)-Zag. The converse of Theorem 2 fails in general, but a Hennessy-Milner-style converse holds over finite models: if \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) are finite, then \(\mathcal{M},w\leftrightarrow\mathcal{M}^{\prime},w^{\prime}\) implies \(\mathcal{M},w\leftrightarrows\mathcal{M}^{\prime},w^{\prime}\). For the proof, define \(Z:=\{(u,u^{\prime})\in\mathsf{W}\times\mathsf{W}^{\prime}\mid\mathcal{M},u\leftrightarrow\mathcal{M}^{\prime},u^{\prime}\}\) (so, by assumption, \(wZw^{\prime}\)), and check that \(Z\) satisfies the conditions of Definition 5.2.

* **Atom**. States \(w\) and \(w^{\prime}\) agree in all \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formulas, and thus in all atoms.
* **A-Zig**. Take \(v\in\mathsf{W}\) and suppose, for the sake of a contradiction, that there is no \(v^{\prime}\in\mathsf{W}^{\prime}\) such that \(vZv^{\prime}\). Then, from \(Z\)'s definition, for each \(v^{\prime}_{i}\in\mathsf{W}^{\prime}=\{v^{\prime}_{1},\ldots,v^{\prime}_{n}\}\) (recall: \(\mathcal{M}^{\prime}\) is finite) there is an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formula \(\theta_{i}\) such that \(\mathcal{M},v\models\theta_{i}\) but \(\mathcal{M}^{\prime},v^{\prime}_{i}\not\models\theta_{i}\). Now take \(\theta:=\theta_{1}\wedge\cdots\wedge\theta_{n}\). Clearly, \(\mathcal{M},v\models\theta\); however, \(\mathcal{M}^{\prime},v^{\prime}_{i}\not\models\theta\) for each \(v^{\prime}_{i}\in\mathsf{W}^{\prime}\), as each one of them makes 'its' conjunct \(\theta_{i}\) false. Then, \(\mathcal{M},w\models\mathsf{E}\theta\) but \(\mathcal{M}^{\prime},w^{\prime}\not\models\mathsf{E}\theta\), contradicting the assumption \(wZw^{\prime}\).
* **A-Zag**. Analogous to the A-Zig case.
* \(\mathsf{Kh}_{i}\)**-Zig**. Take any propositionally definable set \(\llbracket\psi\rrbracket^{\mathcal{M}}\subseteq\mathsf{W}\) (thus, \(\psi\) is propositional), and suppose \(\llbracket\psi\rrbracket^{\mathcal{M}}\stackrel{{i}}{{\Rightarrow}}T\) for some \(T\subseteq\mathsf{W}\). We need to find a \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) satisfying both _(B1)_ \(Z(\llbracket\psi\rrbracket^{\mathcal{M}})\stackrel{{i}}{{\Rightarrow}}T^{\prime}\), _(B2)_ \(T^{\prime}\subseteq Z(T)\). Note that \(Z(\llbracket\psi\rrbracket^{\mathcal{M}})=\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\).
For **(\(\supseteq\))**, suppose \(u^{\prime}\in\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\). From A-Zag (proved above), there is \(u\in\mathsf{W}\) such that \(uZu^{\prime}\); then, from \(Z\)'s definition, \(u\in\llbracket\psi\rrbracket^{\mathcal{M}}\), so \(u^{\prime}\in Z(\llbracket\psi\rrbracket^{\mathcal{M}})\). For **(\(\subseteq\))**, suppose \(u^{\prime}\in Z(\llbracket\psi\rrbracket^{\mathcal{M}})\). Then, there is \(u\in\llbracket\psi\rrbracket^{\mathcal{M}}\) such that \(uZu^{\prime}\), and therefore, from \(Z\)'s definition, \(u^{\prime}\in\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\). Thus, we actually require a \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) satisfying both _(B1)_ \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) and _(B2)_ \(T^{\prime}\subseteq Z(T)\). Now, consider two alternatives. * Assume \(\llbracket\psi\rrbracket^{\mathcal{M}}=\varnothing\). Then, \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}=Z(\llbracket\psi\rrbracket^{\mathcal{M}})=\varnothing\) and hence \(T^{\prime}=\varnothing\) does the job, as the following hold: _(B1)_ \(\varnothing\stackrel{{i}}{{\Rightarrow}}\varnothing\) (as \(\mathsf{S}_{i}\not=\varnothing\)) and _(B2)_ \(\varnothing\subseteq Z(T)\). * Assume \(\llbracket\psi\rrbracket^{\mathcal{M}}\not=\varnothing\). This gives us \(T\not=\varnothing\) (from \(\llbracket\psi\rrbracket^{\mathcal{M}}\stackrel{{i}}{{\Rightarrow}}T\)), which will be useful later. To show that there is a \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) satisfying both _(B1)_ and _(B2)_, we proceed by contradiction, so suppose there is no \(T^{\prime}\) satisfying both requirements: every \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) satisfying _(B1)_ fails at _(B2)_.
In other words, every \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) satisfying \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) has a state \(v^{\prime}_{T^{\prime}}\in T^{\prime}\) that is not the \(Z\)-image of any state \(v\in T\) (i.e., \(vZv^{\prime}_{T^{\prime}}\) fails for every \(v\in T\)). From \(Z\)'s definition, the latter means that every state in \(T\) can be distinguished from this \(v^{\prime}_{T^{\prime}}\) by an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formula. Thus, given any \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) with \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\), one can find a state \(v^{\prime}_{T^{\prime}}\in T^{\prime}\) such that, for each \(v\in T\), there is a formula \(\theta^{v}_{v^{\prime}_{T^{\prime}}}\) with \(\mathcal{M},v\models\theta^{v}_{v^{\prime}_{T^{\prime}}}\) but \(\mathcal{M}^{\prime},v^{\prime}_{T^{\prime}}\not\models\theta^{v}_{v^{\prime}_{T^{\prime}}}\). Then, for each such \(v^{\prime}_{T^{\prime}}\) in each such \(T^{\prime}\) define \[\theta_{T^{\prime}}:=\bigvee_{v\in T}\theta^{v}_{v^{\prime}_{T^{\prime}}},\qquad\text{and then}\qquad\theta:=\bigwedge_{\{T^{\prime}\subseteq\mathsf{W}^{\prime}\,\mid\,\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\}}\theta_{T^{\prime}}.\] Observe the following. First, \(\theta_{T^{\prime}}\) is indeed a formula, as \(\mathsf{W}\) is finite and thus so is \(T\). Equally important, \(T\not=\varnothing\), and thus \(\theta_{T^{\prime}}\) does not collapse to \(\bot\). Second, \(\theta\) is also a formula, as \(\mathsf{W}^{\prime}\) is finite and thus so is \(\{T^{\prime}\subseteq\mathsf{W}^{\prime}\mid\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\}\). However, the latter set might be empty. This is what creates the following two cases.
* Suppose \(\{T^{\prime}\subseteq\mathsf{W}^{\prime}\mid\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\}=\varnothing\). Then, consider the formula \(\mathsf{Kh}_{i}(\psi,\top)\). Since \(\llbracket\psi\rrbracket^{\mathcal{M}}\stackrel{{i}}{{\Rightarrow}}T\) and \(T\subseteq\mathsf{W}=\llbracket\top\rrbracket^{\mathcal{M}}\), it follows that \(\mathcal{M},w\models\mathsf{Kh}_{i}(\psi,\top)\). However, \(\mathcal{M}^{\prime},w^{\prime}\not\models\mathsf{Kh}_{i}(\psi,\top)\) as, according to this case, there is no \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) with \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\). This contradicts the \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-equivalence of \(w\) and \(w^{\prime}\). * Suppose \(\{T^{\prime}\subseteq\mathsf{W}^{\prime}\mid\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\}\neq\varnothing\). Then, \(\theta\) does not collapse to \(\top\). Now, note how every \(v\in T\) satisfies its 'own' disjunct \(\theta^{v}_{v^{\prime}_{T^{\prime}}}\) in each conjunct \(\theta_{T^{\prime}}\), and thus it satisfies \(\theta\). Thus, \(T\subseteq\llbracket\theta\rrbracket^{\mathcal{M}}\) and hence, from \(\llbracket\psi\rrbracket^{\mathcal{M}}\stackrel{{i}}{{\Rightarrow}}T\) and the fact that \(\mathsf{Kh}_{i}\)-formulas are global, it follows that \(\mathcal{M},w\models\mathsf{Kh}_{i}(\psi,\theta)\).
However, for each \(T^{\prime}\) in \(\{T^{\prime}\subseteq\mathsf{W}^{\prime}\mid\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\}\), the state \(v^{\prime}_{T^{\prime}}\) that cannot be matched with any state \(v\in T\) makes all disjuncts in \(\theta_{T^{\prime}}\) false, thus falsifying \(\theta_{T^{\prime}}\) and therefore falsifying \(\theta\) too. In other words, every \(T^{\prime}\subseteq\mathsf{W}^{\prime}\) with \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) contains a state \(v^{\prime}_{T^{\prime}}\) with \(\mathcal{M}^{\prime},v^{\prime}_{T^{\prime}}\not\models\theta\), that is, \(\llbracket\psi\rrbracket^{\mathcal{M}^{\prime}}\stackrel{{i}}{{\Rightarrow}}T^{\prime}\) implies \(T^{\prime}\not\subseteq\llbracket\theta\rrbracket^{\mathcal{M}^{\prime}}\). Hence, using again the fact that \(\mathsf{Kh}_{i}\)-formulas are global, \(\mathcal{M}^{\prime},w^{\prime}\not\models\mathsf{Kh}_{i}(\psi,\theta)\), contradicting the \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-equivalence of \(w\) and \(w^{\prime}\). * \(\mathsf{Kh}_{i}\)**-Zag**. Analogous to the \(\mathsf{Kh}_{i}\)-Zig case.

## 6 Axiomatization

We now present a sound and complete axiom system for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) under the \(\mathrm{LTS}^{U}\)-based semantics. Recall that \(\mathsf{A}\varphi:=\bigvee_{i\in\mathsf{Agt}}\mathsf{Kh}_{i}(\neg\varphi,\bot)\) and \(\mathsf{E}\varphi:=\neg\mathsf{A}\neg\varphi\). With this, it turns out that formulas and rules in \(\mathcal{L}^{\mathrm{LTS}}_{\mathsf{Kh}}\) (the first block of Table 1) are still sound under \(\mathrm{LTS}^{U}\)s (provided \(\mathsf{Kh}\) is replaced by \(\mathsf{Kh}_{i}\)). They will constitute the first part of an axiom system for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) over \(\mathrm{LTS}^{U}\)s (first block in Table 2). Still, this is not enough for a complete axiom system.
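As a side illustration of the semantics being axiomatized, the \(\mathrm{LTS}^{U}\) truth clause for \(\mathsf{Kh}\) can be executed on a small model. The sketch below is not from the text: it restricts plans to single actions for simplicity, and all concrete names (`run`, `kh`, the states `w1, w2, w3`) are illustrative.

```python
from itertools import chain

# A toy single-agent LTS^U: a set of states, a set of labeled transitions R,
# and a collection S of mutually disjoint, non-empty sets of plans (the agent's
# indistinguishability classes over plans). Plans here are single actions.

def run(R, a, s):
    """States reachable from state s by executing the one-action plan a."""
    return {t for (s0, a0, t) in R if s0 == s and a0 == a}

def kh(states, R, S, pre, post):
    """Truth clause for Kh(pre, post): some indistinguishability class of
    plans is executable at every pre-state and, from pre-states, reaches
    only post-states. (Vacuously true when no state satisfies pre.)"""
    pre_states = {s for s in states if pre(s)}
    for cls in S:
        # Every plan in the class must be executable at every pre-state...
        executable = all(run(R, a, s) for a in cls for s in pre_states)
        # ...and every state any of them can reach must satisfy post.
        reached = set(chain.from_iterable(run(R, a, s)
                                          for a in cls for s in pre_states))
        if executable and all(post(t) for t in reached):
            return True
    return False

# The shape of the model discussed in Proposition 11: w1 -a-> w2 -b-> w3,
# with p, q, r true at w1, w2, w3 respectively, and S = {{a}, {b}}.
states = {"w1", "w2", "w3"}
R = {("w1", "a", "w2"), ("w2", "b", "w3")}
S = [{"a"}, {"b"}]
val = {"w1": "p", "w2": "q", "w3": "r"}

def prop(x):
    return lambda s: val[s] == x

p, q, r = prop("p"), prop("q"), prop("r")

print(kh(states, R, S, p, q), kh(states, R, S, q, r), kh(states, R, S, p, r))
# → True True False: Kh(p,q) and Kh(q,r) hold, but Kh(p,r) fails, since no
#   single class of indistinguishable plans leads from p-states only to r-states.
```

The failure of `kh(..., p, r)` despite `kh(..., p, q)` and `kh(..., q, r)` is exactly the phenomenon that makes the uncertainty-based operator weaker than its LTS counterpart.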
The axioms on the second block of Table 2 describe the interaction between the \(\mathsf{Kh}_{i}\) operators and the quantifier \(\mathsf{A}\); together, the two blocks form the axiom system \(\mathcal{L}^{\mathrm{LTS}^{U}}_{\mathsf{Kh}_{i}}\). Derivability and (maximally) consistent sets are defined as usual [9]. We will rely on ideas from [57, 59]; the following theorems will be useful. **Proposition 4**: _Formulas \(\mathsf{A}\neg\psi\to\mathsf{Kh}_{i}(\psi,\varphi)\) (called SCOND) and \(\mathsf{Kh}_{i}(\bot,\varphi)\) (called COND) are \(\mathcal{L}^{\mathrm{LTS}^{U}}_{\mathsf{Kh}_{i}}\)-derivable. That is, **(1)** \(\vdash\mathsf{A}\neg\psi\to\mathsf{Kh}_{i}(\psi,\varphi)\) and **(2)** \(\vdash\mathsf{Kh}_{i}(\bot,\varphi)\)._ _Proof._ **(1)** Take \(\vdash\mathsf{A}(\psi\to\psi)\to\big{(}\mathsf{A}(\bot\to\varphi)\to(\mathsf{Kh}_{i}(\psi,\bot)\to\mathsf{Kh}_{i}(\psi,\varphi))\big{)}\), an instance of KhA. Using TAUT and NECA we get \(\vdash\mathsf{A}(\psi\to\psi)\); analogously, we get \(\vdash\mathsf{A}(\bot\to\varphi)\). Then, using MP twice yields \(\vdash\mathsf{Kh}_{i}(\psi,\bot)\to\mathsf{Kh}_{i}(\psi,\varphi)\), which by \(\mathsf{A}\)'s definition is \(\vdash\mathsf{A}\neg\psi\to\mathsf{Kh}_{i}(\psi,\varphi)\). **(2)** Take \(\vdash\mathsf{A}\neg\bot\to\mathsf{Kh}_{i}(\bot,\varphi)\), an instance of the previous item. Using TAUT and NECA we get \(\vdash\mathsf{A}\neg\bot\) so, by MP, \(\vdash\mathsf{Kh}_{i}(\bot,\varphi)\). \(\blacksquare\) Here it is, then, the definition of the required \(\mathrm{LTS}^{U}\). **Definition 6.1**: Let \(\mathbf{\Phi}\) be the set of all maximally \(\mathcal{L}^{\mathrm{LTS}^{U}}_{\mathsf{Kh}_{i}}\)-consistent sets (MCS) of formulas in \(\mathsf{L}_{\mathsf{Kh}_{i}}\).
For any \(\Delta\in\mathbf{\Phi}\), define \[\Delta|_{\mathsf{Kh}_{i}}:=\{\xi\in\Delta\mid\xi\text{ is of the form }\mathsf{Kh}_{i}(\psi,\varphi)\},\qquad\Delta|_{\mathsf{Kh}}:=\bigcup_{i\in\mathsf{Agt}}\Delta|_{\mathsf{Kh}_{i}},\] \[\Delta|_{\neg\mathsf{Kh}_{i}}:=\{\xi\in\Delta\mid\xi\text{ is of the form }\neg\mathsf{Kh}_{i}(\psi,\varphi)\},\qquad\Delta|_{\neg\mathsf{Kh}}:=\bigcup_{i\in\mathsf{Agt}}\Delta|_{\neg\mathsf{Kh}_{i}}.\] Let \(\Gamma\) be a set in \(\mathbf{\Phi}\); we will define a structure satisfying its formulas. Define a set of basic actions \(\mathsf{Act}^{\Gamma}_{i}:=\{\langle\psi,\varphi\rangle\mid\mathsf{Kh}_{i}(\psi,\varphi)\in\Gamma\}\) associated to each agent \(i\in\mathsf{Agt}\), and then their union \(\mathsf{Act}^{\Gamma}:=\bigcup_{i\in\mathsf{Agt}}\mathsf{Act}^{\Gamma}_{i}\). Notice that \(\mathsf{Kh}_{i}(\bot,\varphi)\in\Gamma\) for every \(i\in\mathsf{Agt}\) and every \(\varphi\in\mathsf{L}_{\mathsf{Kh}_{i}}\) (by COND); since \(\mathsf{Agt}\) is finite and non-empty, this implies that \(\mathsf{Act}^{\Gamma}\) is denumerable, and thus it is an adequate set of actions for building a model. It is worth noticing that \(\mathsf{Act}^{\Gamma}\) fixes a new signature. However, since the operators of the language cannot talk explicitly about the names of the actions, we can define a mapping from \(\mathsf{Act}^{\Gamma}\) to any particular \(\mathsf{Act}\), to preserve the original signature, provided that the cardinalities match. Then, the structure \(\mathcal{M}^{\Gamma}=\langle\mathsf{W}^{\Gamma},\mathsf{R}^{\Gamma},\{\mathsf{S}^{\Gamma}_{i}\}_{i\in\mathsf{Agt}},\mathsf{V}^{\Gamma}\rangle\) over \(\mathsf{Act}^{\Gamma}\), \(\mathsf{Agt}\) and \(\mathsf{Prop}\) is defined as follows. * \(\mathsf{W}^{\Gamma}:=\{\Delta\in\mathbf{\Phi}\mid\Delta|_{\mathsf{Kh}}=\Gamma|_{\mathsf{Kh}}\}\).
* \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}:=\bigcup_{i\in\mathsf{Agt}}\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle^{i}}\), with \[\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle^{i}}:=\{(\Delta_{1},\Delta_{2})\in\mathsf{W}^{\Gamma}\times\mathsf{W}^{\Gamma}\mid\mathsf{Kh}_{i}(\psi,\varphi)\in\Gamma,\psi\in\Delta_{1},\varphi\in\Delta_{2}\}.\] * \(\mathsf{S}^{\Gamma}_{i}:=\left\{\{\langle\psi,\varphi\rangle\}\mid\langle\psi,\varphi\rangle\in\mathsf{Act}^{\Gamma}_{i}\right\}\). * \(\mathsf{V}^{\Gamma}(\Delta):=\{p\in\mathsf{Prop}\mid p\in\Delta\}\). \(\triangleleft\) Since \(\Gamma\in\mathbf{\Phi}\), the structure \(\mathcal{M}^{\Gamma}\) is of the required type, as the following proposition states. **Proposition 5**: _The structure \(\mathcal{M}^{\Gamma}=\langle\mathsf{W}^{\Gamma},\mathsf{R}^{\Gamma},\{\mathsf{S}^{\Gamma}_{i}\}_{i\in\mathsf{Agt}},\mathsf{V}^{\Gamma}\rangle\) is an \(\mathrm{LTS}^{U}\)._ _Proof._ It is enough to show that each \(\mathsf{S}^{\Gamma}_{i}\) defines a partition over a non-empty subset of \(2^{(\mathsf{Act}^{\Gamma})^{*}}\). First, COND implies \(\mathsf{Kh}_{i}(\bot,\bot)\in\Gamma\), so \(\langle\bot,\bot\rangle\in\mathsf{Act}^{\Gamma}_{i}\) and hence \(\{\langle\bot,\bot\rangle\}\in\mathsf{S}^{\Gamma}_{i}\); thus, \(\bigcup_{\pi\in\mathsf{S}^{\Gamma}_{i}}\pi\neq\varnothing\). Then, \(\mathsf{S}^{\Gamma}_{i}\) indeed defines a partition over \(\bigcup_{\pi\in\mathsf{S}^{\Gamma}_{i}}\pi\): its elements are mutually disjoint (they are singletons with different elements), collective exhaustiveness is immediate and, finally, \(\varnothing\notin\mathsf{S}^{\Gamma}_{i}\). Let \(\Gamma\in\mathbf{\Phi}\); the following properties of \(\mathcal{M}^{\Gamma}\) will be useful (proofs are similar to the ones in [59]). **Proposition 6**: _For any \(\Delta_{1},\Delta_{2}\in\mathsf{W}^{\Gamma}\) we have \(\Delta_{1}|_{\mathsf{Kh}}=\Delta_{2}|_{\mathsf{Kh}}\)._ _Proof._ Straightforward from the definition of \(\mathsf{W}^{\Gamma}\).
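To get a concrete feel for the construction, consider a hypothetical \(\Gamma\) containing \(\mathsf{Kh}_{i}(p,q)\) for atoms \(p\) and \(q\) (the formulas \(p,q\) are illustrative, not from the text); the following spells out the induced components:

```latex
% Hypothetical instance of Definition 6.1: assume Kh_i(p,q) \in \Gamma.
% Then \langle p,q \rangle \in Act_i^\Gamma, hence \{\langle p,q\rangle\} \in S_i^\Gamma, and
R^{\Gamma}_{\langle p,q\rangle^{i}}
  = \bigl\{ (\Delta_1,\Delta_2) \in W^{\Gamma} \times W^{\Gamma}
            \;\bigm|\; p \in \Delta_1 \text{ and } q \in \Delta_2 \bigr\}.
% The action \langle p,q\rangle thus runs exactly from the p-MCSs, and from
% each of them it reaches every q-MCS.
```

In particular, \(\langle p,q\rangle\) is executable at every \(p\)-MCS as soon as some \(q\)-MCS exists, which is exactly the pattern exploited below when verifying strong executability in the truth lemma.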
**Proposition 7**: _Take \(\Delta\in\mathsf{W}^{\Gamma}\). If \(\Delta\) has a \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\)-successor, then every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) with \(\varphi\in\Delta^{\prime}\) can be \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\)-reached from \(\Delta\)._ _Proof._ If \(\Delta\) has a \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\)-successor, then it has a \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle^{i}}\)-successor for some \(i\in\mathsf{Agt}\); thus, \(\psi\in\Delta\) and \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Gamma\). Hence, every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) with \(\varphi\in\Delta^{\prime}\) is such that \((\Delta,\Delta^{\prime})\in\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle^{i}}\), and thus such that \((\Delta,\Delta^{\prime})\in\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\). **Proposition 8**: _Let \(\varphi\) be an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formula. If \(\varphi\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\), then \(\mathsf{A}\varphi\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\)._ _Proof._ First, some facts for any \(\Delta\) in \(\mathsf{W}^{\Gamma}\subseteq\mathbf{\Phi}\). By definition, \(\Delta|_{\mathsf{Kh}}\cup\Delta|_{\neg\mathsf{Kh}}\) is a subset of \(\Delta\), and therefore it is consistent. Moreover: any maximally consistent extension of \(\Delta|_{\mathsf{Kh}}\cup\Delta|_{\neg\mathsf{Kh}}\), say \(\Delta^{\prime}\), should satisfy \(\Delta|_{\mathsf{Kh}}=\Delta^{\prime}|_{\mathsf{Kh}}\). For **(\(\subseteq\))**, note that \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta|_{\mathsf{Kh}}\) implies \(\mathsf{Kh}_{i}(\psi,\varphi)\in(\Delta|_{\mathsf{Kh}}\cup\Delta|_{\neg\mathsf{Kh}})\), and thus \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta^{\prime}\), i.e., \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta^{\prime}|_{\mathsf{Kh}}\). For **(\(\supseteq\))**, use the contrapositive.
If \(\mathsf{Kh}_{i}(\psi,\varphi)\notin\Delta|_{\mathsf{Kh}}\) then \(\mathsf{Kh}_{i}(\psi,\varphi)\notin\Delta\), so \(\neg\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta\) (as \(\Delta\) is an MCS). Thus, \(\neg\mathsf{Kh}_{i}(\psi,\varphi)\in(\Delta|_{\mathsf{Kh}}\cup\Delta|_{\neg\mathsf{Kh}})\) and hence \(\neg\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta^{\prime}\); therefore, \(\mathsf{Kh}_{i}(\psi,\varphi)\notin\Delta^{\prime}\) (as \(\Delta^{\prime}\) is consistent) and thus \(\mathsf{Kh}_{i}(\psi,\varphi)\notin\Delta^{\prime}|_{\mathsf{Kh}}\). For the proof of the proposition, suppose \(\varphi\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\). Take any \(\Delta\in\mathsf{W}^{\Gamma}\), and note how \(\Delta|_{\mathsf{Kh}}=\Gamma|_{\mathsf{Kh}}\). Then, the set \(\Delta|_{\mathsf{Kh}}\cup\Delta|_{\neg\mathsf{Kh}}\cup\{\neg\varphi\}\) is inconsistent. Otherwise it could be extended into an MCS \(\Delta^{\prime}\in\mathbf{\Phi}\). By the result in the previous paragraph, this would imply \(\Delta^{\prime}|_{\mathsf{Kh}}=\Delta|_{\mathsf{Kh}}\), so \(\Delta^{\prime}|_{\mathsf{Kh}}=\Gamma|_{\mathsf{Kh}}\) and therefore \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\). But then, by the assumption, \(\varphi\in\Delta^{\prime}\), and by construction, \(\neg\varphi\in\Delta^{\prime}\). This would make \(\Delta^{\prime}\) inconsistent, a contradiction.
Thus, given that \(\Delta|_{\mathsf{Kh}}\cup\Delta|_{\neg\mathsf{Kh}}\cup\{\neg\varphi\}\) is inconsistent, there should be sets \(\{\mathsf{Kh}_{i_{1}}(\psi_{1},\varphi_{1}),\ldots,\mathsf{Kh}_{i_{n}}(\psi_{n},\varphi_{n})\}\subseteq\Delta|_{\mathsf{Kh}}\) and \(\{\neg\mathsf{Kh}_{i^{\prime}_{1}}(\psi^{\prime}_{1},\varphi^{\prime}_{1}),\ldots,\neg\mathsf{Kh}_{i^{\prime}_{m}}(\psi^{\prime}_{m},\varphi^{\prime}_{m})\}\subseteq\Delta|_{\neg\mathsf{Kh}}\) such that \[\vdash\left(\bigwedge_{k=1}^{n}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\wedge\bigwedge_{k=1}^{m}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\right)\to\varphi.\] Hence, by NECA, \[\vdash\mathsf{A}\left(\left(\bigwedge_{k=1}^{n}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\wedge\bigwedge_{k=1}^{m}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\right)\to\varphi\right)\] and then, by DISTA and MP, \[\vdash\mathsf{A}\left(\bigwedge_{k=1}^{n}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\wedge\bigwedge_{k=1}^{m}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\right)\to\mathsf{A}\varphi.\] Now, \(\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\in\Delta|_{\mathsf{Kh}}\) implies (by 4KhA and MP) that \(\mathsf{A}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\in\Delta\) (for each \(k\in[1\..\ n]\)). Similarly, \(\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\in\Delta|_{\neg\mathsf{Kh}}\) implies (by 5KhA and MP) that \(\mathsf{A}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\in\Delta\) (for each \(k\in[1\..\ m]\)).
Thus, \[\bigwedge_{k=1}^{n}\mathsf{A}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\in\Delta\qquad\text{and}\qquad\bigwedge_{k=1}^{m}\mathsf{A}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\in\Delta\] and hence \[\bigwedge_{k=1}^{n}\mathsf{A}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\wedge\bigwedge_{k=1}^{m}\mathsf{A}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\in\Delta,\ \text{so}\ \mathsf{A}\left(\bigwedge_{k=1}^{n}\mathsf{Kh}_{i_{k}}(\psi_{k},\varphi_{k})\wedge\bigwedge_{k=1}^{m}\neg\mathsf{Kh}_{i^{\prime}_{k}}(\psi^{\prime}_{k},\varphi^{\prime}_{k})\right)\in\Delta\] and therefore \(\mathsf{A}\varphi\in\Delta\). **Proposition 9**: _Take \(\psi,\psi^{\prime},\varphi^{\prime}\) in \(\mathsf{L}_{\mathsf{Kh}_{i}}\). Suppose that every \(\Delta\in\mathsf{W}^{\Gamma}\) with \(\psi\in\Delta\) has a \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\)-successor. Then, \(\mathsf{A}(\psi\to\psi^{\prime})\in\Delta\) for all \(\Delta\in\mathsf{W}^{\Gamma}\)._ _Proof._ Take any \(\Delta\in\mathsf{W}^{\Gamma}\). On the one hand, if \(\psi\in\Delta\) then, by the supposition, \((\Delta,\Delta^{\prime})\in\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\) for some \(\Delta^{\prime}\). Hence, from \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\)'s definition, \(\psi^{\prime}\in\Delta\) and thus (maximal consistency) \(\psi\to\psi^{\prime}\in\Delta\). On the other hand, if \(\psi\notin\Delta\) then \(\neg\psi\in\Delta\) (again, maximal consistency) and thus \(\psi\to\psi^{\prime}\in\Delta\). Thus, \(\psi\to\psi^{\prime}\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\); then, by Proposition 8, \(\mathsf{A}(\psi\to\psi^{\prime})\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\). With these properties at hand, we can prove the truth lemma for \(\mathcal{M}^{\Gamma}\).
**Lemma 1** (Truth lemma for \(\mathcal{M}^{\Gamma}\)): _Given \(\Gamma\in\mathbf{\Phi}\), take \(\mathcal{M}^{\Gamma}=\langle\mathsf{W}^{\Gamma},\mathsf{R}^{\Gamma},\{\mathsf{S}^{\Gamma}_{i}\}_{i\in\mathsf{Agt}},\mathsf{V}^{\Gamma}\rangle\). Then, for every \(\Theta\in\mathsf{W}^{\Gamma}\) and every \(\varphi\in\mathsf{L}_{\mathsf{Kh}_{i}}\),_ \[\mathcal{M}^{\Gamma},\Theta\models\varphi\qquad\text{if and only if}\qquad\varphi\in\Theta.\] _Proof._ The proof is by induction on \(\varphi\). The atom and Boolean cases are as usual, so we focus on the _knowing how_ case. **Case \(\mathsf{Kh}_{i}(\psi,\varphi)\). (\(\Rightarrow\))** Suppose \(\mathcal{M}^{\Gamma},\Theta\models\mathsf{Kh}_{i}(\psi,\varphi)\), and consider two cases. * \(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}=\emptyset\). Then, each \(\Delta\in\mathsf{W}^{\Gamma}\) is such that \(\Delta\notin\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\), which implies \(\psi\notin\Delta\) (by IH) and thus \(\neg\psi\in\Delta\) (by maximal consistency). Hence, by Proposition 8, \(\mathsf{A}\neg\psi\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\). In particular, \(\mathsf{A}\neg\psi\in\Theta\) and thus, by SCOND and MP, \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Theta\). * \(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\neq\emptyset\).
From \(\mathcal{M}^{\Gamma},\Theta\models\mathsf{Kh}_{i}(\psi,\varphi)\), there is \(\{\langle\psi^{\prime},\varphi^{\prime}\rangle\}\in\mathsf{S}^{\Gamma}_{i}\) such that \[\text{{\bf(Kh-1)}}\ \llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\subseteq\operatorname{SE}(\{\langle\psi^{\prime},\varphi^{\prime}\rangle\})\quad\text{and}\quad\text{{\bf(Kh-2)}}\ \mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}^{\Gamma}}.\] In other words, there is \(\langle\psi^{\prime},\varphi^{\prime}\rangle\in\mathsf{Act}^{\Gamma}_{i}\) such that **(Kh-1)** for all \(\Delta\in\mathsf{W}^{\Gamma}\), if \(\Delta\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\) then \(\Delta\in\operatorname{SE}(\{\langle\psi^{\prime},\varphi^{\prime}\rangle\})\), so \(\Delta\in\operatorname{SE}(\langle\psi^{\prime},\varphi^{\prime}\rangle)\) and therefore \(\Delta\) has a \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\)-successor; **(Kh-2)** for all \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\), if \(\Delta^{\prime}\in\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}})\) then \(\Delta^{\prime}\in\llbracket\varphi\rrbracket^{\mathcal{M}^{\Gamma}}\). This case requires three pieces. 1. Take any \(\Delta\in\mathsf{W}^{\Gamma}\) with \(\psi\in\Delta\). Then, by IH, \(\Delta\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\) and thus, by Item (Kh-1), \(\Delta\) has a \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\)-successor. Thus, every \(\Delta\in\mathsf{W}^{\Gamma}\) with \(\psi\in\Delta\) has such a successor; then (Proposition 9), it follows that \(\mathsf{A}(\psi\to\psi^{\prime})\in\Delta\) for every \(\Delta\in\mathsf{W}^{\Gamma}\). In particular, \(\mathsf{A}(\psi\to\psi^{\prime})\in\Theta\). 2.
From \(\langle\psi^{\prime},\varphi^{\prime}\rangle\in\mathsf{Act}_{i}^{\Gamma}\) it follows that \(\mathsf{Kh}_{i}(\psi^{\prime},\varphi^{\prime})\in\Gamma\). But \(\Theta\in\mathsf{W}^{\Gamma}\), so \(\Theta|_{\mathsf{Kh}}=\Gamma|_{\mathsf{Kh}}\) (by definition of \(\mathsf{W}^{\Gamma}\)). Hence, \(\mathsf{Kh}_{i}(\psi^{\prime},\varphi^{\prime})\in\Theta\). 3. Since \(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\neq\emptyset\), there is \(\Delta\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\). By Item (Kh-1), \(\Delta\) should have at least one \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\)-successor. Then, by Proposition 7, every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) satisfying \(\varphi^{\prime}\in\Delta^{\prime}\) can be \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}\)-reached from \(\Delta\); in other words, every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) satisfying \(\varphi^{\prime}\in\Delta^{\prime}\) is in \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}(\Delta)\). But \(\Delta\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\), so every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) satisfying \(\varphi^{\prime}\in\Delta^{\prime}\) is in \(\mathsf{R}^{\Gamma}_{\langle\psi^{\prime},\varphi^{\prime}\rangle}(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}})\). Then, by Item (Kh-2), every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) satisfying \(\varphi^{\prime}\in\Delta^{\prime}\) is in \(\llbracket\varphi\rrbracket^{\mathcal{M}^{\Gamma}}\). By IH on the latter part, every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) satisfying \(\varphi^{\prime}\in\Delta^{\prime}\) is such that \(\varphi\in\Delta^{\prime}\). Thus, \(\varphi^{\prime}\to\varphi\in\Delta^{\prime}\) for every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\), and hence (Proposition 8) \(\mathsf{A}(\varphi^{\prime}\to\varphi)\in\Delta^{\prime}\) for every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\).
In particular, \(\mathsf{A}(\varphi^{\prime}\to\varphi)\in\Theta\). Thus, \(\{\mathsf{A}(\psi\to\psi^{\prime}),\mathsf{Kh}_{i}(\psi^{\prime},\varphi^{\prime}),\mathsf{A}(\varphi^{\prime}\to\varphi)\}\subseteq\Theta\). Therefore, by KhA and MP, \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Theta\). **(\(\Leftarrow\))** Suppose \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Theta\). Thus (Proposition 6), \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Gamma\), so \(\langle\psi,\varphi\rangle\in\mathsf{Act}_{i}^{\Gamma}\) and therefore \(\{\langle\psi,\varphi\rangle\}\in\mathsf{S}_{i}^{\Gamma}\). The rest of the proof is split into two cases. * Suppose there is no \(\Delta_{\psi}\in\mathsf{W}^{\Gamma}\) with \(\psi\in\Delta_{\psi}\). Then, by IH, there is no \(\Delta_{\psi}\in\mathsf{W}^{\Gamma}\) with \(\Delta_{\psi}\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\), that is, \(\llbracket\neg\psi\rrbracket^{\mathcal{M}^{\Gamma}}=\mathsf{W}^{\Gamma}\). Since \(\mathcal{M}^{\Gamma}\) is an \(\mathrm{LTS}^{U}\) (Proposition 5), the latter yields \((\mathcal{M}^{\Gamma},\Delta)\models\mathsf{Kh}_{i}(\psi,\chi)\) for any \(i\in\mathsf{Agt}\), \(\chi\in\mathsf{L}_{\mathsf{Kh}_{i}}\) and \(\Delta\in\mathsf{W}^{\Gamma}\) (cf. Proposition 2); hence, \((\mathcal{M}^{\Gamma},\Theta)\models\mathsf{Kh}_{i}(\psi,\varphi)\). * Suppose there is \(\Delta_{\psi}\in\mathsf{W}^{\Gamma}\) with \(\psi\in\Delta_{\psi}\). It will be shown that the set of plans \(\{\langle\psi,\varphi\rangle\}\in\mathsf{S}_{i}^{\Gamma}\) satisfies the requirements. 1. Take any \(\Delta\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\). By IH, \(\psi\in\Delta\). Moreover, from \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Theta\) and Proposition 6 it follows that \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta\).
Then, from \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\)'s definition, every \(\Delta^{\prime}\in\mathsf{W}^{\Gamma}\) with \(\varphi\in\Delta^{\prime}\) is such that \((\Delta,\Delta^{\prime})\in\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle^{i}}\), and therefore such that \((\Delta,\Delta^{\prime})\in\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\). Now note how, since there is \(\Delta_{\psi}\in\mathsf{W}^{\Gamma}\) with \(\psi\in\Delta_{\psi}\), there should be \(\Delta_{\varphi}\in\mathsf{W}^{\Gamma}\) with \(\varphi\in\Delta_{\varphi}\). Suppose otherwise, i.e., suppose there is no \(\Delta^{\prime\prime}\in\mathsf{W}^{\Gamma}\) with \(\varphi\in\Delta^{\prime\prime}\). Then, \(\neg\varphi\in\Delta^{\prime\prime}\) for every \(\Delta^{\prime\prime}\in\mathsf{W}^{\Gamma}\), and hence (Proposition 8) \(\mathsf{A}\neg\varphi\in\Delta^{\prime\prime}\) for every \(\Delta^{\prime\prime}\in\mathsf{W}^{\Gamma}\). In particular, \(\mathsf{A}\neg\varphi\in\Delta_{\psi}\). Moreover, from \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Theta\) and Proposition 6 it follows that \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Delta_{\psi}\). Then, KhE (written as \(\mathsf{Kh}_{i}(\psi,\varphi)\to(\mathsf{A}\neg\varphi\to\mathsf{A}\neg\psi)\)) and MP yield \(\mathsf{A}\neg\psi\in\Delta_{\psi}\), and thus \(\neg\psi\in\Delta_{\psi}\), contradicting the consistency of \(\Delta_{\psi}\) (as \(\psi\in\Delta_{\psi}\)). Hence, there is \(\Delta_{\varphi}\in\mathsf{W}^{\Gamma}\) with \(\varphi\in\Delta_{\varphi}\), so \((\Delta,\Delta_{\varphi})\in\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\): every \(\Delta\in\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\) has an \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\)-successor, that is, \(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}}\subseteq\operatorname{SE}(\{\langle\psi,\varphi\rangle\})\). 2. Take any \(\Delta^{\prime}\in\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}(\llbracket\psi\rrbracket^{\mathcal{M}^{\Gamma}})\). From \(\mathsf{R}^{\Gamma}_{\langle\psi,\varphi\rangle}\)'s definition, \(\varphi\in\Delta^{\prime}\) and thus, by IH, \(\Delta^{\prime}\in\llbracket\varphi\rrbracket^{\mathcal{M}^{\Gamma}}\). Hence, \(\mathcal{M}^{\Gamma},\Theta\models\mathsf{Kh}_{i}(\psi,\varphi)\). \(\blacksquare\)

With the truth lemma at hand, completeness over \(\mathrm{LTS}^{U}\)s follows by the standard argument.

**Theorem 4**: _The axiom system \(\mathcal{L}^{\mathrm{LTS}^{U}}_{\mathsf{Kh}_{i}}\) (Table 2) is sound and strongly complete for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) w.r.t. the class of \(\mathrm{LTS}^{U}\)s._

## 7 Comparing the two semantics

Having an axiom system for each semantics, we can compare the behaviour of the _knowing how_ operator under each of them. **Proposition 10**: _KhE and SCOND are theorems of \(\mathcal{L}^{\mathrm{LTS}}_{\mathsf{Kh}}\)._ _Proof._ KhE can be rewritten as \((\mathsf{Kh}(\psi,\varphi)\wedge\mathsf{A}\neg\varphi)\to\mathsf{A}\neg\psi\), which is an instance of COMPKh in \(\mathcal{L}^{\mathrm{LTS}}_{\mathsf{Kh}}\) (just unfold \(\mathsf{A}\)). For SCOND, use EMP and then [59, Proposition 2]. Hence, the _knowing how_ operator under LTSs is at least as strong as its \(\mathrm{LTS}^{U}\)-based counterpart: every formula valid under \(\mathrm{LTS}^{U}\)s is also valid under LTSs. The following fact shows that the converse is not the case. **Proposition 11**: _Within \(\mathrm{LTS}^{U}\)s, axioms EMP and COMPKh are not valid._ _Proof._ Consider the \(\mathrm{LTS}^{U}\) shown below, with the collection of sets of plans for the agent (i.e., the set \(\mathsf{S}\)) depicted on the right. Recall that \(\mathsf{Kh}\) acts globally. With respect to EMP, notice that \(\mathsf{A}(p\to p)\) holds; yet, \(\mathsf{Kh}(p,p)\) fails since there is no \(\pi\in\mathsf{S}\) leading from \(p\)-states to \(p\)-states. More generally, EMP is valid over LTSs because the empty plan \(\epsilon\), strongly executable everywhere, is always available. However, in an \(\mathrm{LTS}^{U}\), the plan \(\epsilon\) might not be available to the agent (i.e., \(\epsilon\notin\mathsf{P}\)), and even if it is, it might be indistinguishable from other plans with different behaviour. With respect to COMPKh, notice that \(\mathsf{Kh}(p,q)\) and \(\mathsf{Kh}(q,r)\) hold, witness \(\{a\}\) and \(\{b\}\), respectively.
However, there is no \(\pi\in\mathsf{S}\) containing only plans that, when started on \(p\)-states, lead only to \(r\)-states. Thus, \(\mathsf{Kh}(p,r)\) fails. More generally, COMPKh is valid over LTSs because the sequential composition of the plans that make true the conjuncts in the antecedent is a witness that makes true the consequent. However, in an \(\mathrm{LTS}^{U}\), this composition might be unavailable or else indistinguishable from other plans. From these two observations it follows that \(\mathsf{Kh}\) under \(\mathrm{LTS}^{U}\)s is strictly weaker than \(\mathsf{Kh}\) under LTSs: adding uncertainty about plans changes the logic.

### A very simple class of \(\mathrm{LTS}^{U}\)s

Still, the uncertainty-based framework is general enough to capture the LTS semantics. Given the discussion in Proposition 11, there is an obvious class of \(\mathrm{LTS}^{U}\)s in which EMP and COMPKh are valid: the class of \(\mathrm{LTS}^{U}\)s in which the agent has every plan available and can distinguish between any two of them. Below, we define formally this class. **Definition 7.1**: Define the class of models: \(\mathbf{M_{NU}}:=\{\mathcal{M}\mid\mathcal{M}\text{ is an }\mathrm{LTS}^{U}\text{ and }\mathsf{S}=\{\{\sigma\}\mid\sigma\in\mathsf{Act}^{*}\}\}\). \(\triangleleft\) Indeed, for models in \(\mathbf{M_{NU}}\), the plan \(\epsilon\) is available and distinguishable from other plans (witnessing EMP) and from \(\{\sigma_{1}\}\in\mathsf{S}\) and \(\{\sigma_{2}\}\in\mathsf{S}\) it follows that \(\{\sigma_{1}\sigma_{2}\}\in\mathsf{S}\) (witnessing COMPKh). Thus, as the following proposition states, an agent in an LTS is exactly an agent in an \(\mathrm{LTS}^{U}\) that can use every plan and has no uncertainty and full awareness about them. This class is enough to show how the uncertainty-based framework can capture the original one. **Proposition 12**: _The following properties hold._ 1.
_Given a model_ \(\mathcal{M}=\langle W,R,S,V\rangle\) _in_ \(\mathbf{M_{NU}}\)_, the_ \(\mathrm{LTS}\) \(\mathcal{S}_{\mathcal{M}}=\langle W,R,V\rangle\) _is such that_ \([\![\varphi]\!]^{\mathcal{M}}=[\![\varphi]\!]^{\mathcal{S}_{\mathcal{M}}}\) _for every_ \(\varphi\in\mathsf{L_{Kh}}\)_._ 2. _Given an_ \(\mathrm{LTS}\) \(\mathcal{S}=\langle W,R,V\rangle\)_, the model_ \(\mathcal{M}_{S}=\langle W,R,S,V\rangle\) _with_ \(S=\{\{\sigma\}\mid\sigma\in\mathsf{Act}^{*}\}\)_, is in_ \(\mathbf{M_{NU}}\) _and is such that_ \([\![\varphi]\!]^{\mathcal{S}}=[\![\varphi]\!]^{\mathcal{M}_{S}}\) _for every_ \(\varphi\in\mathsf{L_{Kh}}\)_._ \(\triangleleft\)

This correspondence, showing that every LTS has a point-wise equivalent model in \(\mathbf{M_{NU}}\) and vice versa, gives us a direct completeness result.

**Theorem 5**: _The axiom system \(\mathcal{L}_{\mathsf{Kh}}^{\mathrm{LTS}}\) (Table 1) is sound and strongly complete for \(\mathsf{L_{Kh}}\) w.r.t. the class \(\mathbf{M_{NU}}\)._

_Proof._ For soundness, we look at both blocks in Table 1. For the first, Theorem 4 shows that those axioms and rules are sound for all \(\mathrm{LTS}^{U}\)s, and thus in particular sound for those in the class \(\mathbf{M_{NU}}\). For the second, Item (1) of Proposition 12 shows that every model in \(\mathbf{M_{NU}}\) is point-wise \(\mathsf{L_{Kh}}\)-equivalent to an LTS, thus (Theorem 1) making such axioms sound. To prove that \(\mathcal{L}_{\mathsf{Kh}}^{\mathrm{LTS}}\) is strongly complete over the class \(\mathbf{M_{NU}}\), we need to show that, given a set of formulas \(\Gamma\cup\{\varphi\}\) in \(\mathsf{L_{Kh}}\), \(\Gamma\models\varphi\) implies \(\Gamma\vdash\varphi\). Let \(\Gamma\) be a consistent set of formulas. As in [59, Lemma 1], \(\Gamma\) can be extended to an MCS \(\Gamma^{\prime}\) and, as a consequence, there exists an LTS \(\mathcal{S}^{\Gamma^{\prime}}\) such that \(\mathcal{S}^{\Gamma^{\prime}},\Gamma^{\prime}\models\Gamma\) (notice that states in the canonical model are MCSs).
Then, by Item (2) of Proposition 12, we can obtain an \(\mathrm{LTS}^{U}\) \(\mathcal{M}_{\mathcal{S}^{\Gamma^{\prime}}}\) such that \(\mathcal{M}_{\mathcal{S}^{\Gamma^{\prime}}},\Gamma^{\prime}\models\Gamma\). Moreover, from Item (2) of Proposition 12 we also know that \(\mathcal{M}_{\mathcal{S}^{\Gamma^{\prime}}}\) is in \(\mathbf{M_{NU}}\).

### Active and \(\mathrm{SE}\)-compositional \(\mathrm{LTS}^{U}\)s

We presented above a very simple class of models that enables us to establish a direct relation between both semantics. However, the result is somewhat trivial: \(\mathrm{LTS}^{U}\)s generalize LTSs by adding uncertainty among plans, and the class \(\mathbf{M_{NU}}\) contains those \(\mathrm{LTS}^{U}\)s in which the agent does not have uncertainty. The rest of this section will discuss a larger and very general class (with very weak constraints) for which the same correspondence holds. Let us start by introducing some preliminary definitions.

**Definition 7.2**: Let \(\mathcal{M}=\langle W,R,S,V\rangle\) be an \(\mathrm{LTS}^{U}\).
* The _composition_ of \(\pi_{1},\pi_{2}\in 2^{\mathsf{Act}^{*}}\) is the set of plans \(\pi_{1}\pi_{2}\in 2^{\mathsf{Act}^{*}}\) given by \[\pi_{1}\pi_{2}:=\{\sigma_{1}\sigma_{2}\in\mathsf{Act}^{*}\mid\sigma_{1}\in\pi_{1}\text{ and }\sigma_{2}\in\pi_{2}\}.\]
* The \(\mathrm{SE}\)-_composition_ of \(\pi_{1},\pi_{2}\in 2^{\mathsf{Act}^{*}}\) in \(\mathcal{M}\) is the set of plans \(\pi_{1}\,;\,\pi_{2}\in 2^{\mathsf{Act}^{*}}\) given by \[\pi_{1}\,;\,\pi_{2}:=\begin{cases}\pi_{1}\pi_{2}&\text{if }\mathrm{SE}(\pi_{1})\neq\varnothing\text{ and }\mathrm{R}_{\pi_{1}}(\mathrm{SE}(\pi_{1}))\subseteq\mathrm{SE}(\pi_{2}),\\ \varnothing&\text{otherwise.}\end{cases}\] (4)

Thus, the \(\mathrm{SE}\)-composition \(\pi_{1}\,;\,\pi_{2}\) is the sequential composition of \(\pi_{1}\) and then \(\pi_{2}\) (i.e., \(\pi_{1}\pi_{2}\)) when \(\pi_{1}\) is strongly executable somewhere in the model and \(\pi_{2}\) is strongly executable at all the states that are reachable via \(\pi_{1}\) from states where \(\pi_{1}\) is strongly executable. Otherwise, \(\pi_{1}\,;\,\pi_{2}=\varnothing\). This guarantees that \(\pi_{1}\,;\,\pi_{2}\) contains only suitable plans. For multiple sets of plans \(\pi_{1},\ldots,\pi_{k}\in\mathsf{S}\), the \(\mathrm{SE}\)-composition \(\pi_{1}\,;\cdots;\,\pi_{k}\) is the set of plans \(\pi_{1}\cdots\pi_{k}\) if and only if \(\mathrm{SE}(\pi_{1})\neq\varnothing\) and \(\mathrm{R}_{\pi_{i}}(\mathrm{SE}(\pi_{i}))\subseteq\mathrm{SE}(\pi_{i+1})\) for all \(i=1,\ldots,k-1\), and \(\varnothing\) otherwise.

The following lemma establishes important properties of the just defined \(\mathrm{SE}\)-composition. They will be helpful in the rest of the section.

**Lemma 2**: _Let \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\mathsf{S},\mathsf{V}\rangle\) be an \(\mathrm{LTS}^{U}\) with \(\pi_{1},\ldots,\pi_{k}\in\mathsf{S}\).
Then,_

* \(\pi_{1}\,;\cdots;\,\pi_{k}\neq\varnothing\) _if and only if_ \(\pi_{i}\,;\,\pi_{i+1}\neq\varnothing\) _for all_ \(i=1,\ldots,k-1\)_;_
* \(\pi_{1}\,;\cdots;\,\pi_{k}\neq\varnothing\) _implies_ \(\mathrm{SE}(\pi_{1})=\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\)_._

_Proof._ For the first, consider the left-to-right direction. If \(\pi_{1}\,;\cdots;\,\pi_{k}\neq\varnothing\), then \(\mathrm{SE}(\pi_{1})\neq\varnothing\). Moreover, for all \(i=1,\ldots,k-1\), if \(\mathrm{SE}(\pi_{i})\neq\varnothing\), then \(\mathrm{SE}(\pi_{i+1})\neq\varnothing\) (because \(\mathrm{R}_{\pi_{i}}(\mathrm{SE}(\pi_{i}))\subseteq\mathrm{SE}(\pi_{i+1})\)). Therefore, \(\mathrm{SE}(\pi_{i})\neq\varnothing\) for all \(i=1,\ldots,k\). Using Definition 7.2, for all \(i=1,\ldots,k-1\) we have \(\pi_{i}\,;\,\pi_{i+1}\neq\varnothing\). The other direction is direct. For the second, suppose \(\pi_{1}\,;\cdots;\,\pi_{k}\neq\varnothing\). For **(\(\subseteq\))**, proceed by contradiction: assume there is \(w\in\mathrm{SE}(\pi_{1})\) with \(w\notin\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\). From the latter, \(w\notin\mathrm{SE}(\sigma)\) for some \(\sigma=\sigma_{1}\ldots\sigma_{k}\in\pi_{1}\,;\cdots;\,\pi_{k}\). Thus, there are \(d<k\) and \(w=v_{1},\ldots,v_{d+1}=v\) such that \(v_{i+1}\in\mathrm{R}_{\sigma_{i}}(v_{i})\) and \(v\notin\mathrm{SE}(\sigma_{d+1})\). By hypothesis, \(v_{1}=w\in\mathrm{SE}(\pi_{1})\) and, for all \(i=1,\ldots,d\), if \(v_{i}\in\mathrm{SE}(\pi_{i})\), then \(v_{i+1}\in\mathrm{SE}(\pi_{i+1})\) (since \(v_{i+1}\in\mathrm{R}_{\pi_{i}}(v_{i})\) and \(\mathrm{R}_{\pi_{i}}(\mathrm{SE}(\pi_{i}))\subseteq\mathrm{SE}(\pi_{i+1})\)). Hence, \(v_{d+1}=v\in\mathrm{SE}(\pi_{d+1})\) and therefore \(v\in\mathrm{SE}(\sigma_{d+1})\), a contradiction. Thus, \(w\in\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\), and therefore \(\mathrm{SE}(\pi_{1})\subseteq\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\). The direction **(\(\supseteq\))** is rather immediate.
\(\blacksquare\)

Now, here are the crucial properties we will require of \(\mathrm{LTS}^{U}\)s to establish the intended correspondence with LTSs.

**Definition 7.3**: We say that an \(\mathrm{LTS}^{U}\) \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\mathsf{S},\mathsf{V}\rangle\) is:

* _active_ if and only if there exists \(\pi\in\mathsf{S}\) such that \(\mathrm{SE}(\pi)=\mathsf{W}\) and, for all \(u,v\in\mathsf{W}\), \(v\in\mathrm{R}_{\pi}(u)\) implies \(\mathcal{M},u\rightleftarrows\mathcal{M},v\);
* \(\mathrm{SE}\)-_compositional_ if and only if for all \(\pi_{1},\pi_{2}\in\mathsf{S}\) with \(\pi_{1}\,;\,\pi_{2}\neq\varnothing\) there exists \(\pi\in\mathsf{S}\) such that: 1. \(\mathrm{R}_{\pi_{1};\pi_{2}}\subseteq\mathrm{R}_{\pi}\), 2. \(\mathrm{SE}(\pi_{1}\,;\,\pi_{2})\subseteq\mathrm{SE}(\pi)\), and 3. for all \((w,v)\in\mathrm{R}_{\pi}\) there exists \((w^{\prime},v^{\prime})\in\mathrm{R}_{\pi_{1};\pi_{2}}\) such that \(\mathcal{M},w\rightleftarrows\mathcal{M},w^{\prime}\) and \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\).

We define the class \(\mathbf{M_{AC}}:=\{\mathcal{M}\mid\mathcal{M}\text{ is active and }\mathrm{SE}\text{-compositional}\}\). \(\triangleleft\)

While activeness ensures that there is a set of plans doing what the empty plan \(\epsilon\) does in an LTS, \(\mathrm{SE}\)-compositionality ensures that \(\mathsf{S}\) is closed under a suitable notion of composition of sets of plans. The use of bisimilarity gives us a slightly more general class of models. The next lemma establishes that the requirements for \(\mathrm{SE}\)-compositionality generalize to an arbitrary number of sets of plans.
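Before moving to that lemma, it may help to note that the \(\mathrm{SE}\)-composition of Equation (4) is directly computable on finite models. The following Python sketch is only an illustration, under our own encoding assumptions (plans as tuples of actions, and \(\mathrm{SE}(\pi)\) as the set of states where every plan of the set \(\pi\) is strongly executable); it is not part of the formal development:

```python
def successors(R, a, u):
    """States reachable from u in one step via basic action a."""
    return R.get(a, {}).get(u, set())

def strongly_executable(R, sigma, w):
    """w is in SE(sigma): every partial execution of sigma can be continued."""
    frontier = {w}
    for a in sigma:
        if any(not successors(R, a, u) for u in frontier):
            return False
        frontier = {v for u in frontier for v in successors(R, a, u)}
    return True

def run(R, sigma, w):
    """R_sigma(w): states reachable from w by executing all of sigma."""
    frontier = {w}
    for a in sigma:
        frontier = {v for u in frontier for v in successors(R, a, u)}
    return frontier

def SE(R, states, pi):
    """States where every plan of the set pi is strongly executable."""
    return {w for w in states if all(strongly_executable(R, s, w) for s in pi)}

def se_compose(R, states, pi1, pi2):
    """SE-composition pi1 ; pi2, following the two cases of Equation (4)."""
    se1 = SE(R, states, pi1)
    image = {v for w in se1 for s in pi1 for v in run(R, s, w)}
    if se1 and image <= SE(R, states, pi2):
        return {s1 + s2 for s1 in pi1 for s2 in pi2}  # plain composition pi1 pi2
    return set()
```

For instance, with \(\mathrm{R}\) given by \(a:1\to 2\) and \(b:2\to 3\), composing \(\{a\}\) with \(\{b\}\) yields \(\{ab\}\), while the reverse order yields \(\varnothing\), matching the two cases of Equation (4).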
**Lemma 3**: _Let \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\mathsf{S},\mathsf{V}\rangle\) be an \(\mathrm{SE}\)-compositional \(\mathrm{LTS}^{U}\), and take \(\pi_{1},\ldots,\pi_{k}\in\mathsf{S}\) (with \(k\geq 2\)) such that \(\pi_{1}\,;\cdots;\,\pi_{k}\neq\varnothing\). Then, there is \(\pi\in\mathsf{S}\) such that:_

1. \(\mathrm{R}_{\pi_{1};\cdots;\pi_{k}}\subseteq\mathrm{R}_{\pi}\)_,_
2. \(\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\subseteq\mathrm{SE}(\pi)\)_, and_
3. _for all_ \((w,v)\in\mathrm{R}_{\pi}\)_, there exists_ \((w^{\prime},v^{\prime})\in\mathrm{R}_{\pi_{1};\cdots;\pi_{k}}\) _such that_ \(\mathcal{M},w\rightleftarrows\mathcal{M},w^{\prime}\) _and_ \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\)_._

Proof.: We prove the existence of \(\pi\) by induction on \(k\geq 2\); then we will show that this witness does the work. The base case \(k=2\) follows from the definition. For the inductive step, take sets of plans in \(\mathsf{S}\) such that \(\pi_{1}\,;\cdots;\,\pi_{k}\,;\,\pi_{k+1}\neq\varnothing\). By Lemma 2, \(\pi_{2}\,;\cdots;\,\pi_{k}\,;\,\pi_{k+1}\neq\varnothing\) and thus, by inductive hypothesis, there is a \(\pi^{\prime}\in\mathsf{S}\) such that _(1)_ \(\mathrm{R}_{\pi_{2};\cdots;\pi_{k+1}}\subseteq\mathrm{R}_{\pi^{\prime}}\), _(2)_ \(\mathrm{SE}(\pi_{2}\,;\cdots;\,\pi_{k+1})\subseteq\mathrm{SE}(\pi^{\prime})\), and _(3)_ for all \((w,v)\in\mathrm{R}_{\pi^{\prime}}\) there exists \((w^{\prime},v^{\prime})\in\mathrm{R}_{\pi_{2};\cdots;\pi_{k+1}}\) such that \(\mathcal{M},w\rightleftarrows\mathcal{M},w^{\prime}\) and \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\). Note also how \(\mathrm{SE}(\pi_{1})\neq\varnothing\) and \(\mathrm{R}_{\pi_{1}}(\mathrm{SE}(\pi_{1}))\subseteq\mathrm{SE}(\pi_{2})=\mathrm{SE}(\pi_{2}\,;\cdots;\,\pi_{k+1})\subseteq\mathrm{SE}(\pi^{\prime})\) (by definition of \(\mathrm{SE}\)-composition, Lemma 2 and the second property of \(\pi^{\prime}\)).
Thus, \(\pi_{1}\,;\,\pi^{\prime}\neq\varnothing\) and hence there is a \(\pi\in\mathrm{S}\) such that _(1)_ \(\mathrm{R}_{\pi_{1};\pi^{\prime}}\subseteq\mathrm{R}_{\pi}\), _(2)_ \(\mathrm{SE}(\pi_{1}\,;\,\pi^{\prime})\subseteq\mathrm{SE}(\pi)\), and _(3)_ for all \((w,v)\in\mathrm{R}_{\pi}\) there exists \((w^{\prime},v^{\prime})\in\mathrm{R}_{\pi_{1};\pi^{\prime}}\) such that \(\mathcal{M},w\rightleftarrows\mathcal{M},w^{\prime}\) and \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\). We will prove that \(\pi\) is the witness we are looking for. For Item 1, take \((w,v)\in\mathrm{R}_{\pi_{1};\cdots;\pi_{k+1}}\). Then, there exists \(u\in\mathrm{W}\) such that \((w,u)\in\mathrm{R}_{\pi_{1}}\) and \((u,v)\in\mathrm{R}_{\pi_{2};\cdots;\pi_{k+1}}\). Hence, we have \((w,u)\in\mathrm{R}_{\pi_{1}}\) and \((u,v)\in\mathrm{R}_{\pi^{\prime}}\), so \((w,v)\in\mathrm{R}_{\pi_{1};\pi^{\prime}}\) and therefore \((w,v)\in\mathrm{R}_{\pi}\). For Item 2, take \(w\in\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k+1})\). By Lemma 2, \(w\in\mathrm{SE}(\pi_{1})\) and, by the same lemma, \(w\in\mathrm{SE}(\pi_{1}\,;\,\pi^{\prime})\). Hence, \(w\in\mathrm{SE}(\pi)\). For Item 3, take \((w,v)\in\mathrm{R}_{\pi}\). Then, there exists \((w^{\prime},v^{\prime})\in\mathrm{R}_{\pi_{1};\pi^{\prime}}\) such that \(\mathcal{M},w\rightleftarrows\mathcal{M},w^{\prime}\) and \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\). Thus, \((w^{\prime},u^{\prime})\in\mathrm{R}_{\pi_{1}}\) and \((u^{\prime},v^{\prime})\in\mathrm{R}_{\pi^{\prime}}\) for some \(u^{\prime}\in\mathrm{W}\). Again, since \((u^{\prime},v^{\prime})\in\mathrm{R}_{\pi^{\prime}}\), there exists \((u^{\prime\prime},v^{\prime\prime})\in\mathrm{R}_{\pi_{2};\cdots;\pi_{k+1}}\) such that \(\mathcal{M},u^{\prime}\rightleftarrows\mathcal{M},u^{\prime\prime}\) and \(\mathcal{M},v^{\prime}\rightleftarrows\mathcal{M},v^{\prime\prime}\).
Using that \(\rightleftarrows\) is transitive, there exists \((w^{\prime},v^{\prime\prime})\in\mathrm{R}_{\pi_{1};\cdots;\pi_{k+1}}\) such that \(\mathcal{M},w\rightleftarrows\mathcal{M},w^{\prime}\) and \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime\prime}\).

With these tools at hand, we will show that for every LTS there is an \(\mathsf{L_{Kh}}\)-equivalent \(\mathrm{LTS}^{U}\) in \(\mathbf{M_{AC}}\), and vice versa. First, we present the mapping from \(\mathrm{LTS}^{U}\)s to LTSs.

**Proposition 13**: _Let \(\mathcal{M}=\langle\mathrm{W},\mathrm{R},\mathrm{S},\mathrm{V}\rangle\) be an \(\mathrm{LTS}^{U}\) in \(\mathbf{M_{AC}}\), over \(\mathsf{Act}\). Take \(\mathsf{Act}^{\prime}:=\{a_{\pi}\mid\pi\in\mathrm{S}\}\), and then define the \(\mathrm{LTS}\) \(\mathcal{S}_{\mathcal{M}}=\langle\mathrm{W},\mathrm{R}^{\prime},\mathrm{V}\rangle\) over \(\mathsf{Act}^{\prime}\) by taking \(\mathrm{R}^{\prime}_{a_{\pi}}:=\{(w,v)\in\mathrm{R}_{\pi}\mid w\in\mathrm{SE}(\pi)\}\) (so basic actions in \(\mathcal{S}_{\mathcal{M}}\) correspond to sets of \(\mathrm{SE}\) plans in \(\mathcal{M}\)). Then, \([\![\varphi]\!]^{\mathcal{M}}=[\![\varphi]\!]^{\mathcal{S}_{\mathcal{M}}}\) for every \(\varphi\in\mathsf{L_{Kh}}\)._

Proof.: It is clear that \(\mathcal{S}_{\mathcal{M}}\) is an LTS. To obtain a proper signature, we can extend \(\mathsf{Act}^{\prime}\) (in case it is finite) into an arbitrary \(\mathsf{Act}\). The rest of the proof is by structural induction on the formulas in \(\mathsf{L_{Kh}}\). The cases for the Boolean fragment are straightforward. We will discuss the case for formulas of the shape \(\mathsf{Kh}(\psi,\varphi)\). In doing so, the following property will be useful: for every \(\pi\in\mathrm{S}\), we have \(\mathrm{SE}(\pi)=\mathrm{SE}(a_{\pi})\).
Indeed, **(\(\subseteq\))** if \(u\in\mathrm{SE}(\pi)\) then there is \(v\in\mathrm{W}\) such that \((u,v)\in\mathrm{R}_{\pi}\), so \((u,v)\in\mathrm{R}^{\prime}_{a_{\pi}}\) and therefore, \(a_{\pi}\) being a basic action, \(u\in\mathrm{SE}(a_{\pi})\). Moreover, **(\(\supseteq\))** if \(u\in\mathrm{SE}(a_{\pi})\) then there is \(v\in\mathrm{W}\) such that \((u,v)\in\mathrm{R}^{\prime}_{a_{\pi}}\), so \(u\in\mathrm{SE}(\pi)\).

**(1)**: Suppose \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{M}}\); then there is \(\pi\in\mathrm{S}\) satisfying both **(Kh-1)**: \([\![\psi]\!]^{\mathcal{M}}\subseteq\mathrm{SE}(\pi)\) and **(Kh-2)**: \(\mathrm{R}_{\pi}([\![\psi]\!]^{\mathcal{M}})\subseteq[\![\varphi]\!]^{\mathcal{M}}\). We will prove \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{S}_{M}}\) using \(a_{\pi}\in\mathsf{Act}^{\prime}\) as our witness. First, for showing that \(a_{\pi}\) has the right properties, suppose \(v\in[\![\psi]\!]^{\mathcal{S}_{M}}\). Then \(v\in[\![\psi]\!]^{\mathcal{M}}\) (by IH), so \(v\in\mathrm{SE}(\pi)\) (by Item **(Kh-1)**) and hence \(v\in\mathrm{SE}(a_{\pi})\) (property discussed above). Therefore, \([\![\psi]\!]^{\mathcal{S}_{M}}\subseteq\mathrm{SE}(a_{\pi})\). Second, for showing that \(a_{\pi}\) does the required work, suppose \(u\in\mathrm{R}^{\prime}_{a_{\pi}}([\![\psi]\!]^{\mathcal{S}_{M}})\). Then \(u\in\mathrm{R}^{\prime}_{a_{\pi}}([\![\psi]\!]^{\mathcal{M}})\) (by IH), hence \(u\in\mathrm{R}_{\pi}([\![\psi]\!]^{\mathcal{M}})\) (by definition of \(\mathrm{R}^{\prime}\)), so \(u\in[\![\varphi]\!]^{\mathcal{M}}\) (by Item **(Kh-2)**), and then \(u\in[\![\varphi]\!]^{\mathcal{S}_{M}}\) (by IH).
Thus, \(\mathrm{R}^{\prime}_{a_{\pi}}([\![\psi]\!]^{\mathcal{S}_{M}})\subseteq[\![\varphi]\!]^{\mathcal{S}_{M}}\). From the two pieces, it follows that \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{S}_{M}}\).

**(2)**: Suppose \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{S}_{M}}\); then there is \(\sigma\in(\mathsf{Act}^{\prime})^{*}\) satisfying both **(Kh-1)**: \([\![\psi]\!]^{\mathcal{S}_{M}}\subseteq\mathrm{SE}(\sigma)\) and **(Kh-2)**: \(\mathrm{R}^{\prime}_{\sigma}([\![\psi]\!]^{\mathcal{S}_{M}})\subseteq[\![\varphi]\!]^{\mathcal{S}_{M}}\). There are two main cases. First, assume \(\sigma=\epsilon\). Since \(\mathcal{M}\) is active, there is \(\pi\in\mathrm{S}\) s.t. \(\mathrm{SE}(\pi)=\mathrm{W}\) and, for all \(u,v\in\mathrm{W}\), \(v\in\mathrm{R}_{\pi}(u)\) implies \(\mathcal{M},u\rightleftarrows\mathcal{M},v\). It is not hard to show that this is the witness we need. Second, assume \(\sigma\neq\epsilon\), i.e., \(\sigma=a_{\pi_{1}}\cdots a_{\pi_{k}}\) with \(a_{\pi_{i}}\in\mathsf{Act}^{\prime}\) (so \(\pi_{i}\in\mathrm{S}\)). Then, there are two possibilities.

* If \(\pi_{1}\,;\cdots;\,\pi_{k}=\varnothing\), by Lemma 2 there is \(i\in\{1,\ldots,k-1\}\) s.t. \(\pi_{i}\,;\,\pi_{i+1}=\varnothing\).
Then (by Definition 7.2), either \(\mathrm{SE}(\pi_{i})=\varnothing\) (hence \(\mathrm{SE}(a_{\pi_{i}})=\varnothing\)) or \(\mathrm{R}_{\pi_{i}}(\mathrm{SE}(\pi_{i}))\nsubseteq\mathrm{SE}(\pi_{i+1})\) (so there is \(v\in\mathrm{R}_{\pi_{i}}(\mathrm{SE}(\pi_{i}))\) with \(v\notin\mathrm{SE}(\pi_{i+1})\), i.e., there are \(u,v\in\mathrm{W}\) such that \(u\in\mathrm{SE}(\pi_{i})\), \((u,v)\in\mathrm{R}_{\pi_{i}}\) and \(v\notin\mathrm{SE}(\pi_{i+1})\); hence \(u\in\mathrm{SE}(a_{\pi_{i}})\) [from the first], \((u,v)\in\mathrm{R}^{\prime}_{a_{\pi_{i}}}\) [from the first and the second] and \(v\notin\mathrm{SE}(a_{\pi_{i+1}})\) [from the third]; thus, \(\mathrm{R}^{\prime}_{a_{\pi_{i}}}(\mathrm{SE}(a_{\pi_{i}}))\nsubseteq\mathrm{SE}(a_{\pi_{i+1}})\)). In both cases we get \(\mathrm{SE}(\sigma)=\mathrm{SE}(a_{\pi_{1}}\cdots a_{\pi_{k}})=\varnothing\), and hence \([\![\psi]\!]^{\mathcal{S}_{M}}=\varnothing\) (by Item **(Kh-1)**). By IH, this implies \([\![\psi]\!]^{\mathcal{M}}=\varnothing\), so to get \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{M}}\) we only need \(\mathrm{S}\neq\varnothing\), which we have as \(\mathcal{M}\) is an \(\mathrm{LTS}^{U}\). * If \(\pi_{1}\,;\cdots;\,\pi_{k}\neq\varnothing\), we contemplate two scenarios. If \([\![\psi]\!]^{\mathcal{S}_{M}}=\varnothing\) then, by IH, \([\![\psi]\!]^{\mathcal{M}}=\varnothing\); thus, as before, any \(\pi\in\mathrm{S}\) works as a witness.
Otherwise, \([\![\psi]\!]^{\mathcal{S}_{M}}\neq\varnothing\) and then, since \(\mathcal{M}\) is \(\mathrm{SE}\)-compositional, by Lemma 3 there exists \(\pi\in\mathrm{S}\) such that \(\mathrm{R}_{\pi_{1};\cdots;\pi_{k}}\subseteq\mathrm{R}_{\pi}\), \(\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\subseteq\mathrm{SE}(\pi)\), and for all \((v,u)\in\mathrm{R}_{\pi}\) we have \((v^{\prime},u^{\prime})\in\mathrm{R}_{\pi_{1};\cdots;\pi_{k}}\) for some \(v^{\prime},u^{\prime}\in\mathrm{W}\) satisfying \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\) and \(\mathcal{M},u\rightleftarrows\mathcal{M},u^{\prime}\). Let us show that this \(\pi\) does the work. For the first Kh-clause, take \(w\in[\![\psi]\!]^{\mathcal{M}}\). Then, by IH, \(w\in[\![\psi]\!]^{\mathcal{S}_{M}}\), and by Item **(Kh-1)**, \(w\in\mathrm{SE}(a_{\pi_{1}}\cdots a_{\pi_{k}})\). For a contradiction, suppose \(w\notin\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\); then \(w\notin\mathrm{SE}(\pi_{1})\) (by Lemma 2), so \(w\notin\mathrm{SE}(a_{\pi_{1}})\) and hence \(\mathrm{R}^{\prime}_{a_{\pi_{1}}}(w)=\varnothing\). Hence, \(w\notin\mathrm{SE}(a_{\pi_{1}}\cdots a_{\pi_{k}})\), which is a contradiction. Therefore, \(w\in\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\), so \(w\in\mathrm{SE}(\pi)\) (since \(\mathcal{M}\) is \(\mathrm{SE}\)-compositional). Thus, \([\![\psi]\!]^{\mathcal{M}}\subseteq\mathrm{SE}(\pi)\). For the second Kh-clause, take \(u,v\in\mathrm{W}\) such that \(v\in[\![\psi]\!]^{\mathcal{M}}\) and \((v,u)\in\mathrm{R}_{\pi}\). By Definition 7.3, there are \((v^{\prime},u^{\prime})\in\mathrm{R}_{\pi_{1};\cdots;\pi_{k}}\) such that \(\mathcal{M},v\rightleftarrows\mathcal{M},v^{\prime}\) and \(\mathcal{M},u\rightleftarrows\mathcal{M},u^{\prime}\).
By Theorem 2, \(v^{\prime}\in[\![\psi]\!]^{\mathcal{M}}\) so, by inductive hypothesis, \(v^{\prime}\in[\![\psi]\!]^{\mathcal{S}_{M}}\). By Definition 7.2, \(u^{\prime}\in(\mathrm{R}_{\pi_{1}}\circ\cdots\circ\mathrm{R}_{\pi_{k}})(v^{\prime})\). Now, let \(v^{\prime}=w_{1},\ldots,w_{k+1}=u^{\prime}\) be such that \(w_{i+1}\in\mathrm{R}_{\pi_{i}}(w_{i})\) for all \(i=1,\ldots,k\). Since \(w_{1}\in[\![\psi]\!]^{\mathcal{M}}\), from \([\![\psi]\!]^{\mathcal{M}}\subseteq\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})\) (proved in the paragraph above) and Lemma 2, we have \(w_{1}\in\mathrm{SE}(\pi_{1}\,;\cdots;\,\pi_{k})=\mathrm{SE}(\pi_{1})\). Moreover, for all \(i=1,\ldots,k-1\), if \(w_{i}\in\mathrm{SE}(\pi_{i})\) then \(w_{i+1}\in\mathrm{SE}(\pi_{i+1})\) (since \(w_{i+1}\in\mathrm{R}_{\pi_{i}}(w_{i})\) and \(\mathrm{R}_{\pi_{i}}(\mathrm{SE}(\pi_{i}))\subseteq\mathrm{SE}(\pi_{i+1})\)); thus \(w_{i}\in\mathrm{SE}(\pi_{i})\) and \((w_{i},w_{i+1})\in\mathrm{R}^{\prime}_{a_{\pi_{i}}}\) for all \(i=1,\ldots,k\). Hence, \(u^{\prime}\in(\mathrm{R}^{\prime}_{a_{\pi_{1}}}\circ\cdots\circ\mathrm{R}^{\prime}_{a_{\pi_{k}}})(v^{\prime})\). In other words, since \(v^{\prime}\in[\![\psi]\!]^{\mathcal{S}_{M}}\), \(u^{\prime}\in\mathrm{R}^{\prime}_{a_{\pi_{1}}\cdots a_{\pi_{k}}}([\![\psi]\!]^{\mathcal{S}_{M}})\). Thus, by Item **(Kh-2)** we get \(u^{\prime}\in[\![\varphi]\!]^{\mathcal{S}_{M}}\). By IH, \(u^{\prime}\in[\![\varphi]\!]^{\mathcal{M}}\), which implies \(u\in[\![\varphi]\!]^{\mathcal{M}}\) (by \(\mathcal{M},u\rightleftarrows\mathcal{M},u^{\prime}\) and Theorem 2). Therefore \(\mathrm{R}_{\pi}([\![\psi]\!]^{\mathcal{M}})\subseteq[\![\varphi]\!]^{\mathcal{M}}\). From the two pieces, \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{M}}\). This finishes the proof.

Now we will prove the other direction: from an LTS we can obtain an active, \(\mathrm{SE}\)-compositional and point-wise equivalent \(\mathrm{LTS}^{U}\).

**Proposition 14**: _Let \(\mathcal{S}=\langle\mathrm{W},\mathrm{R},\mathrm{V}\rangle\) be an \(\mathrm{LTS}\) over \(\mathsf{Act}\).
Take \(\mathsf{Act}^{\prime}:=\{a_{\sigma}\mid\sigma\in\mathsf{Act}^{*}\text{ and }\mathrm{SE}(\sigma)\neq\varnothing\}\), and then define the \(\mathrm{LTS}^{U}\) \(\mathcal{M}_{\mathcal{S}}=\langle\mathrm{W},\mathrm{R}^{\prime},\mathrm{S}^{\prime},\mathrm{V}\rangle\) over \(\mathsf{Act}^{\prime}\) by taking \(\mathrm{R}^{\prime}_{a_{\sigma}}:=\{(w,v)\in\mathrm{R}_{\sigma}\mid w\in\mathrm{SE}(\sigma)\}\) (so basic actions in \(\mathcal{M}_{\mathcal{S}}\) are strongly executable plans in \(\mathcal{S}\)) and \(\mathrm{S}^{\prime}:=\{\{a_{\sigma}\}\mid a_{\sigma}\in\mathsf{Act}^{\prime}\}\). Then,_

* \(\mathcal{M}_{\mathcal{S}}\) _is an active and_ \(\mathrm{SE}\)_-compositional_ \(\mathrm{LTS}^{U}\) _(i.e., it is in_ \(\mathbf{M_{AC}}\)_);_
* _for every_ \(\varphi\in\mathsf{L_{Kh}}\)_,_ \([\![\varphi]\!]^{\mathcal{S}}=[\![\varphi]\!]^{\mathcal{M}_{\mathcal{S}}}\)_._

Proof.: First, Item (1). For showing that \(\mathcal{M}_{\mathcal{S}}\) is an \(\mathrm{LTS}^{U}\), note how \(\mathrm{P}^{\prime}=\bigcup_{\pi\in\mathrm{S}^{\prime}}\pi\) is non-empty (\(\epsilon\in\mathsf{Act}^{*}\) and \(\mathrm{SE}(\epsilon)=\mathrm{W}\), so \(a_{\epsilon}\in\mathsf{Act}^{\prime}\) and hence \(\{a_{\epsilon}\}\in\mathrm{S}^{\prime}\)) and, moreover, \(\mathrm{S}^{\prime}\) does not contain the empty set and its elements are pairwise disjoint (the latter two by definition). Moreover, we can map elements from \(\mathsf{Act}^{\prime}\) into \(\mathsf{Act}\) to preserve the same signature (as their cardinalities match). Activeness is straightforward, as \(\{a_{\epsilon}\}\) is in \(\mathrm{S}^{\prime}\) and behaves exactly as \(\epsilon\). For \(\mathrm{SE}\)-compositionality, take \(\{a_{\sigma_{1}}\},\{a_{\sigma_{2}}\}\in\mathrm{S}^{\prime}\) s.t. \(\{a_{\sigma_{1}}\}\,;\,\{a_{\sigma_{2}}\}\neq\varnothing\).
Then, \(\{a_{\sigma_{1}}\}\,;\,\{a_{\sigma_{2}}\}=\{a_{\sigma_{1}}a_{\sigma_{2}}\}\) and, moreover, \(\mathrm{SE}(\{a_{\sigma_{1}}\})\neq\varnothing\) and \(\mathrm{R}^{\prime}_{\{a_{\sigma_{1}}\}}(\mathrm{SE}(\{a_{\sigma_{1}}\}))\subseteq\mathrm{SE}(\{a_{\sigma_{2}}\})\). We need to provide a \(\pi\in\mathrm{S}^{\prime}\) satisfying the \(\mathrm{SE}\)-compositionality conditions; it will be shown that \(\pi=\{a_{\sigma_{1}\sigma_{2}}\}\) does the work. In doing so, it is useful to notice that \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}a_{\sigma_{2}}}=\mathrm{R}^{\prime}_{a_{\sigma_{1}\sigma_{2}}}\) (the proof is straightforward). First, we need to show that \(\pi=\{a_{\sigma_{1}\sigma_{2}}\}\) is in \(\mathrm{S}^{\prime}\), which boils down to showing that \(a_{\sigma_{1}\sigma_{2}}\in\mathsf{Act}^{\prime}\), that is, \(\mathrm{SE}(\sigma_{1}\sigma_{2})\neq\varnothing\). For this, recall that \(\mathrm{SE}(\{a_{\sigma_{1}}\})\neq\varnothing\), so we know that \(\{a_{\sigma_{1}}\}\) is strongly executable at some \(u\in\mathrm{W}\). Moreover, from \(\mathrm{R}^{\prime}_{\{a_{\sigma_{1}}\}}(\mathrm{SE}(\{a_{\sigma_{1}}\}))\subseteq\mathrm{SE}(\{a_{\sigma_{2}}\})\) it follows that any such execution ends in states where \(a_{\sigma_{2}}\) is strongly executable. Then, \(a_{\sigma_{1}}a_{\sigma_{2}}\) is strongly executable at \(u\), which implies \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}a_{\sigma_{2}}}(u)\neq\varnothing\). But \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}a_{\sigma_{2}}}=\mathrm{R}^{\prime}_{a_{\sigma_{1}\sigma_{2}}}\), so \(\mathrm{R}^{\prime}_{a_{\sigma_{1}\sigma_{2}}}(u)\neq\varnothing\), which by definition of \(\mathrm{R}^{\prime}\) implies \(u\in\mathrm{SE}(\sigma_{1}\sigma_{2})\), that is, \(\mathrm{SE}(\sigma_{1}\sigma_{2})\neq\varnothing\), as required. Then, the \(\mathrm{SE}\)-compositionality conditions.
The first and the third, \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}a_{\sigma_{2}}}\subseteq\mathrm{R}^{\prime}_{a_{\sigma_{1}\sigma_{2}}}\) and the bisimilarity one, follow from \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}a_{\sigma_{2}}}=\mathrm{R}^{\prime}_{a_{\sigma_{1}\sigma_{2}}}\). For the second, \(\mathrm{SE}(a_{\sigma_{1}}a_{\sigma_{2}})\subseteq\mathrm{SE}(a_{\sigma_{1}\sigma_{2}})\), take \(u\in\mathrm{SE}(a_{\sigma_{1}}a_{\sigma_{2}})\); we need to show that \(u\in\mathrm{SE}(a_{\sigma_{1}\sigma_{2}})\). For this, it is enough to show that \(\mathrm{R}^{\prime}_{a_{\sigma_{1}\sigma_{2}}}(u)\neq\varnothing\) (as \(a_{\sigma_{1}\sigma_{2}}\) is a basic action), i.e., that \(\mathrm{R}_{\sigma_{1}\sigma_{2}}(u)\neq\varnothing\) (which implies \(a_{\sigma_{1}\sigma_{2}}\) exists) and \(u\in\mathrm{SE}(\sigma_{1}\sigma_{2})\). Now, the assumption \(u\in\mathrm{SE}(a_{\sigma_{1}}a_{\sigma_{2}})\) implies \(u\in\mathrm{SE}(a_{\sigma_{1}})\) and \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}}(u)\subseteq\mathrm{SE}(a_{\sigma_{2}})\). The first implies not only \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}}(u)\neq\varnothing\) (so \(u\in\mathrm{SE}(\sigma_{1})\)) but also \(\mathrm{R}^{\prime}_{a_{\sigma_{1}}}(u)=\mathrm{R}_{\sigma_{1}}(u)\). From the second and the latter, \(\mathrm{R}_{\sigma_{1}}(u)\subseteq\mathrm{SE}(a_{\sigma_{2}})\). But note: \(v\in\mathrm{SE}(a_{\sigma_{2}})\) implies there is \(v^{\prime}\in\mathrm{R}^{\prime}_{a_{\sigma_{2}}}(v)\), so \(v\in\mathrm{SE}(\sigma_{2})\). Thus, \(\mathrm{SE}(a_{\sigma_{2}})\subseteq\mathrm{SE}(\sigma_{2})\) and hence \(\mathrm{R}_{\sigma_{1}}(u)\subseteq\mathrm{SE}(\sigma_{2})\). Then, the latter and \(u\in\mathrm{SE}(\sigma_{1})\) imply \(u\in\mathrm{SE}(\sigma_{1}\sigma_{2})\) (the second goal) and thus, by definition of \(\mathrm{SE}\), it follows that \(\mathrm{R}_{\sigma_{1}\sigma_{2}}(u)\neq\varnothing\) (the first goal). For Item (2), the proof is by structural induction; again, only the case of \(\mathsf{Kh}(\psi,\varphi)\) is discussed.
**(1)**: Suppose \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{S}}\); then there is \(\sigma\in\mathsf{Act}^{*}\) satisfying both **(Kh-1)**: \([\![\psi]\!]^{\mathcal{S}}\subseteq\mathrm{SE}(\sigma)\) and **(Kh-2)**: \(\mathsf{R}_{\sigma}([\![\psi]\!]^{\mathcal{S}})\subseteq[\![\varphi]\!]^{\mathcal{S}}\). There are two cases. First, assume \(\mathrm{SE}(\sigma)=\varnothing\). From this, Item **(Kh-1)** implies \([\![\psi]\!]^{\mathcal{S}}=\varnothing\), so \([\![\psi]\!]^{\mathcal{M}_{S}}=\varnothing\) (by IH) and \(\mathsf{R}^{\prime}_{\pi}([\![\psi]\!]^{\mathcal{M}_{S}})=\varnothing\) for any \(\pi\in 2^{(\mathsf{Act}^{\prime})^{*}}\). Hence, to obtain \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{M}_{S}}\) it is enough to have \(\mathsf{S}^{\prime}\neq\varnothing\), which we do as \(\{a_{\epsilon}\}\in\mathsf{S}^{\prime}\). Second, assume \(\mathrm{SE}(\sigma)\neq\varnothing\). Then, \(a_{\sigma}\in\mathsf{Act}^{\prime}\) and \(\{a_{\sigma}\}\in\mathsf{S}^{\prime}\); this will be shown to be our witness. For the first \(\mathsf{Kh}\)-clause, if \(u\in[\![\psi]\!]^{\mathcal{M}_{S}}\) then \(u\in[\![\psi]\!]^{\mathcal{S}}\) (IH), so \(u\in\mathrm{SE}(\sigma)\) (Item **(Kh-1)**), which implies \(\mathsf{R}_{\sigma}(u)\neq\varnothing\). The last two together imply \(\mathsf{R}^{\prime}_{a_{\sigma}}(u)\neq\varnothing\) (definition of \(\mathsf{R}^{\prime}\)), so \(u\in\mathrm{SE}(a_{\sigma})=\mathrm{SE}(\{a_{\sigma}\})\). Hence, \([\![\psi]\!]^{\mathcal{M}_{S}}\subseteq\mathrm{SE}(\{a_{\sigma}\})\). For the second \(\mathsf{Kh}\)-clause, suppose \(u\in\mathsf{R}^{\prime}_{\{a_{\sigma}\}}([\![\psi]\!]^{\mathcal{M}_{S}})\).
Then, \(u\in\mathsf{R}^{\prime}_{a_{\sigma}}([\![\psi]\!]^{\mathcal{M}_{S}})\), so \(u\in\mathsf{R}^{\prime}_{a_{\sigma}}([\![\psi]\!]^{\mathcal{S}})\) (IH) and then \(u\in\mathsf{R}_{\sigma}([\![\psi]\!]^{\mathcal{S}})\) (definition of \(\mathsf{R}^{\prime}\)) so, by Item **(Kh-2)**, \(u\in[\![\varphi]\!]^{\mathcal{S}}\) and thus \(u\in[\![\varphi]\!]^{\mathcal{M}_{S}}\) (IH). Consequently, \(\mathsf{R}^{\prime}_{\{a_{\sigma}\}}([\![\psi]\!]^{\mathcal{M}_{S}})\subseteq[\![\varphi]\!]^{\mathcal{M}_{S}}\). From the two clauses, \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{M}_{S}}\).

**(2)**: Suppose \(w\in[\![\mathsf{Kh}(\psi,\varphi)]\!]^{\mathcal{M}_{S}}\). Then there is an element of \(\mathsf{S}^{\prime}\) fulfilling the \(\mathsf{Kh}\)-clauses, which by definition of \(\mathsf{S}^{\prime}\) implies there is \(\{a_{\sigma}\}\in\mathsf{S}^{\prime}\) (with \(\sigma\in\mathsf{Act}^{*}\)) satisfying both **(Kh-1)**: \([\![\psi]\!]^{\mathcal{M}_{S}}\subseteq\mathrm{SE}(\{a_{\sigma}\})\) and **(Kh-2)**: \(\mathsf{R}^{\prime}_{\{a_{\sigma}\}}([\![\psi]\!]^{\mathcal{M}_{S}})\subseteq[\![\varphi]\!]^{\mathcal{M}_{S}}\). It will be shown that \(\sigma\) is our witness. For the first \(\mathsf{Kh}\)-clause, \(u\in[\![\psi]\!]^{\mathcal{S}}\) implies \(u\in[\![\psi]\!]^{\mathcal{M}_{S}}\) (IH), hence \(u\in\mathrm{SE}(\{a_{\sigma}\})\) (Item **(Kh-1)**) and then \(\mathsf{R}^{\prime}_{a_{\sigma}}(u)\neq\varnothing\), which implies \(u\in\mathrm{SE}(\sigma)\) (definition of \(\mathsf{R}^{\prime}\)). Thus, \([\![\psi]\!]^{\mathcal{S}}\subseteq\mathrm{SE}(\sigma)\). For the second \(\mathsf{Kh}\)-clause, take \(u\in\mathsf{R}_{\sigma}([\![\psi]\!]^{\mathcal{S}})\), so \(u\in\mathsf{R}_{\sigma}([\![\psi]\!]^{\mathcal{M}_{S}})\) (IH).
Then \(u\in\mathsf{R}_{\sigma}(v)\) for some \(v\in\llbracket\psi\rrbracket^{\mathcal{M}_{S}}\). By Item **(Kh-1)**, \(v\in\mathrm{SE}(\{a_{\sigma}\})\), i.e., \(v\in\mathrm{SE}(a_{\sigma})\), so \(\mathsf{R}^{\prime}_{a_{\sigma}}(v)\neq\varnothing\) and then \(v\in\mathrm{SE}(\sigma)\) (definition of \(\mathsf{R}^{\prime}\)). This, together with \(u\in\mathsf{R}_{\sigma}(v)\), implies \(u\in\mathsf{R}^{\prime}_{a_{\sigma}}(v)\), i.e., \(u\in\mathsf{R}^{\prime}_{\{a_{\sigma}\}}(v)\). Hence, \(u\in\mathsf{R}^{\prime}_{\{a_{\sigma}\}}(\llbracket\psi\rrbracket^{\mathcal{M}_{S}})\), so \(u\in\llbracket\varphi\rrbracket^{\mathcal{M}_{S}}\) (Item **(Kh-2)**) and then \(u\in\llbracket\varphi\rrbracket^{\mathcal{S}}\) (IH). Thus, \(\mathsf{R}_{\sigma}(\llbracket\psi\rrbracket^{\mathcal{S}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{S}}\). From the two clauses, \(w\in\llbracket\mathsf{Kh}(\psi,\varphi)\rrbracket^{\mathcal{S}}\). From these results, the axiom system for \(\mathsf{L}_{\mathsf{Kh}}\) over LTS (Table 1) is also sound and complete for \(\mathsf{L}_{\mathsf{Kh}}\) over _active_ and \(\mathrm{SE}\)-_compositional_ \(\mathrm{LTS}^{U}\)s.

**Theorem 6**: _The axiom system \(\mathcal{L}_{\mathsf{Kh}}^{\mathrm{LTS}}\) (Table 1) is sound and strongly complete w.r.t. the class \(\mathbf{M}_{\mathbf{AC}}\)._

Proof.: The arguments are exactly as in Theorem 5, this time using Proposition 13 and Proposition 14.

## 8 Finite model property and complexity

This section is devoted to the study of the computational complexity of the logic \(\mathsf{L}_{\mathsf{Kh}_{i}}\) over \(\mathrm{LTS}^{U}\)s. To do so, we will use two standard tools from modal logic: filtration and selection (see, e.g., [9] for details). First, we define a notion of filtration that, given an arbitrary model and a formula, allows us to obtain a finite model that satisfies the formula if and only if the original model satisfies it. 
This proves that the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) is decidable. Then, we define a (more specialized) selection function which, from a canonical model, enables us to extract a polynomial-size model. Thus, we show that the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) is NP-complete (given that we provide a model checking algorithm running in P).

### Finite model property via filtrations

We start by introducing two relations that will be crucial to define a proper notion of filtration, given a set of formulas \(\Sigma\) and a model \(\mathcal{M}\).

**Definition 8.1** (\(\Sigma\)-equivalence): Let \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\{\mathsf{S}_{i}\}_{i\in\mathsf{Agt}},\mathsf{V}\rangle\) be an \(\mathrm{LTS}^{U}\) and let \(\Sigma\) be a set of \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formulas closed under subformulas. Define the relations \(\leftrightarrow_{\Sigma}\subseteq\mathsf{W}\times\mathsf{W}\) and \(\overleftrightarrow{\Sigma}\subseteq\mathsf{S}_{\mathsf{Agt}}\times\mathsf{S}_{\mathsf{Agt}}\) (with \(\mathsf{S}_{\mathsf{Agt}}:=\bigcup_{i\in\mathsf{Agt}}\mathsf{S}_{i}\)) as: \(w\leftrightarrow_{\Sigma}v\) iff for all \(\varphi\in\Sigma\), \(\mathcal{M},w\models\varphi\) iff \(\mathcal{M},v\models\varphi\); and \(\pi\overleftrightarrow{\Sigma}\pi^{\prime}\) iff for all \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Sigma\), \(\pi\) is a witness of \(\mathsf{Kh}_{i}(\psi,\varphi)\) in \(\mathcal{M}\) iff \(\pi^{\prime}\) is a witness of \(\mathsf{Kh}_{i}(\psi,\varphi)\) in \(\mathcal{M}\). \(\dashv\)
Note that \(V^{f}\) is well-defined: given \(p\in\Sigma\), if \([w]_{\Sigma}=[v]_{\Sigma}\) and \(\mathcal{M},w\models p\), then \(\mathcal{M},v\models p\). Also, as \(S^{f}_{i}\) is well-defined (by definition), we have that \(\mathcal{M}^{f}\) is an \(\mathrm{LTS}^{U}\) over \(\mathsf{Act}^{\Sigma}\), Prop and \(\mathsf{Agt}\). Also, note that if (5), (6) and (7) above are turned into if and only if conditions, they always define an \(\mathrm{LTS}^{U}\) which is a filtration. Definition 8.3 deserves further comments. Notice that, for the LTS part, the filtration is defined similarly as for the basic modal logic (see, e.g., [9]). The most significant difference is the change in the labelling of the relations, since we now use \(\mathsf{Act}^{\Sigma}\) as the set of action names, instead of \(\mathsf{Act}\). But this has a consequence on the definition of \(S^{f}_{i}\). The relation \(\overleftrightarrow{\Sigma}\) enables us to obtain a finite set of witnesses for the formulas \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Sigma\), from which we also get that \(\mathsf{Act}^{\Sigma}\) and (together with \(\leftrightarrow_{\Sigma}\)) \(W^{f}\) are finite. However, the new set \(S^{f}_{i}\) is defined in terms of a new set of action names, so there are potentially infinitely many new available plans to consider. Thus, we need to state that \(S^{f}_{i}\) is any finite set, satisfying the minimum and maximum conditions, whose members are also finite, and that is well-defined. Finally, \(\mathsf{Act}^{\Sigma}\) may be finite, unlike the original set of actions \(\mathsf{Act}\), which is infinite by definition. 
However, this poses no problem in the construction, as \(\mathsf{Act}^{\Sigma}\) can be extended to an infinite set, without breaking the finiteness of the filtration (recall that the accessibility relation is defined over a, potentially finite, subset of the set of actions).

**Theorem 7**: _Let \(\mathcal{M}=\langle W,R,\{S_{i}\}_{i\in\mathsf{Agt}},V\rangle\) be an \(\mathrm{LTS}^{U}\) and let \(\Sigma\) be a set of \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formulas that is closed under subformulas. Then, for all \(\psi\in\Sigma\) and \(w\in W\), \(\mathcal{M},w\models\psi\) iff \(\mathcal{M}^{f},[w]_{\Sigma}\models\psi\). Moreover, if \(\Sigma\) is finite then \(\mathcal{M}^{f}\) is a finite model._

_Proof._ Boolean cases work as expected. So, we will only show that \(\mathcal{M},w\models\mathsf{Kh}_{i}(\psi,\varphi)\) iff \(\mathcal{M}^{f},[w]_{\Sigma}\models\mathsf{Kh}_{i}(\psi,\varphi)\). (\(\Rightarrow\)) Suppose that \(\mathcal{M}\models\mathsf{Kh}_{i}(\psi,\varphi)\): let \(\pi\in S_{i}\) be such that \(\llbracket\psi\rrbracket^{\mathcal{M}}\subseteq\mathrm{SE}(\pi)\) and \(\mathsf{R}_{\pi}(\llbracket\psi\rrbracket^{\mathcal{M}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}}\). By definition, \(a_{[\pi]_{\Sigma}}\in\mathsf{Act}^{\Sigma}_{i}\) and therefore \(\{a_{[\pi]_{\Sigma}}\}\in S^{f}_{i}\). If \(\llbracket\psi\rrbracket^{\mathcal{M}^{f}}=\varnothing\), the result trivially follows. Otherwise, let \([w]_{\Sigma}\in\llbracket\psi\rrbracket^{\mathcal{M}^{f}}\). By IH, \(w\in\llbracket\psi\rrbracket^{\mathcal{M}}\), and since \(\pi\) is SE at \(w\), \(\mathsf{R}_{\pi}(w)\neq\varnothing\). Since \(\mathcal{M}^{f}\) is a filtration, we have that \(R^{f}_{a_{[\pi]_{\Sigma}}}([w]_{\Sigma})\neq\varnothing\), and \(\{a_{[\pi]_{\Sigma}}\}\) is SE at \([w]_{\Sigma}\). Thus, \(\llbracket\psi\rrbracket^{\mathcal{M}^{f}}\subseteq\mathrm{SE}(\{a_{[\pi]_{\Sigma}}\})\). 
Let \(([w]_{\Sigma},[v]_{\Sigma})\in R^{f}_{a_{[\pi]_{\Sigma}}}\) be such that \([w]_{\Sigma}\in\llbracket\psi\rrbracket^{\mathcal{M}^{f}}\). By IH, \(w\in\llbracket\psi\rrbracket^{\mathcal{M}}\). Since \(\pi\) is a witness of \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Sigma\) in \(\mathcal{M}\) (by assumption), by the definition of \(\mathcal{M}^{f}\) we get \(v\in\llbracket\varphi\rrbracket^{\mathcal{M}}\). Again, by IH, \([v]_{\Sigma}\in\llbracket\varphi\rrbracket^{\mathcal{M}^{f}}\). Thus, \(R^{f}_{a_{[\pi]_{\Sigma}}}(\llbracket\psi\rrbracket^{\mathcal{M}^{f}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}^{f}}\). Therefore, \(\mathcal{M}^{f}\models\mathsf{Kh}_{i}(\psi,\varphi)\). (\(\Leftarrow\)) Suppose that \(\mathcal{M}^{f}\models\mathsf{Kh}_{i}(\psi,\varphi)\): let \(\pi\in S^{f}_{i}\) be such that \(\llbracket\psi\rrbracket^{\mathcal{M}^{f}}\subseteq\mathrm{SE}(\pi)\) and \(R^{f}_{\pi}(\llbracket\psi\rrbracket^{\mathcal{M}^{f}})\subseteq\llbracket\varphi\rrbracket^{\mathcal{M}^{f}}\). By definition of \(\mathcal{M}^{f}\), since \(\pi\) is a witness of \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Sigma\) in \(\mathcal{M}^{f}\), we have that there is \(\pi^{\prime}\in S_{i}\) such that \(\pi^{\prime}\) is a witness of \(\mathsf{Kh}_{i}(\psi,\varphi)\) in \(\mathcal{M}\). Thus, \(\mathcal{M}\models\mathsf{Kh}_{i}(\psi,\varphi)\). It remains to show that \(\mathcal{M}^{f}\) is finite. First, note that the number of elements in \(W^{f}\) is at most \(2^{m}\), with \(m\) being the number of formulas in \(\Sigma\). By definition, for all \(i\in\mathsf{Agt}\), the size of \(\mathsf{Act}^{\Sigma}_{i}\) is at most the number of formulas \(\mathsf{Kh}_{i}(\psi,\varphi)\in\Sigma\), since if there are two groups of witnesses \([\pi]_{\Sigma}\) and \([\pi^{\prime}]_{\Sigma}\) for some \(\mathsf{Kh}_{i}(\psi,\varphi)\), then \([\pi]_{\Sigma}=[\pi^{\prime}]_{\Sigma}\). Hence, \(\mathsf{Act}^{\Sigma}\) is polynomial in the number of formulas \(\mathsf{Kh}_{i}(\psi,\varphi)\) in \(\Sigma\). 
Finally, by definition, \(S^{f}_{i}\) is finite. Thus, \(\mathcal{M}^{f}\) is finite. The last theorem states that every satisfiable formula of \(\mathsf{L}_{\mathsf{Kh}_{i}}\) is satisfiable in a finite model. As a consequence, the satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) is decidable. In the next section we will refine this result and provide exact complexity bounds.

### Complexity via selection

Here we investigate the computational complexity of the satisfiability problem of \(\mathsf{L}_{\mathsf{Kh}_{i}}\) under the \(\mathrm{LTS}^{U}\)-based semantics. We will establish membership in \(\mathsf{NP}\) by showing a polynomial-size model property. Given a formula, we will show that it is possible to select just a piece of the canonical model which is relevant for its evaluation. The selected model will preserve satisfiability, and moreover, its size will be polynomial w.r.t. the size of the input formula.

**Definition 8.4** (Selection function): Let \(\mathcal{M}^{\Gamma}=\langle W^{\Gamma},R^{\Gamma},\{S^{\Gamma}_{i}\}_{i\in\mathsf{Agt}},V^{\Gamma}\rangle\) be a canonical model for an MCS \(\Gamma\) (see Definition 6.1); take \(w\in W^{\Gamma}\) and a formula \(\varphi\in\mathsf{L}_{\mathsf{Kh}_{i}}\). Define \(\mathsf{Act}_{\varphi}:=\{\langle\theta_{1},\theta_{2}\rangle\in\mathsf{Act}^{\Gamma}\mid\mathsf{Kh}_{i}(\theta_{1},\theta_{2})\text{ is a subformula of }\varphi\}\). 
A _canonical selection function_ \(\mathsf{sel}_{w}^{\varphi}\) is a function that takes \(\mathcal{M}^{\Gamma}\), \(w\) and \(\varphi\) as input, returns a set \(W^{\prime}\subseteq W^{\Gamma}\), and is such that:

* \(\mathsf{sel}_{w}^{\varphi}(p)=\{w\}\);
* \(\mathsf{sel}_{w}^{\varphi}(\neg\varphi_{1})=\mathsf{sel}_{w}^{\varphi}(\varphi_{1})\);
* \(\mathsf{sel}_{w}^{\varphi}(\varphi_{1}\vee\varphi_{2})=\mathsf{sel}_{w}^{\varphi}(\varphi_{1})\cup\mathsf{sel}_{w}^{\varphi}(\varphi_{2})\);
* If \(\llbracket\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2})\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\) and \(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}=\varnothing\): \(\mathsf{sel}_{w}^{\varphi}(\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2}))=\{w\}\);
* If \(\llbracket\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2})\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\) and \(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\): \(\mathsf{sel}_{w}^{\varphi}(\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2}))=\{w_{1},w_{2}\}\cup\mathsf{sel}_{w_{1}}^{\varphi}(\varphi_{1})\cup\mathsf{sel}_{w_{2}}^{\varphi}(\varphi_{2})\), where \(w_{1}\), \(w_{2}\) are s.t. \((w_{1},w_{2})\in R^{\Gamma}_{\langle\varphi_{1},\varphi_{2}\rangle}\);
* If \(\llbracket\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2})\rrbracket^{\mathcal{M}^{\Gamma}}=\varnothing\) (note that \(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\)): for every set of plans \(\pi\), either \(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\not\subseteq\mathrm{SE}(\pi)\) or \(R^{\Gamma}_{\pi}(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\not\subseteq\llbracket\varphi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). 
For each \(a\in\mathsf{Act}_{\varphi}\):

* if \(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\not\subseteq\mathrm{SE}(\{a\})\): we add \(\{w_{1}\}\cup\mathsf{sel}_{w_{1}}^{\varphi}(\varphi_{1})\) to \(\mathsf{sel}_{w}^{\varphi}(\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2}))\), where \(w_{1}\in\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\) and \(w_{1}\notin\mathrm{SE}(\{a\})\);
* if \(R^{\Gamma}_{a}(\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\not\subseteq\llbracket\varphi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\): we add \(\{w_{1},w_{2}\}\cup\mathsf{sel}_{w_{1}}^{\varphi}(\varphi_{1})\cup\mathsf{sel}_{w_{2}}^{\varphi}(\varphi_{2})\) to \(\mathsf{sel}_{w}^{\varphi}(\mathsf{Kh}_{i}(\varphi_{1},\varphi_{2}))\), where \(w_{1}\in\llbracket\varphi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\), \(w_{2}\in R^{\Gamma}_{a}(w_{1})\) and \(w_{2}\notin\llbracket\varphi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). \(\dashv\)

We can now select a small model which preserves the satisfiability of a given formula.

**Definition 8.5** (Selected model): Let \(\mathcal{M}^{\Gamma}\) be the canonical model for an MCS \(\Gamma\), \(w\) a state in \(\mathcal{M}^{\Gamma}\), and \(\varphi\) an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formula. Let \(\mathsf{sel}_{w}^{\varphi}\) be a selection function; we define the _model selected by_ \(\mathsf{sel}_{w}^{\varphi}\) as \(\mathcal{M}_{w}^{\varphi}=\langle W_{w}^{\varphi},R_{w}^{\varphi},\{(S^{\varphi}_{w})_{i}\}_{i\in\mathsf{Agt}},V_{w}^{\varphi}\rangle\), where

* \(W_{w}^{\varphi}:=\mathsf{sel}_{w}^{\varphi}(\varphi)\);
* \((R^{\varphi}_{w})_{\langle\phi_{1},\phi_{2}\rangle}:=R^{\Gamma}_{\langle\phi_{1},\phi_{2}\rangle}\cap(W^{\varphi}_{w})^{2}\) for each \(\langle\phi_{1},\phi_{2}\rangle\in\mathsf{Act}_{\varphi}\);
* \((S^{\varphi}_{w})_{i}:=\{\{a\}\mid a\in\mathsf{Act}_{\varphi}\}\cup\{\{\langle\bot,\top\rangle\}\}\), for \(i\in\mathsf{Agt}\) (and \((R^{\varphi}_{w})_{\langle\bot,\top\rangle}:=\varnothing\));
* \(V^{\varphi}_{w}\) is the restriction of \(V^{\Gamma}\) to \(W^{\varphi}_{w}\). 
Note that, although \(\mathsf{Act}_{\varphi}\) can be an empty set, each collection of sets of plans \((S^{\varphi}_{w})_{i}\) is not. Moreover, \(\mathsf{Act}_{\varphi}\) can be extended to an infinite set of actions, to be defined over a proper signature. Therefore, \(\mathcal{M}^{\varphi}_{w}\) is an \(\mathrm{LTS}^{U}\).

**Proposition 15**: _Let \(\mathcal{M}^{\Gamma}\) be a canonical model, \(w\) a state in \(\mathcal{M}^{\Gamma}\) and \(\varphi\) an \(\mathsf{L}_{\mathsf{Kh}_{i}}\)-formula. Let \(\mathcal{M}^{\varphi}_{w}\) be the model selected by a selection function \(\mathsf{sel}^{\varphi}_{w}\). Then, \(\mathcal{M}^{\Gamma},w\models\varphi\) implies that for all \(\psi\) subformula of \(\varphi\), and for all \(v\in W^{\varphi}_{w}\), we have that \(\mathcal{M}^{\Gamma},v\models\psi\) iff \(\mathcal{M}^{\varphi}_{w},v\models\psi\). Moreover, \(\mathcal{M}^{\varphi}_{w}\) is polynomial in the size of \(\varphi\)._

Proof.: The proof proceeds by induction on the size of the formula. Case \(\psi=p\): if \(\mathcal{M}^{\Gamma},v\models p\), then \(p\in V^{\Gamma}(v)\). Given that \(v\in W^{\varphi}_{w}\), we have \(p\in V^{\varphi}_{w}(v)\) and therefore \(\mathcal{M}^{\varphi}_{w},v\models p\). The other direction is similar. Case \(\psi=\neg\psi_{1}\): if \(\mathcal{M}^{\Gamma},v\models\neg\psi_{1}\), then \(\mathcal{M}^{\Gamma},v\not\models\psi_{1}\). By IH, \(\mathcal{M}^{\varphi}_{w},v\not\models\psi_{1}\) and therefore \(\mathcal{M}^{\varphi}_{w},v\models\neg\psi_{1}\). The other direction is similar. Case \(\psi=\psi_{1}\vee\psi_{2}\): if \(\mathcal{M}^{\Gamma},v\models\psi_{1}\vee\psi_{2}\), then \(\mathcal{M}^{\Gamma},v\models\psi_{1}\) or \(\mathcal{M}^{\Gamma},v\models\psi_{2}\). By IH, \(\mathcal{M}^{\varphi}_{w},v\models\psi_{1}\) or \(\mathcal{M}^{\varphi}_{w},v\models\psi_{2}\), and therefore \(\mathcal{M}^{\varphi}_{w},v\models\psi_{1}\vee\psi_{2}\). 
The other direction is similar. Case \(\psi=\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\): Suppose that \(\mathcal{M}^{\Gamma},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). We consider two possibilities: * \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}=\varnothing\): since \(\mathcal{M}^{\Gamma},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), there is a \(\pi\in S^{\Gamma}_{i}\) s.t. \(\varnothing=\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\subseteq\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\pi)\) and \(\varnothing=R^{\Gamma}_{\pi}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). By IH, \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\). Notice that, since \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}=\varnothing\), we also have \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}=\varnothing\). Let \(\pi^{\prime}=\{\langle\bot,\top\rangle\}\); we know that \(\pi^{\prime}\in(S^{\varphi}_{w})_{i}\), and \((R^{\varphi}_{w})_{\pi^{\prime}}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}})=\varnothing\). So, there is a \(\pi^{\prime}\in(S^{\varphi}_{w})_{i}\) s.t. \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\pi^{\prime})\) and \((R^{\varphi}_{w})_{\pi^{\prime}}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\). Therefore, \(\mathcal{M}^{\varphi}_{w},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). * \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\): since \(\mathcal{M}^{\Gamma},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), there exists a \(\pi\in S^{\Gamma}_{i}\) s.t. 
\(\llbracket\!\!\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\subseteq \mathrm{SE}^{\mathcal{M}^{\Gamma}}(\pi)\) and \(\mathsf{R}^{\Gamma}_{\pi}(\llbracket\!\!\llbracket\psi_{1}\rrbracket^{\mathcal{M}^ {\Gamma}})\subseteq\llbracket\!\!\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). By Truth Lemma, \(\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\in v\), then \(\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\in\Gamma\) and \(\langle\psi_{1},\psi_{2}\rangle\in\mathsf{Act}_{\Gamma}\). By the definition of \(\mathsf{R}^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}\), we have that for all \(w\in\llbracket\!\!\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\), it holds that \(\mathsf{R}^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(w)\neq\varnothing\) and \(\mathsf{R}^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(w)\subseteq\llbracket\! \!\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Thus, \(\llbracket\!\!\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\subseteq \mathrm{SE}^{\mathcal{M}^{\Gamma}}(\{\langle\psi_{1},\psi_{2}\rangle\})\) and \(\mathsf{R}^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(\llbracket\!\!\llbracket \psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\subseteq\llbracket\!\!\llbracket \psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Since \(\llbracket\!\!\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\), there exist \(w_{1},w_{2}\in\mathsf{W}^{\Gamma}\) s.t. \((w_{1},w_{2})\in\mathsf{R}^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}\). Notice that by definition of \(\mathcal{M}^{\varphi}_{w}\), we have that \(\{\langle\psi_{1},\psi_{2}\rangle\}\in(\mathsf{S}^{\varphi}_{w})_{i}\) and that \((\mathsf{R}^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}\) is defined. Also, by the definition of \(\mathsf{sel}^{\varphi}_{w}\), Item (5), there exist \(w^{\prime}_{1},w^{\prime}_{2}\in\mathsf{W}^{\varphi}_{w}\) s.t. \((w^{\prime}_{1},w^{\prime}_{2})\in(\mathsf{R}^{\varphi}_{w})_{\langle\psi_{1}, \psi_{2}\rangle}\). 
Let \(v_{1}\in\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\) (the inclusion holds by IH). Then, we have \(v_{1}\in\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\{\langle\psi_{1},\psi_{2}\rangle\})\) and \(R^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Since for all \(v_{2}\in R^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\) it holds that \(v_{2}\in\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\), and, by the definition of \(R^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}\) and Item (5), \((R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\neq\varnothing\), we obtain \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\{\langle\psi_{1},\psi_{2}\rangle\})\). Aiming for a contradiction, suppose now that \((R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})=R^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\cap W^{\varphi}_{w}\not\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\), and let \(v_{2}\in(R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\) be s.t. \(v_{2}\notin\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\). Then we have that \((R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\subseteq R^{\Gamma}_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\), but also, by IH, \(v_{2}\notin\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Thus, \(\mathcal{M}^{\Gamma},v\not\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), which is a contradiction. Then, it must be the case that \((R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}(v_{1})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\). Since we showed that \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\{\langle\psi_{1},\psi_{2}\rangle\})\) and \((R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\), we conclude \(\mathcal{M}^{\varphi}_{w},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). Assume now \(\mathcal{M}^{\varphi}_{w},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). 
Again, we consider two possibilities: * \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}=\varnothing\): since \(\mathcal{M}^{\varphi}_{w},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), then \(\varnothing=\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\pi^{\prime})\) and \(\varnothing=(R^{\varphi}_{w})_{\pi^{\prime}}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\) for some \(\pi^{\prime}\in(S^{\varphi}_{w})_{i}\). We claim that \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}=\varnothing\). Otherwise, if \(\mathcal{M}^{\Gamma},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), then by \(\mathsf{sel}^{\varphi}_{w}\), Item (5), \((R^{\varphi}_{w})_{\langle\psi_{1},\psi_{2}\rangle}\neq\varnothing\) is defined and \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\neq\varnothing\), contradicting the hypothesis. And if \(\mathcal{M}^{\Gamma},v\not\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), then by \(\mathsf{sel}^{\varphi}_{w}\), Item (6), and IH, \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\neq\varnothing\), again a contradiction. Let \(\pi\) be any set of plans in \(S^{\Gamma}_{i}\); since \(R^{\Gamma}_{\pi}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})=\varnothing\), we get \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\subseteq\mathrm{SE}(\pi)\) and \(R^{\Gamma}_{\pi}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Then, \(\mathcal{M}^{\Gamma},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). * \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\neq\varnothing\): first, notice that, by IH, \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\neq\varnothing\). 
Also, by \(\mathcal{M}^{\varphi}_{w},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), we get \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\pi^{\prime})\) and \((R^{\varphi}_{w})_{\pi^{\prime}}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}})\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\), for some \(\pi^{\prime}\in(S^{\varphi}_{w})_{i}\). Aiming for a contradiction, suppose \(\mathcal{M}^{\Gamma},v\not\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). This implies that for all \(\pi\in S^{\Gamma}_{i}\), \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\not\subseteq\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\pi)\) or \(R^{\Gamma}_{\pi}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\not\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Also, by definition of \(\mathsf{Act}_{\varphi}\), we have that for all \(\pi=\{a\}\in(S^{\varphi}_{w})_{i}\) with \(a\in\mathsf{Act}_{\varphi}\), \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\not\subseteq\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\pi)\) or \(R^{\Gamma}_{\pi}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\not\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\); i.e., for all \(a\in\mathsf{Act}_{\varphi}\), \(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\not\subseteq\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\{a\})\) or \(R^{\Gamma}_{a}(\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}})\not\subseteq\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). Thus, there exists \(w_{1}\in\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\) s.t. \(w_{1}\notin\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\{a\})\), or there exists \(w_{2}\in R^{\Gamma}_{a}(w_{1})\) s.t. \(w_{2}\notin\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\). 
By the definition of \(\mathsf{sel}^{\varphi}_{w}\), Item (6), we add witnesses for each \(a\in\mathsf{Act}_{\varphi}\). So, let \(\pi^{\prime}\in(S^{\varphi}_{w})_{i}\). If \(\pi^{\prime}=\{\langle\bot,\top\rangle\}\), trivially we obtain \(\varnothing\neq\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\not\subseteq\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\pi^{\prime})=\varnothing\). Otherwise, take \(\pi^{\prime}=\{a\}\) with \(a\in\mathsf{Act}_{\varphi}\), and let \(w^{\prime}_{1}\in\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\subseteq\llbracket\psi_{1}\rrbracket^{\mathcal{M}^{\Gamma}}\). If \(w^{\prime}_{1}\notin\mathrm{SE}^{\mathcal{M}^{\Gamma}}(\{a\})\), then \(R^{\Gamma}_{a}(w^{\prime}_{1})=\varnothing\); thus \((R^{\varphi}_{w})_{a}(w^{\prime}_{1})=\varnothing\) and therefore \(w^{\prime}_{1}\notin\mathrm{SE}^{\mathcal{M}^{\varphi}_{w}}(\{a\})\). On the other hand, if there exists \(w_{2}\in R^{\Gamma}_{a}(w^{\prime}_{1})\) s.t. \(w_{2}\notin\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\Gamma}}\), then by \(\mathsf{sel}^{\varphi}_{w}\) and IH, there exists \(w^{\prime}_{2}\in W^{\varphi}_{w}\) s.t. \(w^{\prime}_{2}\in R^{\Gamma}_{a}(w^{\prime}_{1})\) and \(w^{\prime}_{2}\notin\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\); consequently, there exists \(w^{\prime}_{2}\in(R^{\varphi}_{w})_{a}(w^{\prime}_{1})\) s.t. \(w^{\prime}_{2}\notin\llbracket\psi_{2}\rrbracket^{\mathcal{M}^{\varphi}_{w}}\). In any case, this leads to \(\mathcal{M}^{\varphi}_{w},v\not\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\), a contradiction. Therefore, \(\mathcal{M}^{\Gamma},v\models\mathsf{Kh}_{i}(\psi_{1},\psi_{2})\). 
Notice now that the selection function adds, for each \(\mathsf{Kh}_{i}\)-subformula and each action in \(\mathsf{Act}_{\varphi}\), only a bounded number of states from \(\mathcal{M}^{\Gamma}\); hence, the size of the selected model is polynomial in the size of \(\varphi\).

**Proposition 16**: _The model checking problem for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) is in \(\mathsf{P}\)._

Proof.: Given a pointed \(\mathrm{LTS}^{U}\) \(\mathcal{M},w\) and a formula \(\varphi\), we define a bottom-up labeling algorithm running in polynomial time which checks whether \(\mathcal{M},w\models\varphi\). We follow the same ideas as for the basic modal logic \(\mathsf{K}\) (see e.g., [10]). Below we introduce the case for formulas of the shape \(\mathsf{Kh}_{i}(\psi,\varphi)\), over an \(\mathrm{LTS}^{U}\) \(\mathcal{M}=\langle\mathsf{W},\mathsf{R},\mathsf{S},\mathsf{V}\rangle\):

```
Procedure ModelChecking((M, w), Kh_i(psi, phi))
    lab(Kh_i(psi, phi)) <- {}
    for all pi in S_i do
        kh <- True
        for all sigma in pi do
            for all v in lab(psi) do
                kh <- (kh  &  v in SE(sigma)  &  R_sigma(v) subseteq lab(phi))
            end for
        end for
        if kh then
            lab(Kh_i(psi, phi)) <- W
        end if
    end for
```

As \(\mathsf{S}_{i}\) and each \(\pi\in\mathsf{S}_{i}\) are not empty, the first two **for** loops are necessarily executed. If \(\mathit{lab}(\psi)=\varnothing\), then the formula \(\mathsf{Kh}_{i}(\psi,\varphi)\) is trivially true. Otherwise, \(\mathit{kh}\) will remain true only if the appropriate conditions for the satisfiability of \(\mathsf{Kh}_{i}(\psi,\varphi)\) hold. If no \(\pi\) succeeds, then the initialization of \(\mathit{lab}(\mathsf{Kh}_{i}(\psi,\varphi))\) as \(\varnothing\) will not be overwritten, as it should be. Both \(v\in\mathrm{SE}(\sigma)\) and \(\mathsf{R}_{\sigma}(v)\subseteq\mathit{lab}(\varphi)\) can be verified in polynomial time. Hence, the model checking problem is in \(\mathsf{P}\). The intended result for satisfiability now follows. 
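The labeling step above can also be rendered as runnable code. The sketch below is a hypothetical encoding (not the paper's notation): plans are finite sets of action names, `R` maps each action to a table of successor sets, and strong executability of a single action at `v` is simplified to the existence of a successor (which suffices for one-step plans); `lab_psi` and `lab_phi` are assumed to be the label sets already computed for the subformulas.

```python
# Sketch of the Kh_i labeling step on a finite LTS^U-like structure.
# Plans are sets of single actions; "v in SE(sigma)" is simplified to
# "sigma has a successor at v".

def check_kh(W, R, S_i, lab_psi, lab_phi):
    def se(sigma, v):                      # v in SE(sigma)?
        return bool(R.get(sigma, {}).get(v, set()))
    for pi in S_i:                         # try each set of plans of agent i
        if all(se(sigma, v) and R[sigma][v] <= lab_phi
               for sigma in pi for v in lab_psi):
            return set(W)                  # witness found: label all of W
    return set()                           # no witness: label stays empty

# Toy model: action "a" sends both psi-states into the phi-state.
W = {1, 2, 3}
R = {"a": {1: {3}, 2: {3}}}
S_i = [{"a"}]
everywhere = check_kh(W, R, S_i, lab_psi={1, 2}, lab_phi={3})
```

Note that an empty `lab_psi` makes the inner condition vacuously true, matching the observation that \(\mathsf{Kh}_{i}(\psi,\varphi)\) is trivially true when \(\mathit{lab}(\psi)=\varnothing\).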
**Theorem 8**: _The satisfiability problem for \(\mathsf{L}_{\mathsf{Kh}_{i}}\) over \(\mathrm{LTS}^{U}\)s is \(\mathsf{NP}\)-complete._

Proof.: Hardness follows from the \(\mathsf{NP}\)-completeness of propositional logic (a fragment of \(\mathsf{L}_{\mathsf{Kh}_{i}}\)). By Proposition 15, each satisfiable formula \(\varphi\) has a model of size polynomial in \(\varphi\). Thus, we can guess a polynomial-size model \(\mathcal{M},w\), and verify \(\mathcal{M},w\models\varphi\) (which can be done in polynomial time, due to Proposition 16). Thus, the satisfiability problem is in the class \(\mathsf{NP}\).

## 9 Final remarks

In this article, we introduce a new semantics for the _knowing how_ modality from [57, 58, 59], for multiple agents. It is defined in terms of _uncertainty-based labeled transition systems_ (\(\mathrm{LTS}^{U}\)). The novelty in our proposal is that \(\mathrm{LTS}^{U}\)s are equipped with an indistinguishability relation among plans. In this way, the epistemic notion of uncertainty of an agent, which in turn defines her epistemic state, is reintroduced, bringing the notion of _knowing how_ closer to the notion of _knowing that_ from classical epistemic logics. We believe that the semantics based on \(\mathrm{LTS}^{U}\)s can properly represent the situation of a shared, objective description of the affordances of a given situation, together with the different, subjective and personal abilities of a group of agents; this seems difficult to achieve using a semantics based on LTSs alone. We show that the logic of [57, 58, 59] can be obtained by imposing particular conditions over \(\mathrm{LTS}^{U}\)s; thus, the new semantics is more general. In particular, it provides counter-examples to _EMP_ and _COMP_, which directly link the knowing how modality \(\mathsf{Kh}\) to properties of the universal modality. 
Indeed, consider _EMP_: even though \(\mathsf{A}(\psi\to\varrho)\) objectively holds in the underlying LTS of an \(\mathrm{LTS}^{U}\), it could be argued that an agent might not be aware of actions or plans to turn those facts into knowledge, resulting in \(\mathsf{Kh}(\psi,\varrho)\) failing in the model. To characterize validities in this language over \(\mathrm{LTS}^{U}\)s, we introduce a sound and strongly complete axiom system. We also define a suitable notion of bisimulation over \(\mathrm{LTS}^{U}\)s, following ideas introduced in [17, 18]. We show that bisimilarity implies formula equivalence, and that finite models form a Hennessy-Milner class (i.e., that formula equivalence implies bisimilarity over finite models). Finally, we prove that the satisfiability problem for our multi-agent knowing how logic over the \(\mathrm{LTS}^{U}\)-based semantics is \(\mathsf{NP}\)-complete. The proof relies on a selection argument on the canonical model, and on the fact that the model checking problem is polynomial. We also provide a filtration technique that, given an arbitrary model satisfying \(\varrho\), returns a finite model that satisfies \(\varrho\). **Future work.** There are several interesting lines of research to explore in the future. First, our framework easily accommodates other notions of executability. For instance, one could require only some of the plans in a set \(\pi\) to be strongly executable, or weaken the condition of _strong_ executability, etc. We can also explore the effects of imposing different restrictions on the construction of the indistinguishability relation between plans. It would be interesting to investigate which logics we obtain in these cases, and their relations with the LTS semantics. Second, to our knowledge, the exact complexity of the satisfiability problem for knowing how over LTSs is open. It would be interesting to solve this problem, for instance, by following and adapting ideas from [40]. 
Third, the \(\mathrm{LTS}^{U}\) semantics, in the multi-agent setting, leads to natural definitions of concepts of _collective_ knowing how, in the spirit of [13]. For instance, one can easily define a notion of _general knowing how_ as \(\mathsf{EKh}_{G}(\psi,\varrho):=\bigwedge_{i\in G}\mathsf{Kh}_{i}(\psi,\varrho)\), whose reading is _"everyone in the group \(G\) knows how to achieve \(\varrho\) given \(\psi\)"_; and _"somebody in the group \(G\) knows how to achieve \(\varrho\) given \(\psi\)"_, as \(\mathsf{SKh}_{G}(\psi,\varrho):=\bigvee_{i\in G}\mathsf{Kh}_{i}(\psi,\varrho)\) (see, e.g., [1] for a similar approach in standard epistemic logic). Other, more complex notions, such as _distributed_ and _common knowing how_, deserve further exploration. Finally, dynamic modalities capturing epistemic updates can be defined via operations that modify the indistinguishability relation among plans (as is done with other dynamic epistemic operators, see, e.g., [53]). This would allow us to express different forms of communication, such as _public_, _private_ and _semi-private_ announcements concerning (sets of) plans. Some preliminary results have been presented in [4].
2310.03972
Inequality and Nyman-Beurling-Baez-Duarte criteria
We propose a proof of the Riemann hypothesis. The proof is based on the Nyman-Beurling-Baez-Duarte condition. By proving the existence of solutions for a system of inequalities, we show that there is a sequence, acting as the coefficients of the Beurling sequences, that approximates the constant vector in a weighted Hilbert space.
Kwok Kwan Wong
2023-03-14T08:13:51Z
http://arxiv.org/abs/2310.03972v5
# Inequality and Nyman-Beurling-Baez-Duarte Criteria ###### Abstract. We propose a proof of the Riemann hypothesis. The proof is based on the Nyman-Beurling-Baez-Duarte condition. By proving the existence of solutions for a system of inequalities, we show that there is a sequence, acting as the coefficients of the Beurling sequences, that approximates the constant vector in a weighted Hilbert space. _Mathematics Subject Classification: 11Mxx, 46Cxx_ _Keywords: Riemann hypothesis, Nyman-Beurling-Baez-Duarte condition, System of inequalities_ ## 1. Introduction The Riemann hypothesis was raised by Riemann in 1859 [10]. The hypothesis concerns the zeros of the Riemann zeta function \(\zeta\): \(\zeta\) has the trivial zeros, which are the negative even integers, and the nontrivial zeros. Riemann hypothesized that the real part of the nontrivial zeros is \(\frac{1}{2}\), which we call the _Riemann hypothesis_. To prove or disprove the Riemann hypothesis, many scholars have tried to formulate the Riemann hypothesis in other ways [11, 12, 13]. In particular, Nyman and Beurling showed that the Riemann hypothesis is true if and only if the space of the Beurling functions is dense in the Hilbert space \(L^{2}((0,1))\) [9, 4]. Baez-Duarte restated and strengthened this condition: the Riemann hypothesis is true if and only if the characteristic function \(\chi_{(0,1]}\) belongs to the closure of the space of the _natural Beurling functions_ in the Hilbert space \(L^{2}((0,\infty))\) [1]. Bagchi reformulated the condition: the Riemann hypothesis is true if and only if the constant sequence belongs to the closure of the span of the _Beurling sequences_ in \(l^{2}(\mathbb{N})\) with a weighted inner product [2]. There are numerous works on this approach [8, 14, 7, 3, 5, 6]. Our contribution is to show that, for large enough \(n\), all components of the difference between a linear combination of Beurling sequences and the constant sequence can be bounded by any given positive number. 
Normally, to show that a vector belongs to a subspace, one is required to find the coefficients with respect to a basis of the subspace; an example is the natural approximation. We overcome this technical difficulty by showing only that the coefficients exist, without explicitly constructing them. So we have the following theorem. **Theorem 1.1**.: The Riemann hypothesis is true. The details of the approach and the proof of this theorem are discussed in the next section. ## 2. Our approach to the problem The Hilbert space we consider is \(l^{2}(\mathbb{N}):=H\) over \(\mathbb{C}\) with the norm induced by the inner product \[\langle a,b\rangle=\sum_{n=1}^{\infty}\frac{a_{n}^{*}b_{n}}{n(n+1)}.\] Observe that bounded sequences belong to \(H\) as well. We adopt the notation of [2] and introduce the sequence \(\gamma_{l}=(\{\frac{n}{l}\})=(1/l,2/l,...)\) for \(l\in\mathbb{N}\), where \(\{x\}\) is the fractional part function. It is easy to see that \(\gamma_{l}\in H\) for all \(l\). Denote \(B=\mathrm{span}(\gamma_{l},l\in\mathbb{N})\); we call \(B\) the space of the _Beurling sequences_. Let \(\gamma=(1,1,...)\) be the constant sequence; it is easy to see that it belongs to \(H\). Denote by \(||.||_{H}\) the norm of \(H\). The following theorem is stated in [2]. **Theorem 2.1**.: The Riemann hypothesis is equivalent to \(\gamma\in\overline{B}\), and is equivalent to \(B\) being dense in \(H\). Proof.: See the proof of Theorem 1 of [2]. The above statement is equivalent to the existence of sequences \(a_{n,k}\) such that \(||\sum_{k=2}^{n}a_{n,k}\gamma_{k}-\gamma||_{H}\) converges to zero when \(n\) goes to infinity; we write \(x_{n}=\sum_{k=2}^{n}a_{n,k}\gamma_{k}\). Let \(e_{i}\) be the sequence with \(1\) in the i-th entry and zero otherwise. Define \(R_{i}:\mathbb{N}\rightarrow\mathbb{N}\) by sending \(p\) to \(p\bmod i\). Observe that for any finite \(n\), the components of \(x_{n}-\gamma\) are periodic in \(i\) with period \(L_{n}\), where \(L_{n}\) is the least common multiple of the numbers less than or equal to \(n\). 
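The objects just defined are easy to probe numerically. The sketch below is an illustration only, with an arbitrary truncation level \(N\): it computes a truncated version of the weighted inner product and of \(\|\gamma_{2}\|_{H}^{2}\), using \(\{\frac{n}{l}\}=\frac{n\bmod l}{l}\).

```python
from fractions import Fraction

def gamma_seq(l, N):
    # first N terms of the Beurling sequence gamma_l = ({n/l})_{n>=1},
    # using {n/l} = (n mod l)/l
    return [Fraction(n % l, l) for n in range(1, N + 1)]

def inner(a, b):
    # truncated weighted inner product <a, b> = sum_n a_n b_n / (n(n+1))
    return sum(x * y / Fraction(n * (n + 1))
               for n, (x, y) in enumerate(zip(a, b), start=1))

N = 1000                       # illustrative truncation level
g2 = gamma_seq(2, N)           # (1/2, 0, 1/2, 0, ...)
print(float(inner(g2, g2)))    # partial sum of ||gamma_2||_H^2
```

Every entry of \(\gamma_{l}\) lies in \([0,1)\) and the weights \(\frac{1}{n(n+1)}\) are summable, so these truncated norms converge, illustrating \(\gamma_{l}\in H\); for \(l=2\) the partial sums approach \(\frac{\ln 2}{4}\approx 0.173\).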
If there exist \(a_{n,k}\) such that \(\frac{|x_{n}-\gamma|_{i}^{2}}{i(i+1)}\) is smaller than any positive number for all \(1\leq i\leq L_{n}-1\) for large enough \(n\), then the Riemann hypothesis is true. In the following discussion we involve real numbers only, since if there are real sequences \(a_{n,k}\) fulfilling the conditions, they are complex sequences as well. Our approach is the following: For any finite \(n\geq 2\), we set up a system of inequalities \(S(\epsilon,n)\): \[\sum_{k=2}^{n}a_{k}R_{k}(i)-1\leq\epsilon,i=1,...,L_{n}-1\] \[-\sum_{k=2}^{n}a_{k}R_{k}(i)+1\leq\epsilon,i=1,...,L_{n}-1.\] Here we re-scale \(a_{n,k}\) and rename it \(a_{k}\) (as \(n\) is already specified in the system, we discard it from the lower index), and \(\epsilon\) is some positive number smaller than \(1\). Let \(A\) be the coefficient matrix of the first set of inequalities and \(a=(a_{2},a_{3},...,a_{n})^{T}\). The idea is to show the existence of \(a\) without explicitly constructing it. We first show that the rank of \(A\) is \(n-1\). **Theorem 2.2**.: The rank of \(A\) is \(n-1\). Proof.: Consider \(i\leq n-1\). By multiplying the first row by \(i\) and subtracting the \(i\)-th row of \(A\), we obtain a lower triangular sub-matrix with entries of the form \(k\lfloor\frac{i}{k}\rfloor\), where \(k=2,...,n\); so the columns are linearly independent, and the rank of \(A\) is \(n-1\). Now we show a fact in linear algebra. **lemma 2.3**.: Given \(A\in\mathbb{R}^{m\times n}\), \(m>n\), \(x,y\in\mathbb{R}^{n}\). If all the entries of \(A\) are non-negative, each column of \(A\) has at least one non-zero entry, and \(x\geq y\) component-wise with \(x\neq y\), then \(Ax\geq Ay\) component-wise and \(Ax\neq Ay\). Proof.: We use induction to prove this. Let \(k=1\) and let \(A\) be an \(m\) by \(k\) matrix. If \(x\geq y\) with \(x\neq y\), then \(Ax=(A_{1},A_{2},...)^{T}x\); since all \(A_{i}\) are non-negative and at least one entry of \(A\) is not zero, clearly \(A_{i}x\geq A_{i}y\) for all \(i\), with strict inequality for some \(i\), so \(Ax\neq Ay\). 
Assume this holds for some positive integer \(k\). Consider \(A\) an \(m\) by \(k+1\) matrix, with \(x_{i}\geq y_{i}\) for all \(i\) and \(x\neq y\). Let \(A=(A^{\prime},a)\), where \(A^{\prime}\) is an \(m\) by \(k\) matrix and \(a\) is an \(m\) by \(1\) matrix; all of them have non-negative entries. Let \(x=(x^{\prime},x_{k+1})^{T}\) and \(y=(y^{\prime},y_{k+1})^{T}\) similarly. Now \(x\geq y\) and \(x\neq y\). If all the cases where \(x_{i}>y_{i}\) occur in \(x^{\prime},y^{\prime}\), then by assumption \(A^{\prime}x^{\prime}\geq A^{\prime}y^{\prime}\) with \(A^{\prime}x^{\prime}\neq A^{\prime}y^{\prime}\), and \(ax_{k+1}\geq ay_{k+1}\) since \(a\) is non-negative. If \(x_{k+1}>y_{k+1}\), since \(a\) has at least one nonzero entry \(a_{i}\) and it is positive, \(a_{i}x_{k+1}>a_{i}y_{k+1}\); so \(Ax\geq Ay\) and \(Ax\neq Ay\). By induction, this holds for all natural numbers. Thus the conclusion. **lemma 2.4**.: There exists \(v>0\) with \(Av>0\). Proof.: Observe that the entries of \(A\) are non-negative and each column and row contains a positive value. If \(v\) is a positive vector, then \(Av\) must be a positive vector as well. Now we do the following. Let \(v\) be a positive vector, so that \(Av>0\); let \(\delta>0\), and let \(A^{+}\) be the Moore-Penrose inverse of \(A\). The inequality \(-\delta v\leq y-A^{+}c\leq\delta v\) is true for some \(y\), with \(c=(1,...,1)^{T}\). Now apply \(A\) to this inequality: since \(A\) preserves inequalities by lemma 2.3 and \(Av\) is positive, we obtain \(-\delta Av\leq Ay-AA^{+}c\leq\delta Av\). We have the following estimate. **Theorem 2.5**.: Given \(\epsilon>0\), there exists \(y\) such that \(|Ay-AA^{+}c|_{i}\leq\epsilon\) for all \(i\). That is, \(S(\epsilon,n)\) always has a solution for a given \(\epsilon>0\). Proof.: By choosing \(\delta\) small enough above, the conclusion is easily seen. We shift our focus to \(\gamma\). 
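Theorem 2.2 and the role of the Moore-Penrose inverse can be illustrated numerically for a small \(n\); the choice \(n=6\) below is an arbitrary assumption. The sketch builds \(A\) with entries \(R_{k}(i)=i\bmod k\), checks its rank, and forms the projection \(P_{n}=AA^{+}\) used later in the text.

```python
import math

import numpy as np

def lcm_upto(n):
    # L_n: least common multiple of the numbers up to n
    L = 1
    for k in range(2, n + 1):
        L = L * k // math.gcd(L, k)
    return L

n = 6                                  # illustrative choice
Ln = lcm_upto(n)                       # L_6 = 60
# rows i = 1, ..., L_n - 1; columns k = 2, ..., n; entries R_k(i) = i mod k
A = np.array([[i % k for k in range(2, n + 1)] for i in range(1, Ln)],
             dtype=float)
print(A.shape)                         # (59, 5), i.e. (L_n - 1) x (n - 1)
print(np.linalg.matrix_rank(A))        # 5 = n - 1, as in Theorem 2.2

c = np.ones(Ln - 1)
y = np.linalg.pinv(A) @ c              # the least-squares coefficients A^+ c
P = A @ np.linalg.pinv(A)              # P_n = A A^+
print(bool(np.allclose(P @ P, P)))     # True: P_n is a projection
print(bool(np.allclose(P, P.T)))       # True: P_n is orthogonal (symmetric)
print(float(np.max(np.abs(A @ y - P @ c))))  # ~0: Theorem 2.5 with y = A^+ c
```

The last line shows that the particular choice \(y=A^{+}c\) already makes \(|Ay-AA^{+}c|_{i}\) vanish up to round-off, so \(S(\epsilon,n)\) is solvable for every \(\epsilon>0\).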
Let \(P_{n}=A_{n}A_{n}^{+}\), where \(A_{n}\) is the \((L_{n}-1)\times(n-1)\) matrix \(A\) defined above, and \(A_{n}^{+}\) is its Moore-Penrose inverse, which takes the first \(n-1\) entries of vectors as input. By properties of the Moore-Penrose inverse, \(P_{n}\) is an orthogonal projection with rank \(n-1\). Let \(l^{\infty}\) denote the space of bounded sequences and \(c_{0}\) the space of sequences converging to zero, both with the supremum norm as their norm. We first introduce the conditions for strong convergence of bounded linear operators between Banach spaces. **Theorem 2.6**.: Let \((T_{n})\) be a sequence of bounded linear operators from a Banach space \(X\) to \(Y\); then \((T_{n})\) converges strongly to a bounded linear operator \(T\) if and only if: (1) \(T_{n}x\) converges for \(x\) belonging to a dense subset of \(X\), and, (2) \(||T_{n}||<C\) for some \(C>0\). The proof of the above theorem is a standard \(\frac{\epsilon}{3}\) argument. Since each \(P_{n}\) is a projection operator on a finite-dimensional space, \((P_{n})\) is a sequence of bounded operators from \(c_{0}\) to \(c_{0}\). Denote by \(||.||_{\infty}\) the supremum norm; \(||.||\) is the operator norm when the target space is endowed with the supremum norm, and \(||.||_{2}\) is the operator norm when the target space is endowed with the inner product norm. We show that \(P_{n}\) converges strongly to the identity operator \(I\) in \(c_{0}\). **Theorem 2.7**.: The sequence \((P_{n})\) converges strongly to \(I\) in \(c_{0}\). Proof.: Consider \(||P_{n}||\); for all \(n\in\mathbb{N}\), \(||P_{n}||\leq||P_{n}||_{2}=1\), so it is uniformly bounded. Now the space \(V=\cup_{n}\mathbb{R}^{n}\) is dense in \(c_{0}\). Consider \(x\in V\): \(x\) is a vector with finitely many nonzero terms, and for some large enough \(n\), \(P_{n}x=P_{n+m}x\) for \(m\geq 1\), so \(P_{n}x\) converges for all \(x\in V\). By Theorem 2.6, \(P_{n}\) converges strongly to some bounded operator \(P:c_{0}\to c_{0}\). Now by expressing \(P_{n}^{2}=P_{n}\), we can show that \(P^{2}=P\). 
So \(P\) is a projection in \(c_{0}\). Since \(P\) is bounded, the image of \(P\) is closed. Now the image of \(P_{n}\) is contained in \(V\) with rising rank, so \(V\subset Im(P)\); since \(Im(P)\) is closed, \(Im(P)=c_{0}\). So \(P=I\). Now we state a condition for a sequence of bounded operators to converge strongly to the identity. The proof is due to Martin Argerami. **Theorem 2.8**.: Let \(X\) be a Banach space and \(K(X)\) be the closure of the finite rank operators on \(X\). Let \(T_{n}\) be a bounded sequence of operators such that \(T_{n}S\) converges to \(S\) for all \(S\in K(X)\); then \(T_{n}\) converges strongly to the identity. Proof.: For any \(x\in X\), there exists a rank-one operator \(S\) such that \(Sx=x\) (there exists a linear functional \(f\) such that \(f(x)=1\); let \(Sy=f(y)x\)). So \(T_{n}x=T_{n}Sx\), which converges to \(Sx=x\); thus the conclusion. Now view \(P_{n}:l^{\infty}\to l^{\infty}\); this can be done because every \(P_{n}\) is finite rank and \(c_{0}\subset l^{\infty}\). We show that \(P_{n}\) converges strongly to \(I\) in \(l^{\infty}\). **Theorem 2.9**.: \(P_{n}\) converges strongly to \(I\) in \(l^{\infty}\). Proof.: We first show that \(P_{n}T\) converges to \(T\) when \(T\) is a finite rank operator. Consider \(||P_{n}T-T||=\sup_{||x||\leq 1}(||(P_{n}T-T)x||_{\infty})\). Since \(T\) is finite rank, \(||P_{n}T-T||=\sup_{||x||\leq 1}||P_{n}x_{m}-x_{m}||_{\infty}\), where \(x_{m}=Tx\) is a finite-dimensional vector. Since \(T\) is a compact operator and the closed unit ball \(B_{1}\) is a bounded set, \(T(B_{1})\) is a bounded set; denote the closure of this set by \(K\). \(K\) is compact since it is closed and bounded in a finite-dimensional space (the range of \(T\) is finite-dimensional and \(T\) is a compact operator). Since \(K\) is compact, \(x_{m,n}\) contains a convergent subsequence converging to some \(x_{m}\in K\). 
Consider \(||P_{n}x_{m,n_{k}}-x_{m,n_{k}}||_{\infty}\), \(x_{m,n_{k}}\) being the subsequence, \[||P_{n}x_{m,n_{k}}-x_{m,n_{k}}||_{\infty}\] \[\leq||P_{n}x_{m,n_{k}}-P_{n}x_{m}||_{\infty}+||P_{n}x_{m}-x_{m}||_{\infty}+||x_{m,n_{k}}-x_{m}||_{\infty}\] \[\leq||P_{n}||||x_{m,n_{k}}-x_{m}||_{\infty}+||P_{n}x_{m}-x_{m}||_{\infty}+||x_{m,n_{k}}-x_{m}||_{\infty}\] Now \(||P_{n}||\leq||P_{n}||_{2}=1\) since it is an orthogonal projection; the first and third terms can be made less than \(\frac{\epsilon}{3}\) by large enough \(n\), and the second term can be made less than \(\frac{\epsilon}{3}\) by Theorem 2.7. So the whole term is less than \(\epsilon\), so \(||P_{n}x_{m,n_{k}}-x_{m,n_{k}}||_{\infty}\) converges to zero. Observe that this fact holds for any converging subsequence \(x_{m,n_{k}}\). Now the sequence \(||P_{n}x_{m,n}-x_{m,n}||_{\infty}\) is bounded and therefore contains a converging subsequence \(Q_{n_{k}}\) in \(\mathbb{R}\). Assume \(Q_{n_{k}}\) converges to some value \(y\in\mathbb{R}\) other than zero. Since \(Q_{n_{k}}=||P_{n_{k}}x_{m,n_{k}}-x_{m,n_{k}}||_{\infty}\), we can find a subsequence \(x_{m,n_{k_{j}}}\) that converges, with \(Q_{n_{k_{j}}}\) converging to zero by the above result, a contradiction; so \(||P_{n}x_{m,n}-x_{m,n}||_{\infty}\) converges to zero. Since \(||P_{n}T-T||\leq||P_{n}x_{m,n}-x_{m,n}||_{\infty}\), we have \(\lim_{n}||P_{n}T-T||=0\). Now consider \(T\) to be any bounded operator which belongs to the closure of the finite rank operators. Consider \(||P_{n}T-T||\), which is equal to \(||P_{n}T+P_{n}T_{m}-P_{n}T_{m}-T||\) for finite rank operators \(T_{m}\) converging to \(T\). 
By the triangle inequality, we have \[||P_{n}T-T||\leq||P_{n}T-P_{n}T_{m}||+||P_{n}T_{m}-T||,\] \[\leq||P_{n}T-P_{n}T_{m}||+||P_{n}T_{m}+T_{m}-T_{m}-T||\] \[\leq||P_{n}||||T_{m}-T||+||P_{n}T_{m}-T_{m}||+||T_{m}-T||\text{ by the triangle inequality.}\] Now \(||P_{n}||\leq||P_{n}||_{2}=1\) since it is an orthogonal projection. We can choose large enough \(n,m\) such that \(||T_{m}-T||<\frac{\epsilon}{3}\) and \(||P_{n}T_{m}-T_{m}||<\frac{\epsilon}{3}\), since \(P_{n}T_{m}\) converges to \(T_{m}\); so the above quantity is less than any positive number \(\epsilon\) for large enough \(n\), and \(\lim_{n}P_{n}T=T\) for all \(T\in K(X)\). Applying Theorem 2.8, \(P_{n}\) converges strongly to the identity in \(l^{\infty}\). Now we can prove the Riemann hypothesis. _proof of Theorem 1.1_: Consider \(\lim_{n}||x_{n}-\gamma||_{H}^{2}=\lim_{n}\sum_{i=1}^{\infty}\frac{|Aa_{n}-\gamma|_{i}^{2}}{i(i+1)}\); we have \[\lim_{n}||x_{n}-\gamma||_{H}^{2}=\lim_{n}\sum_{i=1}^{\infty}\frac{|Aa_{n}-\gamma|_{i}^{2}}{i(i+1)}\] \[\lim_{n}||x_{n}-\gamma||_{H}^{2}=\lim_{n}\bigg{(}\sum_{i\in J}\frac{|Aa_{n}+P_{n}\gamma-P_{n}\gamma-\gamma|_{i}^{2}}{i(i+1)}+\sum_{i=1}^{\infty}\frac{1}{iL_{n}(iL_{n}+1)}\bigg{)}\text{ where }j\in J\text{ if }R_{L_{n}}(j)\neq 0\] \[\lim_{n}||x_{n}-\gamma||_{H}^{2}\leq\lim_{n}\bigg{(}\sum_{i\in J}\frac{(|Aa_{n}-P_{n}\gamma|_{i}+|P_{n}\gamma-\gamma|_{i})^{2}}{i(i+1)}\bigg{)}+\lim_{n}\sum_{i=1}^{\infty}\frac{1}{iL_{n}(iL_{n}+1)}\text{ by the triangle inequality.}\] Now by Theorem 2.5, given \(\epsilon>0\), there always exists \(a_{n}\) such that \(|Aa_{n}-P_{n}\gamma|_{i}<\epsilon\) for all \(n\) and all \(i\), and by Theorem 2.9, \(P_{n}\) converges strongly to \(I\), so the first term goes to zero. For the second term, since \(L_{n}\geq n\), we have \(\frac{1}{iL_{n}(iL_{n}+1)}\leq\frac{1}{in(in+1)}\), which converges to zero; applying the dominated convergence theorem gives zero as well. So the whole term converges to zero. By Theorem 2.1, the Riemann hypothesis is true. ## Acknowledgement I thank Dr. Billy Leung, Mr. 
Pak Tik Fong, Mr. Dave Yeung, and Dr. Kenny Yip for their insightful feedback. I thank Martin Argerami for providing insight into the proof. I also thank Prof. Michel Balazard for pointing out the mistake in the original version. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
2307.07103
A Hamiltonian Approach to Barrier Option Pricing Under Vasicek Model
The Hamiltonian approach in quantum theory provides a new way of thinking about option pricing with stochastic interest rates. For barrier options, the option price evolution is similar to the infinitely high barrier scattering problem in quantum mechanics; for double barrier options, the option price evolution is analogous to a particle moving in an infinite square potential well. Using the Hamiltonian approach, the expressions of pricing kernels and option prices under the Vasicek stochastic interest rate model can be derived. Numerical results of option prices as functions of underlying prices are also shown.
Qi Chen, Hong-tao Wang, Chao Guo
2023-07-14T00:25:52Z
http://arxiv.org/abs/2307.07103v2
# Path Integral Method for Barrier Option Pricing Under Vasicek Model ###### Abstract The path integral method in quantum theory provides a new way of thinking about time-dependent option pricing. For barrier options, the option price evolution is similar to the infinitely high barrier scattering problem in quantum mechanics; for double barrier options, the option price evolution is analogous to a particle moving in an infinite square potential well. Using the path integral method, the expressions of the pricing kernel and option price under the Vasicek stochastic interest rate model can be derived. Numerical results of option prices as functions of underlying prices are also shown. Introduction In 1973, Black and Scholes derived a closed-form solution for European options [1]. Since then the field of derivative pricing has grown greatly [2; 3]. In recent years, path-dependent options have become increasingly popular in financial markets, and barrier options are considered to be the simplest types of path-dependent options. Owing to the confinement of the barrier, barrier options give investors more flexibility in transactions. Snyder was the first to discuss down-and-out barrier options [4]. Merton derived a closed-form solution for the down-and-out barrier call option [5]. Chiara et al. priced barrier options by a Mellin transform method [6]. Karatzas and Wang gave closed-form expressions for the prices of American up-and-out put options [7]. Baaquie et al. discussed barrier options and double barrier options by the path integral method, and derived the corresponding analytical expressions [8]. Kunitomo and Ikeda studied barrier options with floating barriers [9]. Chen et al. gave integral expressions for floating barrier option prices by path integral methods [10]. All the papers mentioned above assumed that model parameters such as the interest rate and volatility are constants. 
However, as we know, interest rates and volatility change with time, and the Black-Scholes model should be revised to be closer to reality. Lo et al. gave a closed-form expression for the price of barrier options with time-dependent parameters [11]. Lemmens et al. discussed stochastic interest rates and volatility by means of the path integral method [12]. More discussions on options with time-dependent parameters can be found in [13; 14; 15]. In this paper, we assume that the interest rate \(r\) obeys the Vasicek model [16]. Using the \(\Delta\)-arbitrage strategy, the partial differential equation (PDE) of the option price \(V(S,r,t)\) can be derived, which can be rewritten as a Schrodinger-type equation. Using the path integral method, the pricing kernel can be written out from the Hamiltonian of the option system [8; 10]. The pricing kernel represents all the information for the option price evolution, and corresponds to the propagator in quantum theory. If we can derive the expression of the pricing kernel, the expression of the option price can also be derived. Our work is organized as follows. In Section 2, we review the derivation of the option price PDE under the Vasicek model. In Section 3, we derive the integral expressions for the barrier option price by the path integral method. In Section 4, we derive the integral expressions for the double barrier option price by the path integral method. Numerical results for the option price as a function of the underlying asset price are discussed in Section 5. We summarize our main results in Section 6. 
## II Option pricing under the Vasicek model Assume that the stock price \(S_{t}\) obeys the geometric Brownian motion \[\frac{\mathrm{d}S_{t}}{S_{t}}=r_{t}\mathrm{d}t+\sigma_{1}\mathrm{d}W_{t}^{1} \tag{1}\] where \(\sigma_{1}\) is the volatility of \(S_{t}\), and \(r_{t}\) is the stochastic interest rate, which obeys the Vasicek model [16] \[\mathrm{d}r_{t}=a(\theta-r_{t})\mathrm{d}t+\sigma_{2}\mathrm{d}W_{t}^{2} \tag{2}\] where \(\sigma_{2}\) is the volatility of \(r_{t}\), and \(a\) and \(\theta\) indicate the regression rate and long-term mean, respectively. \(W_{t}^{1}\) and \(W_{t}^{2}\) are standard Brownian motions, with the covariance \[\mathrm{Cov}(\mathrm{d}W_{t}^{1},\mathrm{d}W_{t}^{2})=\rho\mathrm{d}t,\ \ |\rho|\leq 1 \tag{3}\] Using the \(\Delta\)-arbitrage strategy, the option price \(V(S,r,t)\) satisfies the following partial differential equation (PDE) \[\frac{\partial V}{\partial t}+\frac{1}{2}\sigma_{1}^{2}S^{2}\frac{\partial^{2}V}{\partial S^{2}}+\sigma_{1}\sigma_{2}\rho S\frac{\partial^{2}V}{\partial S\partial r}+\frac{1}{2}\sigma_{2}^{2}\frac{\partial^{2}V}{\partial r^{2}}+rS\frac{\partial V}{\partial S}+a(\theta-r)\frac{\partial V}{\partial r}-rV=0 \tag{4}\] For a call option, the final condition at \(t=\tau\) is \[V(S,r,\tau)=(S-K)^{+} \tag{5}\] where \(\tau\) is the maturity and \(K\) is the exercise price. By means of the variable substitutions \[y=\frac{S}{P(r,t;\tau)},\ \ \ \hat{V}(y,t)=\frac{V(S,r,t)}{P(r,t;\tau)} \tag{6}\] the PDE (4) can be simplified into \[\frac{\partial\hat{V}}{\partial t}+\frac{1}{2}\hat{\sigma}^{2}(t)y^{2}\frac{\partial^{2}\hat{V}}{\partial y^{2}}=0 \tag{7}\] where \(P(r,t;\tau)\) is the zero-coupon bond price. 
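As an illustration of the dynamics (1)-(3), the following sketch simulates one path of \((S_{t},r_{t})\) with an Euler-Maruyama step for \(r_{t}\), a log-Euler step for \(S_{t}\), and correlated Gaussian increments; all parameter values are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the text)
S, r = 100.0, 0.03
a, theta = 0.5, 0.04            # regression rate and long-term mean
sigma1, sigma2, rho = 0.2, 0.01, -0.3
T, N = 1.0, 252
dt = T / N

for _ in range(N):
    z1 = rng.standard_normal()
    # build z2 so that Corr(z1, z2) = rho, matching Cov(dW1, dW2) = rho dt
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    S *= np.exp((r - 0.5 * sigma1**2) * dt + sigma1 * np.sqrt(dt) * z1)
    r += a * (theta - r) * dt + sigma2 * np.sqrt(dt) * z2

print(S, r)  # one terminal sample of (S_tau, r_tau)
```

The log-Euler step keeps \(S_{t}>0\), while the mean-reverting drift pulls \(r_{t}\) toward \(\theta\).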
For the Vasicek model, \(P(r,t;\tau)\) obeys the following PDE and final condition \[\begin{split}&\frac{\partial P}{\partial t}+\frac{\sigma_{2}^{2}}{2}\frac{\partial^{2}P}{\partial r^{2}}+a(\theta-r)\frac{\partial P}{\partial r}-rP=0\\ & P(r,\tau)=1\end{split} \tag{8}\] Equation (8) has the following explicit solution \[\begin{split} P(r,t;\tau)&=A(t)e^{-rB(t)}\\ A(t)&=\exp\biggl{[}\frac{1}{a^{2}}[B^{2}(t)-(\tau-t)]\biggl{(}a^{2}\theta-\frac{\sigma_{2}^{2}}{2}-\frac{\sigma_{2}^{2}}{4a}B^{2}(t)\biggr{)}\biggr{]}\\ B(t)&=\frac{1}{a}(1-e^{-a(\tau-t)})\end{split} \tag{9}\] and \[\hat{\sigma}(t)=\sqrt{\sigma_{1}^{2}+2\rho\sigma_{1}\sigma_{2}B(t)+\sigma_{2}^{2}B^{2}(t)} \tag{10}\] The final condition for (7) is \[\hat{V}(y,\tau)=\frac{V(S,r,\tau)}{P(r,\tau;\tau)}=(y-K)^{+} \tag{11}\] Considering the variable substitution \[y=e^{x},\quad-\infty<x<+\infty \tag{12}\] (7) can be changed into a Schrodinger-type equation \[\frac{\partial\hat{V}}{\partial t}=H\hat{V} \tag{13}\] with the Hamiltonian \(H\) given by \[H=-\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial^{2}}{\partial x^{2}}+\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial}{\partial x} \tag{14}\] which embodies the dynamic characteristics of the option system. 
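To make (9) and (10) concrete, here is a small sketch evaluating \(B(t)\), \(A(t)\) as printed above, the bond price \(P(r,t;\tau)\), and the effective volatility \(\hat{\sigma}(t)\); the parameter values are illustrative assumptions.

```python
import numpy as np

def B_func(t, tau, a):
    # B(t) from (9)
    return (1.0 - np.exp(-a * (tau - t))) / a

def A_func(t, tau, a, theta, sigma):
    # A(t) from (9), as printed in the text; sigma is the rate volatility
    B = B_func(t, tau, a)
    return np.exp((B**2 - (tau - t)) / a**2
                  * (a**2 * theta - sigma**2 / 2 - sigma**2 * B**2 / (4 * a)))

def bond_price(r, t, tau, a, theta, sigma):
    # zero-coupon bond P(r, t; tau) = A(t) exp(-r B(t))
    return A_func(t, tau, a, theta, sigma) * np.exp(-r * B_func(t, tau, a))

def sigma_hat(t, tau, a, sigma1, sigma2, rho):
    # effective volatility (10)
    B = B_func(t, tau, a)
    return np.sqrt(sigma1**2 + 2 * rho * sigma1 * sigma2 * B + sigma2**2 * B**2)

tau = 1.0
print(bond_price(0.03, tau, tau, 0.5, 0.04, 0.01))  # 1.0: P(r, tau; tau) = 1
print(bond_price(0.03, 0.0, tau, 0.5, 0.04, 0.01))  # below 1 at t = 0
print(sigma_hat(tau, tau, 0.5, 0.2, 0.01, -0.3))    # -> sigma1, since B(tau) = 0
```

At maturity \(B(\tau)=0\), so the bond price reduces to \(1\) and \(\hat{\sigma}(\tau)=\sigma_{1}\), which is a quick sanity check on the formulas.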
## III Path Integral Method for Barrier Option Pricing The barrier option Hamiltonian is \[H=-\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial^{2}}{\partial x^{2}}+\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial}{\partial x}=e^{\frac{1}{2}x}H_{\text{eff}}e^{-\frac{1}{2}x}+U(x) \tag{15}\] where \[H_{\text{eff}}=-\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial^{2}}{\partial x^{2}}+\frac{1}{8}\hat{\sigma}^{2}(t) \tag{16}\] and the potential \(U(x)\) is \[U(x)=\begin{cases}0,&x<B,\\ \infty,&x\geq B.\end{cases} \tag{17}\] The Schrodinger equation for \(H_{\text{eff}}\) is \[-\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{1}{8}\hat{\sigma}^{2}(t)\phi=E\phi,\quad x<B \tag{18}\] where \(\phi\) is the scattering state wave function of the option, which describes the option price at time \(t\), and \(E\) corresponds to the "option energy". The solution of (18) is \[\phi(x,t)=e^{ip(t)(x-B)}-e^{-ip(t)(x-B)},\quad x<B \tag{19}\] where \(B\) is the barrier level, and \[p^{2}(t)=\frac{2E}{\hat{\sigma}^{2}(t)}-\frac{1}{4} \tag{20}\] For \(x\geq B\), the wave function is \(\phi(x,t)=0\). Since \(\hat{\sigma}(t)\) changes with time, we discretize \(\tau\) so that there are \(N\) steps to maturity, with each time step \(\epsilon=\tau/N\); \(\hat{\sigma}(t)\) tends to a constant during each small enough time step. The pricing kernel can be written as \[\begin{split} p_{\text{BO}}(x,x^{\prime};\tau)&=\langle x|e^{-\tau H}|x^{\prime}\rangle\\ &=\lim_{\epsilon\to 0}\int_{-\infty}^{B}\mathrm{d}x_{1}\int_{-\infty}^{B}\mathrm{d}x_{2}...\int_{-\infty}^{B}\mathrm{d}x_{N-1}\left\langle x|e^{-\epsilon H}|x_{1}\right\rangle\left\langle x_{1}|e^{-\epsilon H}|x_{2}\right\rangle...\left\langle x_{N-1}|e^{-\epsilon H}|x^{\prime}\right\rangle\end{split} \tag{21}\] where the completeness condition has been used. 
The \(j\)th matrix element is \[\begin{split}\langle x_{j}|e^{-\epsilon H}|x_{j+1}\rangle=&e^{\frac{1}{2}(x_{j}-x_{j+1})}e^{-\epsilon\gamma_{j}}\int_{0}^{+\infty}\frac{\mathrm{d}p_{j}}{2\pi}e^{-\frac{1}{2}\epsilon\sigma_{j}^{2}p_{j}^{2}}\big{[}e^{ip_{j}(x_{j}-B)}-e^{-ip_{j}(x_{j}-B)}\big{]}\big{[}e^{-ip_{j}(x_{j+1}-B)}-e^{ip_{j}(x_{j+1}-B)}\big{]}\\ =&e^{\frac{1}{2}(x_{j}-x_{j+1})}e^{-\epsilon\gamma_{j}}\int_{-\infty}^{+\infty}\frac{\mathrm{d}p_{j}}{2\pi}e^{-\frac{1}{2}\epsilon\sigma_{j}^{2}p_{j}^{2}}\big{[}e^{ip_{j}(x_{j}-x_{j+1})}-e^{ip_{j}(x_{j}+x_{j+1}-2B)}\big{]}\end{split} \tag{22}\] where \(\gamma_{j}=\frac{1}{8}\sigma_{j}^{2}\) comes from the constant term of \(H_{\text{eff}}\), and \(\sigma_{j}\) denotes \(\hat{\sigma}\) evaluated on the \(j\)th time slice. Using the Gaussian integral formula, the integral in (22) can be calculated as \[\begin{split}&\int_{-\infty}^{+\infty}\frac{\mathrm{d}p_{j}}{2\pi}e^{-\frac{1}{2}\epsilon\sigma_{j}^{2}p_{j}^{2}}\big{[}e^{ip_{j}(x_{j}-x_{j+1})}-e^{ip_{j}(x_{j}+x_{j+1}-2B)}\big{]}\\ =&\int_{-\infty}^{+\infty}\frac{\mathrm{d}p_{j}}{2\pi}e^{-\frac{1}{2}\epsilon\sigma_{j}^{2}\big{[}p_{j}-\frac{i(x_{j}-x_{j+1})}{\epsilon\sigma_{j}^{2}}\big{]}^{2}}e^{-\frac{(x_{j}-x_{j+1})^{2}}{2\epsilon\sigma_{j}^{2}}}-\int_{-\infty}^{+\infty}\frac{\mathrm{d}p_{j}}{2\pi}e^{-\frac{1}{2}\epsilon\sigma_{j}^{2}\big{[}p_{j}-\frac{i(x_{j}+x_{j+1}-2B)}{\epsilon\sigma_{j}^{2}}\big{]}^{2}}e^{-\frac{(x_{j}+x_{j+1}-2B)^{2}}{2\epsilon\sigma_{j}^{2}}}\\ =&\frac{1}{\sqrt{2\pi\epsilon\sigma_{j}^{2}}}\bigg{[}e^{-\frac{(x_{j}-x_{j+1})^{2}}{2\epsilon\sigma_{j}^{2}}}-e^{-\frac{(x_{j}+x_{j+1}-2B)^{2}}{2\epsilon\sigma_{j}^{2}}}\bigg{]}\end{split} \tag{23}\] Similarly, \[\begin{split}\langle x_{j-1}|e^{-2\epsilon H}|x_{j+1}\rangle=&\int_{-\infty}^{B}\mathrm{d}x_{j}\,\langle x_{j-1}|e^{-\epsilon H}|x_{j}\rangle\,\langle x_{j}|e^{-\epsilon H}|x_{j+1}\rangle\\ =&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{\frac{1}{2}(x_{j-1}-x_{j+1})}e^{-\epsilon(\gamma_{j-1}+\gamma_{j})}\frac{1}{\sqrt{2\pi\epsilon\sigma_{j-1}^{2}}}\frac{1}{\sqrt{2\pi\epsilon\sigma_{j}^{2}}}\times\\ &\bigg{[}e^{-\frac{1}{2\epsilon\sigma_{j-1}^{2}}(x_{j-1}-x_{j})^{2}}-e^{-\frac{1}{2\epsilon\sigma_{j-1}^{2}}(x_{j-1}+x_{j}-2B)^{2}}\bigg{]}\bigg{[}e^{-\frac{1}{2\epsilon\sigma_{j}^{2}}(x_{j}-x_{j+1})^{2}}-e^{-\frac{1}{2\epsilon\sigma_{j}^{2}}(x_{j}+x_{j+1}-2B)^{2}}\bigg{]}\end{split} \tag{24}\] Now we calculate the four integrals in (24): \[\begin{split}&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{1}{2\epsilon\sigma_{j-1}^{2}}(x_{j-1}-x_{j})^{2}-\frac{1}{2\epsilon\sigma_{j}^{2}}(x_{j}-x_{j+1})^{2}}\\ =&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{\sigma_{j-1}^{2}+\sigma_{j}^{2}}{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}\big{[}x_{j}-\frac{x_{j-1}\sigma_{j}^{2}+x_{j+1}\sigma_{j-1}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}\big{]}^{2}}e^{-\frac{(x_{j-1}-x_{j+1})^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\\ =&\bigg{(}\frac{1}{2}\sqrt{\frac{2\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}+\sqrt{\frac{\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}\;\mathrm{erf}\bigg{[}\frac{B\sigma_{j}^{2}+B\sigma_{j-1}^{2}-x_{j-1}\sigma_{j}^{2}-x_{j+1}\sigma_{j-1}^{2}}{\sqrt{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\bigg{]}\bigg{)}e^{-\frac{(x_{j-1}-x_{j+1})^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\end{split} \tag{25}\] \[\begin{split}&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{1}{2\epsilon\sigma_{j-1}^{2}}(x_{j-1}+x_{j}-2B)^{2}-\frac{1}{2\epsilon\sigma_{j}^{2}}(x_{j}+x_{j+1}-2B)^{2}}\\ =&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{\sigma_{j-1}^{2}+\sigma_{j}^{2}}{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}\big{[}x_{j}+\frac{(x_{j-1}-2B)\sigma_{j}^{2}+(x_{j+1}-2B)\sigma_{j-1}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}\big{]}^{2}}e^{-\frac{(x_{j-1}-x_{j+1})^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\\ =&\bigg{(}\frac{1}{2}\sqrt{\frac{2\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}+\sqrt{\frac{\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}\;\mathrm{erf}\bigg{[}\frac{-B\sigma_{j}^{2}-B\sigma_{j-1}^{2}+x_{j-1}\sigma_{j}^{2}+x_{j+1}\sigma_{j-1}^{2}}{\sqrt{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\bigg{]}\bigg{)}e^{-\frac{(x_{j-1}-x_{j+1})^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\end{split} \tag{26}\] \[\begin{split}&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{1}{2\epsilon\sigma_{j-1}^{2}}(x_{j-1}-x_{j})^{2}-\frac{1}{2\epsilon\sigma_{j}^{2}}(x_{j}+x_{j+1}-2B)^{2}}\\ =&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{\sigma_{j-1}^{2}+\sigma_{j}^{2}}{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}\big{[}x_{j}-\frac{x_{j-1}\sigma_{j}^{2}-(x_{j+1}-2B)\sigma_{j-1}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}\big{]}^{2}}e^{-\frac{(x_{j-1}+x_{j+1}-2B)^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\\ =&\bigg{(}\frac{1}{2}\sqrt{\frac{2\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}+\sqrt{\frac{\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}\;\mathrm{erf}\bigg{[}\frac{B\sigma_{j}^{2}-B\sigma_{j-1}^{2}-x_{j-1}\sigma_{j}^{2}+x_{j+1}\sigma_{j-1}^{2}}{\sqrt{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\bigg{]}\bigg{)}e^{-\frac{(x_{j-1}+x_{j+1}-2B)^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\end{split} \tag{27}\] \[\begin{split}&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{1}{2\epsilon\sigma_{j-1}^{2}}(x_{j-1}+x_{j}-2B)^{2}-\frac{1}{2\epsilon\sigma_{j}^{2}}(x_{j}-x_{j+1})^{2}}\\ =&\int_{-\infty}^{B}\mathrm{d}x_{j}e^{-\frac{\sigma_{j-1}^{2}+\sigma_{j}^{2}}{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}\big{[}x_{j}+\frac{(x_{j-1}-2B)\sigma_{j}^{2}-x_{j+1}\sigma_{j-1}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}\big{]}^{2}}e^{-\frac{(x_{j-1}+x_{j+1}-2B)^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\\ =&\bigg{(}\frac{1}{2}\sqrt{\frac{2\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}+\sqrt{\frac{\pi\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}}{\sigma_{j-1}^{2}+\sigma_{j}^{2}}}\;\mathrm{erf}\bigg{[}\frac{-B\sigma_{j}^{2}+B\sigma_{j-1}^{2}+x_{j-1}\sigma_{j}^{2}-x_{j+1}\sigma_{j-1}^{2}}{\sqrt{2\epsilon\sigma_{j-1}^{2}\sigma_{j}^{2}(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\bigg{]}\bigg{)}e^{-\frac{(x_{j-1}+x_{j+1}-2B)^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\end{split} \tag{28}\] where \[\mathrm{erf}(x)=\int_{0}^{x}e^{-\eta^{2}}\mathrm{d}\eta \tag{29}\] is the error function. 
Since the error function is odd, (24) simplifies to \[\langle x_{j-1}|e^{-2\epsilon H}|x_{j+1}\rangle=\frac{e^{\frac{1}{2}(x_{j-1}-x_{j+1})}e^{-\frac{1}{8}\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}{\sqrt{2\pi\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\left[e^{-\frac{(x_{j-1}-x_{j+1})^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}-e^{-\frac{(x_{j-1}+x_{j+1}-2B)^{2}}{2\epsilon(\sigma_{j-1}^{2}+\sigma_{j}^{2})}}\right] \tag{30}\] Repeating the above calculation, the pricing kernel is \[\langle x|e^{-\tau H}|x^{\prime}\rangle=\frac{e^{\frac{1}{2}(x-x^{\prime})}e^{-\frac{1}{8}\epsilon\sum_{j=1}^{N}\sigma_{j}^{2}}}{\sqrt{2\pi\epsilon\sum_{j=1}^{N}\sigma_{j}^{2}}}\bigg{[}e^{-\frac{(x-x^{\prime})^{2}}{2\epsilon\sum_{j=1}^{N}\sigma_{j}^{2}}}-e^{-\frac{(x+x^{\prime}-2B)^{2}}{2\epsilon\sum_{j=1}^{N}\sigma_{j}^{2}}}\bigg{]} \tag{31}\] where \[\begin{split}\epsilon\sum_{j=1}^{N}\sigma_{j}^{2}&=\epsilon\sum_{j=1}^{N}[\sigma_{1}^{2}+2\rho\sigma_{1}\sigma_{2}B(t)+\sigma_{2}^{2}B(t)^{2}]\\ &=(\sigma_{1}^{2}+\frac{2\rho}{a}\sigma_{1}\sigma_{2}+\frac{\sigma_{2}^{2}}{a^{2}})\tau-\frac{2\sigma_{2}}{a^{2}}(\rho\sigma_{1}+\frac{\sigma_{2}}{a})(1-e^{-a\tau})+\frac{\sigma_{2}^{2}}{2a^{3}}(1-e^{-2a\tau})\end{split} \tag{32}\] and the option price can be written as \[V(x,r;\tau)=P(r;t,\tau)\int_{\ln K}^{B}\left\langle x|e^{-\tau H}|x^{\prime}\right\rangle(e^{x^{\prime}}-K)\,\mathrm{d}x^{\prime} \tag{33}\]

## IV Path integral method for double barrier option pricing

The double barrier option Hamiltonian is \[\begin{split}H&=e^{\frac{1}{2}x}H_{\rm eff}e^{-\frac{1}{2}x}+U(x)\\ &=e^{\frac{1}{2}x}\bigg{(}-\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial^{2}}{\partial x^{2}}+\frac{1}{8}\hat{\sigma}^{2}(t)\bigg{)}e^{-\frac{1}{2}x}+U(x)\end{split} \tag{34}\] where the potential \(U(x)\) is \[U(x)=\begin{cases}\infty,&x\leq B_{1},\\ \quad 0,&B_{1}<x<B_{2},\\ \infty,&x\geq B_{2}.\end{cases} \tag{35}\] Here \(B_{1}\) and \(B_{2}\) are the lower and upper barrier levels, respectively.
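As a numerical sanity check of the accumulated variance (32), one can compare the closed form with a direct Riemann sum of the discretized quantity \(\epsilon\sum_{j}\sigma_{j}^{2}\). The sketch below assumes that \(B(t)\) is the affine bond factor \(B(u)=(1-e^{-au})/a\), with \(u\) the time remaining to maturity (this form, and all parameter values, are illustrative assumptions):

```python
import numpy as np

# Illustrative parameters (assumed): asset vol, short-rate vol,
# correlation, mean-reversion speed, time to maturity.
sigma1, sigma2, rho, a, tau = 0.3, 0.02, -0.5, 0.1, 1.0

# Direct Riemann (midpoint) sum of eps * sum_j sigma_j^2, with
# sigma_j^2 = sigma1^2 + 2*rho*sigma1*sigma2*B + sigma2^2*B^2 and
# B(u) = (1 - exp(-a*u))/a assumed as the affine term-structure factor.
N = 200_000
u = (np.arange(N) + 0.5) * tau / N
B = (1.0 - np.exp(-a * u)) / a
direct = np.sum(sigma1**2 + 2*rho*sigma1*sigma2*B + sigma2**2*B**2) * tau / N

# Closed form (32).
closed = ((sigma1**2 + 2*rho*sigma1*sigma2/a + sigma2**2/a**2) * tau
          - (2*sigma2/a**2) * (rho*sigma1 + sigma2/a) * (1 - np.exp(-a*tau))
          + sigma2**2/(2*a**3) * (1 - np.exp(-2*a*tau)))

print(direct, closed)  # the two values agree to high accuracy
```

The agreement of the two values confirms the time integration carried out in (32).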
The Schrodinger equation for \(H_{\rm eff}\) is \[-\frac{1}{2}\hat{\sigma}^{2}(t)\frac{\partial^{2}\phi}{\partial x^{2}}+\frac{1}{8}\hat{\sigma}^{2}(t)\phi=E\phi,\quad B_{1}<x<B_{2} \tag{36}\] with the eigenstates \[\phi_{n}(x)=\begin{cases}\sqrt{\frac{2}{B_{2}-B_{1}}}\sin[p_{n}(x-B_{1})],&B_{1}<x<B_{2},\\ \quad 0,&x\leq B_{1},\ x\geq B_{2}.\end{cases} \tag{37}\] where \[p_{n}=\frac{n\pi}{B_{2}-B_{1}},\ \ E_{n}=\frac{1}{2}\hat{\sigma}^{2}(t)p_{n}^{2},\ \ n=1,2,3,... \tag{38}\] Since \(\hat{\sigma}(t)\) changes with time, we again discretize \(\tau\) into \(N\) steps, each of size \(\epsilon=\tau/N\). The \(j\)th matrix element is \[\begin{split}\bra{x_{j}}e^{-\epsilon H}|x_{j+1}\rangle&=e^{\frac{1}{2}(x_{j}-x_{j+1})}e^{-\frac{1}{8}\epsilon\sigma_{j}^{2}}\sum_{n=1}^{\infty}\bra{x_{j}}e^{-\frac{1}{2}\epsilon\hat{\sigma}^{2}(t)\hat{p}^{2}}|n\rangle\bra{n}x_{j+1}\rangle\\ &=e^{\frac{1}{2}(x_{j}-x_{j+1})}e^{-\frac{1}{8}\epsilon\sigma_{j}^{2}}\sum_{n=1}^{\infty}e^{-\frac{1}{2}\epsilon p_{n}^{2}\sigma_{j}^{2}}\phi_{n}(x_{j})\phi_{n}(x_{j+1})\end{split} \tag{39}\] where \(\hat{p}=-i\frac{\partial}{\partial x}\) is the momentum operator and \[\bra{x_{j}}n\rangle=\phi_{n}(x_{j})=\sqrt{\frac{2}{B_{2}-B_{1}}}\sin p_{n}(x_{j}-B_{1}) \tag{40}\] Similarly, \[\begin{split}\bra{x_{j}}e^{-2\epsilon H}|x_{j+2}\rangle&=\int_{B_{1}}^{B_{2}}\mathrm{d}x_{j+1}\bra{x_{j}}e^{-\epsilon H}|x_{j+1}\rangle\bra{x_{j+1}}e^{-\epsilon H}|x_{j+2}\rangle\\ &=e^{\frac{1}{2}(x_{j}-x_{j+2})}e^{-\frac{1}{8}\epsilon(\sigma_{j}^{2}+\sigma_{j+1}^{2})}\sum_{n=1}^{\infty}e^{-\frac{1}{2}\epsilon p_{n}^{2}(\sigma_{j}^{2}+\sigma_{j+1}^{2})}\phi_{n}(x_{j})\phi_{n}(x_{j+2})\end{split} \tag{41}\] where we used the orthonormalization condition \[\int_{B_{1}}^{B_{2}}\mathrm{d}x\ \phi_{n}(x)\phi_{n^{\prime}}(x)=\delta_{nn^{\prime}}=\begin{cases}0,&n\neq n^{\prime},\\ 1,&n=n^{\prime}.\end{cases} \tag{42}\] Repeating the above calculation, the pricing kernel for the double barrier option is \[\bra{x}e^{-\tau
H}|x^{\prime}\rangle=e^{\frac{1}{2}(x-x^{\prime})}e^{-\frac{1}{8}\epsilon\sum_{j=1}^{N}\sigma_{j}^{2}}\sum_{n=1}^{\infty}e^{-\frac{1}{2}\epsilon p_{n}^{2}\sum_{j=1}^{N}\sigma_{j}^{2}}\phi_{n}(x)\phi_{n}(x^{\prime}) \tag{43}\] and the option price is \[V(x,r;\tau)=P(r;t,\tau)\int_{\ln K}^{B_{2}}\bra{x}e^{-\tau H}|x^{\prime}\rangle\left(e^{x^{\prime}}-K\right)\mathrm{d}x^{\prime} \tag{44}\]

## V Numerical results

In Fig. 1-Fig. 3, we show the up-and-out barrier call price (left) and the up-and-out double barrier call price (right) as functions of the underlying price. Without loss of generality, we set \(\sigma_{1}=\sigma_{2}=0.3\). In Fig. 1, solid lines of different colors represent price curves for different regression rates \(a\); the option prices increase as \(a\) increases. In Fig. 2, solid lines of different colors represent price curves for different long-term means \(\theta\); the option prices decrease as \(\theta\) increases. In Fig. 3, dashed lines represent price curves for \(\rho<0\), dash-dotted lines represent price curves for \(\rho>0\), and the solid line corresponds to \(\rho=0\).

## VI Conclusion

The evolution of a barrier option price in time can be viewed as a particle moving in a special potential. For stochastic interest rates, the maturity is split into \(N\) steps, and the matrix element over each small step is calculated by a Gaussian integral. After \(N\) integrations, the pricing kernel can be written in series form. The path integral is thus an effective method that maps the evolution of an option price onto the motion of a particle in a potential. The pricing of other barrier options, such as step options, can be studied by defining appropriate potentials.
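The double barrier pricing formulas (43)-(44) can be sketched numerically. The minimal implementation below truncates the eigenfunction series and integrates the payoff on a grid; it assumes, for simplicity, a constant volatility (so \(\epsilon\sum_{j}\sigma_{j}^{2}=\sigma^{2}\tau\)) and a constant short rate (so the bond factor \(P(r;t,\tau)\) reduces to \(e^{-r\tau}\)); the parameter values are illustrative:

```python
import numpy as np

def double_barrier_call(S0, K, B1, B2, sigma, r, tau, n_terms=200, n_grid=4000):
    """Double knockout call via the eigenfunction expansion (43)-(44).

    Simplifying assumptions for this sketch: constant volatility
    (total variance sigma^2 * tau) and constant short rate
    (discount factor exp(-r * tau))."""
    x, a, b = np.log(S0), np.log(B1), np.log(B2)
    v = sigma**2 * tau                        # eps * sum_j sigma_j^2
    xp = np.linspace(np.log(K), b, n_grid)    # integration grid for x'
    n = np.arange(1, n_terms + 1)[:, None]
    p = n * np.pi / (b - a)                   # momenta p_n of the infinite well
    phi_x = np.sqrt(2.0 / (b - a)) * np.sin(p * (x - a))    # phi_n(x)
    phi_xp = np.sqrt(2.0 / (b - a)) * np.sin(p * (xp - a))  # phi_n(x')
    series = np.sum(np.exp(-0.5 * v * p**2) * phi_x * phi_xp, axis=0)
    kernel = np.exp(0.5 * (x - xp) - v / 8.0) * series      # eq. (43)
    integrand = kernel * (np.exp(xp) - K)                   # payoff e^x' - K
    # trapezoid rule for the integral in (44)
    price = np.exp(-r * tau) * np.sum((integrand[1:] + integrand[:-1]) / 2.0) * (xp[1] - xp[0])
    return price

wide = double_barrier_call(100.0, 100.0, 50.0, 150.0, 0.3, 0.05, 1.0)
narrow = double_barrier_call(100.0, 100.0, 50.0, 120.0, 0.3, 0.05, 1.0)
print(wide, narrow)  # widening the corridor increases the option value
```

The monotonicity in the corridor width is the expected behavior of a knockout option: more surviving paths and a larger payoff region both raise the price.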
2306.01883
Survey on generalizations of Hopficity of modules
The main aim of this paper is the Hopficity of module classes. The study of modules (rings) through properties of their endomorphisms is a classical research subject. In 1986, Hiremath \cite{Hi} introduced the concepts of Hopfian modules and rings; the notion of Hopfian module is defined, as a generalization of modules of finite length, as the modules whose surjective endomorphisms are isomorphisms. Later, the dual concepts of co-Hopfian modules and rings were given. Hopfian and co-Hopfian modules (rings) have been investigated by several authors, for example Hiremath \cite{Hi}, Varadarajan \cite{Va}, \cite{Va1}, Xue \cite{Xu}, Haghany \cite{Hag}, Liu \cite{Li}, and Yang and Liu \cite{Yl}. In 2001, Haghany and Vedadi \cite{Ha}, and in 2002, Ghorbani and Haghany \cite{Gh}, respectively, introduced and investigated the weakly co-Hopfian and generalized Hopfian modules. These modules and several generalizations of them have also been extensively studied by several authors.
Abderrahim El Moussaouy
2023-06-02T19:27:07Z
http://arxiv.org/abs/2306.01883v1
# Survey on generalizations of Hopficity of modules ###### Abstract. The main aim of this paper is the Hopficity of module classes. The study of modules (rings) through properties of their endomorphisms is a classical research subject. In 1986, Hiremath [26] introduced the concepts of Hopfian modules and rings; the notion of Hopfian module is defined, as a generalization of modules of finite length, as the modules whose surjective endomorphisms are isomorphisms. Later, the dual concepts of co-Hopfian modules and rings were given. Hopfian and co-Hopfian modules (rings) have been investigated by several authors, for example Hiremath [26], Varadarajan [48], [49], Xue [55], Haghany [25], Liu [34], and Yang and Liu [56]. In 2001, Haghany and Vedadi [24], and in 2002, Ghorbani and Haghany [21], respectively, introduced and investigated the weakly co-Hopfian and generalized Hopfian modules. These modules and several generalizations of them have also been extensively studied by several authors.

Key words and phrases: Hopfian modules; co-Hopfian modules; generalized Hopfian modules; weakly co-Hopfian modules; generalized co-Hopfian modules; weakly Hopfian modules; semi Hopfian modules; semi co-Hopfian modules; \(\mu\)-Hopfian modules; \(\delta\)-weakly Hopfian modules; \(\gamma\)-Hopfian modules; Jacobson Hopfian modules

## Resume

The main aim of this article is to study the Hopficity of module classes. The study of modules (rings) through properties of their endomorphisms is a classical research subject. In 1986, Hiremath [26] introduced the concepts of Hopfian modules and Hopfian rings; the notion of Hopfian module is defined, as a generalization of modules of finite length, as the modules whose surjective endomorphisms are isomorphisms. Later, the dual concepts of co-Hopfian modules and co-Hopfian rings were given. Hopfian and co-Hopfian modules (rings) have been studied by several authors.
For example, Hiremath [26], Varadarajan [48], [49], Xue [55], Haghany [25], Liu [34], and Yang and Liu [56]. In 2001, Haghany and Vedadi [24], and in 2002, Ghorbani and Haghany [21], respectively, introduced and studied the weakly co-Hopfian and generalized Hopfian modules. These modules and several generalizations of them have also been studied by several authors.

## Notations

We fix the following notation. Let \(A\) be an associative ring with identity, \(M\) a right \(A\)-module, and \(N\) a submodule of \(M\). \(\bullet\) End\((M)\): the endomorphism ring of \(M\). \(N\leq M\): \(N\) is a submodule of \(M\), \(N\trianglelefteq M\): \(N\) is a fully invariant submodule of \(M\) (\(f(N)\leq N\) for every \(f\in\) End\((M)\)), \(N\leq^{\oplus}M\): \(N\) is a direct summand of \(M\), \(N\leq^{e}M\): \(N\) is an essential submodule of \(M\), \(N\ll M\): \(N\) is a superfluous submodule of \(M\), \(N\ll_{\mu}M\): \(N\) is a \(\mu\)-superfluous submodule of \(M\), \(N\ll_{\delta}M\): \(N\) is a \(\delta\)-superfluous submodule of \(M\), \(N\ll_{\gamma}M\): \(N\) is a \(\gamma\)-superfluous submodule of \(M\), \(N\ll_{J}M\): \(N\) is a Jacobson-superfluous submodule of \(M\), \(Z(M)\): the singular submodule of \(M\), \(Z^{*}(M)\): the cosingular submodule of \(M\). \(\bullet\) Rad\((M)\): the Jacobson radical of \(M\). \(J(A)\): the Jacobson radical of \(A\). \(\delta(A)\): the intersection of all essential maximal ideals of \(A\). \(\bullet\) E\((M)\): the injective hull of \(M\). \(\bullet\) ACC: the ascending chain condition. \(\bullet\) DCC: the descending chain condition. \(A_{p}=\{\frac{a}{p^{n}}\); \(a\in\mathds{Z}\) and \(n\in\mathds{N}\}\), where \(p\) is a prime number. \(\mathbb{Z}_{p^{\infty}}=A_{p}/\mathds{Z}\). \(M_{n}(A)\): the set of square matrices of order \(n\) with coefficients in \(A\).
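As a concrete illustration of the Hopficity notions recalled in the abstract, every finite module is both Hopfian and co-Hopfian: on \(\mathbb{Z}/n\mathbb{Z}\) the endomorphisms are exactly the multiplication maps \(x\mapsto ax\), and by the pigeonhole principle such a map is surjective if and only if it is injective, which happens exactly when \(\gcd(a,n)=1\). A small exhaustive check (illustrative only):

```python
from math import gcd

def is_hopfian_and_cohopfian(n):
    """Check on Z/nZ that surjective <=> injective for every endomorphism x -> a*x."""
    for a in range(n):
        image = {(a * x) % n for x in range(n)}
        surjective = image == set(range(n))
        # On a finite set, a map is injective iff its image has full size
        # (pigeonhole), so injectivity reduces to the same cardinality test.
        injective = len(image) == n
        if surjective != injective or surjective != (gcd(a, n) == 1):
            return False
    return True

assert all(is_hopfian_and_cohopfian(n) for n in (6, 8, 12, 30))
print("Z/nZ is Hopfian and co-Hopfian (checked for n = 6, 8, 12, 30)")
```

No such finite check exists for modules like \(\mathbb{Z}\) or \(\mathbb{Z}_{p^{\infty}}\), which is precisely where the Hopfian and co-Hopfian properties diverge.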
Throughout this paper, all rings are associative with identity and all modules are right modules unless otherwise stated.

## 1. Introduction

In the early eighties, A. Kaidi and M. Sanghare introduced the notion of modules satisfying the properties (I), (S) and (F) [29]. A right \(A\)-module \(M\) is said to satisfy property (I) (resp. (S)) if every injective (resp. surjective) endomorphism of \(M\) is an automorphism of \(M\); \(M\) is said to satisfy property (F) if for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(M=Kerf^{n}\oplus Imf^{n}\). In 1986, modules satisfying condition (S) were named "Hopfian modules" by Hiremath [26]. A little later, modules satisfying condition (I) were named "co-Hopfian modules" by Varadarajan [49]. A submodule \(N\) of an \(A\)-module \(M\) is said to be essential in \(M\) (\(N\leq^{e}M\)) if \(N\cap L=0\) implies \(L=0\) for every submodule \(L\) of \(M\). In 2001 [24], A. Haghany and M. R. Vedadi introduced the notion of weakly co-Hopfian modules: an \(A\)-module \(M\) is called weakly co-Hopfian if for every injective endomorphism \(f\) of \(M\), the image of \(f\) is essential in \(M\) (\(Imf\leq^{e}M\)). A submodule \(N\) of an \(A\)-module \(M\) is said to be superfluous in \(M\) (\(N\ll M\)) if \(N+L=M\) implies \(L=M\) for every submodule \(L\) of \(M\). In 2002 [21], A. Ghorbani and A. Haghany introduced the notion of generalized Hopfian modules: an \(A\)-module \(M\) is generalized Hopfian if for every surjective endomorphism \(f\) of \(M\), the kernel of \(f\) is superfluous in \(M\) (\(Kerf\ll M\)). In 2005 [52], Y. Wang introduced the notions of generalized co-Hopfian modules and weakly Hopfian modules. An \(A\)-module \(M\) is weakly Hopfian if every superfluous surjective endomorphism \(f\) of \(M\) is an automorphism.
And an \(A\)-module \(M\) is generalized co-Hopfian if every essential injective endomorphism \(f\) of \(M\) is an automorphism. In 2007 [27], A. Hmaimou, A. Kaidi and E. Sanchez Campos introduced the notions of strongly Hopfian and strongly co-Hopfian modules. An \(A\)-module \(M\) is strongly Hopfian if for every endomorphism \(f\) of \(M\) the increasing sequence \(Kerf\subseteq Kerf^{2}\subseteq...\subseteq Kerf^{n}\subseteq...\) is stationary; and \(M\) is strongly co-Hopfian if for every endomorphism \(f\) of \(M\) the decreasing sequence \(Imf\supseteq Imf^{2}\supseteq...\supseteq Imf^{n}\supseteq...\) is stationary. In 2008 [3], P. Aydogdu and A. C. Ozcan introduced the notions of semi Hopfian and semi co-Hopfian modules. An \(A\)-module \(M\) is semi Hopfian if for every surjective endomorphism \(f\) of \(M\), the kernel of \(f\) is a direct summand of \(M\); and \(M\) is semi co-Hopfian if for every injective endomorphism \(f\) of \(M\), the image of \(f\) is a direct summand of \(M\). Such modules and other generalizations have been introduced and studied by several authors ([3], [17], [18], [19], [20], [22], [24], [26], [27], [49], [12], [13], [10], [9], [14], [15], [16], [52]).

## 2. Hopfian and co-Hopfian modules

**Definition 2.1**.: _[26]_ _An \(A\)-module \(M\) is said to be Hopfian if every surjective endomorphism of \(M\) is bijective._

**Definition 2.2**.: _[49]_ _An \(A\)-module \(M\) is said to be co-Hopfian if every injective endomorphism of \(M\) is bijective._

**Proposition 2.3**.: _[51]_ _If \(A\) is a commutative ring, then every finitely generated \(A\)-module is Hopfian._

**Remark 2.4**.: _[43]_
_Every Noetherian (resp. Artinian) module is Hopfian (resp. co-Hopfian)._

A Hopfian (resp. co-Hopfian) module is not in general Noetherian (resp. Artinian), as the following example shows:

**Example 2.5**.: _[27]_ _The additive group \(\mathbb{Q}\) of rational numbers is Hopfian and co-Hopfian but is neither Noetherian nor Artinian._

**Definition 2.6**.: _[6, 36]_ _The ring \(A\) is said to be Dedekind finite if for all \(a,b\in A\), \(ab=1\Rightarrow ba=1\). The module \(M\) is said to be Dedekind finite if the ring \(\operatorname{End}(M)\) is Dedekind finite. One can check that \(M\) is Dedekind finite if and only if \(M\) is not isomorphic to a proper direct summand of itself._

**Proposition 2.7**.: _[6]_ _Let \(M\) be a Hopfian or co-Hopfian \(A\)-module; then \(M\) is Dedekind finite._

The converse is not true in general, as the following example shows:

**Example 2.8**.: _The abelian group \(\mathbb{Z}\) is Dedekind finite but not co-Hopfian, and the group \(\mathbb{Z}_{p^{\infty}}\) is Dedekind finite but not Hopfian._

**Definition 2.9**.: _An \(A\)-module \(E\) is said to be injective if for every injective homomorphism \(g\) from M to N and every homomorphism \(\gamma\) from M to E, there exists a homomorphism \(h\) from N to E such that \(\gamma=hg\) (i.e., there exists \(h:N\to E\) making the diagram_ _commute)._

**Definition 2.10**.: _An \(A\)-module \(P\) is said to be projective if for every surjective homomorphism \(g\) from M to N and every homomorphism \(\gamma\) from P to N, there exists a homomorphism \(h\) from P to M such that \(\gamma=gh\) (i.e., there exists \(h:P\to M\) making the diagram_ _commute)._

**Definition 2.11**.: _[31]_
_An \(A\)-module \(M\) is said to be quasi-projective (resp. quasi-injective) if for every surjective (resp. injective) homomorphism \(g\) from M to N (resp. from N to M) and every homomorphism \(\gamma\) from M (resp. N) to N (resp. to M), there exists an endomorphism \(h\) of M such that \(\gamma=gh\) (resp. \(\gamma=hg\)) (i.e., there exists \(h:M\to M\) making the corresponding diagram commute)._

**Proposition 2.12**.: _[44]_ _Let \(M\) be an injective \(A\)-module; if \(M\) is Hopfian then it is co-Hopfian._

**Proposition 2.13**.: _[44]_ _Let \(M\) be a projective \(A\)-module; if \(M\) is co-Hopfian then it is Hopfian._

**Proposition 2.14**.: _[37]_ _Let \(M\) be a quasi-injective \(A\)-module; then \(M\) is Dedekind finite if and only if \(M\) is co-Hopfian._

**Proposition 2.15**.: _[21]_ _Let \(M\) be a quasi-projective \(A\)-module; then \(M\) is Dedekind finite if and only if \(M\) is Hopfian._

## 3. Some generalizations of superfluous modules

**Definition 3.1**.: _[33]_ _Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\); we say that \(N\) is superfluous in \(M\) (\(N\ll M\)) if \(N+L=M\Rightarrow L=M\) for every submodule \(L\) of \(M\)._

A module \(M\) is said to be hollow if every proper submodule of \(M\) is superfluous.

**Definition 3.2**.: _[45]_ _The injective hull \(E(M)\) of a module \(M\) is the maximal essential extension of \(M\)._

**Proposition 3.3**.: _[33]_ _Let \(M\) be an \(A\)-module; then \(M\) is a superfluous module if and only if \(M\) is superfluous in its injective hull \(E(M)\)._

**Lemma 3.4**.: _[54]_ _Let \(M\), \(N\) and \(L\) be modules. Then the two epimorphisms \(f:M\to N\) and \(g:N\to L\) are superfluous if and only if \(gf\) is superfluous._

**Definition 3.5**.: _[23]_ _Let \(M\) be an \(A\)-module; the singular submodule of \(M\) is the set \(Z(M)\) of elements \(x\) of \(M\) such that \(Ann(x)\) is an essential ideal of \(A\)._
_An \(A\)-module \(M\) is said to be singular (resp. nonsingular) if \(Z(M)=M\) (resp. \(Z(M)=0\))._

If \(N\) is an essential submodule of \(M\) then \(M/N\) is singular, but the converse is not true in general, as the following example shows: let \(M=\mathbb{Z}/2\mathbb{Z}\) and \(N=0\). Then \(M/N\) is singular but \(N\) is not essential in \(M\).

**Proposition 3.6**.: _[22]_ _Let \(M\) be a nonsingular injective \(A\)-module; then \(M\) is Hopfian if and only if \(M\) is co-Hopfian._

**Definition 3.7**.: _[57]_ _Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\); we say that \(N\) is \(\delta\)-superfluous in \(M\) (\(N\ll_{\delta}M\)) if \(N+L=M\) with \(M/L\) singular implies \(L=M\), for every submodule \(L\) of \(M\)._

**Lemma 3.8**.: _[57]_ _Let \(M\) be an \(A\)-module._ 1. _Let_ \(K\leq B\leq M\)_. Then_ \(B\ll_{\delta}M\) _if and only if_ \(K\ll_{\delta}M\) _and_ \(B/K\ll_{\delta}M/K\)_._ 2. _Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\)_; then_ \(K+B\ll_{\delta}M\) _if and only if_ \(K\ll_{\delta}M\) _and_ \(B\ll_{\delta}M\)_._ 3. _Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\) _with_ \(K\leq B\)_; if_ \(K\ll_{\delta}B\)_, then_ \(K\ll_{\delta}M\)_._ 4. _Let_ \(f:M\to N\) _be a homomorphism such that_ \(K\ll_{\delta}M\)_; then_ \(f(K)\ll_{\delta}N\)_._ 5. _Let_ \(M=M_{1}\oplus M_{2}\) _be an_ \(A\)_-module and let_ \(A_{1}\leq M_{1}\) _and_ \(A_{2}\leq M_{2}\)_; then_ \(A_{1}\oplus A_{2}\ll_{\delta}M_{1}\oplus M_{2}\) _if and only if_ \(A_{1}\ll_{\delta}M_{1}\) _and_ \(A_{2}\ll_{\delta}M_{2}\)_._

**Definition 3.9**.: _[40]_ _Let \(M\) be an \(A\)-module; the cosingular submodule of \(M\) is the set \(Z^{*}(M)\) of elements \(m\) of \(M\) such that \(mA\) is a superfluous module. An \(A\)-module \(M\) is said to be cosingular (resp. non-cosingular) if \(Z^{*}(M)=M\) (resp. \(Z^{*}(M)=0\))._

**Lemma 3.10**.: _[39]_ _Let \(M\) be an \(A\)-module._ 1.
_If_ \(M\) _is superfluous then_ \(Z^{*}(M)=M\)_._ 2. _If_ \(M\) _is semisimple injective then_ \(Z^{*}(M)=0\)_._

**Lemma 3.11**.: _[53]_ _Let \(f:M\to N\) be a homomorphism and let \(A\) be a submodule of \(M\) such that \(M/A\) is cosingular; then \(f(M)/f(A)\) is cosingular._

**Definition 3.12**.: _[53]_ _Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\); we say that \(N\) is \(\mu\)-superfluous in \(M\) (\(N\ll_{\mu}M\)) if \(N+L=M\) with \(M/L\) cosingular implies \(L=M\), for every submodule \(L\) of \(M\)._

**Lemma 3.13**.: _[53]_ _Let \(M\) be an \(A\)-module._ 1. _Let_ \(K\leq B\leq M\)_. Then_ \(B\ll_{\mu}M\) _if and only if_ \(K\ll_{\mu}M\) _and_ \(B/K\ll_{\mu}M/K\)_._ 2. _Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\)_; then_ \(K+B\ll_{\mu}M\) _if and only if_ \(K\ll_{\mu}M\) _and_ \(B\ll_{\mu}M\)_._ 3. _Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\) _with_ \(K\leq B\)_; if_ \(K\ll_{\mu}B\)_, then_ \(K\ll_{\mu}M\)_._ 4. _Let_ \(f:M\to N\) _be a homomorphism such that_ \(K\ll_{\mu}M\)_; then_ \(f(K)\ll_{\mu}N\)_._ 5. _Let_ \(M=M_{1}\oplus M_{2}\) _be an_ \(A\)_-module and let_ \(A_{1}\leq M_{1}\) _and_ \(A_{2}\leq M_{2}\)_; then_ \(A_{1}\oplus A_{2}\ll_{\mu}M_{1}\oplus M_{2}\) _if and only if_ \(A_{1}\ll_{\mu}M_{1}\) _and_ \(A_{2}\ll_{\mu}M_{2}\)_._

**Lemma 3.14**.: _[53]_ _Let \(M\) be an \(A\)-module and let \(K\leq N\) be submodules of \(M\); if \(N\) is a direct summand of \(M\) and \(K\ll_{\mu}M\), then \(K\ll_{\mu}N\)._

**Definition 3.15**.: _[35]_ _Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\); we say that \(N\) is \(\gamma\)-superfluous in \(M\) (\(N\ll_{\gamma}M\)) if \(N+L=M\) with \(M/L\) non-cosingular implies \(L=M\), for every submodule \(L\) of \(M\)._

**Lemma 3.16**.: _[35]_ _Let \(M\) be an \(A\)-module._ 1. _Let_ \(K\leq B\leq M\)_. Then_ \(B\ll_{\gamma}M\) _if and only if_ \(K\ll_{\gamma}M\) _and_ \(B/K\ll_{\gamma}M/K\)_._ 2.
_Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\) _with_ \(K\leq B\)_; if_ \(K\ll_{\gamma}B\)_, then_ \(K\ll_{\gamma}M\)_._ 3. _Let_ \(f:M\to N\) _be an epimorphism such that_ \(K\ll_{\gamma}M\)_; then_ \(f(K)\ll_{\gamma}N\)_._ 4. _Let_ \(M=M_{1}\oplus M_{2}\) _be an_ \(A\)_-module and let_ \(A_{1}\leq M_{1}\) _and_ \(A_{2}\leq M_{2}\)_; then_ \(A_{1}\oplus A_{2}\ll_{\gamma}M_{1}\oplus M_{2}\) _if and only if_ \(A_{1}\ll_{\gamma}M_{1}\) _and_ \(A_{2}\ll_{\gamma}M_{2}\)_._

**Definition 3.17**.: _[1]_ _Let \(M\) be an \(A\)-module._ \(\operatorname{Rad}(M)=\bigcap\{K\leq M/K\) _is maximal in_ \(M\}\) _=_ \(\sum\{K\leq M/K\) _is superfluous in_ \(M\}\)

**Definition 3.18**.: _[28]_ _Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\); we say that \(N\) is Jacobson-superfluous in \(M\) (\(N\ll_{J}M\)) if \(N+L=M\) with \(\operatorname{Rad}(M/L)=M/L\) implies \(L=M\), for every submodule \(L\) of \(M\)._

**Lemma 3.19**.: _[28]_ _Let \(M\) be an \(A\)-module._ 1. _Let_ \(K\leq B\leq M\)_. Then_ \(B\ll_{J}M\) _if and only if_ \(K\ll_{J}M\) _and_ \(B/K\ll_{J}M/K\)_._ 2. _Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\)_; then_ \(K+B\ll_{J}M\) _if and only if_ \(K\ll_{J}M\) _and_ \(B\ll_{J}M\)_._ 3. _Let_ \(K\) _and_ \(B\) _be submodules of_ \(M\) _with_ \(K\leq B\)_; if_ \(K\ll_{J}B\)_, then_ \(K\ll_{J}M\)_._ 4. _Let_ \(f:M\to N\) _be a homomorphism such that_ \(K\ll_{J}M\)_; then_ \(f(K)\ll_{J}N\)_._ 5. _Let_ \(M=M_{1}\oplus M_{2}\) _be an_ \(A\)_-module and let_ \(A_{1}\leq M_{1}\) _and_ \(A_{2}\leq M_{2}\)_; then_ \(A_{1}\oplus A_{2}\ll_{J}M_{1}\oplus M_{2}\) _if and only if_ \(A_{1}\ll_{J}M_{1}\) _and_ \(A_{2}\ll_{J}M_{2}\)_._

## 4. Generalized Hopfian and weakly co-Hopfian modules

**Definition 4.1**.: _[21]_ _An \(A\)-module \(M\) is called generalized Hopfian if, for every surjective endomorphism \(f\) of \(M\), the kernel of \(f\) is superfluous in \(M\) \((Kerf\ll M)\)._

**Corollary 4.2**.: _[21]_
_Let \(M\) be a quasi-projective \(A\)-module; then the following assertions are equivalent:_ 1. \(M\) _is Hopfian._ 2. \(M\) _is generalized Hopfian._ 3. \(M\) _is Dedekind finite._

**Definition 4.3**.: _[54]_ _Let \(M\) be an \(A\)-module and \(N\) a submodule of \(M\); we say that \(N\) is essential in \(M\) (\(N\leq^{e}M\)) if \(N\cap L=0\Rightarrow L=0\) for every submodule \(L\) of \(M\)._

A module \(M\) is said to be uniform if every nonzero submodule of \(M\) is essential.

**Definition 4.4**.: _[24]_ _An \(A\)-module \(M\) is called weakly co-Hopfian if, for every injective endomorphism \(f\) of \(M\), the image of \(f\) is essential in \(M\) \((Imf\leq^{e}M)\)._

**Corollary 4.5**.: _[24]_ _Let \(M\) be a quasi-injective \(A\)-module; then the following assertions are equivalent:_ 1. \(M\) _is co-Hopfian._ 2. \(M\) _is weakly co-Hopfian._ 3. \(M\) _is Dedekind finite._

**Remarks 4.6**.: _[21]_ _[24]_ _Let \(M\) be an \(A\)-module._ 1. _If_ \(M\) _satisfies the DCC on non-essential submodules, then_ \(M\) _is weakly co-Hopfian._ 2. _If_ \(M\) _satisfies the ACC on non-superfluous submodules, then_ \(M\) _is generalized Hopfian._

## 5. Weakly Hopfian and generalized co-Hopfian modules

**Definition 5.1**.: _[52]_ _An \(A\)-module \(M\) is said to be weakly Hopfian if every superfluous surjective endomorphism \(f\) \((Kerf\ll M)\) of \(M\) is bijective._

**Definition 5.2**.: _[52]_ _An \(A\)-module \(M\) is said to be generalized co-Hopfian if every essential injective endomorphism \(f\) \((Imf\leq^{e}M)\) of \(M\) is bijective._

**Remarks 5.3**.: _[52]_ _Let \(M\) be an \(A\)-module._ 1. \(M\) _is co-Hopfian if and only if_ \(M\) _is weakly co-Hopfian and generalized co-Hopfian._ 2. \(M\) _is Hopfian if and only if_ \(M\) _is weakly Hopfian and generalized Hopfian._ 3. _If_ \(M\) _satisfies the DCC on essential submodules then_ \(M\) _is generalized co-Hopfian._ 4.
_If_ \(M\) _satisfies the ACC on superfluous submodules then_ \(M\) _is weakly Hopfian._

**Theorem 5.4**.: _[11]_ _Let \(M\) be a quasi-projective module and let \(N\) be a fully invariant superfluous submodule of \(M\); if \(M\) is weakly Hopfian then \(M/N\) is weakly Hopfian._

**Corollary 5.5**.: _[11]_ _Let \(M\) be a finitely generated quasi-projective module; if \(M\) is weakly Hopfian then \(M/Rad(M)\) is weakly Hopfian._

**Proposition 5.6**.: _[11]_ _Let \(M\) be a quasi-projective module; if \(M\) is co-Hopfian then it is weakly Hopfian._

**Proposition 5.7**.: _[11]_ _Let \(M\) be a quasi-injective module; if \(M\) is Hopfian then it is generalized co-Hopfian._

## 6. Strongly Hopfian and strongly co-Hopfian modules

**Definition 6.1**.: _[27]_ _An \(A\)-module \(M\) is called strongly Hopfian if for every endomorphism \(f\) of \(M\) the increasing sequence \(Kerf\subseteq Kerf^{2}\subseteq...\subseteq Kerf^{n}\subseteq...\) is stationary._

**Proposition 6.2**.: _[27]_ _Let \(M\) be an \(A\)-module; then the following assertions are equivalent:_ 1. \(M\) _is strongly Hopfian._ 2. _For every endomorphism_ \(f\) _of_ \(M\)_, there exists_ \(n\geq 1\) _such that_ \(Kerf^{n}=Kerf^{n+1}\)_._ 3. _For every endomorphism_ \(f\) _of_ \(M\)_, there exists_ \(n\geq 1\) _such that_ \(Kerf^{n}\cap Imf^{n}=(0)\)_._

**Definition 6.3**.: _[27]_ _An \(A\)-module \(M\) is called strongly co-Hopfian if for every endomorphism \(f\) of \(M\) the decreasing sequence \(Imf\supseteq Imf^{2}\supseteq...\supseteq Imf^{n}\supseteq...\) is stationary._

**Proposition 6.4**.: _[27]_ _Let \(M\) be an \(A\)-module; then the following assertions are equivalent:_ 1. \(M\) _is strongly co-Hopfian._ 2. _For every endomorphism_ \(f\) _of_ \(M\)_, there exists_ \(n\geq 1\) _such that_ \(Imf^{n}=Imf^{n+1}\)_._ 3.
_For every endomorphism_ \(f\) _of_ \(M\)_, there exists_ \(n\geq 1\) _such that_ \(M=Kerf^{n}+Imf^{n}\)_._

**Definition 6.5**.: _[2]_ _An A-module \(M\) is said to be a Fitting module if for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(M=Kerf^{n}\oplus Imf^{n}\)._

**Remarks 6.6**.: _[27]_ _Let \(M\) be an A-module:_ 1. _Every Noetherian (resp. Artinian) module is strongly Hopfian (resp. strongly co-Hopfian)._ 2. _Every strongly Hopfian (resp. strongly co-Hopfian) module is Hopfian (resp. co-Hopfian)._ 3. _Every strongly Hopfian (resp. strongly co-Hopfian) module is Dedekind finite._ 4. \(M\) _is strongly Hopfian and strongly co-Hopfian if and only if_ \(M\) _is a Fitting module._ 5. _Every module of finite length is a Fitting module._ 6. _Every Fitting module is Hopfian and co-Hopfian._

**Example 6.7**.: _[27]_ _[56]_ _There exists a_ \(\mathbb{Z}\)_-module that is Hopfian and co-Hopfian but neither strongly Hopfian nor strongly co-Hopfian. Indeed, let_ \((p_{n})_{n\geq 1}\) _be a sequence of prime numbers such that_ \(p_{1}<p_{2}<...<p_{n}<...\)_. Set_ \(M_{n}=\mathbb{Z}/p_{n}^{n}\mathbb{Z}\) _and consider_ \(M=\oplus_{n\geq 1}M_{n}\)_._ \(M\) _is co-Hopfian (resp. Hopfian), but_ \(M\) _is neither strongly Hopfian nor strongly co-Hopfian. On the other hand, let_ \(P\) _be the set of prime numbers. The_ \(\mathbb{Z}\)_-module_ \(M=\oplus_{p\in P}\mathbb{Z}_{p}\) _is strongly Hopfian and strongly co-Hopfian but neither Artinian nor Noetherian._

**Theorem 6.8**.: _[27]_ _Let \(M\) be an A-module:_ 1. _If_ \(M\) _is quasi-projective and strongly co-Hopfian then_ \(M\) _is strongly Hopfian._ 2. _If_ \(M\) _is quasi-injective and strongly Hopfian then_ \(M\) _is strongly co-Hopfian._

## 7. Semi Hopfian and semi co-Hopfian modules

**Definition 7.1**.: _[3]_
_An A-module \(M\) is called semi Hopfian if, for every surjective endomorphism \(f\) of \(M\), the kernel of \(f\) is a direct summand of \(M\)._

**Examples 7.2**.: _[3]_ 1. _Every semisimple module is semi Hopfian._ 2. _By_ _[_26_, Theorem 16(ii)]__, a vector space V over a field F is Hopfian if and only if it is finite dimensional. Hence an infinite-dimensional vector space is semi Hopfian but not Hopfian._ 3. _Every module satisfying D2 is semi Hopfian. (A module_ \(M\) _is said to satisfy D2 if every submodule_ \(N\) _such that_ \(M/N\) _is isomorphic to a direct summand of_ \(M\) _is a direct summand of_ \(M\)_.)_ 4. _Every quasi-projective module is semi Hopfian._

**Proposition 7.3**.: _[3]_ _Let \(M\) be a semi Hopfian \(A\)-module; if \(M\) is Dedekind finite then it is Hopfian._

**Definition 7.4**.: _[3]_ _An \(A\)-module \(M\) is called semi co-Hopfian if, for every injective endomorphism \(f\) of \(M\), the image of \(f\) is a direct summand of \(M\)._

**Proposition 7.5**.: _[3]_ _Let \(M\) be a semi co-Hopfian \(A\)-module; if \(M\) is Dedekind finite then it is co-Hopfian._

**Remark 7.6**.: _Left and right strongly \(\pi\)-regular rings were introduced by Kaplansky [30]. Azumaya showed in 1954 that a ring \(A\) is strongly \(\pi\)-regular if for every \(a\in A\) there exist \(m\in\mathbb{N}\) and \(c\in A\) such that \(ac=ca\) and \(a^{m}=ca^{m+1}\) [4]. Dischinger showed in 1976 that strong \(\pi\)-regularity is left-right symmetric [8]._

**Example 7.7**.: _By [27, Remark 2.16(3)], the ring \(A=\prod_{n\geq 1}\mathbb{Z}/2^{n}\mathbb{Z}\) is Hopfian (every commutative ring is Hopfian) but not strongly Hopfian. Since every Hopfian ring is semi Hopfian, the ring \(\prod_{n\geq 1}\mathbb{Z}/2^{n}\mathbb{Z}\) is semi Hopfian but not strongly Hopfian._

**Theorem 7.8**.: _[14]_ _Let \(M\) be an \(A\)-module; then:_ 1.
_Si_ \(M\) _est semi Hopfien et fortement co-Hopfien, alors_ \(End_{A}(M)\) _est fortement_ \(\pi\)_-regulier._ 2. _Si M est semi co-Hopfien et fortement Hopfien, alors_ \(End_{A}(M)\) _est fortement_ \(\pi\)_-regulier._ **Corollaire 7.9**.: _[_14_]__Tout module semi Hopfien et fortement co-Hopfien ou semi co-Hopfien et fortement Hopfien est un module de Fitting._ Le resultat suivant presente un analogue du theoreme de Hopkins-Levitzki. **Corollaire 7.10**.: _[_14_]__Soit \(M\) un \(A\)-module, alors:_ 1. _Si_ \(M\) _est semi Hopfien et fortement co-Hopfien, alors_ \(M\) _est fortement Hopfien._ 2. _Si M est semi co-Hopfien et fortement Hopfien, alors_ \(M\) _est fortement co-Hopfien._ Il est facile de voir que tout module Hopfien est semi Hopfien, mais la reciproque n'est pas vraie en general comme le montre l'exemple suivant: **Exemple 7.11**.: _[_14_]__D'apres [26, Theoreme 16(ii)], un espace vectoriel \(V\) sur un corps \(F\) est Hopfien si et seulement s'il est de dimension finie. Alors un espace vectoriel de dimension infinie sur un corps est semi Hopfien mais n'est pas Hopfien._ **Proposition 7.12**.: _[_14_]__Soit \(M\) un module semi Hopfien, si \(M\) est indecomposable alors il est Hopfien._ **Theoreme 7.13**.: _[_14_]_ _Soit \(M\) un \(A\)-module, alors:_ _(1) Si \(M\) est semi Hopfen et co-Hopfen, alors \(M\) est Hopfen._ _(2) Si \(M\) est semi co-Hopfen et Hopfen, alors \(M\) est co-Hopfen._ **Definition 7.14**.: _[_7_]__. Un \(A\)-module \(M\) est dit quasi-principalement projectif si pour tout endomorphisme \(f\) de \(M\) et pour tout homomorphisme \(g\) de M vers \(f(M)\), il existe un endomorphisme \(h\) de \(M\) tel que : \(g=fh\)_ Puisque tout \(A\)-module quasi-principalement projectif est semi Hopfen d'apres [32, Proposition 3.2], alors il est facile de voir le corollaire suivant. **Corollaire 7.15**.: _[_14_]_ _Soit \(M\) un module quasi-principalement projectif, si \(M\) est co-Hopfen alors il est Hopfen._ **Definition 7.16**.: _[_38_]__. 
Un \(A\)-module \(M\) est dit quasi-principalement injectif si pour tout endomorphisme non \(\mbox{nul}\ f\) de M et pour tout homomorphisme \(g\) de \(f(M)\) vers \(M\), il existe un endomorphisme \(h\) de \(M\) tel que : \(g=hf\)_ Puisque tout \(A\)-module quasi-principalement injectif est semi co-Hopfen d'apres [32, Proposition 3.1], alors il est facile de voir le corollaire suivant. **Corollaire 7.17**.: _[_14_]_ _Soit \(M\) un module quasi-principalement injectif, si \(M\) est Hopfen alors il est co-Hopfen._ Il est facile de voir que tout module Hopfen est Hopfen generalise, mais la reciproque n'est pas toujours vraie comme le montre l'exemple suivant. **Exemple 7.18**.: _[_21_, exemple 1.7]__. Soit \(G=\mathbb{Z}_{p^{\infty}}\). Puisque dans \(G\) tout sous-groupe propre est superflu, donc \(G\) est un groupe abelien Hopfen generalise. Mais \(G\) n'est pas Hopfen puisque la multiplication par \(p\) induit un epimorphisme de \(G\) qui n'est pas un isomorphisme._ **Proposition 7.19**.: _[_14_]_ _Soit \(M\) un module semi Hopfien. Alors les assertions suivantes sont equivalentes:_ _(1) \(M\) est Hopfien._ _(2) \(M\) est Hopfien generalise._ **Proposition 7.20**.: _[_14_]_ _Soit \(M\) un module semi co-Hopfien. Alors les assertions suivantes sont equivalentes:_ _(1) \(M\) est co-Hopfien._ _(2) \(M\) est faiblement co-Hopfien_ ## 8. Modules \(\mu\)-Hopfiens **Definition 8.1**.: _[_12_]_ _Un \(A\)-module \(M\) est appele \(\mu\)-Hopfien, si pour tout endomorphism surjectif \(f\) de \(M\), le noyau de \(f\) est \(\mu\)-superflu dans \(M\)\((Kerf\ll_{\mu}M)\)._ **Lemme 8.2**.: _[_12_]_ _Soit \(M\) un \(A\)-module et soit \(N\) un sous-module de \(M\), alors les assertions suivantes sont equivalentes:_ _(1) \(N\ll_{\mu}M\)._ _(2) Si \(X+N=M\), alors \(X\) est un facteur direct de \(M\) avec \(M/X\) est un module semi simple injectif._ Le resultat suivant presente une caracterisation des modules \(\mu\) -Hopfiens. 
**Theorem 8.3**. [12] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is \(\mu\)-Hopfian.
2. For every surjective endomorphism \(f\) of \(M\), if \(N\ll_{\mu}M\), then \(f^{-1}(N)\ll_{\mu}M\).
3. If \(N\leq M\) and there exists an epimorphism \(M/N\to M\), then \(N\ll_{\mu}M\).
4. If \(M/N\) is nonzero and cosingular for every \(N\leq M\), and \(f\) is a surjective endomorphism of \(M\), then \(f(N)\neq M\).
5. There exists a fully invariant \(\mu\)-superfluous submodule \(N\) of \(M\) such that \(M/N\) is \(\mu\)-Hopfian.
6. For every module \(X\) such that there exists an epimorphism \(M\to M\oplus X\), \(X\) is semisimple injective.

The following example shows that the class of Hopfian modules is a proper subclass of the class of \(\mu\)-Hopfian modules.

**Lemma 8.4**. [12] Let \(G=\mathbb{Z}_{p^{\infty}}\). Since every proper subgroup of \(G\) is \(\mu\)-superfluous, \(G\) is a \(\mu\)-Hopfian group. But \(G\) is not Hopfian, because multiplication by \(p\) induces an epimorphism of \(G\) that is not an isomorphism.

**Theorem 8.5**. [12] Let \(M\) be a (quasi-)projective module and let \(f\in\operatorname{End}(M)\). Then the following assertions are equivalent:

1. \(M\) is \(\mu\)-Hopfian.
2. If \(f\) is an epimorphism, then \(Ker(f)\) is semisimple injective.

**Theorem 8.6**. [12] Let \(A\) be a ring. Then the following assertions are equivalent:

1. Every \(A\)-module is \(\mu\)-Hopfian.
2. Every projective \(A\)-module is \(\mu\)-Hopfian.
3. Every free \(A\)-module is \(\mu\)-Hopfian.
4. \(A\) is semisimple.

It is clear that every generalized Hopfian module is \(\mu\)-Hopfian. The following example shows that the converse is not true in general. Moreover, it also shows that a \(\mu\)-Hopfian module is not necessarily Dedekind finite.
**Example 8.7**. [12] Let \(A\) be a semisimple ring. Then by Theorem 8.6, \(M=A^{(\mathbb{N})}\) is a \(\mu\)-Hopfian \(A\)-module. Since \(A^{(\mathbb{N})}\cong A^{(\mathbb{N})}\oplus A^{(\mathbb{N})}\) and \(A^{(\mathbb{N})}\neq 0\), \(M\) is not a generalized Hopfian (Dedekind finite) module (see [21, Corollary 1.4]).

**Proposition 8.8**. [12] Let \(N\) be a fully invariant submodule of \(M\) such that \(M/N\) is Hopfian. If \(N\) is \(\mu\)-Hopfian, then so is \(M\).

**Proposition 8.9**. [12] Let \(M\) be an \(A\)-module. If \(M\) satisfies the ACC on non-\(\mu\)-superfluous submodules, then it is \(\mu\)-Hopfian.

**Proposition 8.10**. [12] Let \(M\) be an \(A\)-module with the following property: for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(Ker f^{n}\cap Im f^{n}\ll_{\mu}M\). Then \(M\) is \(\mu\)-Hopfian.

**Theorem 8.11**. [12] The \(\mu\)-Hopfian property is preserved under Morita equivalence.

**Proposition 8.12**. [12] Every direct summand of a \(\mu\)-Hopfian module \(M\) is \(\mu\)-Hopfian.

**Proposition 8.13**. [12] Let \(M=M_{1}\oplus M_{2}\) be an \(A\)-module. If for each \(i\in\{1,2\}\), \(M_{i}\) is a fully invariant submodule of \(M\), then \(M\) is \(\mu\)-Hopfian if and only if \(M_{i}\) is \(\mu\)-Hopfian for each \(i\in\{1,2\}\).

**Definition 8.14**. [12] Let \(M\) and \(N\) be two \(A\)-modules. \(M\) is called \(\mu\)-Hopfian relative to \(N\) if for every epimorphism \(f\colon M\to N\), \(Ker(f)\ll_{\mu}M\).

## 9. \(\delta\)-weakly Hopfian modules

**Definition 9.1**. [9] An \(A\)-module \(M\) is said to be \(\delta\)-weakly Hopfian if every \(\delta\)-superfluous surjective endomorphism \(f\) of \(M\) (i.e. with \(Ker f\ll_{\delta}M\)) is bijective.

**Example 9.2**. [9] There exists a \(\delta\)-superfluous epimorphism that is not an isomorphism. Let \(G=\mathbb{Z}_{p^{\infty}}\); since every proper subgroup of \(G\) is \(\delta\)-superfluous (because every proper subgroup of \(G\) is superfluous), every surjective endomorphism of \(G\) is \(\delta\)-superfluous, but multiplication by \(p\) induces an epimorphism of \(G\) that is not an isomorphism.

**Lemma 9.3**. [9] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is \(\delta\)-weakly Hopfian.
2. For every \(\delta\)-superfluous submodule \(K\) of \(M\), \(M/K\cong M\) if and only if \(K=0\).

**Proposition 9.4**. [9] Let \(M\) be a \(\delta\)-weakly Hopfian module. If \(M\cong M\oplus N\) for some semisimple projective module \(N\), then \(N=0\). Moreover, if \(M\) is projective, then the converse holds.

**Proposition 9.5**. [9] Let \(A\) be a semisimple Artinian ring. Then a free \(A\)-module \(F\) is \(\delta\)-weakly Hopfian if and only if it has finite rank.

The following result presents a characterization of projective \(\delta\)-weakly Hopfian modules.

**Theorem 9.6**. [9] Let \(M\) be a projective module and \(f\in End(M)\). Then the following assertions are equivalent:

1. \(M\) is \(\delta\)-weakly Hopfian.
2. If \(f\) is right invertible and \(Ker(f)\) is semisimple, then \(f\) is left invertible.
3. If \(f\) is right invertible and \(Ker(f)\ll_{\delta}M\), then \(f\) is left invertible.
4. If \(f\) has a right inverse \(g\) and \((1-gf)M\ll_{\delta}M\), then \(f\) is left invertible.
5. If \(f\) is surjective and \(Ker(f)\) is semisimple projective, then \(f\) is left invertible.

The following result presents a characterization of the rings over which every quasi-projective (projective, free) module is \(\delta\)-weakly Hopfian.
**Theorem 9.7**. [9] Let \(A\) be a ring. Then the following assertions are equivalent:

1. Every quasi-projective \(A\)-module is \(\delta\)-weakly Hopfian.
2. Every projective \(A\)-module is \(\delta\)-weakly Hopfian.
3. Every free \(A\)-module is \(\delta\)-weakly Hopfian.
4. Every maximal right ideal of \(A\) is essential in \(A_{A}\).
5. \(A\) has no nonzero semisimple projective \(A\)-module.
6. \(\delta(A)=J(A)\).

A ring \(A\) is called a right \(GV\)-ring [42] if every simple \(A\)-module is either projective or injective. It is clear that \(A\) is a \(GV\)-ring if and only if every simple singular \(A\)-module is injective. Note also that by [46, Corollary 3.3], \(A\) is a \(GV\)-ring if and only if every superfluous \(A\)-module is projective.

**Corollary 9.8**. [9] Let \(A\) be a \(GV\)-ring. Then every indecomposable superfluous \(A\)-module is \(\delta\)-weakly Hopfian.

It is clear that every \(\delta\)-weakly Hopfian module is weakly Hopfian. The following example shows that the converse does not always hold.

**Example 9.9**. [9] By [52, Example 3.13], every infinite-dimensional vector space is weakly Hopfian. But by [47, Lemma 2.9], \(M\ll_{\delta}M\) when \(M\) is a semisimple projective \(A\)-module, so every surjective endomorphism of such an \(M\) is \(\delta\)-superfluous. Hence \(M\) is not \(\delta\)-weakly Hopfian when \(M\) is not Dedekind finite, which is the case for infinite-dimensional vector spaces.

**Example 9.10**. [9] Let \(P\) be the set of all prime numbers and \(\mathbb{Q}/\mathbb{Z}=\bigoplus_{p\in P}\mathbb{Z}_{p^{\infty}}\). If \(\bigoplus_{p\in P}\mathbb{Z}_{p^{\infty}}\) were a \(\delta\)-weakly Hopfian \(\mathbb{Z}\)-module, then \(\mathbb{Z}_{p^{\infty}}\) would be \(\delta\)-weakly Hopfian by Proposition 3.2, contradicting Example 9.2. Hence \(\mathbb{Q}/\mathbb{Z}\) is not \(\delta\)-weakly Hopfian, whereas \(\mathbb{Q}\) is a \(\delta\)-weakly Hopfian \(\mathbb{Z}\)-module.

**Theorem 9.11**. [9] Let \(M\) be a uniform quasi-projective module. If \(N\) is a nonzero fully invariant \(\delta\)-superfluous submodule of \(M\), then \(M/N\) is Hopfian.

**Definition 9.12**. [57] Let \(\mathfrak{S}\) be the class of all simple singular modules. For a module \(M\), let \(\delta(M)=Rej_{M}(\mathfrak{S})=\cap\{N\subseteq M;M/N\in\mathfrak{S}\}\).

**Corollary 9.13**. [9] Let \(M\) be a uniform quasi-projective module such that \(\delta(M)\) is \(\delta\)-superfluous in \(M\). Then \(M/\delta(M)\) is \(\delta\)-weakly Hopfian.

**Proposition 9.14**. [9] Let \(M\) be a quasi-projective module. If \(M\) is co-Hopfian, then it is \(\delta\)-weakly Hopfian.

**Definition 9.15**. [41] Let \(M\) be an \(A\)-module. \(M\) is said to be a duo module if every submodule of \(M\) is fully invariant.

**Corollary 9.16**. [9] Let \(M=M_{1}\oplus M_{2}\) be a duo module. Then \(M\) is \(\delta\)-weakly Hopfian if and only if \(M_{1}\) and \(M_{2}\) are \(\delta\)-weakly Hopfian.

It is clear that every Hopfian module is \(\delta\)-weakly Hopfian. The following example shows that the converse does not always hold; it also shows that a \(\delta\)-weakly Hopfian module need not be Dedekind finite.

**Example 9.17**. [9] If \(M\) is a semisimple singular \(A\)-module, then the only \(\delta\)-superfluous submodule of \(M\) is zero. Hence every \(\delta\)-superfluous surjective endomorphism of \(M\) is injective. But \(M\) is not Dedekind finite and therefore not Hopfian.

**Theorem 9.18**. [9] Let \(M\) be an \(A\)-module satisfying the ACC on \(\delta\)-superfluous submodules. Then \(M\) is \(\delta\)-weakly Hopfian.

## 10. \(\gamma\)-Hopfian modules

**Definition 10.1**. [13] An \(A\)-module \(M\) is called \(\gamma\)-Hopfian if for every surjective endomorphism \(f\) of \(M\), the kernel of \(f\) is \(\gamma\)-superfluous in \(M\) (\(Ker f\ll_{\gamma}M\)).

The following result presents a characterization of \(\gamma\)-Hopfian modules.

**Theorem 10.2**. [13] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is \(\gamma\)-Hopfian.
2. For every surjective endomorphism \(f\) of \(M\), if \(N\ll_{\gamma}M\), then \(f^{-1}(N)\ll_{\gamma}M\).
3. If \(N\leq M\) and there exists an epimorphism \(M/N\to M\), then \(N\ll_{\gamma}M\).
4. If \(M/N\) is nonzero and non-cosingular for every \(N\leq M\), and \(f\) is a surjective endomorphism of \(M\), then \(f(N)\neq M\).

The following example shows that the class of Hopfian modules is a proper subclass of the class of \(\gamma\)-Hopfian modules.

**Example 10.3**. [13] Let \(G=\mathbb{Z}_{p^{\infty}}\). Since \(G\) is hollow, every proper subgroup of \(G\) is \(\gamma\)-superfluous, so \(G\) is a \(\gamma\)-Hopfian group. But \(G\) is not Hopfian, because multiplication by \(p\) induces an epimorphism of \(G\) that is not an isomorphism.

**Lemma 10.4**. [13] Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\). Then the following assertions are equivalent:

1. \(N\ll_{\gamma}M\).
2. If \(X+N=M\), then \(X\) is a direct summand of \(M\) and \(M/X\) is a semisimple cosingular module.

**Theorem 10.5**. [13] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is \(\gamma\)-Hopfian.
2. For every module \(X\) such that there exists an epimorphism \(M\to M\oplus X\), \(X\) is semisimple cosingular.

**Theorem 10.6**. [13] Let \(A\) be a ring. Then the following assertions are equivalent:

1. Every \(A\)-module is \(\gamma\)-Hopfian.
2. Every projective \(A\)-module is \(\gamma\)-Hopfian.
3. Every free \(A\)-module is \(\gamma\)-Hopfian.
4. \(A\) is semisimple cosingular.

It is clear that every generalized Hopfian module is \(\gamma\)-Hopfian. The following example shows that the converse is not true in general. Moreover, it also shows that a \(\gamma\)-Hopfian module is not in general Dedekind finite.

**Example 10.7**. [13] Let \(A\) be a semisimple cosingular ring. Then by Theorem 10.6, \(M=A^{(\mathbb{N})}\) is a \(\gamma\)-Hopfian \(A\)-module. Since \(A^{(\mathbb{N})}\cong A^{(\mathbb{N})}\oplus A^{(\mathbb{N})}\) and \(A^{(\mathbb{N})}\neq 0\), \(M\) is not a generalized Hopfian (Dedekind finite) module (see [21, Corollary 1.4]).

**Theorem 10.8**. [13] The \(\gamma\)-Hopfian property is preserved under Morita equivalence.

**Corollary 10.9**. [13] Let \(n\geq 2\). Then the following assertions are equivalent for a ring \(A\):

(1) Every \(A\)-module generated by \(n\) elements is \(\gamma\)-Hopfian.
(2) Every cyclic \(M_{n}(A)\)-module is \(\gamma\)-Hopfian.

**Theorem 10.10**. [13] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

(1) \(M\) is \(\gamma\)-Hopfian.
(2) There exists a fully invariant \(\gamma\)-superfluous submodule \(N\) of \(M\) such that \(M/N\) is \(\gamma\)-Hopfian.

**Corollary 10.11**. [13] Let \(M\) be a weakly co-Hopfian module. If \(M\) satisfies the ACC on non-\(\gamma\)-superfluous submodules \(N\) such that \(M/N\) is weakly co-Hopfian, then \(M\) is \(\gamma\)-Hopfian.

**Proposition 10.12**. Let \(M\) be an \(A\)-module. If \(M\) satisfies the DCC on non-\(\gamma\)-superfluous submodules, then it is \(\gamma\)-Hopfian.

**Proposition 10.13**. [13] Let \(M\) be an \(A\)-module with the following property: for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(Ker f^{n}\cap Im f^{n}\ll_{\gamma}M\). Then \(M\) is \(\gamma\)-Hopfian.

In the following corollary, we give a characterization of the rings \(A\) over which every finitely generated free \(A\)-module is \(\gamma\)-Hopfian.

**Corollary 10.14**. [13] Let \(A\) be a ring. Then the following assertions are equivalent:

(1) Every finitely generated free \(A\)-module is \(\gamma\)-Hopfian.
(2) Every finitely generated projective \(A\)-module is \(\gamma\)-Hopfian.
(3) \(M_{n}(A)\) is a \(\gamma\)-Hopfian \(M_{n}(A)\)-module for every \(n\geq 1\).

**Proposition 10.15**. [13] Let \(M\) be a semi Hopfian module. If \(M\) is co-Hopfian, then it is \(\gamma\)-Hopfian.

## 11. Jacobson Hopfian modules

**Definition 11.1**. [10] An \(A\)-module \(M\) is called Jacobson Hopfian if for every surjective endomorphism \(f\) of \(M\), the kernel of \(f\) is Jacobson-superfluous in \(M\) (\(Ker f\ll_{J}M\)).

The following result presents a characterization of Jacobson Hopfian modules.

**Theorem 11.2**. [10] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is Jacobson Hopfian.
2. For every surjective endomorphism \(f\) of \(M\), if \(N\ll_{J}M\), then \(f^{-1}(N)\ll_{J}M\).
3. For every epimorphism \(f\colon M/N\to M\), one has \(N\ll_{J}M\).
4. If \(M/N\) is nonzero and \(\operatorname{Rad}(M/N)=M/N\) for every \(N\leq M\), and \(f\) is a surjective endomorphism of \(M\), then \(f(N)\neq M\).

The following example shows that the class of Hopfian modules is a proper subclass of the class of Jacobson Hopfian modules.

**Example 11.3**. [10] Let \(M=\mathbb{Z}_{p^{\infty}}\). Since every submodule of \(M\) is Jacobson-superfluous in \(M\) because \(M\) is hollow, it is clear that \(M\) is Jacobson Hopfian; but \(M\) is not Hopfian. Note that multiplication by \(p\) induces an epimorphism of \(M\) that is not an isomorphism.

**Remark 11.4**. [10] By the definitions, every hollow module is Jacobson Hopfian, but the converse is not true in general. Note that \(M=\mathbb{Z}_{6}\) is a semisimple \(\mathbb{Z}\)-module that is not hollow. Since \(\operatorname{Rad}(M)=0\) for every semisimple module \(M\), every proper submodule is Jacobson-superfluous in \(M\), yet \(M\) has no nonzero superfluous submodule.

**Lemma 11.5**. [10] Let \(M\) be an \(A\)-module and let \(K\) be a submodule of \(M\). Then the following assertions are equivalent:

1. \(K\ll_{J}M\).
2. If \(X+K=M\), then \(X\) is a direct summand of \(M\) and \(M/X\) is a semisimple module.

**Theorem 11.6**. [10] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is Jacobson Hopfian.
2. For every module \(X\) such that there exists an epimorphism \(M\to M\oplus X\), \(X\) is semisimple.

**Theorem 11.7**. [10] Let \(M\) be a (quasi-)projective module and \(f\in\operatorname{End}(M)\). Then the following assertions are equivalent:

1. \(M\) is Jacobson Hopfian.
2. If \(f\) is an epimorphism, then \(Ker(f)\) is semisimple.

**Theorem 11.8**. [10] Let \(A\) be a ring. Then the following assertions are equivalent:

1. Every \(A\)-module is Jacobson Hopfian.
2. Every projective \(A\)-module is Jacobson Hopfian.
3. Every free \(A\)-module is Jacobson Hopfian.
4. \(A\) is semisimple.

It is clear that every generalized Hopfian module is Jacobson Hopfian. The following example shows that the converse is not true in general. Moreover, it also shows that a Jacobson Hopfian module is not in general Dedekind finite.

**Example 11.9**. [10] Let \(A\) be a semisimple ring. Then by Theorem 11.8, \(M=A^{(\mathbb{N})}\) is a Jacobson Hopfian \(A\)-module. Since \(A^{(\mathbb{N})}\cong A^{(\mathbb{N})}\oplus A^{(\mathbb{N})}\) and \(A^{(\mathbb{N})}\neq 0\), \(M\) is not generalized Hopfian (Dedekind finite) (see [21, Corollary 1.4]).

**Theorem 11.10**. [10] Let \(M\) be an \(A\)-module. Then the following assertions are equivalent:

1. \(M\) is Jacobson Hopfian.
2. There exists a fully invariant Jacobson-superfluous submodule \(N\) of \(M\) such that \(M/N\) is Jacobson Hopfian.

**Proposition 11.11**. [10] Let \(N\) be a fully invariant submodule of \(M\) such that \(M/N\) is Hopfian. If \(N\) is Jacobson Hopfian, then \(M\) is Jacobson Hopfian.

**Proposition 11.12**. [10] Let \(M\) be an \(A\)-module. If \(M\) satisfies the ACC on non-Jacobson-superfluous submodules, then it is Jacobson Hopfian.

**Proposition 11.13**. [10] Let \(M\) be an \(A\)-module. If \(M\) satisfies the DCC on non-Jacobson-superfluous submodules, then it is Jacobson Hopfian.

**Proposition 11.14**. [10] Let \(M\) be an \(A\)-module with the following property: for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(Ker f^{n}\cap Im f^{n}\ll_{J}M\). Then \(M\) is Jacobson Hopfian.

**Examples 11.15**.

1. Every proper submodule of a semisimple module \(M\) is Jacobson-superfluous, so for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(Ker f^{n}\cap Im f^{n}\ll_{J}M\). Hence \(M\) is Jacobson Hopfian.
2. If \(M\) is Noetherian, then for every endomorphism \(f\) of \(M\) there exists an integer \(n\geq 1\) such that \(Ker f^{n}\cap Im f^{n}=0\). Hence \(M\) is Jacobson Hopfian.

In the following corollary, we give a characterization of the rings \(A\) over which every finitely generated free \(A\)-module is Jacobson Hopfian.

**Corollary 11.16**. [10] Let \(A\) be a ring. Then the following assertions are equivalent:

1. Every finitely generated free \(A\)-module is Jacobson Hopfian.
2. Every finitely generated projective \(A\)-module is Jacobson Hopfian.
3. \(M_{n}(A)\) is a Jacobson Hopfian \(M_{n}(A)\)-module for every \(n\geq 1\).

**Proposition 11.17**. Let \(M\) be a semi Hopfian module. If \(M\) is co-Hopfian, then it is Jacobson Hopfian.

## 12. Properties of polynomial extensions

Let \(M\) be an \(A\)-module. Following [48], we briefly recall the definitions of the modules \(M[x]\) and \(M[x]/(x^{n+1})\). The elements of \(M[x]\) are formal sums of the form \(a_{0}+a_{1}x+\dots+a_{k}x^{k}\), with \(k\) an integer \(\geq 0\) and \(a_{i}\in M\). We denote this sum by \(\sum_{i=0}^{k}a_{i}x^{i}\). Addition is performed by adding corresponding coefficients. The \(A[x]\)-module structure is defined by \[(\sum_{i=0}^{k}\lambda_{i}x^{i}).(\sum_{j=0}^{z}a_{j}x^{j})=\sum_{\mu=0}^{k+z}c_{\mu}x^{\mu},\] where \(c_{\mu}=\sum_{i+j=\mu}\lambda_{i}a_{j}\), for all \(\lambda_{i}\in A\), \(a_{j}\in M\). Every nonzero element \(\beta\) of \(M[x]\) can be written uniquely in the form \(\sum_{i=k}^{l}m_{i}x^{i}\) with \(l\geq k\geq 0\), \(m_{i}\in M\), \(m_{k}\neq 0\) and \(m_{l}\neq 0\). In this case, we refer to \(k\) as the order of \(\beta\), to \(l\) as the degree of \(\beta\), and to \(m_{k}\) as the initial coefficient of \(\beta\). Let \(n\) be an integer \(\geq 0\) and \[I_{n+1}=\{0\}\cup\{\beta;0\neq\beta\in A[x]\text{, order of }\beta\geq n+1\}.\] Then \(I_{n+1}\) is a two-sided ideal of \(A[x]\). The quotient ring \(A[x]/I_{n+1}\) is called the truncated polynomial ring, truncated at degree \(n+1\).
If \(A\) is unital, \(I_{n+1}\) is the ideal generated by \(x^{n+1}\). If \(A\) is not unital, we will "symbolically" denote the ring \(A[x]/I_{n+1}\) by \(A[x]/(x^{n+1})\). Every element of \(A[x]/(x^{n+1})\) can be written uniquely in the form \(\sum_{i=0}^{n}\lambda_{i}x^{i}\) with \(\lambda_{i}\in A\). Let \[D_{n+1}=\{0\}\cup\{\beta;0\neq\beta\in M[x]\text{, order of }\beta\geq n+1\}.\] Then \(D_{n+1}\) is an \(A[x]\)-submodule of \(M[x]\). Since \(I_{n+1}M[x]\subset D_{n+1}\), the ring \(A[x]/(x^{n+1})\) acts on \(M[x]/D_{n+1}\). We denote the module \(M[x]/D_{n+1}\) by \(M[x]/(x^{n+1})\). The action of \(A[x]/(x^{n+1})\) on \(M[x]/(x^{n+1})\) is given by \[(\sum_{i=0}^{n}\lambda_{i}x^{i}).(\sum_{j=0}^{n}a_{j}x^{j})=\sum_{\mu=0}^{n}c_{\mu}x^{\mu},\] where \(c_{\mu}=\sum_{i+j=\mu}\lambda_{i}a_{j}\), for all \(\lambda_{i}\in A\), \(a_{j}\in M\). Every nonzero element \(\beta\) of \(M[x]/D_{n+1}\) can be written uniquely in the form \(\sum_{i=k}^{n}m_{i}x^{i}\) with \(n\geq k\geq 0\), \(m_{i}\in M\), \(m_{k}\neq 0\). In this case, we refer to \(k\) as the order of \(\beta\) and to \(m_{k}\) as the initial coefficient of \(\beta\). The \(A[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\)-module \(M[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\) is defined in the same way.

**Lemma 12.1**. [20, Lemma 2.1] Let \(M\) be an \(A\)-module and \(K\ll M\). Then \(K[x]/(x^{n+1})\ll M[x]/(x^{n+1})\) as \(A[x]/(x^{n+1})\)-modules, where \(n\geq 0\).

**Lemma 12.2**. [50, Lemma 1.7] Let \(N\) be an \(A\)-submodule of \(M\). Then the following assertions are equivalent:

1. \(N\) is essential in \(M\) as an \(A\)-module.
2. \(N[x]\) is essential in \(M[x]\) as an \(A[x]\)-module.
3. \(N[x]/(x^{n+1})\) is essential in \(M[x]/(x^{n+1})\) as an \(A[x]/(x^{n+1})\)-module.

**Theorem 12.3**. [11] Let \(M\) be an \(A\)-module. If \(M[x]/(x^{n+1})\) is a weakly Hopfian \(A[x]/(x^{n+1})\)-module, then \(M\) is a weakly Hopfian \(A\)-module.

**Theorem 12.4**. [11] Let \(M\) be an \(A\)-module. If \(M[x]/(x^{n+1})\) is a generalized co-Hopfian \(A[x]/(x^{n+1})\)-module, then \(M\) is a generalized co-Hopfian \(A\)-module.

**Theorem 12.5**. [11] Let \(M\) be an \(A\)-module. If \(M[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\) is a weakly Hopfian (resp. generalized co-Hopfian) \(A[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\)-module, then \(M\) is a weakly Hopfian (resp. generalized co-Hopfian) \(A\)-module.

**Lemma 12.6**. [14] Let \(M\) be an \(A\)-module and let \(N\) be a submodule of \(M\). If \(N[x]/(x^{n+1})\) is a direct summand of \(M[x]/(x^{n+1})\), then \(N\) is a direct summand of \(M\).

**Theorem 12.7**. [14] Let \(M\) be an \(A\)-module. If \(M[x]/(x^{n+1})\) is a semi Hopfian \(A[x]/(x^{n+1})\)-module, then \(M\) is a semi Hopfian \(A\)-module.

**Theorem 12.8**. [14] Let \(M\) be an \(A\)-module. If \(M[x]/(x^{n+1})\) is a semi co-Hopfian \(A[x]/(x^{n+1})\)-module, then \(M\) is a semi co-Hopfian \(A\)-module.

**Theorem 12.9**. [14] Let \(M\) be an \(A\)-module. If \(M[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\) is a semi Hopfian (resp. semi co-Hopfian) \(A[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\)-module, then \(M\) is a semi Hopfian (resp. semi co-Hopfian) \(A\)-module.

**Lemma 12.10**. [12] Let \(M\) be an \(A\)-module and \(K\ll_{\mu}M\). Then \(K[x]/(x^{n+1})\ll_{\mu}M[x]/(x^{n+1})\) as \(A[x]/(x^{n+1})\)-modules, where \(n\geq 0\).

**Theorem 12.11**. [12] Let \(M\) be an \(A\)-module. Then \(M[x]/(x^{n+1})\) is a \(\mu\)-Hopfian \(A[x]/(x^{n+1})\)-module if and only if \(M\) is a \(\mu\)-Hopfian \(A\)-module.

**Corollary 12.12**. [12] Let \(M\) be an \(A\)-module. Then \(M[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\) is a \(\mu\)-Hopfian \(A[x_{1},...,x_{k}]/(x_{1}^{n_{1}+1},...,x_{k}^{n_{k}+1})\)-module if and only if \(M\) is a \(\mu\)-Hopfian \(A\)-module.
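As a concrete illustration of the truncated multiplication rule defining \(M[x]/(x^{n+1})\) above, here is a minimal sketch (not from the surveyed papers) taking \(A=M=\mathbb{Z}\), so that ordinary integer multiplication stands in for the module action:

```python
def truncated_product(coeffs_a, coeffs_m, n):
    """Multiply an element of A[x]/(x^{n+1}) by an element of M[x]/(x^{n+1}).

    Polynomials are coefficient lists [c_0, c_1, ...]; every term of degree
    greater than n is discarded, mirroring the quotient by I_{n+1} (resp. D_{n+1}).
    Here A = M = Z, so plain integer arithmetic plays the role of the action.
    """
    result = [0] * (n + 1)
    for i, lam in enumerate(coeffs_a):
        for j, a in enumerate(coeffs_m):
            if i + j <= n:  # c_mu = sum over i+j = mu of lambda_i * a_j, mu <= n
                result[i + j] += lam * a
    return result

# (1 + 2x + 3x^2) * x in Z[x]/(x^3): the 3x^3 term is truncated away.
print(truncated_product([1, 2, 3], [0, 1], 2))  # -> [0, 1, 2]
```

The same routine applies verbatim over any commutative coefficient ring whose elements support `+` and `*`.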
arXiv:2308.09973 — "Combinatorial isoperimetric inequality for the free factor complex" — Radhika Gupta — published 2023-08-19 — http://arxiv.org/abs/2308.09973v1

Abstract: We show that the free factor complex of the free group of rank at least 3 does not satisfy a combinatorial isoperimetric inequality: that is, for every \(N\geq 3\), there is a loop of length 4 in the free factor complex that only bounds discs containing at least \(O(N)\) triangles. To prove the result, we construct a coarsely Lipschitz function from the "upward link" of a free factor to the set of integers.
# Combinatorial isoperimetric inequality for the free factor complex ###### Abstract. We show that the free factor complex of the free group of rank \(n\geq 3\) does not satisfy a combinatorial isoperimetric inequality: that is, for every \(N\geq 3\), there is a loop of length \(4\) in the free factor complex that only bounds discs containing at least O(N) triangles. To prove the result, we construct a coarsely Lipschitz function from the 'upward link' of a free factor to \(\mathbb{Z}\). ## 1. Introduction Webb [20] showed that arc complexes associated to almost all hyperbolic surfaces do not admit a \(\operatorname{CAT}(0)\)_metric with finitely many shapes_. That is, they do not admit \(\operatorname{CAT}(0)\) metrics which have finitely many isometry types of simplices, in the induced metric. Moreover, he showed that the same is true for the free splitting complex and the cyclic splitting complex associated to the outer automorphism group \(\operatorname{Out}(\mathbb{F})\) of a free group \(\mathbb{F}\) of rank \(n\geq 6\). He proved this by showing that none of the above complexes satisfy a _combinatorial isoperimetric inequality_. In contrast, he showed that the curve complex of a hyperbolic surface satisfies a _linear_ combinatorial isoperimetric inequality. In this article, we show that the _free factor complex_ follows suit with the other two hyperbolic \(\operatorname{Out}(\mathbb{F})\)-complexes. Thus still leaving open the question: is there a cocompact complex for \(\operatorname{Out}(\mathbb{F})\), analogous to the curve complex, that satisfies a linear combinatorial isoperimetric inequality? The free factor complex \(\mathcal{F}_{n}\) associated to \(\mathbb{F}\) is the simplicial complex whose vertices are conjugacy classes of proper free factors of \(\mathbb{F}\) and a collection \(\{A_{1},\ldots,A_{k}\}\) of vertices spans a simplex if \(A_{1}\subset A_{2}\subset\cdots\subset A_{k}\), where the inclusions are up to conjugation. 
We show: **Theorem A**.: _Let \(\mathcal{F}_{n}\) be the free factor complex of free group of rank \(n\geq 3\). There exists a family of loops \(c_{N}\), for \(N\in\mathbb{N}\), of combinatorial length 4 in \(\mathcal{F}_{n}^{(1)}\) such that the following holds: whenever \(P\) is a triangulation of a disc and \(f\colon P\to\mathcal{F}_{n}^{(2)}\) is a simplicial map with \(f|_{\partial P}\) mapping bijectively onto \(c_{N}\), then \(P\) must have at least O(N) triangles._ As a direct consequence of Webb's main result [20, Theorem 1.1], we get **Corollary 1.1**.: _For \(n\geq 3\), \(\mathcal{F}_{n}\) does not admit a \(\operatorname{CAT}(0)\) metric with finitely many shapes._ We refer the reader to [20] for motivation to study \(\operatorname{CAT}(0)\) metrics on these spaces. The proof strategy for the theorem is as follows. For any \(N>0\), we construct an explicit loop \(c_{N}\) of length \(4\) with vertices \(A_{0},A,A_{N},B\), where \(A\) is a rank one free factor. See Figure 1. We show that any path between \(A_{0}\) and \(A_{N}\) in the link of \(A\) has length bounded from below by O(N). We do this by defining (see Proposition 3.6) a coarsely Lipschitz function from the link of \(A\) to \(\mathbb{Z}\). Using this, we conclude that at least O(N) triangles are needed to cap off any disc with boundary \(c_{N}\). In fact, in Section 3 we define a family of coarsely Lipschitz functions from the upward link, denoted \(\mathcal{F}^{\uparrow}(A)\), of a free factor \(A\) of corank at least \(3\) to \(\mathbb{Z}\). Choose a complementary free factor \(B\), a cyclically reduced filling element \(w\) in \(B\) that is not a power, and a primitive element \(b\) in \(B\). Then we define \(\Psi_{w,b,B}\colon\mathcal{F}^{\uparrow}(A)\to\mathbb{Z}\) to roughly measure how much \(b\) gets conjugated by \(w\), in other words how much \(b\) twists about \(w\), in a basis of \(\mathbb{F}\) that contains a basis of \(X\in\mathcal{F}^{\uparrow}(A)\) as a subbasis. 
Proposition 3.5 is the main technical lemma proving the coarse Lipschitzness of the above function. It uses the 'active interval lemma' [1] and an understanding of unfolding paths [1] in relative outer space. _Acknowledgments._ The author would like to thank Richard Webb for asking her this question and for useful discussions. She is grateful to Mladen Bestvina for sharing his ideas and she would also like to thank Jean Pierre Mutanguha for reading a draft. The author was supported by the Department of Atomic Energy, Government of India, under project no. 12-R&D-TIFR-5.01-0500 and also by an endowment of the Infosys Foundation. ## 2. Background ### Combinatorial isoperimetric inequalities We recall some definitions from [20]. Let \(P\) and \(K\) be simplicial complexes. A _simplicial map_ is a map \(c\colon P\to K\) that sends each simplex of \(P\) to a simplex of \(K\) by a linear map taking vertices to vertices. Let \(K^{(i)}\) denote the \(i\)-skeleton of \(K\). A _combinatorial loop_ \(c\) in \(K\) is a sequence of vertices \((v_{1},\dots,v_{k})\) in \(K\) where \(v_{k}\) is adjacent (or equal) to \(v_{1}\) and \(v_{i}\) is adjacent (or equal) to \(v_{i+1}\) for \(1\leq i\leq k-1\). The _combinatorial length_ \(l_{C}(c)\) of \(c\) is equal to \(k\). A combinatorial loop \(c\) of combinatorial length \(k\) can also be described as a simplicial map \(c\colon P\to K\), where \(P\) is a triangulation of \(S^{1}\) with \(k\) \(1\)-simplices. Let \(D^{2}\) be the closed unit disc with boundary \(S^{1}\). We say that a combinatorial loop \(c\) can be _capped off with at most \(m\) triangles_ if there is a triangulation \(P\) of \(D^{2}\) into at most \(m\) \(2\)-simplices and a simplicial map \(c^{\prime}\colon P\to K\) such that \(c^{\prime}|_{S^{1}}=c\). A function \(f\colon\mathbb{N}\to\mathbb{N}\) is called a _combinatorial isoperimetric bound_ for \(K\) if every combinatorial loop \(c\) in \(K\) can be capped off with at most \(f(l_{C}(c))\) many triangles. 
We say \(K\) satisfies a _linear combinatorial isoperimetric inequality_ if there exists a combinatorial isoperimetric bound \(f\) for \(K\) such that \(f(n)=\mathrm{O}(n)\). We say \(K\) _satisfies no combinatorial isoperimetric inequality_ if no combinatorial isoperimetric bound for \(K\) exists. Figure 1. The loop \(c_{N}\). ### Outer space We say an \(\mathbb{R}\)-tree is an \(\mathbb{F}\)-tree if it admits a non-trivial action of \(\mathbb{F}\). We denote by \(\mathcal{O}\) the (unprojectivized) Outer space of \(\mathbb{F}\), defined in [10], consisting of minimal, metric, simplicial \(\mathbb{F}\)-trees up to equivariant isometry. Let \(A\) be a free factor of \(\mathbb{F}\). Let \(\mathcal{O}(\mathbb{F},A)\) be the unprojectivized _Outer space relative to \(A\)_ ([12]), that is, the space of minimal, metric, simplicial \(\mathbb{F}\)-trees with trivial edge stabilizers and vertex stabilizers conjugates of \(A\), up to equivariant isomorphism. Let \(\mathbb{P}\mathcal{O}(\mathbb{F},A)\) denote the projectivized outer space relative to \(A\). For \(T\in\mathcal{O}(\mathbb{F},A)\), the _covolume of \(T\)_ is the sum of the lengths of edges of \(T/\mathbb{F}\), which is a finite graph. Then we think of points in \(\mathbb{P}\mathcal{O}(\mathbb{F},A)\) as covolume \(1\) trees. Let \(\mathcal{FS}(\mathbb{F},A)\) be the free splitting graph relative to \(A\) ([13]). A vertex is given by a minimal, simplicial \(\mathbb{F}\)-tree \(T\), without the metric, with trivial edge stabilizers and such that \(A\) is elliptic in \(T\), up to equivariant isomorphism. Two such trees \(T_{1},T_{2}\) are joined by an edge if there is an equivariant collection of edges in \(T_{1}\) that can be collapsed to obtain \(T_{2}\). There is a natural function from \(\mathbb{P}\mathcal{O}(\mathbb{F},A)\) to \(\mathcal{FS}(\mathbb{F},A)\), given by forgetting the metric on the tree, such that the image is contained in the vertex set of \(\mathcal{FS}(\mathbb{F},A)\). 
The _vertex group system_ of a free splitting of \(\mathbb{F}\) is the collection of conjugacy classes of its vertex stabilizers. ### Train track structure and morphism Let \(\Gamma\) be a simplicial \(\mathbb{F}\)-tree. A direction \(d\) based at \(p\in\Gamma\) is a component of \(\Gamma-\{p\}\). A turn is an unordered pair of directions based at the same point. An _illegal turn structure_ on \(\Gamma\) is an equivalence relation on the set of directions at each point \(p\in\Gamma\). The classes of this relation are called _gates_. A turn \((d,d^{\prime})\) is _legal_ if \(d\) and \(d^{\prime}\) do not belong to the same gate. If in addition there are at least two gates at every vertex of \(\Gamma\), then the illegal turn structure is called a _train track structure_. A path is legal if it only crosses legal turns. Given two \(\mathbb{F}\)-trees \(\Gamma\) and \(\Gamma^{\prime}\), an \(\mathbb{F}\)-equivariant map \(f\colon\Gamma\to\Gamma^{\prime}\) is called a _morphism_ if every segment of \(\Gamma\) can be subdivided into finitely many subintervals such that \(f\) is an isometry when restricted to each subinterval. A morphism between \(\mathbb{F}\)-trees induces a _train track structure_ on the domain \(\Gamma\). A morphism is called _optimal_ if there are at least two gates at each point of \(\Gamma\). See [1, 1] for more details. ### Folding Let \(T,T^{\prime}\in\mathcal{O}(\mathbb{F},A)\). A _folding path with its natural parametrization_\((T_{t})_{t\in\mathbb{R}^{+}}\), guided by an optimal morphism \(f\colon T\to T^{\prime}\) can be defined as follows (see [1, Section 2] and [12, Section 3]): Given \(a,b\in T\) with \(f(a)=f(b)\), the _identification time_ of \(a\) and \(b\) is defined as \(\tau(a,b)=\sup_{x\in[a,b]}d_{T^{\prime}}(f(x),f(a))\). Define \(L:=\frac{1}{2}\operatorname{BBT}(f)\), where BBT is the bounded backtracking constant for \(f\). 
For each \(t\in[0,L]\), one defines an equivalence relation \(\sim_{t}\) by \(a\sim_{t}b\) if \(f(a)=f(b)\) and \(\tau(a,b)<t\). The tree \(T_{t}\) is then a quotient of \(T\) by the equivalence relation \(\sim_{t}\). The authors of [12] prove that for each \(t\in[0,L]\), \(T_{t}\) is an \(\mathbb{R}\)-tree. The collection of trees \((T_{t})_{t\in[0,L]}\) comes equipped with \(\mathbb{F}\)-equivariant morphisms \(f_{s,t}\colon T_{t}\to T_{s}\) for all \(t<s\) and these maps satisfy the semi-flow property: for all \(r<s<t\), we have \(f_{t,s}\circ f_{s,r}=f_{t,r}\). Moreover \(T_{L}=T^{\prime}\) and \(f_{L,0}=f\). Let \(\overline{T},\overline{T^{\prime}}\) be the covolume \(1\) representatives of \(T\) and \(T^{\prime}\) in \(\mathbb{P}\mathcal{O}(\mathbb{F},A)\). Then a folding path \((S_{t})_{t}\) between them is the projection of the folding path between \(T\) and \(T^{\prime}\). In other words, \(S_{t}\) is obtained by rescaling \(T_{t}\) to covolume \(1\). **Lemma 2.1** ([1, Lemma 4.1, Corollary A.3]).: _Let \(f\colon T\to T^{\prime}\) be an optimal morphism between two points \(T,T^{\prime}\) in \(\mathbb{P}\mathcal{O}(\mathbb{F},A)\). Then \(d_{\operatorname{FS}(\mathbb{F},A)}(T,T^{\prime})\) is bounded above by a linear function of cardinality of \(f^{-1}(y)\) for any point \(y\in T^{\prime}\)._ ### Unfolding In [1], Bestvina and Feighn analyzed in detail folding and unfolding paths in Outer space \(\mathcal{O}\) of a free group. The latter are folding paths traversed in reverse. The following is a relative version of their result and is used in the proof of the active interval lemma in the next section. **Theorem 2.2** (cf. [1, Theorem 4.2]).: _Let \(T_{t}\), \(t\in[0,\delta]\) be a folding path in \(\mathcal{O}(\mathbb{F},A)\) with its natural parametrization. 
There is a partition \(0=t_{0}<t_{1}<\cdots<t_{N}=\delta\) of \([0,\delta]\) such that the restriction of \(T_{t}\) to each \([t_{i},t_{i+1}]\) is given by folding an orbit of a gadget in \(T_{t}\)._ We briefly describe why the proof of the above result in Outer space works in relative outer space as well. Let \(N\) be a small neighborhood of a vertex \(v\) in \(T_{t}\). Then \(N\) is a wedge of (infinitely many) arcs with wedge point \(v\). The above theorem says that for small \(\epsilon\), the pre-image \(N_{\epsilon}\) of \(N\) in \(T_{t-\epsilon}\) is a tree that comes with a height function and a collection of _widgets_ (see Figure 2 and [1, Definition 4.1]). A widget is topologically a cone on finitely many points and a _gadget_ is a finite union of widgets. In \(N_{\epsilon}\), the folding happens at vertices with two gates where one gate is a single direction. The picture resembles the scenario when we are in the middle of folding a collection of edges based at a vertex in Stallings folding process. Note that in the case of a folding path in Outer space of free group we get a collection of finitely many widgets in \(N_{\epsilon}\), which is used essentially in the proof of [1, Theorem 4.2]. That will no longer be true for folding paths in relative outer space as the trees have infinite valence vertices. However, it is still true that \(N_{\epsilon}\) will be a collection of finitely many _orbits_ of widgets, which are topologically still cones on finitely many points. This is because edge stabilizers are trivial in \(T_{t}\) and hence the pre-image of an edge under folding is a finite collection of edges. Thus the unfolding process in relative outer space looks basically the same as that in Outer space. ### Active intervals lemma Let \(T\in\mathbb{PO}(\mathbb{F},A)\) with a train track structure. Let \(w\in\mathbb{F}\) be a cyclically reduced element that is not a power of another element and it is not contained in \(A\), up to conjugation. 
The minimal subtree of \(w\) in \(T\) is the axis of \(w\) and the core of \(T/\langle w\rangle\) is a circle, denoted \(C_{w}(T)\). Since the quotient map \(q\colon T\to T/\langle w\rangle\) does not collapse any edges, the train track structure on \(T\) induces a train track structure on \(T/\langle w\rangle\). The restriction of the train track structure (resp. metric) on \(T\) to the axis of \(w\) defines a train track structure (resp. metric) on \(C_{w}(T)\) as well. Figure 2. Widget **Lemma 2.3** (cf. [1, Lemma 3.2]).: _Given a folding path \(T_{t}\) in \(\mathbb{PO}(\mathbb{F},A)\), \(t\in[\alpha,\delta]\) and a cyclically reduced word \(w\in\mathbb{F}\) not contained in any conjugate of \(A\), the interval \([\alpha,\delta]\) can be subdivided into three segments \([\alpha,\beta),[\beta,\gamma),[\gamma,\delta]\) so that:_ 1. _for every_ \(t\) _in the first segment_ \([\alpha,\beta)\)_,_ \(C_{w}(T_{t})\) _has an illegal turn,_ 2. _for every_ \(t\) _in the middle segment_ \([\beta,\gamma)\)_,_ \(C_{w}(T_{t})\) _has volume at most 2, and there are no illegal turns,_ 3. _for every_ \(t\) _in the last segment_ \([\gamma,\delta]\)_,_ \(C_{w}(T_{t})\) _has a legal segment of length at least 2._ For completeness, we include a proof which is essentially the same as the original proof in the setting of Outer space. Proof.: Let \(J\) be the set of times \(t_{0}\) in \([\alpha,\delta]\) such that \(C_{w}(T_{t_{0}})\) contains an illegal turn. We first show that \(J\) is an open set. Let \(t_{0}\) be in \(J\). If \(t_{0}\neq\delta\), then it is clear from the definition of folding that \([t_{0},t_{0}+\epsilon)\subset J\) for small \(\epsilon>0\). We need to show that \((t_{0}-\epsilon,t_{0}]\) is also in \(J\). We will show that \(C_{w}(T_{t_{0}-\epsilon})\) also has an illegal turn. Let \(v\) be a vertex of \(C_{w}(T_{t_{0}})\) of valence two and one gate. Let \(\overline{T}_{t}\) be the quotient of \(T_{t}\) by the action of \(w\). 
Let \(N\) be a small neighborhood of \(v\) in \(\overline{T}_{t_{0}}\). Also let \(\overline{N}_{\epsilon}\) be the pre-image of \(N\) under the induced map \(\overline{T}_{t_{0}-\epsilon}\to\overline{T}_{t_{0}}\). All the edges in a widget in \(\overline{N}_{\epsilon}\) map to a small segment with one end point \(v\). If this segment is in \(C_{w}(T_{t_{0}})\), then the widget is called _essential_, and _inessential_ otherwise. Similarly, an edge below height 0 in \(\overline{N}_{\epsilon}\) is essential if its image is in \(C_{w}(T_{t_{0}})\) and inessential otherwise. In our case, there are two essential edges and widgets in \(\overline{N}_{\epsilon}\). Let \(\overline{N}^{\prime}_{\epsilon}\) be the hull of all essential edges and widgets, together with edges above essential widgets. Using the observations made in the detailed proof of [1, Lemma 3.2], we get three possibilities for \(\overline{N}^{\prime}_{\epsilon}\) as shown in Figure 3. Since \(\overline{N}^{\prime}_{\epsilon}\) is contained in \(C_{w}(T_{t_{0}-\epsilon})\), we find an illegal turn in \(C_{w}(T_{t_{0}-\epsilon})\). Thus \(J\) is an open set. Moreover, \(J\) must be an initial segment. Otherwise, there would be a \(t_{0}\notin J\) but \((t_{0},t_{0}+\epsilon)\subset J\). However, a local fold cannot produce an illegal turn at a valence two vertex. To finish the proof, we have to consider the last segment. Suppose that \(C_{w}(T_{t_{0}})\) has a legal segment of length at least 2. For this argument, use the natural parametrization of folding paths and work in unprojectivized relative outer space. After time \(\epsilon\), the covolume of \(T_{t_{0}+\epsilon}\) is at most \(1-\epsilon\). The legal segment in \(C_{w}(T_{t_{0}})\) of length \(L\geq 2\) may lose a piece of length \(\epsilon\) at each end. 
Therefore, after rescaling by at least \(1/(1-\epsilon)\) to bring back the covolume of \(T_{t_{0}+\epsilon}\) to 1, the length of the legal segment is at least \((L-2\epsilon)/(1-\epsilon)\), which is at least 2 for \(L\geq 2\). ## 3. Coarsely Lipschitz function to \(\mathbb{Z}\) Let \(A\) be a free factor of \(\mathbb{F}\) and let \(B\) be a complementary free factor, that is, \(\mathbb{F}=A*B\). Choose a filling element \(w\in B\), that is, \(w\) is not contained in any proper free factor of \(B\). Also choose \(w\) such that it is cyclically reduced and not a power of another element. Choose \(b\in B\) primitive, that is, a basis element. Let \(\mathcal{F}^{\uparrow}(A)\) be the subcomplex of \(\mathcal{F}_{n}\) whose vertices are given by conjugacy classes of free factors that properly contain \(A\) up to conjugation. In other words, \(\mathcal{F}^{\uparrow}(A)\) is the 'upward link' of the vertex \(A\) in \(\mathcal{F}_{n}\). The _corank_ of \(A\) is the rank of any complementary free factor of \(A\). In this section, we will define a coarsely Lipschitz function \(\Psi_{w,b,B}\colon\mathcal{F}^{\uparrow}(A)\to\mathbb{Z}\) and use it to prove our main theorem in the next section. **Lemma 3.1**.: _If corank of \(A\) is at least 3, then \(\mathcal{F}^{\uparrow}(A)\) is connected._ Proof.: Let \(G\) be a marked metric graph in \(\mathbb{P}\mathcal{O}\) such that \(G\) has a subgraph \(H\) with fundamental group \(A\). Let \(v\) be a vertex of this subgraph. Let \(P\) and \(Q\) be two subgraphs of \(G\) that properly contain \(H\) and have rank at least \(1+\operatorname{rank}(A)\). Then they determine two free factors \(\dot{P}\) and \(\dot{Q}\) in \(\mathcal{F}^{\uparrow}(A)\). We claim that \(d_{\mathcal{F}^{\uparrow}(A)}(\dot{P},\dot{Q})\leq 4\) (see [1, Section 3]). Indeed, let \(P^{\prime}\), respectively \(Q^{\prime}\), be a subgraph of \(G\) containing \(P\), respectively \(Q\), and all edges of \(G\) except one. 
Since \(G\backslash H\) has rank at least \(3\), \(P^{\prime}\cap Q^{\prime}\) contains \(H\) and another circle based at \(v\). Let \(R\) be the union of \(H\) and this circle. Then \(\dot{P},\dot{P}^{\prime},\dot{R},\dot{Q}^{\prime},\dot{Q}\) is a path of length \(4\) in \(\mathcal{F}^{\uparrow}(A)\). Let \(\Pi(G)\) denote the collection of free factors in \(\mathcal{F}^{\uparrow}(A)\) arising from subgraphs of \(G\) containing \(H\). Then we just showed that \(\Pi(G)\) is a connected set of diameter at most \(4\). Now for \(X,Y\in\mathcal{F}^{\uparrow}(A)\), let \(G_{X}\) and \(G_{Y}\) be two marked metric graphs in \(\mathbb{P}\mathcal{O}\) such that they both have subgraphs with fundamental group \(A\) and \(X\in\Pi(G_{X})\), \(Y\in\Pi(G_{Y})\). By [1, Proposition 2.5], there is a folding path between \(G_{X}\) and \(G_{Y}\) with an invariant subgraph of fundamental group \(A\). We claim that the folding path maps to a path connecting \(X\) and \(Y\) in \(\mathcal{F}^{\uparrow}(A)\). We may assume that the folding path is obtained by a continuous parametrization of Stallings foldings (see [1, Section 2.2A]). If one performs an elementary Stallings fold on \(G_{X}\), that is fold two edges to obtain a new marked graph \(G^{\prime}\), then \(\Pi(G_{X})\) and \(\Pi(G^{\prime})\) intersect non-trivially. Thus we can produce a path joining \(X\) and \(Y\) in \(\mathcal{F}^{\uparrow}(A)\). Let \(X\) be the conjugacy class of a free factor in \(\mathcal{F}^{\uparrow}(A)\). For any \(T\in\mathbb{P}\mathcal{O}(\mathbb{F},X)\), let \(e\) denote the base point, that is, the vertex fixed by \(A\). For \(g\in\mathbb{F}\), let \(\mathrm{a}_{T}(g)\) denote the axis or fixed point set of \(g\) acting on \(T\) and \(\tau_{T}(g)\) denote the translation length. 
**Lemma 3.2**.: _The element \(w\) is a hyperbolic isometry of any \(T\in\mathbb{P}\mathcal{O}(\mathbb{F},X)\), where \(X\in\mathcal{F}^{\uparrow}(A)\)._ Proof.: By the Kurosh subgroup theorem, the intersection of two free factors of \(\mathbb{F}\) is a free factor. Therefore, \(X\cap B\), if non-trivial, is a free factor of \(\mathbb{F}\) and in particular of \(B\) and \(X\). Suppose some conjugate of \(w\) is contained in \(X\); then, up to conjugation, \(w\) would be contained in a proper free factor of \(B\), contradicting the fact that \(w\) is filling in \(B\). Therefore, \(w\) is not contained in \(X\) up to conjugation. Hence, \(w\) is hyperbolic in \(T\). Recall \(b\in B\) is a fixed primitive element. Let \(L_{T}(b)\) be the shortest path from \(e\) to \(\mathrm{a}_{T}(b)\) in \(T\). We will call it the _leg of \(b\)_ in \(T\). Let \[\Phi_{w,b,B}(T):=\left\lceil\frac{\mathrm{diam}_{T}(L_{T}(b)\cap\mathrm{a}_{T}(w))}{\tau_{T}(w)}\right\rceil.\] Informally, \(\Phi_{w,b,B}(T)\) is the number of fundamental domains of \(w\) crossed by the leg of \(b\) in \(T\). See Figure 4 for an illustration of the definition. For comparison, Clay-Pettet [1] define the _relative twist of \(T\) and \(b\) relative to \(w\)_ as the supremum of the number of fundamental domains of \(w\) in the intersection of the axis of \(w\) and axes of all conjugates of \(b\) in \(T\). In our setting, this number is one since \(b\in B\) is primitive and \(w\) is filling in \(B\). **Example 3.3**.: This example is to illustrate the setup in the above definition. Let \(\mathbb{F}=\langle a,b,c,d\rangle\) and \(w=c^{2}b^{2}d^{2}c^{2}\). It is easy to check using Whitehead's algorithm that \(w\) is filling in \(\langle b,c,d\rangle\). Now consider a new basis \(\{a,x,c,d\}\) where \(b=d^{-1}c^{-1}xcd\). Then in the new basis \(w=c^{2}d^{-1}c^{-1}x^{2}cd^{3}c^{2}\). 
Let \(T\) be the Bass-Serre tree of a graph of groups with one vertex group \(\langle a\rangle\) and three loops labeled \(x,c,d\). Then in \(T\), the axes of \(w\) and \(b\) are disjoint. **Example 3.4**.: Let \(\mathbb{F}=\langle a,b,c,d\rangle\) and \(w=b^{2}c^{2}d^{2}\in\langle b,c,d\rangle\). It is easy to check using Whitehead's algorithm that \(w\) is filling in \(\langle b,c,d\rangle\). Let \(A=\langle a\rangle,B=\langle b,c,d\rangle\) and let \(X=[\langle a,b\rangle]\) and \(Y=[\langle a,w^{N}bw^{-N}\rangle]\) be two points in \(\mathcal{F}^{\uparrow}(A)\) for some \(N>0\). Let \(T_{X}\) be the Bass-Serre tree of the graph of groups with one vertex stabilized by \(X\) and two loops labeled by \(c\) and \(d\). Similarly, let \(T_{Y}\) be the Bass-Serre tree of the graph of groups with one vertex stabilized by \(Y\) and two loops labeled \(w^{N}cw^{-N}\) and \(w^{N}dw^{-N}\). Then \(\Phi_{w,b,B}(T_{X})=0\) and \(\Phi_{w,b,B}(T_{Y})=N\). **Proposition 3.5**.: _There exists a constant \(C>0\), depending only on the rank \(n\) of \(\mathbb{F}\), such that the following holds: For \(X\in\mathcal{F}^{\uparrow}(A)\), let \(T\) and \(T^{\prime}\) be two free splittings in \(\mathcal{FS}(\mathbb{F},X)\) with vertex group systems equal to \(X\). Then \(|\Phi_{w,b,B}(T)-\Phi_{w,b,B}(T^{\prime})|\leq C\)._ Proof.: First suppose that \(T\) and \(T^{\prime}\) are distance one apart in \(\mathcal{FS}(\mathbb{F},X)\), that is, there is a collapse map from \(T\) to \(T^{\prime}\). Since \(w\) is hyperbolic in both \(T\) and \(T^{\prime}\), the collapse map changes the number of fundamental domains of \(w\) crossed by the leg of \(b\) by at most one. Therefore, \(|\Phi_{w,b,B}(T)-\Phi_{w,b,B}(T^{\prime})|\leq 1\). 
Now consider \(T^{\prime}\) as a point in \(\mathbb{P}\mathcal{O}(\mathbb{F},X)\), with some metric and, up to changing \(T\) to another tree in a simplex in \(\mathbb{P}\mathcal{O}(\mathbb{F},X)\) containing \(T\), consider an optimal morphism \(f\colon T\to T^{\prime}\), where \(T\) has the pull back metric. Let \((T_{t})_{t\in[0,\delta]}\) be a folding path with natural parametrization guided by \(f\) where \(T_{0}=T\) and \(T_{\delta}=T^{\prime}\). Note that \(C_{w}(T_{t})\) is a loop. Let \(\mathrm{a}_{t}(b)\) and \(\mathrm{a}_{t}(w)\) denote the axis of \(b\) and \(w\) in \(T_{t}\) and let \(\tau_{t}(w)\) be the translation length of \(w\) in \(T_{t}\). By Lemma 2.3, the folding path \((T_{t})_{t}\) can be broken into three intervals. * For every \(t\) in the first interval, \(C_{w}(T_{t})\) has an illegal turn. Therefore, the only folding between edges of \(\mathrm{a}_{t}(w)\) happens at these illegal turns. Let \(E\) be any edge of \(T_{t}\) not in \(\mathrm{a}_{t}(w)\). Under the folding map, \(E\) cannot fold over \(\mathrm{a}_{t}(w)\) for a length more than \(\tau_{t}(w)\) because \(C_{w}(T_{t})\) has an illegal turn. * For every \(t\) in the last interval, \(C_{w}(T_{t})\) has a legal segment of length at least \(2\). Then any edge of \(T_{t}\) again cannot fold over \(\mathrm{a}_{t}(w)\) for a length more than \(\tau_{t}(w)\). We claim that \(|\Phi_{w,b,B}(T_{t})-\Phi_{w,b,B}(T_{s})|\) is uniformly bounded for \(s,t\) both in the first interval or both in the last interval. Let \(f_{s,t}\colon T_{t}\to T_{s}\) be the morphism induced by the folding path. For \(T_{t}\), let \(G_{t}\) denote the quotient graph under the action of \(\mathbb{F}\). The fundamental group of \(G_{t}\), as a graph of groups, is a free product \(X*F_{m}\), where \(m\) is the corank of \(X\). Choose a maximal tree in \(G_{t}\). Then the edges not in this maximal tree correspond to a basis for \(F_{m}\). Fix one such partial basis coming from \(G_{t}\) and one from \(G_{s}\). 
Since an edge of \(T_{t}\) folds over at most one fundamental domain of \(w\), we get that the image under \(f_{s,t}^{*}\) of a basis element of \(\pi_{1}(G_{t})\), as chosen above, acquires a suffix or prefix \(w^{N+1}\) for \(N\) at most the number of edges in a maximal tree in \(G_{t}\), which is at most \(2n-3\). Therefore, \(f_{s,t}^{*}\) conjugates \(b\) by \(w^{N+1}\), if at all, for uniformly bounded \(N\). This implies that the difference in the number of fundamental domains of \(w\) crossed by the leg of \(b\), in \(T_{t}\) and \(T_{s}\), is uniformly bounded. Lastly, * for any \(t\) in the middle segment, \(C_{w}(T_{t})\) is a legal loop of length at most two. In addition, we claim that it crosses the orbit of every edge of \(T_{t}\) at least twice. Indeed, if \(C_{w}(T_{t})\) does not cross the orbit of an edge \(E\), then \(w\) is contained in a proper free factor \(Y\) given by the fundamental group of \(G_{t}\backslash\{E\}\). Then \(w\) is contained in \(Y\cap B\), which is a proper free factor of \(B\), a contradiction. If \(C_{w}(T_{t})\) crosses the orbit of some edge only once, then \(w\) will be a primitive element, again a contradiction. Thus, since \(C_{w}(T_{t})\) has length at most two and \(T_{t}\) has covolume one, \(C_{w}(T_{t})\) crosses the orbit of every edge exactly twice. Now for \(t<s\) in the middle segment, \(C_{w}(T_{t})\) is a legal loop that maps to another legal loop \(C_{w}(T_{s})\) and they cross the orbit of every edge of \(T_{t}\), resp. \(T_{s}\), exactly twice. Therefore, the pre-image of an edge of \(T_{s}\) is a single edge and by Lemma 2.1 they are a uniformly bounded distance apart in \(\mathcal{FS}(\mathbb{F},X)\). Now by the first paragraph of this proof, we get that \(|\Phi_{w,b,B}(T_{t})-\Phi_{w,b,B}(T_{s})|\) is uniformly bounded for \(t,s\) in the middle segment. Combining the behaviour in the three segments of the folding path from \(T\) to \(T^{\prime}\), we conclude that \(|\Phi_{w,b,B}(T)-\Phi_{w,b,B}(T^{\prime})|\) is uniformly bounded. 
We are now ready to define a coarsely Lipschitz function \(\mathcal{F}^{\uparrow}(A)\to\mathbb{Z}\). Choose a complementary free factor \(B\) (of \(A\)) of rank at least 3, and choose \(b,w\in B\) such that \(b\) is primitive in \(B\) and \(w\) is a cyclically reduced filling element in \(B\) that is not a power of another element. Define \(\Psi_{w,b,B}\colon\mathcal{F}^{\uparrow}(A)\to\mathbb{Z}\) by setting \(\Psi_{w,b,B}(X)\) equal to \(\Phi_{w,b,B}(T)\) for any \(T\in\mathcal{FS}(\mathbb{F},X)\) with vertex group system equal to \(X\in\mathcal{F}^{\uparrow}(A)\). **Proposition 3.6**.: _For a free factor \(A\) of \(\mathbb{F}\) of corank at least 3, the function \(\Psi_{w,b,B}\colon\mathcal{F}^{\uparrow}(A)\to\mathbb{Z}\) is coarsely Lipschitz._ Proof.: The function is coarsely well-defined by Proposition 3.5. Consider two vertices \(X\) and \(Y\) in \(\mathcal{F}^{\uparrow}(A)\) joined by an edge, with \(Y\subset X\), up to conjugation. Pick \(T_{X}\in\mathcal{FS}(\mathbb{F},X)\) with vertex group system \(X\) and \(T_{Y}\in\mathcal{FS}(\mathbb{F},Y)\) with vertex group system \(Y\), such that there is a proper subtree of \(T_{Y}\) which is collapsed to obtain \(T_{X}\). Since \(w\) is hyperbolic in both \(T_{X}\) and \(T_{Y}\), under the collapse map a fundamental domain of the axis of \(w\) in \(T_{Y}\) maps to a unique non-degenerate fundamental domain of the axis of \(w\) in \(T_{X}\). Thus \(|\Phi_{w,b,B}(T_{X})-\Phi_{w,b,B}(T_{Y})|\leq 1\) and hence \(\Psi_{w,b,B}\) is a coarsely Lipschitz function. ## 4. Proof of main theorem We are now ready to prove the main theorem. **Theorem 4.1**.: _Let \(\mathcal{F}_{n}\) be the free factor complex of the free group of rank \(n\geq 4\). 
There exists a family of loops \(c_{N}\) of combinatorial length 4 in \(\mathcal{F}_{n}^{(1)}\) such that the following holds: whenever \(P\) is a triangulation of a disc and \(f\colon P\to\mathcal{F}_{n}^{(2)}\) is a simplicial map with \(f|_{\partial P}\) mapping bijectively onto \(c_{N}\), then \(P\) must have at least \(\mathrm{O}(N)\) triangles._ Proof.: Let \(\mathbb{F}=\langle a,b,a_{3},\ldots,a_{n}\rangle\). Let \(w\in B=\langle b,a_{3},\ldots,a_{n}\rangle\) be cyclically reduced, filling in \(B\) and not a power of another element. For any \(N>0\), let \(c_{N}\) be the length 4 loop in \(\mathcal{F}_{n}\) with vertices \(A_{0}=[\langle a,b\rangle],A_{1}=[\langle a\rangle],A_{N}=[\langle a,w^{N}bw^{-N}\rangle],A_{2}=[\langle b\rangle]\) (see Figure 1). Let \(P\) be a triangulation of a disc \(D^{2}\) and \(c\colon P\to\mathcal{F}_{n}\) a simplicial map such that \(c|_{\partial D^{2}}=c_{N}\). Since \(A_{1}\) is a rank one free factor, \(\mathcal{F}^{\uparrow}(A_{1})\) is the full link of \(A_{1}\) in \(\mathcal{F}_{n}\). By Proposition 3.6, the function \(\Psi_{w,b,B}\colon\mathcal{F}^{\uparrow}(A_{1})\to\mathbb{Z}\) is coarsely Lipschitz and by Example 3.4, \(\Psi_{w,b,B}(A_{0})\) is coarsely 0 and \(\Psi_{w,b,B}(A_{N})\) is coarsely \(N\). Thus, the distance between \(A_{0}\) and \(A_{N}\) in the link of \(A_{1}\) is at least \(c_{1}N+c_{2}\) for some uniform constants \(c_{1},c_{2}>0\). Let \(x_{i}\) be the pre-image of \(A_{i}\) on the boundary of the disc \(D^{2}\). Then in \(P\), there is an edge path from \(x_{0}\) to \(x_{N}\) of length at least \(c_{1}N+c_{2}\). If we count every triangle thrice, then we count every edge at least once. Therefore, there are at least \((c_{1}N+c_{2})/3\) many triangles in \(P\). Thus, for arbitrary \(N>0\), the length 4 loop \(c_{N}\) requires at least \(\mathrm{O}(N)\) triangles to be capped off. 
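The filling hypotheses in Examples 3.3 and 3.4 (and in the proof above) were justified by an appeal to Whitehead's algorithm. As an informal sanity check, one can use the Whitehead-graph criterion: a cyclically reduced word of minimal length in its automorphism orbit fills the free group if its Whitehead graph is connected and has no cut vertex. The sketch below is not part of the paper's argument; it is a plain-Python illustration in which upper-case letters encode inverses, and minimality of the input word is assumed rather than verified.

```python
def whitehead_graph(word, letters):
    """Whitehead graph of a cyclic word: one vertex per letter and per
    inverse (written upper-case); each cyclic subword x.y contributes an
    edge {x^{-1}, y}."""
    verts = set(letters) | {x.upper() for x in letters}
    edges = set()
    for i in range(len(word)):
        x, y = word[i], word[(i + 1) % len(word)]
        edges.add(frozenset({x.swapcase(), y}))
    return verts, edges

def connected_without(verts, edges, removed=frozenset()):
    """Is the graph connected after deleting the vertices in `removed`?"""
    vs = sorted(verts - set(removed))
    if not vs:
        return True
    adj = {v: set() for v in vs}
    for e in edges:
        ends = set(e) - set(removed)
        if len(ends) == 2:  # ignore loops and edges with a deleted endpoint
            a, b = ends
            adj[a].add(b)
            adj[b].add(a)
    seen, stack = set(), [vs[0]]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return len(seen) == len(vs)

def fills(word, letters):
    """Whitehead-graph criterion: connected and no cut vertex.
    (Sufficient for filling only when `word` is cyclically reduced and of
    minimal length in its automorphism orbit -- assumed, not checked.)"""
    verts, edges = whitehead_graph(word, letters)
    return connected_without(verts, edges) and all(
        connected_without(verts, edges, {v}) for v in verts)

print(fills("bbccdd", "bcd"))    # w = b^2 c^2 d^2 from Example 3.4: True
print(fills("ccbbddcc", "bcd"))  # w = c^2 b^2 d^2 c^2 from Example 3.3: True
print(fills("bc", "bcd"))        # bc is primitive in <b,c,d>: False
```

For \(w=b^{2}c^{2}d^{2}\), the Whitehead graph is a single 6-cycle on \(\{b,b^{-1},c,c^{-1},d,d^{-1}\}\), which is connected with no cut vertex, matching the claim in Example 3.4.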
**The case \(n=3\).** Let \(\mathbb{F}_{3}=\langle a,b,c\rangle\), \(A=[\langle a\rangle]\) and \(w\) a filling, cyclically reduced element in \(\langle b,c\rangle\) that is not a power of another element. The complex \(\mathcal{F}^{\uparrow}(A)\) is not connected as a subset of \(\mathcal{F}_{3}\) since all the free factors containing \(A\) are of rank 2. We can define a different structure on \(\mathcal{F}^{\uparrow}(A)\) as follows: two free factors \([\langle a,x\rangle]\) and \([\langle a,y\rangle]\) in \(\mathcal{F}^{\uparrow}(A)\) are joined by an edge if \(\{a,x,y\}\) is a basis for \(\mathbb{F}_{3}\). An argument similar to that of Lemma 3.1, using folding paths, shows that \(\mathcal{F}^{\uparrow}(A)\) is connected. For \(X\in\mathcal{F}^{\uparrow}(A)\), there is only one free splitting \(T_{X}\) with vertex group system equal to \(X\), up to equivariant isomorphism. Therefore, we set \(\Psi_{w,b,B}(X)\) equal to \(\Phi_{w,b,B}(T_{X})\). Now we show that \(\Psi_{w,b,B}\colon\mathcal{F}^{\uparrow}(A)\to\mathbb{Z}\) is a Lipschitz function. Let \(X\) and \(Y\) be at distance one in \(\mathcal{F}^{\uparrow}(A)\); that is, we may assume \(X=[\langle a,x\rangle],Y=[\langle a,y\rangle]\) and \(\{a,x,y\}\) is a basis of \(\mathbb{F}_{3}\). Let \(T_{X}\) be the Bass-Serre tree of the graph of groups with vertex group \(X\) and edge labeled \(y\), and vice versa for \(T_{Y}\). Let \(T\) be a common refinement of \(T_{X}\) and \(T_{Y}\), described as the Bass-Serre tree of the graph of groups with one vertex stabilized by \([\langle a\rangle]\) and two loops labeled \(x\) and \(y\). Then there are collapse maps \(p_{X}\colon T\to T_{X}\) and \(p_{Y}\colon T\to T_{Y}\). Defining \(\Phi_{w,b,B}(T)\) as before, we see that \(|\Phi_{w,b,B}(T)-\Phi_{w,b,B}(T_{X})|\leq 1\) and the same for \(T_{Y}\), since \(w\) is hyperbolic in all three trees. Therefore, \(|\Phi_{w,b,B}(T_{Y})-\Phi_{w,b,B}(T_{X})|\leq 2\), which implies that \(\Psi_{w,b,B}\) is Lipschitz. 
Now to complete the proof of the main theorem, we observe that if \(X\) and \(Y\) are at distance one in \(\mathcal{F}^{\uparrow}(A)\), then they are at distance at most \(4\) in \(\mathcal{F}_{3}\). Therefore, the same argument as before shows that at least O(N) triangles are needed to cap off \(c_{N}\). The proof of Proposition 3.6 when \(A\) has corank \(2\) is similar to the \(n=3\) case outlined here. ## 5. Concluding remarks The free factor complex \(\mathcal{F}_{n}\) is quasi-isometric to another complex called the _complex of free factor systems_, denoted \(\mathcal{FF}_{n}\). The complex \(\mathcal{FF}_{n}\) is closely related to the simplicial closure of unreduced Outer space (see [1]). A free factor system of \(\mathbb{F}\) is a finite collection of the form \(\mathcal{A}=\{[A_{1}],\ldots,[A_{k}]\}\), where \(k>0\), each \(A_{i}\) is a proper, non-trivial free factor of \(\mathbb{F}\), such that there exists a free factorization \(\mathbb{F}=A_{1}*\cdots*A_{k}*F_{M}\). There is a partial ordering on the set of free factor systems given as follows: \(\mathcal{A}\sqsubseteq\mathcal{A}^{\prime}\) if for every \([A_{i}]\in\mathcal{A}\) there exists \([A^{\prime}_{j}]\in\mathcal{A}^{\prime}\) such that \(A_{i}\subseteq A^{\prime}_{j}\) up to conjugation. The vertices of \(\mathcal{FF}_{n}\) are free factor systems and two vertices \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are joined by an edge if \(\mathcal{A}_{1}\sqsubseteq\mathcal{A}_{2}\) or vice versa. Now recall the length \(4\) loop \(c_{N}\) in \(\mathcal{F}_{n}\). It can also be viewed as a loop in \(\mathcal{FF}_{n}\). However, in this complex, the vertex \(\{[\langle a\rangle],[\langle b\rangle]\}\) is connected to both \(A_{0}\) and \(A_{N}\) and hence the loops \(c_{N}\) can be capped off with \(4\) triangles. It would be interesting to know whether \(\mathcal{FF}_{n}\) satisfies some kind of combinatorial isoperimetric bound.
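To spell out the capping mentioned above (a short verification in the notation of the proof of Theorem 4.1): set \(\mathcal{A}=\{[\langle a\rangle],[\langle b\rangle]\}\), which is a free factor system since \(\mathbb{F}=\langle a\rangle*\langle b\rangle*\langle a_{3},\ldots,a_{n}\rangle\). Then

```latex
\{[\langle a\rangle]\}\sqsubseteq\mathcal{A}, \qquad
\{[\langle b\rangle]\}\sqsubseteq\mathcal{A}, \qquad
\mathcal{A}\sqsubseteq\{[\langle a,b\rangle]\}=A_{0}, \qquad
\mathcal{A}\sqsubseteq\{[\langle a,w^{N}bw^{-N}\rangle]\}=A_{N},
```

where the last relation holds because \(b=w^{-N}(w^{N}bw^{-N})w^{N}\) is contained in \(\langle a,w^{N}bw^{-N}\rangle\) up to conjugation. Thus \(\mathcal{A}\) is adjacent in \(\mathcal{FF}_{n}\) to all four vertices of \(c_{N}\), and coning \(c_{N}\) off to \(\mathcal{A}\) uses exactly \(4\) triangles, independently of \(N\).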
2305.06464
Brauertsch fields
We prove a local-to-global principle for Brauer classes: for any finite collection of non-trivial Brauer classes on a variety over a field of transcendence degree at least 3, there are infinitely many specializations where each class stays non-trivial. This is deduced from a Grothendieck--Lefschetz-type theorem for Brauer groups of certain smooth stacks. This also leads to the notion of a Brauertsch field.
Daniel Krashen, Max Lieblich, Minseon Shin
2023-05-10T21:14:49Z
http://arxiv.org/abs/2305.06464v1
# Brauertsch fields ###### Abstract. We prove a local-to-global principle for Brauer classes: for any finite collection of non-trivial Brauer classes on a variety over a field of transcendence degree at least \(3\), there are infinitely many specializations where each class stays non-trivial. This is deduced from a Grothendieck-Lefschetz-type theorem for Brauer groups of certain smooth stacks. This also leads to the notion of a Brauertsch field. ###### Contents * 1 Introduction * 2 Background and existing literature * 3 Proofs of the main theorems * 4 Brauer classes vanishing at a prescribed set of points * 5 Applications to rational points on genus \(1\) curves ## 1. Introduction In this paper we address the following basic question. **Question 1.1**.: Suppose \(S\) is a variety over a field \(F\) and \(\alpha\in\operatorname{Br}(S)\) is a non-zero Brauer class. For how many closed points \(s\in S\) is the specialization \(\alpha|_{s}\) non-zero? By analogy with Hilbertian fields, we can codify the non-triviality of specializations of Brauer classes. **Definition 1.2**.: A field \(F\) is _Brauertsch1_ if for any curve \(S\) over \(F\) and any finite collection of Brauer classes \(\alpha_{1},\dots,\alpha_{m}\in\operatorname{Br}(F(S))\) such that \(\operatorname{per}(\alpha_{i})\) is invertible in \(F\) for all \(i\), there are infinitely many closed points \(s\in S\) such that each \(\alpha_{i}\) is unramified at \(s\) and \(\operatorname{per}(\alpha_{i}|_{s})=\operatorname{per}(\alpha_{i})\). Footnote 1: **Brauer-Hilbert-sch** Our main theorem is then the following. **Theorem 1.3**.: _If \(F\) has transcendence degree at least \(3\) over a perfect field, then \(F\) is Brauertsch._ 
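Question 1.1 can be made concrete in the simplest nontrivial setting by watching specializations of the quaternion class \(\alpha=[(a,t)]\in\operatorname{Br}(\mathbf{Q}(t))\) at rational values \(t=b\): the algebra \((a,b)\) is nonsplit as soon as the Hilbert symbol \((a,b)_{p}\) equals \(-1\) at a single place. The sketch below works over \(\mathbf{Q}\), which of course does not satisfy the hypotheses of Theorem 1.3 and is only an illustration; it computes \((a,b)_{p}\) at odd primes \(p\) by the classical formula \((a,b)_{p}=(-1)^{\alpha\beta\varepsilon(p)}\left(\tfrac{u}{p}\right)^{\beta}\left(\tfrac{v}{p}\right)^{\alpha}\), where \(a=p^{\alpha}u\), \(b=p^{\beta}v\) with \(u,v\) prime to \(p\), and \(\varepsilon(p)=(p-1)/2\):

```python
# Hilbert symbol (a, b)_p at an odd prime p, via the classical formula
# (Serre, "A Course in Arithmetic", Ch. III).  A value of -1 certifies
# that the quaternion algebra (a, b) is nonsplit over Q, i.e. the class
# [(a, b)] in Br(Q) has period 2.

def legendre(u, p):
    """Legendre symbol (u/p) for an odd prime p not dividing u."""
    r = pow(u % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else 1

def hilbert_odd(a, b, p):
    """(a, b)_p for an odd prime p and nonzero integers a, b."""
    alpha = 0
    while a % p == 0:
        a //= p; alpha += 1
    beta = 0
    while b % p == 0:
        b //= p; beta += 1
    eps = (p - 1) // 2
    sign = -1 if (alpha * beta * eps) % 2 else 1
    return sign * legendre(a, p) ** beta * legendre(b, p) ** alpha

# Specializations of alpha = [(-1, t)] at t = b stay nontrivial whenever
# some odd prime sees symbol -1, e.g. at b = 3, 7, 11, 19:
assert all(hilbert_odd(-1, b, b) == -1 for b in (3, 7, 11, 19))
# whereas at b = 5 the symbol at p = 5 is +1 (since 5 ≡ 1 mod 4):
assert hilbert_odd(-1, 5, 5) == 1
```

In the language of Definition 1.2, the values \(b=3,7,11,\dots\) are closed points of \(\mathbf{A}^{1}_{\mathbf{Q}}\) at which \(\operatorname{per}(\alpha|_{s})=\operatorname{per}(\alpha)=2\).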
We prove the following theorem, which implies Theorem 1.3 and in particular that Question 1.7 has an affirmative answer if \(S\) is of finite type over \(\mathbf{C}\) and \(\dim\overline{\{s\}}\geq 4\). **Notation 1.8**.: For a scheme \(S\) and any nonnegative integer \(d\in\mathbf{Z}_{\geq 0}\), we denote the set of points of dimension \(d\) by \(S_{(d)}\) and the set of points of codimension \(d\) by \(S^{(d)}\). If \(S\) is an irreducible scheme, we denote its generic point by \(\eta_{S}\). **Theorem 1.9**.: _Let \(k\) be a perfect field, let \(F/k\) be a finitely generated extension of transcendence degree \(\operatorname{trdeg}_{k}(F)\geq 3\), let \(S\) be a finite type \(F\)-scheme with \(\dim S\geq 1\), let \(G\subseteq\operatorname{Br}(S)\) be a finite subgroup such that \(|G|\) is invertible in \(k\). For any \(d\geq 1\) and any point \(s\in S[G^{-1}]\cap S_{(d)}\), the set \(S[G^{-1}]\cap S_{(d-1)}\cap\overline{\{s\}}\) is infinite._ **Corollary 1.10**.: _Let \(F\) be as in Theorem 1.9, and suppose \(S\) is a smooth curve over \(F\). If \(\alpha\in\operatorname{Br}(S)\) is a nontrivial Brauer class, there exist infinitely many closed points \(s\in S\) such that the period of \(\alpha|_{s}\in\operatorname{Br}(\kappa(s))\) equals the period of \(\alpha\). That is, the restriction map \(\operatorname{Br}(S)\to\operatorname{Br}(\kappa(s))\) is injective on the subgroup generated by \(\alpha\)._ To prove Theorem 1.9, we construct a model \(X\) of \(S\) which is smooth and projective over \(k\) and over which \(\alpha\) is defined (away from its ramification divisor \(D\subset X\)). Using that \(\alpha\) becomes unramified over the root stack \(\mathfrak{X}=\sqrt[\ell]{(X,D)}\) associated to \(D\), where \(\ell=\operatorname{per}\alpha\), we reduce to the task of lifting an \(\alpha\)-twisted line bundle from a smooth ample divisor of \(\mathfrak{X}\) to one over \(\mathfrak{X}\) itself. 
For this, we prove a Grothendieck-Lefschetz theorem for the Picard group and Brauer group of Deligne-Mumford stacks (see Proposition 2.1). In Section 4, we investigate the problem of constructing Brauer classes \(\alpha\in\operatorname{Br}(S)\) whose nonvanishing locus \(S[\alpha^{-1}]\) avoids a prescribed set of points \(T\subseteq S\), showing that this is always possible if \(T\) is a singleton (Proposition 4.3) or if we allow a localization (Proposition 4.2). ### Local-to-global consequences As an application of our results, in Section 5 we prove two kinds of local-to-global principles. In general, given a field \(L\) and a collection of overfields \(\Omega\) of \(L\), we define \[\operatorname{\text{\rm III}}_{\Omega}\operatorname{Br}(L):=\ker\left[ \operatorname{Br}(L)\to\prod_{L^{\prime}\in\Omega}\operatorname{Br}(L^{\prime })\right]\] and we say that _local-to-global holds with respect to \(\Omega\)_ if \(\operatorname{\text{\rm III}}_{\Omega}\operatorname{Br}(L)=0\). In practice, one is particularly interested in the case where \(\Omega\) consists of a collection of completions of \(L\) with respect to a geometric collection of discrete valuations. To this effect, if we are given an integral normal \(F\)-scheme \(S\), we set \(\Omega_{S}\) to be the set of completions of the function field \(\kappa(\eta_{S})\) with respect to discrete valuations corresponding to prime divisors of \(S\). **Corollary 1.11**.: _Let \(F\) be as in Theorem 1.9, let \(S\) be a smooth \(F\)-scheme of \(\dim S\geq 1\), and let \(K:=\kappa(\eta_{S})\) be the function field of \(S\). Then local-to-global holds with respect to \(\Omega_{S}\), i.e. 
the natural map_ \[\operatorname{Br}(K)\to\prod_{s\in S^{(1)}}\operatorname{Br}(K_{s})\] _is injective, where we denote \(K_{s}:=\operatorname{Frac}(\mathscr{O}_{S,s}^{\wedge})\) for all \(s\in S^{(1)}\)._ We note that Corollary 1.11 is a particularly strong version of the local-to-global principle, where such results often require that \(\alpha\) vanishes at the completions of \(K\) at _all_ codimension \(1\) points on a proper model (e.g. the case of \(\mathbb{P}^{1}\) over a \(C_{1}\)-field, or the case of a semiglobal field, where one also needs all completions on a proper model over the valuation ring). The second application is a local-to-global principle for genus 1 curves over function fields of fourfolds, which we prove in Section 5. **Corollary 1.12**.: _Let \(F,S,K\) be as in Corollary 1.11, and let \(C\to\operatorname{Spec}K\) be a genus 1 curve. Suppose that for all codimension 1 points \(s\in S^{(1)}\), the base change \(C\times_{\operatorname{Spec}K}\operatorname{Spec}K_{s}\) admits a \(K_{s}\)-point. Then \(C\) admits a \(K\)-point._ ### Open questions It is tempting to ask the following question, to which we do not at the moment know the answer. **Question 1.13**.: Given \(n>1\), we say that a field \(F\) is _\(n\)-Brauertsch_ if for any curve \(S\) over \(F\) and any finite collection \(\alpha_{1},\dots,\alpha_{m}\in\operatorname{H}^{n}(S,\mathbf{G}_{m})\) of classes with order invertible in \(F\), there are infinitely many closed points \(s\in S\) such that the specialization \(\alpha_{i}|_{s}\in\operatorname{H}^{n}(\operatorname{Spec}\kappa(s),\mathbf{G}_{m})\) has the same order as \(\alpha_{i}\) for all \(i\). We say that \(F\) is _\(n\)-Hilbertian_ if the above condition is only assumed to hold for an open subset \(U\subset\mathbf{P}^{1}_{F}\), one cohomology class, and closed points \(s\in U\) with residue field \(F\). 1. Is a field of transcendence degree at least \(n+1\) over a perfect field \(n\)-Hilbertian (resp. \(n\)-Brauertsch)? 2. 
More generally, which fields are \(n\)-Hilbertian (resp. \(n\)-Brauertsch)? 3. How are the \(n\)-Hilbertian and \(n\)-Brauertsch conditions related? **Remark 1.14**.: If we let \(n=1\) in Question 1.13, the coefficient sheaf \(\mathbf{G}_{m}\) is not the right choice (by Hilbert's Theorem 90), and one must take finite coefficients (for example, \(\boldsymbol{\mu}_{t}\)) for the property to naturally compare with the classical Hilbertian property. One could then ask whether the notion of Hilbertian is equivalent to being 1-Hilbertian (resp. 1-Brauertsch) in this sense. Starting with \(n=2\), the inclusion \(\mathbf{Q}/\mathbf{Z}\subset\mathbf{G}_{m}\) induces an isomorphism on cohomology. Moreover, the map \(\mathbf{Z}/t\mathbf{Z}=\frac{1}{t}\mathbf{Z}/\mathbf{Z}\to\mathbf{Q}/\mathbf{Z}\) induces a surjection on \(t\)-torsion cohomology. Thus, there is no distinction (for the purposes of the question) between cohomology of order \(t\) with coefficients in \(\boldsymbol{\mu}_{t}\), \(\mathbf{Q}/\mathbf{Z}\), or \(\mathbf{G}_{m}\). **Remark 1.15**.: The results in this paper, those of [11], and the classical theory of Hilbertian fields imply that the answer to Question 1.13(1) is "yes" for \(n\leq 2\). What we call "2-Hilbertian" is Fein, Saltman, and Schacher's notion of "Brauer-Hilbertian" in [11]. The whole picture is likely to be more complicated for \(n>2\). Indeed, the methods of this paper use the fact that restriction to the generic point is injective for low degree cohomology groups with finite coefficients. This fails in higher degree, rendering an approach to this question based upon purity and the Lefschetz hyperplane theorem for root stacks quite a bit more subtle. That is, there is no reasonable Lefschetz theorem for unramified cohomology (in the sense of Colliot-Thélène). **1.16** (Acknowledgments).: We thank Brian Conrad, Giovanni Inchiostro, Julia Hartmann, Sándor Kovács, and Masahiro Nakahara for helpful conversations. ## 2. 
A Grothendieck-Lefschetz theorem for the Brauer group Grothendieck's incarnation of the Lefschetz hyperplane theorem for Picard groups (as in [1]) relies on the basic deformation theory of invertible sheaves to lift sheaves off of a divisor to the completion of the ambient variety along that divisor, followed by the algebraization of a certain formal matrix that describes that lifted sheaf. As we explain here, a very similar argument works when the ambient space is a tame, smooth, separated Deligne-Mumford stack with projective coarse moduli space. In Section 3, we show how this applies to specializations in the Brauer group. **Proposition 2.1** (Grothendieck-Lefschetz).: _Let \(k\) be a field, let \(\mathscr{X}\) be a tame, smooth, separated Deligne-Mumford stack over \(k\) with coarse moduli space \(\pi:\mathscr{X}\to X\). Let \(Y\subset X\) be a smooth ample divisor with ideal sheaf \(I\subset\mathscr{O}_{X}\). Assume that_ (i) \(X\) _is a smooth projective_ \(k\)_-scheme of dimension_ \(\dim X\geq 4\)_,_ (ii) \(\pi\) _is flat, and_ (iii) \(\operatorname{H}^{i}(X,I^{n})=0\) _for_ \(n\geq 1\) _and_ \(0\leq i\leq 3\)_._ _Let \(Y_{n}:=\operatorname{Spec}_{X}\mathscr{O}_{X}/I^{n+1}\subset X\) be the \(n\)th thickening of \(Y\) in \(X\) and set \(\mathscr{Y}_{n}:=\mathscr{X}\times_{X}Y_{n}\) for all \(n\in\mathbf{N}\)._ 1. _For any finite locally free_ \(\mathscr{O}_{\mathscr{X}}\)_-module_ \(\mathscr{E}\) _and any open subset_ \(U\subseteq X\) _containing_ \(Y\)_, the map_ \[\Gamma(\mathscr{X}|_{U},\mathscr{E})\to\varprojlim_{n\in\mathbf{N}}\Gamma( \mathscr{Y}_{n},\mathscr{E}|_{\mathscr{Y}_{n}})\] (2.1.1) _is an isomorphism._ 2. _For any algebraic stack_ \(\mathscr{S}\)_, let_ \(\operatorname{Vect}_{r}(\mathscr{S})\) _denote the category of finite locally free_ \(\mathscr{O}_{\mathscr{S}}\)_-modules of rank_ \(r\)_. 
For any open subset_ \(U\subseteq X\) _containing_ \(Y\)_, the functor_ \[\operatorname{Vect}_{r}(\mathscr{X}|_{U})\to\varprojlim_{n\in\mathbf{N}} \operatorname{Vect}_{r}(\mathscr{Y}_{n})\] (2.1.2) _is fully faithful. The functor_ \[\varinjlim_{Y\subset U\subset X}\operatorname{Vect}_{r}(\mathscr{X}|_{U})\to \varprojlim_{n\in\mathbf{N}}\operatorname{Vect}_{r}(\mathscr{Y}_{n})\] (2.1.3) _is an equivalence of categories._ 3. _The restriction_ \[\xi_{\mathscr{X},i}:\operatorname{H}^{i}(\mathscr{X},\mathbf{G}_{m})\to \operatorname{H}^{i}(\mathscr{X}|_{Y},\mathbf{G}_{m})\] (2.1.4) _is an isomorphism for_ \(i=0,1\)_._ 4. _The restriction_ \[\xi_{\mathscr{X},2}[\ell]:\operatorname{H}^{2}(\mathscr{X},\mathbf{G}_{m})[ \ell]\to\operatorname{H}^{2}(\mathscr{X}|_{Y},\mathbf{G}_{m})[\ell]\] _is injective for any positive integer_ \(\ell\) _which is invertible in_ \(k\)_._ Proof.: By Kresch-Vistoli [13, Theorem 1], there exists a smooth projective \(k\)-scheme \(X^{0}\) admitting a finite flat surjective morphism \(X^{0}\to\mathscr{X}\). For \(p\geq 0\), let \(X^{p}:=X^{0}\times_{\mathscr{X}}\cdots\times_{\mathscr{X}}X^{0}\) denote its \((p+1)\)-fold fiber product. Since \(X^{0}\to\mathscr{X}\) is finite, it is in particular representable by schemes, so \(X^{p}\) is a scheme for all \(p\geq 0\). Since \(\pi:\mathscr{X}\to X\) is flat, every \(X^{p}\) is finite flat over \(X\), hence is Cohen-Macaulay of dimension \(\dim X^{p}=\dim X\) by [16, 00R5]. 
Let us form the following cartesian diagram, in which \(Y_{n}^{p}:=X^{p}\times_{X}Y_{n}\) and the vertical maps are the simplicial projections: \[\begin{array}{ccccccccc}Y_{0}^{2}&\longrightarrow&Y_{1}^{2}&\longrightarrow&Y_{2}^{2}&\longrightarrow&\cdots&\longrightarrow&X^{2}\\ \downarrow\downarrow\downarrow&&\downarrow\downarrow\downarrow&&\downarrow\downarrow\downarrow&&&&\downarrow\downarrow\downarrow\\ Y_{0}^{1}&\longrightarrow&Y_{1}^{1}&\longrightarrow&Y_{2}^{1}&\longrightarrow&\cdots&\longrightarrow&X^{1}\\ \downarrow\downarrow&&\downarrow\downarrow&&\downarrow\downarrow&&&&\downarrow\downarrow\\ Y_{0}^{0}&\longrightarrow&Y_{1}^{0}&\longrightarrow&Y_{2}^{0}&\longrightarrow&\cdots&\longrightarrow&X^{0}\\ \downarrow&&\downarrow&&\downarrow&&&&\downarrow\\ \mathscr{Y}_{0}&\longrightarrow&\mathscr{Y}_{1}&\longrightarrow&\mathscr{Y}_{2}&\longrightarrow&\cdots&\longrightarrow&\mathscr{X}\end{array}\] (2.1.5) (1): Let \(\mathscr{E}\) be a finite locally free \(\mathscr{O}_{\mathscr{X}}\)-module. For any open subset \(U\subseteq X\) containing \(Y\), we have a commutative diagram \[\begin{array}{ccc}\varprojlim_{n\in\mathbf{N}}\Gamma(Y_{n}^{1},\mathscr{E}|_{Y_{n}^{1}})&\xleftarrow{\ \phi^{1}\ }&\Gamma(X^{1}|_{U},\mathscr{E}|_{X^{1}|_{U}})\\ \uparrow\uparrow&&\uparrow\uparrow\\ \varprojlim_{n\in\mathbf{N}}\Gamma(Y_{n}^{0},\mathscr{E}|_{Y_{n}^{0}})&\xleftarrow{\ \phi^{0}\ }&\Gamma(X^{0}|_{U},\mathscr{E}|_{X^{0}|_{U}})\\ \uparrow&&\uparrow\\ \varprojlim_{n\in\mathbf{N}}\Gamma(\mathscr{Y}_{n},\mathscr{E}|_{\mathscr{Y}_{n}})&\xleftarrow{\text{(2.1.1)}}&\Gamma(\mathscr{X}|_{U},\mathscr{E})\end{array}\] whose columns are equalizer diagrams by descent along the finite flat coverings \(X^{0}|_{U}\to\mathscr{X}|_{U}\) and \(Y_{n}^{0}\to\mathscr{Y}_{n}\). The maps \(\phi^{0}\) and \(\phi^{1}\) are isomorphisms by the corresponding classical statement for the projective schemes \(X^{0}\) and \(X^{1}\), using (iii) together with the finite flatness of \(X^{p}\to X\); hence (2.1.1) is an isomorphism. (2): Full faithfulness of (2.1.2) follows by applying (1) to the sheaves \(\mathscr{H}om(\mathscr{E},\mathscr{E}^{\prime})\), and essential surjectivity of (2.1.3) follows by descending a compatible system of vector bundles on the \(\mathscr{Y}_{n}\) along the diagram (2.1.5). (3): We have an exact sequence \[\mathrm{H}^{i}(\mathscr{Y}_{n},(I^{n}/I^{n+1})|_{\mathscr{Y}_{n}})\to\mathrm{H}^{i}(\mathscr{Y}_{n},\mathbf{G}_{m})\to\mathrm{H}^{i}(\mathscr{Y}_{n-1},\mathbf{G}_{m})\to\mathrm{H}^{i+1}(\mathscr{Y}_{n},(I^{n}/I^{n+1})|_{\mathscr{Y}_{n}})\] for all \(i\geq 0\) and \(n\geq 1\). 
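For the reader's convenience, we sketch where this exact sequence comes from (a standard deformation-theoretic argument, not spelled out in the text): the closed immersion \(\mathscr{Y}_{n-1}\subset\mathscr{Y}_{n}\) is defined by the square-zero ideal \(J:=(I^{n}/I^{n+1})|_{\mathscr{Y}_{n}}\), so the truncated exponential \(x\mapsto 1+x\) identifies \(J\) with the kernel of restriction on units, giving a short exact sequence of sheaves \[1\to J\xrightarrow{\;x\mapsto 1+x\;}\mathbf{G}_{m,\mathscr{Y}_{n}}\to\mathbf{G}_{m,\mathscr{Y}_{n-1}}\to 1,\] and the displayed sequence is a segment of the associated long exact sequence in cohomology \[\cdots\to\mathrm{H}^{i}(\mathscr{Y}_{n},J)\to\mathrm{H}^{i}(\mathscr{Y}_{n},\mathbf{G}_{m})\to\mathrm{H}^{i}(\mathscr{Y}_{n-1},\mathbf{G}_{m})\to\mathrm{H}^{i+1}(\mathscr{Y}_{n},J)\to\cdots.\] 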
Since \(\mathscr{X}\) is tame, by [1, 8.1] we have that \(\pi\) is a good moduli space morphism, so by [1, 4.7(i)] the pullback \(\pi|_{Y_{n}}:\mathscr{Y}_{n}\to Y_{n}\) is also a good moduli space morphism for any \(n\geq 0\). By Lemma 2.2, the pullback morphism \[(\pi|_{Y_{n}})^{*}:\mathrm{H}^{i}(Y_{n},I^{n}/I^{n+1})\to\mathrm{H}^{i}(\mathscr{Y}_{n},(I^{n}/I^{n+1})|_{\mathscr{Y}_{n}})\] is an isomorphism for all \(i\geq 0\). By (iii), we have \(\mathrm{H}^{i}(X,I^{n}/I^{n+1})=0\) for all \(n\geq 1\) and \(0\leq i\leq 2\). In particular, the restrictions \[\mathrm{H}^{i}(\mathscr{Y}_{n},\mathbf{G}_{m})\to\mathrm{H}^{i}(\mathscr{Y}_{n-1},\mathbf{G}_{m})\] (2.1.6) are isomorphisms for \(i=0,1\) and \(n\geq 1\). Taking \(\mathscr{E}=\mathscr{O}_{\mathscr{X}}\) in (1), we have that \(\xi_{\mathscr{X},0}\) is an isomorphism. We take \(r=1\) in (2.1.3). Since \(X\) is a proper \(k\)-scheme, if \(U\subseteq X\) is any open subset containing \(Y\), then it contains all the codimension \(1\) points of \(X\). Hence, since \(X\) is regular, the restriction \(\mathrm{Vect}_{1}(\mathscr{X})\to\varinjlim_{Y\subset U\subset X}\mathrm{Vect}_{1}(\mathscr{X}|_{U})\) is an equivalence of categories. The projection is essentially surjective since the maps (2.1.6) are isomorphisms for \(i=1\) and all \(n\geq 1\); thus \(\xi_{\mathscr{X},1}\) is surjective. We prove that \(\xi_{\mathscr{X},1}\) is injective. Let \(L\in\mathrm{Pic}(\mathscr{X})\) be a line bundle. For any \(n\geq 0\), we have a spectral sequence \[\mathrm{E}_{1}^{p,q}=\mathrm{H}^{q}(X^{p},(I^{n}|_{\mathscr{X}}\otimes_{\mathscr{O}_{\mathscr{X}}}L)|_{X^{p}})\implies\mathrm{H}^{p+q}(\mathscr{X},I^{n}|_{\mathscr{X}}\otimes_{\mathscr{O}_{\mathscr{X}}}L)\] associated to the covering \(X^{0}\to\mathscr{X}\). For any \(p\geq 0\), we have that the projection maps \(X^{p}\to X^{0}\) are finite, hence \(I|_{X^{p}}\) is an anti-ample invertible \(\mathscr{O}_{X^{p}}\)-module. 
By [1, III, 7.6], there exists some \(n_{L}\gg 0\) such that \(\mathrm{E}_{1}^{p,q}=0\) for \(n\geq n_{L}\) and \(0\leq p+q\leq 2\). Thus, for \(n\geq n_{L}\) we have \[\mathrm{H}^{q}(\mathscr{X},I^{n}|_{\mathscr{X}}\otimes_{\mathscr{O}_{\mathscr{X}}}L)=0\] for all \(0\leq q\leq 2\), hence \[\mathrm{H}^{q}(\mathscr{X},(I^{n}/I^{n+1})|_{\mathscr{X}}\otimes_{\mathscr{O}_{\mathscr{X}}}L)=0\] for all \(0\leq q\leq 1\) and the restrictions \[\Gamma(\mathscr{Y}_{n+1},L|_{\mathscr{Y}_{n+1}})\to\Gamma(\mathscr{Y}_{n},L|_{\mathscr{Y}_{n}})\] (2.1.7) are isomorphisms. If \(L|_{\mathscr{X}|_{Y}}\) is a trivial \(\mathscr{O}_{\mathscr{X}|_{Y}}\)-module, then \(L|_{\mathscr{Y}_{n}}\) and \(L^{\vee}|_{\mathscr{Y}_{n}}\) are trivial \(\mathscr{O}_{\mathscr{Y}_{n}}\)-modules for all \(n\) by (2.1.6). Choose some \(n^{\prime}\geq\max(n_{L},n_{L^{\vee}})\) and choose sections \(s_{n^{\prime}}\in\Gamma(\mathscr{Y}_{n^{\prime}},L|_{\mathscr{Y}_{n^{\prime}}})\) and \(s_{n^{\prime}}^{\vee}\in\Gamma(\mathscr{Y}_{n^{\prime}},L^{\vee}|_{\mathscr{Y}_{n^{\prime}}})\) such that the associated \(\mathscr{O}_{\mathscr{Y}_{n^{\prime}}}\)-module morphisms \(\mathscr{O}_{\mathscr{Y}_{n^{\prime}}}\to L|_{\mathscr{Y}_{n^{\prime}}}\) and \(L|_{\mathscr{Y}_{n^{\prime}}}\to\mathscr{O}_{\mathscr{Y}_{n^{\prime}}}\) are mutually inverse. By (2.1.7), for each \(n\geq n^{\prime}\) there exist unique lifts \(s_{n+1}\in\Gamma(\mathscr{Y}_{n+1},L|_{\mathscr{Y}_{n+1}})\) and \(s_{n+1}^{\vee}\in\Gamma(\mathscr{Y}_{n+1},L^{\vee}|_{\mathscr{Y}_{n+1}})\) of \(s_{n}\) and \(s_{n}^{\vee}\) respectively. 
Since (2.1.2) is fully faithful, there exist unique sections \(s\in\Gamma(\mathscr{X},L)\) and \(s^{\vee}\in\Gamma(\mathscr{X},L^{\vee})\) which restrict to the systems \(\{s_{n}\}_{n\geq n^{\prime}}\) and \(\{s_{n}^{\vee}\}_{n\geq n^{\prime}}\), and the corresponding \(\mathscr{O}_{\mathscr{X}}\)-module morphisms \(\mathscr{O}_{\mathscr{X}}\to L\) and \(L\to\mathscr{O}_{\mathscr{X}}\) are mutually inverse, hence \(L\) itself is trivial. (4): Let \(\ell\) be a positive integer such that \(\ell\in k^{\times}\) and let \(\alpha^{\prime}\in\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X},\mathbf{G}_{m})[\ell]\) be an \(\ell\)-torsion Brauer class such that \(\alpha^{\prime}|_{\mathscr{X}|_{Y}}=0\) in \(\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X}|_{Y},\mathbf{G}_{m})\). Let \(\mathscr{G}^{\prime}\to\mathscr{X}\) be a \(\mathbf{G}_{m}\)-gerbe corresponding to \(\alpha^{\prime}\); then \(\mathscr{G}^{\prime}\times_{\mathscr{X}}\mathscr{X}|_{Y}\to\mathscr{X}|_{Y}\) is a trivial \(\mathbf{G}_{m}\)-gerbe. Let \(\mathscr{G}\to\mathscr{X}\) be a \(\boldsymbol{\mu}_{\ell}\)-gerbe whose class \([\mathscr{G}]\in\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X},\boldsymbol{\mu}_{\ell})\) lifts \(\alpha^{\prime}=[\mathscr{G}^{\prime}]\in\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X},\mathbf{G}_{m})\). Since \([\mathscr{G}|_{Y}]\in\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X}|_{Y},\boldsymbol{\mu}_{\ell})\) has trivial image in \(\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X}|_{Y},\mathbf{G}_{m})\), there exists some line bundle \(L_{Y}\in\mathrm{H}^{1}_{\mathrm{et}}(\mathscr{X}|_{Y},\mathbf{G}_{m})\) which maps to \([\mathscr{G}|_{Y}]\) under the coboundary map \(\partial:\mathrm{H}^{1}_{\mathrm{et}}(\mathscr{X}|_{Y},\mathbf{G}_{m})\to\mathrm{H}^{2}_{\mathrm{et}}(\mathscr{X}|_{Y},\boldsymbol{\mu}_{\ell})\) of the Kummer sequence. 
By (3) applied to \(\mathscr{X}\), there exists some line bundle \(L_{X}\in\operatorname{H}^{1}_{\operatorname{et}}(\mathscr{X},\mathbf{G}_{m})\) restricting to \(L_{Y}\); after replacing \([\mathscr{G}]\) by \([\mathscr{G}]-\partial([L_{X}])\), we may assume that \(\mathscr{G}|_{Y}\to\mathscr{X}|_{Y}\) is a trivial \(\boldsymbol{\mu}_{\ell}\)-gerbe. Let \(\mathscr{E}_{Y}\) be a \(1\)-twisted invertible \(\mathscr{O}_{\mathscr{G}|_{Y}}\)-module. Since the map \(\mathscr{G}\to\mathscr{X}\) is a \(\boldsymbol{\mu}_{\ell}\)-gerbe, the stack \(\mathscr{G}\) is tame and the composition \(\mathscr{G}\to\mathscr{X}\to X\) is a flat coarse moduli space morphism (i.e. satisfies (ii)). Thus, by (3) applied to \(\mathscr{G}\), there exists a \(1\)-twisted invertible \(\mathscr{O}_{\mathscr{G}}\)-module \(\mathscr{E}_{X}\) such that \(\mathscr{E}_{X}|_{\mathscr{G}|_{Y}}\simeq\mathscr{E}_{Y}\). Hence \(\mathscr{G}\to\mathscr{X}\) is a trivial \(\boldsymbol{\mu}_{\ell}\)-gerbe, so \(\mathscr{G}^{\prime}\to\mathscr{X}\) is a trivial \(\mathbf{G}_{m}\)-gerbe as well. **Lemma 2.2**.: _Let \(\pi:\mathscr{X}\to X\) be a good moduli space morphism. For any quasi-coherent \(\mathscr{O}_{X}\)-module \(\mathscr{F}\), the pullback_ \[\pi^{*}:\operatorname{H}^{i}(X,\mathscr{F})\to\operatorname{H}^{i}(\mathscr{X},\mathscr{F}|_{\mathscr{X}})\] _is an isomorphism for all \(i\geq 0\)._ Proof.: This follows from the Leray spectral sequence \[\operatorname{E}_{2}^{p,q}=\operatorname{H}^{p}(X,\mathbf{R}^{q}\pi_{*}(\mathscr{F}|_{\mathscr{X}}))\Rightarrow\operatorname{H}^{p+q}(\mathscr{X},\mathscr{F}|_{\mathscr{X}})\] where we have \(\mathbf{R}^{i}\pi_{*}(\mathscr{F}|_{\mathscr{X}})=0\) for \(i\geq 1\) by [1, 3.10(v)]. ## 3. Proofs of the main theorems In this section, we use a series of reductions and purity to reduce the main results to Proposition 2.1, the Grothendieck-Lefschetz theorem for stacks. **3.1**. 
We note that in Theorem 1.9 we are free to replace \(S\) by its reduction \(S_{\operatorname{red}}\) or by any scheme that is birational to it. **3.2** (Reduction to \(\dim S=1\)).: By 3.1, we may replace \(S\) by an affine open neighborhood of the reduced scheme \(\overline{\{s\}}\) to assume that \(S\) is affine and integral and \(\dim S=d\). Let \(x_{1},\ldots,x_{d}\in\kappa(\eta_{S})\) be a transcendence basis of the function field \(\kappa(\eta_{S})\) over \(F\); after replacing \(S\) by an open subscheme, we may assume that \(x_{i}\in\Gamma(S,\mathscr{O}_{S})\) for all \(i\); let \(S\to\mathbb{A}_{F}^{d}=\operatorname{Spec}F[t_{1},\ldots,t_{d}]\) be the quasi-finite dominant \(F\)-morphism sending \(t_{i}\mapsto x_{i}\) for all \(i\). Set \(F^{\prime}:=F(t_{1},\ldots,t_{d-1})\) and let \(S^{\prime}:=S\times_{\mathbb{A}_{F}^{d}}\mathbb{A}_{F^{\prime}}^{1}\), where \(\mathbb{A}_{F^{\prime}}^{1}\to\mathbb{A}_{F}^{d}\) corresponds to the natural map \(F[t_{1},\ldots,t_{d}]\to F^{\prime}[t_{d}]\). Let \(f:S^{\prime}\to S\) be the projection. We note that \(f((S^{\prime})_{(0)})\subseteq S_{(d-1)}\) and that \(f:S^{\prime}\to S\) induces an isomorphism of function fields \(\kappa(\eta_{S})\to\kappa(\eta_{S^{\prime}})\); moreover \(\operatorname{trdeg}_{k}(F^{\prime})=\operatorname{trdeg}_{k}(F)+d-1\geq 3\). Thus, by replacing \(F,S,\alpha\) by \(F^{\prime},S^{\prime},\alpha|_{S^{\prime}}\), we may assume that \(\dim S=1\). **3.3** (Reduction to a model over \(k\)).: By 3.2, we may assume that \(S\) is affine, integral, of finite type over \(F\), and \(\dim S=1\). By a limit argument, we can choose a \(k\)-subalgebra \(A\subseteq F\) such that 1. \(A\) is of finite type over \(k\), 2. there exists a finite type \(A\)-scheme \(U\) and an \(F\)-isomorphism \(U\times_{\operatorname{Spec}A}\operatorname{Spec}F\simeq S\), and 3. \(\alpha\) is in the image of \(\operatorname{Br}(U)\to\operatorname{Br}(S)\). 
After possibly replacing \(U\) by an open subscheme, we may assume that \(U\) is regular; since \(k\) is perfect, we have that \(U\) is smooth over \(k\). After a further localization, we may assume that there exists a projective \(k\)-scheme \(X\) of dimension \(\dim X=\operatorname{trdeg}_{k}(\kappa(S))\) such that there exists an open immersion \(U\to X\). Let \(\overline{S}\) be a projective closure of \(S\); we may replace \(\overline{S}\) by its normalization to assume that \(\overline{S}\) is regular. Applying [10, 11] to the composition \(S\to U\to X\), we obtain a nonconstant \(k\)-morphism \(g:\overline{S}\to X\). If \(s\in S_{(0)}\) is a closed point, then \(g(s)\) is a codimension 1 point of \(X\); by Lemma 3.4, for all but finitely many ample divisors \(Y\subset X\), we have \(\eta_{Y}\in g(S_{(0)})\). Thus we reduce to proving Theorem 3.5 below. **Lemma 3.4**.: _Let \(F\) be a field, let \(S\) be a proper \(F\)-scheme of dimension 1, let \(X\) be a scheme, let \(\mathscr{L}\) be a line bundle on \(X\), let \(f:S\to X\) be a nonconstant morphism. For any section \(s\in\Gamma(X,\mathscr{L})\) such that \(X_{s}\) is affine, the pullback \(f^{*}s\in\Gamma(S,f^{*}\mathscr{L})\) vanishes at a closed point of \(S\)._ Proof.: We may assume that \(S\) is connected. We have that \(f^{-1}(X_{s})=S_{f^{*}s}\). Since \(X_{s}\) is affine and \(S\) is proper over \(F\) and \(\dim S\geq 1\), we have \(f(S)\not\subseteq X_{s}\), hence \(S_{f^{*}s}=f^{-1}(X_{s})\neq S\). **Theorem 3.5**.: _Let \(k\) be a perfect field, let \(U\) be a smooth \(k\)-scheme of dimension \(\dim U\geq 4\), and let \(G\in\operatorname{Br}(U)\) be a finite subgroup. Assume that \(\ell:=|G|\) is invertible in \(k\). 
Then there exist infinitely many points \(u\in U^{(1)}\) such that the composition \(G\to\operatorname{Br}(U)\to\operatorname{Br}(\kappa(u))\) is injective._ Proof.: Let \(X\) be a proper \(k\)-scheme such that \(U\) admits an open embedding \(U\subseteq X\); let \(D:=X\setminus U\) be the complement. By Temkin's improvement [14, 4.3.1(iii)] of Gabber's theorem [12, 1.3], there exists a smooth projective \(k\)-scheme \(X^{\prime}\) and a prime-to-\(\ell\) alteration \(f:X^{\prime}\to X\) such that \(D^{\prime}:=f^{-1}(D)\) is a strict normal crossings divisor in \(X^{\prime}\). Since \(\gcd(\ell,[\kappa(\eta_{X^{\prime}}):\kappa(\eta_{X})])=1\), if \(\alpha\in G\) is an \(\ell\)-torsion Brauer class such that \(f^{*}\alpha=0\) in \(\operatorname{Br}(X^{\prime})\), then \(\alpha=0\) itself. Hence we may replace \(X,G\) by \(X^{\prime},f^{*}G\) to assume that \(X\) is smooth projective over \(k\) and that \(D\subset X\) is a strict normal crossings divisor. Let \(D_{1},\ldots,D_{r}\) be the irreducible components of \(D\) and let \[\mathscr{X}:=\sqrt[\ell]{(X,D_{1})}\times_{X}\cdots\times_{X}\sqrt[\ell]{(X,D _{r})}\] be the product of the \(\ell\)th root stacks. Since \(D\) is strict normal crossings in \(X\), we have that \(\mathscr{X}\) is regular. By the argument of [10, 3.2.1], there exists an open substack \(\mathscr{U}^{\prime}\subset\mathscr{X}\) containing \(U\times_{X}\mathscr{X}\simeq U\) and all codimension 1 points of \(\mathscr{X}\) and such that \(G\) is contained in the subgroup \(\operatorname{Br}(\mathscr{U}^{\prime})\subseteq\operatorname{Br}(U)\). By purity for the Brauer group [11], the restriction \(\operatorname{Br}(\mathscr{X})\to\operatorname{Br}(\mathscr{U}^{\prime})\) is an isomorphism, so we may view \(G\) as a subgroup of \(\operatorname{Br}(\mathscr{X})\). Let \(\mathscr{O}_{X}(1)\) be an ample line bundle on \(X\). 
By [11, III, 7.6], we may choose \(N\gg 0\) such that \(\operatorname{H}^{i}(X,\mathscr{O}_{X}(-nN))=0\) for all \(0\leq i\leq 3\) and \(n\geq 1\). By Bertini's theorem, we may choose infinitely many smooth ample divisors \(Y\in|\mathscr{O}_{X}(N)|\) such that \(D|_{Y}\subset Y\) is a strict normal crossings divisor; then the restriction \[\mathscr{X}|_{Y}\simeq\sqrt[\ell]{(Y,D_{1}|_{Y})}\times_{Y}\cdots\times_{Y} \sqrt[\ell]{(Y,D_{r}|_{Y})}\] is regular. By Proposition 2.1(4), the restriction \(\operatorname{Br}(\mathscr{X})\to\operatorname{Br}(\mathscr{X}|_{Y})\) is injective. Since \(\mathscr{X}|_{Y}\) is regular, the restriction \(\operatorname{Br}(\mathscr{X}|_{Y})\to\operatorname{Br}(\kappa(\eta_{Y}))\) is injective. **Corollary 3.6**.: _Let \(k\) be a perfect field, let \(S\) be a smooth \(k\)-scheme of dimension \(\dim S\geq 4\), let \(\alpha\in\operatorname{Br}(S)\) be a Brauer class such that \(\operatorname{per}(\alpha)\) is invertible in \(k\). If \(\alpha|_{s}=0\) for all codimension 1 points \(s\in S^{(1)}\), then \(\alpha=0\)._ Proof.: This follows from Theorem 3.5. **Remark 3.7**.: In Theorem 3.5, suppose that the subgroup \(G\) is unramified (i.e. \(D=\emptyset\), so \(G\subseteq\operatorname{Br}(X)\)). Using classical results from SGA 2, we may give a short proof that, for a smooth projective \(k\)-scheme \(X\) of dimension \(\dim X\geq 4\) and any smooth ample divisor \(Y\subset X\), the \(\ell\)-primary component of \(\ker(\operatorname{Br}(X)\to\operatorname{Br}(Y))\) is \(\ell\)-divisible. Let \(\alpha\in\operatorname{Br}(X)\) be a Brauer class and set \(\ell:=\operatorname{per}(\alpha)\). After replacing \(k\) by a prime-to-\(\ell\) extension, we may assume that \(k\) contains a primitive \(\ell\)th root of unity; in this case we may choose an isomorphism \(\mathbf{Z}/(\ell)\simeq\boldsymbol{\mu}_{\ell}\) of sheaves on the big fppf site of \(\operatorname{Spec}k\). 
The Kummer sequence induces a commutative diagram (3.7.1) where the rows are exact and the vertical arrows are restriction maps. Since \(\dim X\geq 4\), taking \(i=3,n=\dim X,c=0\) in [11, Exp. XIV, 5.7] gives that \(f_{2}\) is bijective and \(f_{3}\) is injective. The claim then follows from a diagram chase on (3.7.1). **Remark 3.8**.: We note that, in Theorem 3.5, it is not necessarily true that the set of points \(u\in U\) such that \(G\to\operatorname{Br}(U)\to\operatorname{Br}(\kappa(u))\) is not injective is finite. One way to construct examples of Brauer classes \(\alpha\in\operatorname{Br}(S)\) vanishing at infinitely many closed points is to arrange that there exists a surjective \(k\)-morphism \(f:S^{\prime}\to S\) where \(S^{\prime}(k)\) is infinite and \(f^{*}\alpha=0\); if so, then \(f(S^{\prime}(k))\cap S[\alpha^{-1}]=\emptyset\) by Lemma 3.9. For example, let \(k\) be an infinite field of characteristic \(\operatorname{char}k\neq 2\), let \(S:=\mathbf{P}^{1}_{k}\setminus\{0,\infty\}\), let \(a\in k^{\times}\setminus(k^{\times})^{2}\) be a non-square constant, let \(\mathcal{A}:=(a,t)\) be the quaternion algebra on \(S\). Then \(\alpha:=[\mathcal{A}]\in\operatorname{Br}(S)\) is nontrivial [10, 1.3.8], but \(\mathcal{A}\) is trivialized after pullback by the squaring map \(f:\mathbf{P}^{1}_{k}\to\mathbf{P}^{1}_{k}\), hence \(f(S(k))\cap S[\alpha^{-1}]=\emptyset\), i.e. for any square constant \(b\in(k^{\times})^{2}\), the specialization \([\mathcal{A}]|_{b}\in\operatorname{Br}(k)\) is trivial. In general, by results of Yanchevskii [12] and Mestre [13], [13], [14], it is known that certain Brauer classes on open subschemes of \(\mathbf{P}^{1}_{k}\) are trivialized after pullback by finite morphisms \(\mathbf{P}^{1}_{k}\to\mathbf{P}^{1}_{k}\). More precisely, let \(S\subseteq\mathbf{P}^{1}_{k}\) be an open subscheme and let \(\alpha\in\operatorname{Br}(S)\) be a Brauer class such that \(S(k)\setminus S[\alpha^{-1}]\neq\emptyset\). If either 1.
\(k\) is Henselian, or 2. \(\alpha\) is \(2\)-torsion and \(\sum_{x\in\mathbf{P}^{1}_{k}\setminus S}[\kappa(x):k]\leq 4\), then there exists a finite morphism \(f:\mathbf{P}^{1}_{k}\to\mathbf{P}^{1}_{k}\) such that \(f^{*}\alpha=0\) (see also [12, II.Appendix]). **Lemma 3.9**.: _Let \(f:X\to Y\) be a morphism of schemes, let \(\alpha\in\operatorname{Br}(Y)\) be a Brauer class such that \(f^{*}\alpha\in\operatorname{Br}(X)\) is trivial, let \(y\in Y\) be a point such that the fiber \(X_{y}:=X\times_{Y}\operatorname{Spec}\kappa(y)\) admits a \(\kappa(y)\)-point. Then \(y\not\in Y[\alpha^{-1}]\)._ Proof.: Since \((f^{*}\alpha)|_{X_{y}}=0\), we have that \(\alpha|_{y}=0\). **3.10** (Proof of Corollary 1.11).: Let \(U\subseteq S\) be an open subset and let \(\alpha\in\operatorname{Br}(U)\) be a Brauer class such that the restriction \(\alpha|_{K}\) is contained in \(\operatorname{\text{\rm III}}_{\Omega_{S}}\operatorname{Br}(K)\). For every codimension \(1\) point \(s\in U\), we have \(\alpha|_{K_{s}}=0\), so \(\alpha|_{\mathscr{O}^{\wedge}_{S,s}}=0\) since \(\mathscr{O}^{\wedge}_{S,s}\) is regular; thus \(\alpha|_{\kappa(s)}=0\) since \(\mathscr{O}^{\wedge}_{S,s}\) is henselian. By Theorem 1.9, we have \(\alpha=0\). **Question 3.11**.: Does Theorem 1.9 still hold for fields \(F\) such that \(\operatorname{trdeg}_{k}(F)=2\)? _3.11.1_.: For this, we would need to generalize the Noether-Lefschetz theorem [10] to separated Deligne-Mumford stacks of dimension \(3\) satisfying the conditions of Proposition 2.1. For explicit evidence of an affirmative answer, see the example in 3.13.3. _3.11.2_.: As observed in the introduction, it is not enough to assume that \(\operatorname{trdeg}_{k}(F)=1\), for the following reason. Let \(k\) be an algebraically closed field and let \(F/k\) be a finitely generated field extension such that \(\operatorname{trdeg}_{k}(F)=1\). Let \(S\) be a finite type \(F\)-scheme and let \(\alpha\in\operatorname{Br}(S)\) be a Brauer class.
For any closed points \(s\in S\), we have that \(\kappa(s)\) is a \(C_{1}\)-field, so \(\alpha|_{s}=0\) in \(\operatorname{Br}(\kappa(s))\). **Question 3.12**.: Does Theorem 3.5 hold for \(p\)-torsion classes in characteristic \(p\)? _3.12.1_.: Let \(F\) be a perfect field of characteristic \(p\), let \(S\) be a finite type \(F\)-scheme, let \(\alpha\in\operatorname{Br}(S)\) be a \(p\)-torsion Brauer class. For any closed point \(s\in S\), the specialization \(\alpha|_{s}\in\operatorname{Br}(\kappa(s))\) is a \(p\)-torsion Brauer class over a perfect field of characteristic \(p\), hence in fact \(\alpha|_{s}=0\). This gives one way to construct Brauer classes vanishing at all the closed points of a variety. However, this does not give a counterexample to Theorem 1.9 since finitely generated fields of positive transcendence degree are not perfect. **Question 3.13**.: Let \(k\) be a perfect field, let \(F/k\) be a finitely generated extension of transcendence degree \(\operatorname{trdeg}_{k}(F)\geq 3\), let \(S\) be a finite type \(F\)-scheme. Let \(G\subset\operatorname{Br}(S)\) be a finite subgroup such that \(|G|\) is invertible in \(k\), and suppose \(s\in S\) is a point such that the composition \(G\to\operatorname{Br}(S)\to\operatorname{Br}(\kappa(s))\) is injective. For a given subgroup \(G^{\prime}\subseteq G\), does there exist an infinite set of closed points \(s^{\prime}\in\overline{\{s\}}\) such that the kernel of the composition \[G\to\operatorname{Br}(S)\to\operatorname{Br}(\kappa(s^{\prime}))\] is \(G^{\prime}\)? _3.13.1_.: If \(G^{\prime}=0\), the answer is "yes" by Theorem 1.9. _3.13.2_.: If \(G^{\prime}=G\), the answer is "yes" by Theorem 3.14 below. A related question was considered by Frei, Hassett, Varilly-Alvarado in [12]. 
Namely, given a number field \(k\) and a smooth projective \(k\)-scheme \(X\) and a Brauer class \(\alpha\), they define \(\mathcal{S}(X,\alpha)\) to be the set of finite places \(\mathfrak{p}\) of \(k\) such that \(X\) has good reduction at \(\mathfrak{p}\), \(\alpha\) is unramified at \(\mathfrak{p}\), and \(\alpha|_{X_{\mathfrak{p}}}=0\) in \(\operatorname{Br}(X_{\mathfrak{p}})\). They prove that, if \(X\) is a K3 surface such that the transcendental cohomology \(T(X)\) satisfies a certain condition, then the set \(\mathcal{S}(X,\alpha)\) has positive natural density. _3.13.3_.: Here is an example where \(G\simeq\mathbf{Z}/n\mathbf{Z}\) and \(G^{\prime}\simeq d\mathbf{Z}/n\mathbf{Z}\) for an arbitrary divisor \(d\) of \(n\). Let \(F:=\mathbf{C}(t_{1},t_{2})\) be the function field in two indeterminates, let \(A:=F[t_{3}^{\pm}]\) and \(S:=\operatorname{Spec}A\), set \[\xi:=t_{1}t_{2}\in F,\] and let \(F(\xi^{1/n})/F\) be the cyclic extension obtained by adjoining an \(n\)th root of \(\xi\). Let \(\chi\in\operatorname{Aut}(F(\xi^{1/n})/F)\) be a generator and let \[\alpha:=(S(\xi^{1/n})/S,\chi,t_{3})\in\operatorname{Br}(S)\] denote the cyclic algebra of degree \(n\) over \(S\); we set \(G:=\langle\alpha\rangle\) and \(G^{\prime}:=\langle\alpha^{n/d}\rangle\). Let \(K/F\) be an extension. For any \(K\)-point \(s:\operatorname{Spec}K\to S\) and any divisor \(d\) of \(n\), the restriction of the \(d\)th tensor power \[s^{*}\alpha^{\otimes d}\simeq(K(\xi^{d/n})/K,\chi^{n/d},s^{*}t_{3})\in\operatorname{Br}(K)\] is trivial if and only if the norm map \[\operatorname{Nm}_{K(\xi^{d/n})/K}:K(\xi^{d/n})^{\times}\to K^{\times}\] contains \(s^{*}t_{3}\in K^{\times}\) in its image [10, SS15.1, Lemma].
Thus Question 3.13 may be rephrased as follows: Do there exist infinitely many closed points \(s\in S\) such that, if we denote the residue field of \(s\) by \(K:=\kappa(s)\), the restriction \(t_{3}|_{s}\in K^{\times}\) is in the image of \(\operatorname{Nm}_{K(\xi^{d/n})/K}\) but not in the image of \(\operatorname{Nm}_{K(\xi^{1/n})/K}\)? Taking \(\{1,\xi^{1/m},\xi^{2/m},\ldots,\xi^{(m-1)/m}\}\) as our \(K\)-basis of \(K(\xi^{1/m})\), we have \[\operatorname{Nm}_{K(\xi^{1/m})/K}(x_{0}1+x_{1}\xi^{1/m}+\cdots+x_{m-1}\xi^{(m-1)/m})=\det M_{\xi,m}\] where \(M_{\xi,m}\in\operatorname{Mat}_{m\times m}(F[x_{0},x_{1},\ldots,x_{m-1}])\) is the matrix whose \((i,j)\)th entry is \[(M_{\xi,m})_{i,j}=\begin{cases}x_{i-j}&\text{if }i\geq j\\ \xi x_{i-j+m}&\text{if }i<j\end{cases}\] for all \(1\leq i,j\leq m\). For example, we have \[\operatorname{Nm}_{K(\xi^{1/4})/K}(x_{0}1+x_{1}\xi^{1/4}+x_{2} \xi^{2/4}+x_{3}\xi^{3/4})\] \[=\det\begin{bmatrix}x_{0}&\xi x_{3}&\xi x_{2}&\xi x_{1}\\ x_{1}&x_{0}&\xi x_{3}&\xi x_{2}\\ x_{2}&x_{1}&x_{0}&\xi x_{3}\\ x_{3}&x_{2}&x_{1}&x_{0}\end{bmatrix}\] \[=(x_{0}^{2}-\xi x_{2}^{2})^{2}-\xi(x_{1}^{2}-\xi x_{3}^{2})^{2}+4 \xi(x_{0}x_{1}-x_{2}x_{3}\xi)(x_{1}x_{2}-x_{0}x_{3})\] and similarly \[\operatorname{Nm}_{K(\xi^{1/2})/K}(y_{0}1+y_{1}\xi^{1/2})=\det\begin{bmatrix} y_{0}&\xi y_{1}\\ y_{1}&y_{0}\end{bmatrix}=y_{0}^{2}-\xi y_{1}^{2}\] in the \(m=4\) and \(m=2\) cases, respectively. Choose a polynomial \(f\in\mathbf{C}[t_{1},t_{2}]\) such that \(f|_{t_{1}=0}\in\mathbf{C}[t_{2}]\) is not a perfect \(d\)th power and let \(s\in S(F)\) be the \(F\)-rational point corresponding to \(t_{3}-f^{n/d}\). Then \(t_{3}|_{s}=f^{n/d}\) is the norm of the constant element \(f\in F\subseteq F(\xi^{d/n})\) (in the notation above, take \((y_{0},y_{1})=(f,0)\)), so \(t_{3}|_{s}\) is in the image of \(\operatorname{Nm}_{F(\xi^{d/n})/F}\).
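Both sides of the displayed \(m=4\) identity are explicit integer polynomials, so the expansion can be machine-checked. The following Python sketch (the function names are ours, not from the text) builds \(M_{\xi,m}\) from the entry rule above and compares its determinant against the closed form, including the sign of the factor \(x_{1}x_{2}-x_{0}x_{3}\), on random integer specializations; since both sides have total degree \(4\), agreement on a large sample is decisive in practice.

```python
import random
from itertools import permutations

def norm_matrix(xs, xi):
    """m-by-m matrix of multiplication by x0 + x1*w + ... + x_{m-1}*w^{m-1},
    where w = xi^(1/m), on the basis {1, w, ..., w^{m-1}}."""
    m = len(xs)
    return [[xs[i - j] if i >= j else xi * xs[i - j + m]
             for j in range(m)] for i in range(m)]

def det(M):
    """Exact Leibniz-formula determinant (fine for small m over the integers)."""
    m, total = len(M), 0
    for perm in permutations(range(m)):
        sign = 1
        for a in range(m):          # count inversions to get the permutation sign
            for b in range(a + 1, m):
                if perm[a] > perm[b]:
                    sign = -sign
        prod = 1
        for i in range(m):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

def closed_form(x0, x1, x2, x3, xi):
    # the displayed expansion of the norm in the m = 4 case
    return ((x0**2 - xi * x2**2)**2 - xi * (x1**2 - xi * x3**2)**2
            + 4 * xi * (x0 * x1 - xi * x2 * x3) * (x1 * x2 - x0 * x3))

random.seed(0)
for _ in range(200):
    x0, x1, x2, x3, xi = (random.randint(-50, 50) for _ in range(5))
    assert det(norm_matrix([x0, x1, x2, x3], xi)) == closed_form(x0, x1, x2, x3, xi)
    # m = 2 case: the norm is y0^2 - xi*y1^2
    assert det(norm_matrix([x0, x1], xi)) == x0**2 - xi * x1**2
```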
If \(t_{3}|_{s}\) is in the image of \(\operatorname{Nm}_{F(\xi^{1/n})/F}\), then by clearing denominators we obtain polynomials \(a_{0},a_{1},\ldots,a_{n}\in\mathbf{C}[t_{1},t_{2}]\) such that \[\det M_{\xi,n}(a_{0},a_{1},\ldots,a_{n-1})=f^{n/d}a_{n}^{n}\] in \(\mathbf{C}[t_{1},t_{2}]\). Since \(M_{\xi,n}(a_{0},a_{1},\ldots,a_{n-1})|_{t_{1}=0}\) is a lower-triangular matrix, the above simplifies to \[(a_{0}|_{t_{1}=0})^{n}=(f|_{t_{1}=0})^{n/d}(a_{n}|_{t_{1}=0})^{n}\] in \(\mathbf{C}[t_{2}]\) which is a contradiction since \((f|_{t_{1}=0})^{n/d}\) is not an \(n\)th power in \(\mathbf{C}(t_{2})\). **Theorem 3.14**.: _Let \(F\) be an infinite field, let \(S\) be a finite type \(F\)-scheme of \(\dim S\geq 1\), let \(G\subseteq\operatorname{Br}(S)\) be a finite subgroup. Then there exist infinitely many codimension 1 points \(s\in S^{(1)}\) such that \(\alpha|_{s}=0\) in \(\operatorname{Br}(\kappa(s))\) for all \(\alpha\in G\)._ Proof.: Let \(\alpha_{1},\ldots,\alpha_{n}\) be the elements of \(G\), and for all \(1\leq i\leq n\), let \(Y_{i}\to S\) be a Brauer-Severi scheme corresponding to \(\alpha_{i}\). Set \[Y:=Y_{1}\times_{S}\cdots\times_{S}Y_{n}\] and let \(f:Y\to S\) be the structure morphism. By Lemma 3.16, there exist infinitely many \(s\in S^{(1)}\) such that the fiber \[f^{-1}(s)=Y\times_{S}\operatorname{Spec}\kappa(s)\to\operatorname{Spec}\kappa(s)\] admits a section; for such \(s\), we have \(\alpha_{i}|_{s}=0\) for all \(1\leq i\leq n\) as required. **Lemma 3.15**.: _Let \(k\) be a field, let \(X,Y\) be finite type \(k\)-schemes, let \(\pi_{X}:X\times_{k}Y\to X\) and \(\pi_{Y}:X\times_{k}Y\to Y\) be the two projections. For any point \(z\in X\times_{k}Y\) such that \(\pi_{Y}(z)\) is a \(k\)-point of \(Y\), the extension of residue fields \(\kappa(\pi_{X}(z))\to\kappa(z)\) is an isomorphism._ Proof.: The \(k\)-point \(\pi_{Y}(z):\operatorname{Spec}k\to Y\) induces a section \(\sigma:X\to X\times_{k}Y\) of \(\pi_{X}\) which sends \(\pi_{X}(z)\mapsto z\). 
**Lemma 3.16**.: _Let \(k\) be an infinite field, let \(X,Y\) be integral, finite type \(k\)-schemes with \(\dim X\geq\dim Y\geq 1\), and let \(f:X\to Y\) be a generically smooth \(k\)-morphism. Then there exist infinitely many points \(y\in Y\) of codimension 1 such that the fiber_ \[f^{-1}(y)=X\times_{Y}\operatorname{Spec}\kappa(y)\to\operatorname{Spec}\kappa(y)\] _admits a section._ Proof.: Let \(\eta_{X}\in X\) and \(\eta_{Y}\in Y\) denote the generic points. If \(\dim X>\dim Y\), by [10, 9] we may choose a closed point \(x\in f^{-1}(\eta_{Y})\) such that \(\kappa(x)/\kappa(\eta_{Y})\) is finite separable. By replacing \(X\) by the reduced scheme \(\overline{\{x\}}\), we may assume that \(\dim X=\dim Y\) and that the function field extension \(\kappa(\eta_{X})/\kappa(\eta_{Y})\) is a finite separable extension. We may replace \(X\) and \(Y\) by open subschemes so that \(X\) and \(Y\) are affine. Set \(A:=\Gamma(X,\mathscr{O}_{X})\) and \(B:=\Gamma(Y,\mathscr{O}_{Y})\); after a further localization, we may assume (by the primitive element theorem) that \(A=B[a]\) for some \(a\in A\) which is integral over \(B\). Let \(\phi\in B[t]\) be the minimal polynomial of \(a\) over \(B\). If \(\deg\phi=1\), then every fiber of \(f\) is trivial, so we may assume \(\deg\phi\geq 2\). Let \(k^{\prime}\subseteq A\) be the integral closure of \(k\) in \(A\); then \(k^{\prime}/k\) is a finite extension by [1, 3.3.2]. Since \(\dim(B)\geq 1\), we have \(\dim_{k}(B)=\infty\) so we may choose some \(b\in B\) such that \(a+b\not\in k^{\prime}\). By replacing \(a\) by \(a+b\), we may assume that \(a\not\in k^{\prime}\), i.e. \(\phi\) has a root which is not integral over \(k\), so \(\phi\) does not divide any nonzero element of \(k[t]\). This implies the composition \[k[t]\to B[t]\to B[t]/(\phi)\simeq A\] is injective; hence the corresponding morphism \[g:X\to\mathbb{A}_{k}^{1}\times_{k}Y\to\mathbb{A}_{k}^{1}\] is flat.
Thus the set-theoretic image \(U:=g(X)\subseteq\mathbb{A}_{k}^{1}\) is an open subset of \(\mathbb{A}_{k}^{1}\). Since \(k\) is infinite, the open subset \(U\) contains infinitely many \(k\)-points. For any \(x\in X\) such that \(g(x)\) is a \(k\)-point of \(U\), the extension of residue fields \(\kappa(f(x))\to\kappa(x)\) is an isomorphism by Lemma 3.15. ## 4. Brauer classes vanishing at a prescribed set of points Theorem 1.9 and Theorem 3.5 show that too many vanishing specializations of a Brauer class kill it. In this section, we consider the problem of constructing non-zero Brauer classes that vanish at a given finite set of points (i.e. whether there always exist \(\alpha\in\operatorname{Br}(S)\) such that \(S[\alpha^{-1}]\) avoids an arbitrary set of points). **Question 4.1**.: Let \(k\) be a field, let \(S\) be a curve over \(k\). Let \(T_{1},T_{2}\subseteq S\) be disjoint subsets of \(S\). Does there exist a Brauer class \(\alpha\in\operatorname{Br}(S)\) such that \(T_{1}\subseteq S[\alpha^{-1}]\) and \(T_{2}\cap S[\alpha^{-1}]=\emptyset\)? #### 4.1.1. In Section 2, we are mostly interested in whether \(S[\alpha^{-1}]\) is stable under specialization. We may ask whether \(S[\alpha^{-1}]\) is stable under generalization as well. If \(s\in S\) is a point such that the reduced subscheme \(\overline{\{s\}}\subset S\) is regular, then \(\alpha|_{s}=0\) implies \(\alpha|_{s^{\prime}}=0\) for all \(s^{\prime}\in\overline{\{s\}}\). On the other hand, if \(\overline{\{s\}}\) is not regular, it is possible that \(\alpha|_{s}=0\) but \(\alpha|_{s^{\prime}}\neq 0\) for some \(s^{\prime}\in\overline{\{s\}}\): Set \(A:=(\mathbf{Q}[x,y]/(x^{2}-y^{3}+2y^{2}))_{\langle x,y\rangle}\); then \(A\) is a local Noetherian domain of dimension \(1\) and a normalization is given by the map \(A\to\mathbf{Q}[t]_{\langle t\rangle}\) sending \(x\mapsto t(t^{2}+2)\) and \(y\mapsto t^{2}+2\). 
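The claimed normalization can be sanity-checked: substituting \(x=t(t^{2}+2)\) and \(y=t^{2}+2\) into \(x^{2}-y^{3}+2y^{2}\) gives \(t^{2}(t^{2}+2)^{2}-(t^{2}+2)^{3}+2(t^{2}+2)^{2}=(t^{2}+2)^{2}\,(t^{2}-(t^{2}+2)+2)=0\). A minimal Python check of this polynomial identity (our own sketch, not part of the text):

```python
# Verify x^2 - y^3 + 2*y^2 = 0 under x = t*(t^2 + 2), y = t^2 + 2.
# Both sides are polynomials in t of degree <= 6, so exact vanishing at
# 7 or more integer points already proves the identity.
def relation(t):
    x, y = t * (t**2 + 2), t**2 + 2
    return x**2 - y**3 + 2 * y**2

assert all(relation(t) == 0 for t in range(-5, 6))
```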
Set \(S:=\operatorname{Spec}A\) with generic point \(s\) and closed point \(s^{\prime}\), and consider the quaternion algebra \(\alpha:=(y-1,y-1)\in\operatorname{Br}(S)\). We have \(\alpha|_{s^{\prime}}\neq 0\) since the conic \(-X_{0}^{2}-X_{1}^{2}=X_{2}^{2}\) does not have a nontrivial solution over \(\kappa(s^{\prime})\simeq\mathbf{Q}\); however \(\alpha|_{s}=0\) since the conic \((t^{2}+1)X_{0}^{2}+(t^{2}+1)X_{1}^{2}=X_{2}^{2}\) has the solution \((X_{0},X_{1},X_{2})=(t,1,t^{2}+1)\) over \(\kappa(s)\simeq\mathbf{Q}(t)\). #### 4.1.2. In Proposition 4.2, we give a positive answer to a weaker version of Question 4.1 where we allow \(\alpha\) to be ramified on \(S\). In Proposition 4.3, we prove the existence of unramified Brauer classes for the case \(|T|=1\). **Proposition 4.2**.: _Let \(k\) be a Hilbertian field, let \(S\) be a smooth proper curve over \(k\). For any finite set of closed points \(T\subseteq S_{(0)}\) and any positive integer \(n\), there exists an open subscheme \(S^{\prime}\) of \(S\) such that \(T\subseteq S^{\prime}\) and a Brauer class \(\alpha\in\operatorname{Br}(S^{\prime})\) such that \(\eta_{S^{\prime}}\in S^{\prime}[\alpha^{-1}]\) and \(T\cap S^{\prime}[\alpha^{-1}]=\emptyset\) and \(\gcd(\operatorname{per}\alpha,n)=1\)._ Proof.: Let \(S_{1},S_{2}\) be smooth proper curves over \(k\), let \(f:S_{1}\to S_{2}\) be a finite \(k\)-morphism, let \(T_{1}\subseteq(S_{1})_{(0)}\) be a set of closed points, and set \(T_{2}:=f(T_{1})\). Suppose \(S^{\prime}_{2}\subseteq S_{2}\) is an open subset containing \(T_{2}\) and \(\alpha_{2}\in\operatorname{Br}(S^{\prime}_{2})\) is a Brauer class such that \(\eta_{S^{\prime}_{2}}\in S^{\prime}_{2}[\alpha_{2}^{-1}]\) and \(T_{2}\cap S^{\prime}_{2}[\alpha_{2}^{-1}]=\emptyset\). Then for \(S^{\prime}_{1}:=f^{-1}(S^{\prime}_{2})\) and \(\alpha_{1}:=f^{*}\alpha_{2}\in\operatorname{Br}(S^{\prime}_{1})\), we have \(T_{1}\cap S^{\prime}_{1}[\alpha_{1}^{-1}]=\emptyset\). 
In this setup, we have \[\operatorname{per}\alpha_{1}|\operatorname{per}\alpha_{2}|(\deg f)\operatorname{per}\alpha_{1}\] so \(\gcd(\operatorname{per}\alpha_{2},n\deg f)=1\) implies \(\operatorname{per}\alpha_{1}=\operatorname{per}\alpha_{2}\). By the above, we may reduce to the case \(S=\mathbf{P}_{k}^{1}\) by choosing a finite \(k\)-morphism \(f:S\to\mathbf{P}_{k}^{1}\) and replacing \(S,T,n\) by \(\mathbf{P}_{k}^{1},f(T),n\deg f\), respectively. By a translation of \(\mathbf{P}_{k}^{1}\), we may assume that \(0,\infty\not\in T\). For any \(s\in T\), let \(\xi_{s}\in k[t]\) be the monic irreducible polynomial defining the closed subscheme \(\operatorname{Spec}\kappa(s)\to\mathbf{P}_{k}^{1}\) and set \(\xi:=\prod_{s\in T}\xi_{s}\). Choose a positive integer \(m\geq 2\) such that \(\gcd(m,n)=1\). Since \(k\) is Hilbertian, by [1, 16.3.6] there exists a cyclic Galois extension \(k^{\prime}/k\) of degree \([k^{\prime}:k]=m\). Choose a positive integer \(i\in\mathbf{Z}_{\geq 1}\) such that \(m\nmid i+\deg\xi\), and let \(\chi\in\operatorname{Aut}(k^{\prime}/k)\) be a generator. We show that the Brauer class \(\alpha\in\operatorname{Br}(k(t))\) corresponding to the cyclic algebra \[\mathcal{A}:=(k^{\prime}(t)/k(t),\chi,t^{i}\xi+1)\] has the desired properties. To show that \(\mathcal{A}\) is nontrivial, it suffices by [10, SS15.1, Lemma] to show that the unit \(t^{i}\xi+1\in(k(t))^{\times}\) is not equal to the norm of any element of \((k^{\prime}(t))^{\times}\). For this, we note that for any \(\frac{a}{b}\in(k^{\prime}(t))^{\times}\), the norm \(N_{k^{\prime}(t)/k(t)}(\frac{a}{b})=\prod_{\sigma\in\operatorname{Aut}(k^{\prime}/k)}\sigma(\frac{a}{b})\) has degree \(m(\deg(a)-\deg(b))\), but \(\deg(t^{i}\xi+1)\) is not a multiple of \(m\) (by choice of \(i\)).
For any closed point \(s\in T\), we have that \(\mathcal{A}\) is unramified at \(s\) (since \(\gcd(\xi_{s},t^{i}\xi+1)=1\)) and the restriction \[\mathcal{A}|_{s}=(k^{\prime}\otimes_{k}\kappa(s)/\kappa(s),\chi,(t^{i}\xi+1)|_{\kappa(s)})\simeq(k^{\prime}\otimes_{k}\kappa(s)/\kappa(s),\chi,1)\] is a trivial central simple \(\kappa(s)\)-algebra. **Proposition 4.3**.: _Let \(k\) be a field that is finitely generated over a global field, let \(S\) be a smooth proper geometrically integral curve over \(k\) of genus \(g\geq 1\). For any closed point \(x\in S_{(0)}\), there exists a Brauer class \(\alpha\in\operatorname{Br}(S)\) such that \(\eta_{S}\in S[\alpha^{-1}]\) and \(x\not\in S[\alpha^{-1}]\)._ Proof.: If \(x\) is a \(k\)-point, then the Leray spectral sequence gives a split exact sequence \[0\to\operatorname{Br}(k)\to\operatorname{Br}(S)\to\operatorname{H}^{1}(\operatorname{Spec}k,\operatorname{Jac}_{S})\to 0\] where \(\operatorname{Jac}_{S}\) is the Jacobian of \(S\). Since \(k\) is Hilbertian, by [10, Theorem 1.6] the group \(\operatorname{H}^{1}(\operatorname{Spec}k,\operatorname{Jac}_{S})\) contains elements of order \(n\) for any positive integer \(n\); given any such element, we obtain a nontrivial element of \(\operatorname{Br}(S)\) which vanishes at \(x\). If \(x\) is not a \(k\)-point, then the extension \(\kappa(x)/k\) is nontrivial; by a theorem of Fein and Schacher [11] the relative Brauer group \(\operatorname{Br}(\kappa(x)/k)\) is infinite. For any nontrivial class \(\alpha\in\operatorname{Br}(\kappa(x)/k)\), the constant class \(\alpha|_{S}\in\operatorname{Br}(S)\) is a Brauer class which vanishes at \(x\).
**Remark 4.4**.: In Proposition 4.3, the condition \(S\not\simeq\mathbf{P}^{1}_{k}\) is necessary since the pullback \(\operatorname{Br}(k)\to\operatorname{Br}(\mathbf{P}^{1}_{k})\) is an isomorphism (so if \(x\in\mathbf{P}^{1}_{k}(k)\) is a \(k\)-point then there does not exist any nontrivial \(\alpha\in\operatorname{Br}(\mathbf{P}^{1}_{k})\) which vanishes at \(x\)). ## 5. Applications to rational points on genus \(1\) curves As an application of our results in Section 2, we prove Corollary 1.12, which may be viewed as a local-to-global principle for genus \(1\) curves over function fields of fourfolds over \(k\). We first prove a version of [10, Lemma 3.2] for relative elliptic curves. **Lemma 5.1**.: _Let \(S\) be a quasi-compact scheme admitting an ample line bundle, let \(\pi:\mathcal{E}\to S\) be a relative elliptic curve, and let \(\sigma:S\to\mathcal{E}\) denote the identity section of \(\pi\). There is a natural isomorphism_ \[\ker(\sigma^{*}:\operatorname{Br}(\mathcal{E})\to\operatorname{Br}(S))\simeq\operatorname{H}^{1}_{\operatorname{\acute{e}t}}(S,\mathcal{E})_{\operatorname{tors}}\] _which is functorial in \(S\)._ Proof.: We consider the Leray spectral sequence for \(\pi\). Since \(\pi\) is a relative curve, we have \(\mathbf{R}^{2}\pi_{*}\mathbf{G}_{m}=0\) by [1, II, Lemma 2\({}^{\prime}\)]. Since \(\pi\) admits a section, the pullback \(\pi^{*}:\operatorname{H}^{3}_{\operatorname{\acute{e}t}}(S,\mathbf{G}_{m})\to\operatorname{H}^{3}_{\operatorname{\acute{e}t}}(\mathcal{E},\mathbf{G}_{m})\) is injective, hence the differential \(d^{1,1}_{2}:\operatorname{H}^{1}_{\operatorname{\acute{e}t}}(S,\mathbf{R}^{1}\pi_{*}\mathbf{G}_{m})\to\operatorname{H}^{3}_{\operatorname{\acute{e}t}}(S,\mathbf{G}_{m})\) is the zero map.
Thus we obtain an exact sequence \[0\to\operatorname{H}^{2}_{\operatorname{\acute{e}t}}(S,\mathbf{G}_{m})\stackrel{{\pi^{*}}}{{\to}}\operatorname{H}^{2}_{\operatorname{\acute{e}t}}(\mathcal{E},\mathbf{G}_{m})\to\operatorname{H}^{1}_{\operatorname{\acute{e}t}}(S,\mathbf{R}^{1}\pi_{*}\mathbf{G}_{m})\to 0 \tag{5.1.1}\] which admits a canonical splitting induced by \(\sigma^{*}\). We have isomorphisms \[\mathbf{R}^{1}\pi_{*}\mathbf{G}_{m}\simeq\operatorname{Pic}_{\mathcal{E}/S}\simeq\operatorname{Pic}^{0}_{\mathcal{E}/S}\times\underline{\mathbf{Z}}\simeq\mathcal{E}\times\underline{\mathbf{Z}}\] of etale sheaves on \(S\). Since \(\operatorname{H}^{1}_{\operatorname{\acute{e}t}}(S,\underline{\mathbf{Z}})\) is torsion-free (see e.g. [15, A.3]), we have a split exact sequence \[0\to\operatorname{Br}(S)\stackrel{{\pi^{*}}}{{\to}}\operatorname{Br}(\mathcal{E})\to\operatorname{H}^{1}_{\operatorname{\acute{e}t}}(S,\mathcal{E})_{\operatorname{tors}}\to 0\] obtained by restricting to the torsion subgroups in (5.1.1). **5.2** (Proof of Corollary 1.12).: By replacing \(S\) with an open subscheme, we may assume that \(S\) is affine and regular; furthermore, we may assume that \(C\) has good reduction over \(S\), i.e. that there exists a scheme \(\mathcal{C}\) and a smooth projective morphism \(\pi:\mathcal{C}\to S\) whose geometric fibers are connected curves of genus \(1\) and such that there is a \(K\)-isomorphism \(\pi^{-1}(\eta_{S})\simeq C\), where \(\eta_{S}\in S\) denotes the generic point. Let \(\mathcal{E}:=\operatorname{Pic}^{0}_{\mathcal{C}/S}\) be the Jacobian of \(\mathcal{C}\). We have that \(\mathcal{C}\) is an \(\mathcal{E}\)-torsor, i.e. there is a class \([\mathcal{C}]\in\mathrm{H}^{1}_{\mathrm{et}}(S,\mathcal{E})\) which is the obstruction to the existence of a section of \(\pi\). Let \[\alpha\in\mathrm{Br}(\mathcal{E})\] be the Brauer class corresponding to \([\mathcal{C}]\) under the isomorphism in Lemma 5.1.
We will show that \(\alpha|_{x}=0\) for all closed points \(x\) of the generic fiber \(\mathcal{E}\times_{S}\operatorname{Spec}K\); if so, then \(\alpha|_{\mathcal{E}\times_{S}\operatorname{Spec}K}=0\) by Theorem 1.9 applied to \(\mathcal{E}\times_{S}\operatorname{Spec}K\). Let \(\widetilde{S}:=\overline{\{x\}}\) be the closure of \(x\) in \(\mathcal{E}\), equipped with the reduced scheme structure. As the composition \(\widetilde{S}\to\mathcal{E}\to S\) is generically finite and dominant, it follows that \(\widetilde{S}\) is itself an integral finite type \(F\)-scheme of dimension \(\dim\widetilde{S}\geq 1\). By restricting \(S\) again to a smaller Zariski open set, we can assume that \(\widetilde{S}\to S\) is finite flat and that \(\widetilde{S}\) is smooth over \(F\). To show that \(\alpha|_{x}=0\), by Theorem 1.9 applied to \(\widetilde{S}\), it suffices to show that \(\alpha|_{\widetilde{s}}=0\) for every codimension \(1\) point \(\widetilde{s}\in\widetilde{S}\). Choose such a point \(\widetilde{s}\in\widetilde{S}\) and consider its image \(s\in S\), which is also codimension \(1\) since \(\widetilde{S}\to S\) is finite flat. By hypothesis, the restriction \([\mathcal{C}\times_{S}\operatorname{Spec}K_{s}]\in\mathrm{H}^{1}_{\mathrm{et}}(\operatorname{Spec}K_{s},\mathcal{E}\times_{S}\operatorname{Spec}K_{s})\) is trivial, so \(\alpha|_{\mathcal{E}\times_{S}\operatorname{Spec}K_{s}}=0\) by Lemma 5.1. Since \(\mathcal{E}\times_{S}\operatorname{Spec}\mathscr{O}^{\wedge}_{S,s}\) is regular, we have \(\alpha|_{\mathcal{E}\times_{S}\operatorname{Spec}\mathscr{O}^{\wedge}_{S,s}}=0\), thus \(\alpha|_{\mathcal{E}\times_{S}\operatorname{Spec}\kappa(s)}=0\), in particular \(\alpha|_{\widetilde{s}}=0\).
2304.11231
From Hyperbolic to Parabolic Parameters along Internal Rays
For the quadratic family $f_{c}(z) = z^2+c$ with $c$ in a hyperbolic component of the Mandelbrot set, it is known that every point in the Julia set moves holomorphically. In this paper we give a uniform derivative estimate of such a motion when the parameter $c$ converges to a parabolic parameter $\hat{c}$ radially; in other words, it stays within a bounded Poincar\'e distance from the internal ray that lands on $\hat{c}$. We also show that the motion of each point in the Julia set is uniformly one-sided H\"older continuous at $\hat{c}$ with exponent depending only on the petal number. This paper is a parabolic counterpart of the authors' paper ``From Cantor to semi-hyperbolic parameters along external rays" (Trans. Amer. Math. Soc. 372 (2019) pp. 7959--7992).
Yi-Chiuan Chen, Tomoki Kawahira
2023-04-21T19:51:52Z
http://arxiv.org/abs/2304.11231v1
# From Hyperbolic to Parabolic Parameters along Internal Rays ###### Abstract For the quadratic family \(f_{c}(z)=z^{2}+c\) with \(c\) in a hyperbolic component of the Mandelbrot set, it is known that every point in the Julia set moves holomorphically. In this paper we give a uniform derivative estimate of such a motion when the parameter \(c\) converges to a parabolic parameter \(\hat{c}\) radially; in other words, it stays within a bounded Poincare distance from the internal ray that lands on \(\hat{c}\). We also show that the motion of each point in the Julia set is uniformly one-sided Holder continuous at \(\hat{c}\) with exponent depending only on the petal number. This paper is a parabolic counterpart of the authors' paper "From Cantor to semi-hyperbolic parameters along external rays" (_Trans. Amer. Math. Soc._**372** (2019) pp. 7959-7992). ###### Contents * 1 Introduction and main results * 2 Radial access condition and S-cycles * 3 Proof of the main theorem assuming three lemmas * 4 Postcritical set and hyperbolic metric * 5 Proof of Lemma A assuming Lemmas G and H * 6 Proof of Lemma B * 7 Proofs of Propositions 2.2 and 2.3 * 8 Proof of Lemma G * 9 Proof of Proposition 2.4 * 10 Proof of Lemma H * 11 Proof of Lemma D * 12 Proof of Lemma C * 13 Proofs of Theorems 1.2 and 1.3 Introduction and main results Hyperbolic components. Let \(\mathbb{M}\) be the _Mandelbrot set_, the connectedness locus of the quadratic family \[\big{\{}f_{c}:z\mapsto z^{2}+c\big{\}}_{c\,\in\,\mathbb{C}}.\] That is, the Julia set \(J(f_{c})\) is connected if and only if \(c\in\mathbb{M}\). A parameter \(c\in\mathbb{M}\) is called _hyperbolic_ if \(f_{c}\) has a (super-)attracting periodic point. Equivalently, there exist positive numbers \(\gamma_{c}\) and \(\varepsilon_{c}\) such that \(|Df_{c}^{n}(z)|\geq\gamma_{c}(1+\varepsilon_{c})^{n}\) for any \(n\geq 0\) and \(z\in J(f_{c})\).
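To illustrate the expansion estimate at the simplest hyperbolic parameter: for the center \(c=0\) the Julia set is the unit circle, and the chain rule gives \(|Df_{0}^{n}(z)|=2^{n}\) for \(|z|=1\), so one may take \(\gamma_{c}=\varepsilon_{c}=1\). A minimal numerical sketch (ours, not from the paper):

```python
import cmath

def deriv_along_orbit(c, z, n):
    """|Df_c^n(z)| computed via the chain rule as the product of |2 f_c^k(z)|."""
    d = 1.0
    for _ in range(n):
        d *= abs(2 * z)
        z = z * z + c
    return d

# For c = 0 the Julia set is the unit circle and |Df_0^n(z)| = 2^n there,
# so the hyperbolicity estimate holds with gamma_c = eps_c = 1.
for theta in (0.1, 0.37, 2.5):
    z = cmath.exp(1j * theta)
    for n in (1, 5, 10):
        assert abs(deriv_along_orbit(0, z, n) - 2**n) < 1e-6
```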
The set of hyperbolic parameters in \(\mathbb{M}\) is an open subset and its connected components are called _hyperbolic components_ of the Mandelbrot set. (The complement of \(\mathbb{M}\) is also called a hyperbolic component, but in this paper we only consider those contained in the Mandelbrot set.) Let \(\mathbb{D}\) be the unit disk in \(\mathbb{C}\) and \(\mathbb{X}\) a hyperbolic component of \(\mathbb{M}\). Sullivan and Douady-Hubbard (see [DH, Exposes XIV & XIX] and [Mi, Thm.6.5]) gave a _uniformization_ of \(\mathbb{X}\), which is a canonical homeomorphism \(\Phi=\Phi_{\mathbb{X}}:\overline{\mathbb{D}}\to\overline{\mathbb{X}}\) such that \(\Phi|_{\mathbb{D}}:\mathbb{D}\to\mathbb{X}\) is a conformal isomorphism; and for any \(c=\Phi(\mu)\) with \(\mu\in\overline{\mathbb{D}}-\{1\}\), the map \(f_{c}\) has a periodic point of multiplier \(\mu\) with a common period. The parameters \(\Phi(0)\) and \(\Phi(1)\) in \(\overline{\mathbb{X}}\) are called the _center_ and the _root_ of \(\mathbb{X}\) respectively. Note that the _Poincare distance_ in \(\mathbb{X}\) is defined by pulling back the Poincare (hyperbolic) metric \(|dz|/(1-|z|^{2})\) on \(\mathbb{D}\) by the isomorphism \(\Phi\). Internal rays and thick internal rays. For a given hyperbolic component \(\mathbb{X}\) and a given real number \(\theta\), we define the _internal ray_\(I(\theta)\) of angle \(\theta\) by \[I(\theta)=I_{\mathbb{X}}(\theta):=\big{\{}\Phi(re^{2\pi i\theta})\in\mathbb{X}\,:\,0\leq r<1\big{\}}.\] The point \[\hat{c}:=\Phi(e^{2\pi i\theta})\in\partial\mathbb{X}\] is called the _landing point_ of \(I(\theta)\). For a given \(\delta\geq 0\), we define the \(\delta\)_-thick internal ray_\(\mathcal{I}(\theta,\delta)\) of angle \(\theta\) by the closed \(\delta\)-neighborhood of \(I(\theta)\) in \(\mathbb{X}\) with respect to the Poincare distance.
We say the parameter \(c\)_tends to \(\hat{c}\) along a thick internal ray_ if there exists a \(\delta\geq 0\) such that \(c\) stays in the \(\delta\)-thick internal ray \(\mathcal{I}(\theta,\delta)\) while it tends to \(\hat{c}\). It is rather common to say that such a \(c\) converges to \(\hat{c}\)_radially_ (after McMullen [Mc2]) or _non-tangentially_. Indeed, for any angle \(A_{0}\in[0,\pi/2)\), if \(c\) stays in the \(\delta(A_{0})\)-thick internal ray with \[\delta(A_{0})=\frac{1}{2}\log\frac{1+\tan(A_{0}/2)}{1-\tan(A_{0}/2)}\in[0,\infty), \tag{1.1}\] then by letting \(\mu_{c}:=\Phi^{-1}(c)\) and \(\hat{\mu}:=\Phi^{-1}(\hat{c})\) we have \[\left|\arg\left(1-\frac{\mu_{c}}{\hat{\mu}}\right)\right|\leq A_{0}\] for \(c\) sufficiently close to \(\hat{c}\). In other words, \(\mu_{c}\) stays in the Stolz angle at \(\hat{\mu}\in\partial\mathbb{D}\) with opening angle \(2A_{0}\). (See [P, p.7] and Figure 4 in the next section.) Holomorphic motion of the hyperbolic Julia sets. It is well-known that there exists a _holomorphic motion_ ([BR, L, Mc1, MSS]) of the Julia sets over any hyperbolic component \(\mathbb{X}\) of \(\mathbb{M}\). Indeed, we have an equivariant holomorphic motion as follows. For any base point \(\sigma\in\mathbb{X}\), there exists a unique map \(H:\mathbb{X}\times J(f_{\sigma})\to\mathbb{C}\) such that 1. \(H(\sigma,z)=z\) for any \(z\in J(f_{\sigma})\). 2. For any \(c\in\mathbb{X}\), the map \(z\mapsto H(c,z)\) is injective on \(J(f_{\sigma})\). 3. For any \(z\in J(f_{\sigma})\), the map \(c\mapsto H(c,z)\) is holomorphic on \(\mathbb{X}\). 4. For any \(c\in\mathbb{X}\), the map \(h_{c}(z):=H(c,z)\) satisfies \(h_{c}(J(f_{\sigma}))=J(f_{c})\) and \(f_{c}\circ h_{c}=h_{c}\circ f_{\sigma}\) on \(J(f_{\sigma})\). See [Mc1, SS4] for more details. In this paper we choose the center \(\sigma:=\Phi(0)\) of \(\mathbb{X}\) as the base point of the motion.
We are concerned with boundary behavior of such an equivariant holomorphic motion of the Julia set \(J(f_{\sigma})\) when \(c\in\mathbb{X}\) tends to some \(\hat{c}\in\partial\mathbb{X}\) along a thick internal ray. Parabolic parameters. Now suppose that the angle \(\theta\) of the internal ray \(I(\theta)\) is a rational number. Then for the landing point \(\hat{c}\) of \(I(\theta)\), \(f_{\hat{c}}\) has a parabolic periodic point, that is, a periodic point whose multiplier is a root of unity. We say such a parameter \(\hat{c}\) is _parabolic_, and a parabolic periodic point \(\hat{b}\) of \(f_{\hat{c}}\) has \(q\) _petals_ if the local dynamics of \(f_{\hat{c}}^{k}\) near \(\hat{b}\) is of the form \(\zeta\mapsto\zeta+\zeta^{q+1}+O(\zeta^{q+2})\) for some \(k\) in an appropriate local coordinate. In our setting, it is known that \(\hat{b}\) has \(q\) petals if and only if the multiplier of \(\hat{b}\) is a primitive \(q\)-th root of unity (since \(f_{\hat{c}}\) is quadratic and has only one critical point in \(\mathbb{C}\)). **Example 1** (Period one, the main cardioid).: For the hyperbolic component \(\mathbb{X}_{1}\) containing \(0\) (_the main cardioid_), the map \(\Phi=\Phi_{\mathbb{X}_{1}}\) is explicitly given by \(\Phi(\mu):=-\mu^{2}/4+\mu/2\) (\(\mu\in\overline{\mathbb{D}}\)) and \(f_{\Phi(\mu)}\) has a fixed point with multiplier \(\mu\). The internal ray of angle \(\theta\) is given by \(I_{\mathbb{X}_{1}}(\theta)=\big{\{}-r^{2}e^{4\pi\theta i}/4+re^{2\pi\theta i} /2\,:\,0\leq r<1\big{\}}.\) In Figure 1, \(I_{\mathbb{X}_{1}}(\theta)\) for \(\theta=0,1/2\) and \(1/3\) are depicted as paths (a), (b), and (c) respectively. The corresponding holomorphic motions along \(I_{\mathbb{X}_{1}}(0)\) and \(I_{\mathbb{X}_{1}}(1/2)\) are illustrated in Figure 2 and in Figures 3(a) and 3(b). The motion along \(I_{\mathbb{X}_{1}}(1/3)\) is depicted in Figure 3(c).
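The explicit formula in Example 1 can be checked by hand: for \(c=\Phi(\mu)\) the point \(z=\mu/2\) satisfies \(z^{2}+c=z\) and \(Df_{c}(z)=2z=\mu\). A minimal Python verification (also confirming the landing points \(\Phi(1)=1/4\) and \(\Phi(-1)=-3/4\)):

```python
import cmath

def f(z, c):
    return z * z + c

def Phi(mu):
    # explicit uniformization of the main cardioid X_1 (Example 1)
    return -mu**2 / 4 + mu / 2

for mu in [0.0, 0.5j, -0.9, 0.8 * cmath.exp(2j * cmath.pi / 3)]:
    c = Phi(mu)
    z = mu / 2                           # fixed point: z^2 + c = z identically
    assert abs(f(z, c) - z) < 1e-15
    assert abs(2 * z - mu) < 1e-15       # multiplier Df_c(z) = 2z = mu

assert abs(Phi(1) - 0.25) < 1e-15        # landing point of I_{X_1}(0)
assert abs(Phi(-1) + 0.75) < 1e-15       # landing point of I_{X_1}(1/2)
```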
**Example 2** (Period two).: Similarly the hyperbolic component \(\mathbb{X}_{2}\) containing \(-1\) consists of hyperbolic parameters \(c\) such that \(f_{c}\) has an attracting cycle of period two. The map \(\Phi=\Phi_{\mathbb{X}_{2}}\) is explicitly given by \(\Phi(\mu):=\mu/4-1\) for \(\mu\in\overline{\mathbb{D}}\), and \(f_{\Phi(\mu)}\) with \(\mu\in\overline{\mathbb{D}}-\{1\}\) has a periodic point of period two with multiplier \(\mu\). The internal ray of angle \(\theta\) is given by \(I_{\mathbb{X}_{2}}(\theta)=\big{\{}re^{2\pi\theta i}/4-1\,:\,0\leq r<1\big{\}}.\) In Figure 1, \(I_{\mathbb{X}_{2}}(\theta)\) for \(\theta=0\) and \(1/3\) are depicted as paths (d) and (e) respectively. The internal ray \(I_{\mathbb{X}_{2}}(0)\) lands at the root \(\hat{c}=\Phi(1)=-3/4\) of \(\mathbb{X}_{2}\), where the map \(f_{\hat{c}}\) has a parabolic fixed point of multiplier \(-1\) that has two petals. Note that \(\hat{c}\) is the landing point of another internal ray \(I_{\mathbb{X}_{1}}(1/2)\) of \(\mathbb{X}_{1}\). See Figures 3(d) and 3(e) for the motions along \(I_{\mathbb{X}_{2}}(0)\) and \(I_{\mathbb{X}_{2}}(1/3)\). **Example 3** (Period three).: There is a hyperbolic component \(\mathbb{X}_{3}\) attached to the main cardioid \(\mathbb{X}_{1}\) that consists of parameters \(c\) with \(\operatorname{Im}c>0\) for which \(f_{c}\) has an attracting cycle of period three. The internal ray \(I_{\mathbb{X}_{3}}(0)\) (depicted as path (f) in Figure 1) joins the center (so-called "rabbit") and the root \(\hat{c}=\Phi(1)\) ("the fat rabbit"), where the map \(f_{\hat{c}}\) has a parabolic fixed point of multiplier \(e^{2\pi i/3}\) that has three petals. (See Figure 3(f) for the holomorphic motion along \(I_{\mathbb{X}_{3}}(0)\).) Again \(\hat{c}\) is the landing point of another internal ray \(I_{\mathbb{X}_{1}}(1/3)\) of \(\mathbb{X}_{1}\). Main results. Let \(\mathbb{X}\) be a hyperbolic component in the Mandelbrot set and \(\sigma\) be its center.
For any \(z_{*}\) in \(J(f_{\sigma})\), the map \(c\mapsto z(c):=H(c,z_{*})\) is holomorphic over \(\mathbb{X}\). Let \(\hat{c}\in\partial\mathbb{X}\) be the landing point of the internal ray \(I_{\mathbb{X}}(\theta)\) of rational angle \(\theta\). Our main theorem states that the speed of \(z(c)=H(c,z_{*})\) is uniformly bounded by a function of \(|c-\hat{c}|\) as \(c\) tends to \(\hat{c}\) along a thick internal ray: **Theorem 1.1** (Main Theorem).: _Suppose that \(f_{\hat{c}}\) has a parabolic periodic point with \(q\) petals and \(c\) tends to \(\hat{c}\) along a thick internal ray \(\mathcal{I}(\theta,\delta)\) in \(\mathbb{X}\). Then there exists a constant \(K>0\) depending only on \(\hat{c}\) and \(\delta\) such that for any \(z=z(c)\in J(f_{c})\), the point \(z(c)\) moves holomorphically with_ \[\left|\frac{d}{dc}z(c)\right|\leq\frac{K}{\left|\,c-\hat{c}\,\right|^{1-1/Q}},\] _where \(Q=\max\{2,q\}\)._ By this theorem we obtain one-sided Hölder continuity of the holomorphic motion along thick internal rays landing on parabolic parameters:

Figure 1: Internal rays (a) - (f) of the Mandelbrot set.

**Theorem 1.2** (One-sided Hölder Continuity).: _Under the same assumption as Theorem 1.1 above, the point \(z=z(c)\) in \(J(f_{c})\) tends to a limit \(z(\hat{c})\) in \(J(f_{\hat{c}})\) as \(c\) tends to \(\hat{c}\) along the thick internal ray \(\mathcal{I}(\theta,\delta)\). Moreover, there exists a constant \(K^{\prime}\) depending only on \(\hat{c}\) and \(\delta\) such that_ \[|z(c)-z(\hat{c})|\leq K^{\prime}|\,c-\hat{c}\,|^{1/Q} \tag{1.2}\] _for any \(c\) in \(\mathcal{I}(\theta,\delta)\)._ As an immediate consequence, the holomorphic motion of each point \(z(c)\in J(f_{c})\) lands when \(c\) moves along the internal ray \(I(\theta)=\mathcal{I}(\theta,0)\).
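For the real slice \(c\in[0,1/4)=I_{\mathbb{X}_{1}}(0)\) the exponent \(1/Q\) in (1.2) can be seen explicitly: here \(\hat{c}=1/4\), \(q=1\), \(Q=2\), and the motion of the repelling fixed point \(\beta(c)=(1+\sqrt{1-4c})/2\) satisfies \(|\beta(c)-\beta(1/4)|=|c-1/4|^{1/2}\) exactly (cf. Remark 1.5 below). A quick numerical confirmation of this sample computation (a sketch, not used in the proofs):

```python
import math

c_hat = 0.25                            # parabolic parameter with q = 1, so Q = 2

def beta(c):
    # repelling fixed point of f_c for c < 1/4; beta(c_hat) = 1/2 is parabolic
    return (1 + math.sqrt(1 - 4 * c)) / 2

for c in [0.0, 0.24, 0.2499, 0.249999]:
    lhs = abs(beta(c) - beta(c_hat))    # displacement of the moving point z(c)
    rhs = abs(c - c_hat) ** 0.5         # |c - c_hat|^{1/Q} with Q = 2
    assert abs(lhs - rhs) < 1e-12       # the bound (1.2) of Theorem 1.2 is attained
```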
This theorem yields a precise description of the degeneration of the dynamics on the Julia sets along the internal rays of rational angles: **Theorem 1.3** (Pinching Semiconjugacy).: _Under the same assumption as the theorems above, the conjugacy \(H(c,\cdot)=h_{c}:J(f_{\sigma})\to J(f_{c})\) converges uniformly to a semiconjugacy \(h_{\hat{c}}:J(f_{\sigma})\to J(f_{\hat{c}})\) from \(f_{\sigma}\) to \(f_{\hat{c}}\) as \(c\) tends to \(\hat{c}\) along the thick internal ray \(\mathcal{I}(\theta,\delta)\). Moreover, \(h_{\hat{c}}\) satisfies the following:_ 1. _If_ \(\hat{c}\) _is the root of_ \(\mathbb{X}\)_, then_ \(h_{\hat{c}}\) _is injective and thus a conjugacy._ 2. _If_ \(\hat{c}\) _is not the root of_ \(\mathbb{X}\) _(hence_ \(q\geq 2\)_), then the preimage_ \(h_{\hat{c}}^{-1}(\{w\})\) _of any_ \(w\in J(f_{\hat{c}})\) _consists of one or_ \(q\) _distinct points, and the latter holds if and only if_ \(w\) _eventually lands on a parabolic periodic point of_ \(f_{\hat{c}}\)_._ 3. _The semiconjugacy_ \(\eta_{c}:=h_{\hat{c}}\circ h_{c}^{-1}:J(f_{c})\to J(f_{\hat{c}})\) _satisfies_ \[|\eta_{c}(z)-z|\leq K^{\prime}|c-\hat{c}|^{1/Q}\] (1.3) _for any_ \(c\) _in the thick internal ray_ \(\mathcal{I}(\theta,\delta)\)_._ By (3) of this theorem we obtain:

Figure 2: Real analytic motion of the preimages of the repelling fixed point along the internal rays \(I(1/2)\) (left) and \(I(0)\) (right) in \(\mathbb{X}_{1}\).

**Corollary 1.4** (Hausdorff Convergence).: _The Hausdorff distance between \(J(f_{c})\) and \(J(f_{\hat{c}})\) is \(O(|c-\hat{c}|^{1/Q})\) as \(c\) tends to \(\hat{c}\) along a thick internal ray._ _Remark 1.5_.: * These results are parabolic counterparts of the authors' results in [CK1] about parameter rays (external rays) landing on semi-hyperbolic parameters of the Mandelbrot set.
* For any \(c\in\mathbb{X}\) and \(z=z(c)\in J(f_{c})\), we have \[\left|\frac{d}{dc}z(c)\right|\leq\frac{1+\sqrt{1+6\left|c\right|}}{\operatorname {dist}\left(c,\partial\mathbb{X}\right)}\] (1.4) by Proposition 3.1 in [CK1]. However, this will only give \[\left|\frac{d}{dc}z(c)\right|=O\!\left(\frac{1}{\left|c-\hat{c}\right|}\right)\] as \(c\) tends to \(\hat{c}\). * In [CK2], the authors showed that for any \(c\in[0,1/4)=I_{\mathbb{X}_{1}}(0)\) and \(z=z(c)\in J(f_{c})\), we have an optimal estimate \[\left|\frac{d}{dc}z(c)\right|\leq\frac{1}{2\sqrt{1/4-c}}.\] In particular, the Hausdorff distance between \(J(f_{c})\) and \(J(f_{1/4})\) is exactly \(\sqrt{1/4-c}\). * The existence of the semiconjugacy in Theorem 1.3 and the Hausdorff convergence of the Julia sets in Corollary 1.4 were previously shown in a more general context by the second author [K1] and McMullen [Mc2] respectively. The novel part of our results is the quantitative estimate \(O(|c-\hat{c}|^{1/Q})\). Structure of the paper. In Section 2, we define a parametrization of \(c\) with a complex parameter \(t\in\mathbb{C}\) such that \(c=c_{t}\) converges to \(\hat{c}\) along a thick internal ray as \(t\to 0\). Also in Section 2, we state three propositions, Propositions 2.2, 2.3 and 2.4, which concern the local dynamics of \(f_{c}\) in a neighborhood \(U_{0}\) of a parabolic point of \(f_{\hat{c}}\) when \(c\) is near \(\hat{c}\), and will be employed to prove Theorems 1.2 and 1.3 as well as some lemmas in the paper. Then, we introduce the notion of "S-cycle" to describe how an orbit of \(f_{c}\) repeatedly (infinitely or finitely many times, or never) enters and leaves a fixed subset of \(U_{0}\). In Section 3, by assuming Lemmas A, B and C, we prove our main theorem, Theorem 1.1. It is well-known (for example [Mc1, §3.2]) that the Julia set is expanding with respect to the hyperbolic metric on \(\mathbb{C}-P(f_{c})\), where \(P(f_{c})\) denotes the postcritical set of \(f_{c}\).
In order to estimate the expansion of the Julia set with respect to the Euclidean metric, we give an estimate of the distance between \(z\in J(f_{c})\) and \(P(f_{\hat{c}})\) in Lemma D in Section 4 for \(z\) not too close to the parabolic cycle of \(f_{\hat{c}}\). Lemma A is proved in Section 5 by assuming another two lemmas, Lemmas G and H. Both lemmas rely on the local dynamics of the perturbed parabolic cycle. We prove Lemma B in Section 6, and Lemma G in Section 8. Section 7 is devoted to the proofs of Propositions 2.2 and 2.3. We use a branched coordinate to prove Proposition 2.4 in Section 9. We also employ the branched coordinate to prove Lemma H in Section 10. Then, using some results presented in Section 10, we are able to prove Lemma D in Section 11. Some arguments in the proofs of Lemmas H and D are used to prove Lemma C in Section 12. Finally, in Section 13 we prove Theorems 1.2 and 1.3 simultaneously.

## 2 Radial access condition and S-cycles

In this section we introduce the notion of S-cycles for a given orbit in the Julia set. The idea of S-cycles was introduced in [11] to describe orbits that repeatedly come close to the postcritical set. Here we present a modified version where the postcritical set is replaced by the parabolic cycle. Notation. We start with some notation and the terminology that will be used in what follows. * Let \(\mathbb{N}\) denote the set of positive integers. We denote the set of non-negative integers by \(\mathbb{N}_{0}:=\{0\}\cup\mathbb{N}\). * Let \(\mathbb{D}(a,r)\) denote the disk in \(\mathbb{C}\) centered at \(a\) and of radius \(r>0\). When \(a=0\) we denote it by \(\mathbb{D}(r)\). * For non-negative variables \(X\) and \(Y\), by \(X\asymp Y\) we mean there exists an implicit constant \(C>1\) independent of \(X\) and \(Y\) such that \(X/C\leq Y\leq CX\). * When we say "for any \(X\ll 1\)" it means that "for any sufficiently small \(X>0\)". More precisely, we mean there exists an implicit constant \(C>0\) such that \(0<X<C\).
Hyperbolic components and internal rays. Let \(\hat{c}\in\partial\mathbb{M}\) be a parabolic parameter having a parabolic periodic point \(\hat{b}\) of period exactly \(p\). Let \(\hat{\lambda}:=Df^{p}_{\hat{c}}(\hat{b})\) be the multiplier of this cycle, and assume that it is a primitive \(q\)-th root of unity. We specify an internal ray \(I(\theta)=I_{\mathbb{X}}(\theta)\) of the hyperbolic component \(\mathbb{X}\) that lands at \(\hat{c}\) as follows. (See [DH] or [Mi] for details on the hyperbolic components of \(\mathbb{M}\).) Case 1. If \(q=1\), then there is only one hyperbolic component \(\mathbb{X}\) such that \(\hat{c}=\Phi_{\mathbb{X}}(1)\in\partial\mathbb{X}\), where \(\Phi_{\mathbb{X}}\) is the uniformizing map of \(\mathbb{X}\). Hence by letting \(\theta=0\) the internal ray \(I(\theta)\) of \(\mathbb{X}\) lands at \(\hat{c}\). Case 2. If \(q\geq 2\), then there are exactly two hyperbolic components \(\mathbb{X}^{-}\) and \(\mathbb{X}^{+}\) such that * \(\partial\mathbb{X}^{-}\cap\partial\mathbb{X}^{+}=\{\hat{c}\}\). * \(\hat{c}=\Phi_{\mathbb{X}^{-}}(\hat{\lambda})\) and \(\hat{c}=\Phi_{\mathbb{X}^{+}}(1)\), where \(\Phi_{\mathbb{X}^{\pm}}\) is the uniformizing map of \(\mathbb{X}^{\pm}\).

Figure 3: Holomorphic motions along the internal rays depicted in Figure 1.

Hence \(\mathbb{X}\) can be either \(\mathbb{X}^{-}\) or \(\mathbb{X}^{+}\), and the case of \(q\geq 2\) is divided into two sub-cases. * **Case \(\mathbf{2^{-}}\):** If \(\mathbb{X}=\mathbb{X}^{-}\), then we let \(\theta=(\arg\hat{\lambda})/(2\pi)\); and * **Case \(\mathbf{2^{+}}\):** If \(\mathbb{X}=\mathbb{X}^{+}\), then we let \(\theta=0\) in such a way that the internal ray \(I(\theta)=I_{\mathbb{X}}(\theta)\) of \(\mathbb{X}\) lands at \(\hat{c}\). Note that \(\hat{c}\) is the root of \(\mathbb{X}\) if and only if it is as in Case 1 or Case \(2^{+}\).
Hence, \[\theta=\left\{\begin{array}{ll}0&\mbox{for Case 1 or Case $2^{+}$},\\ (\arg\hat{\lambda})/(2\pi)&\mbox{for Case $2^{-}$}.\end{array}\right.\] **Example 4**.: Case 1 holds when \(\hat{c}=1/4\) with \(\mathbb{X}=\mathbb{X}_{1}\) (Example 1), or \(\hat{c}=-7/4\) with \(\mathbb{X}\) whose center is a unique real parameter \(\sigma<0\) with \(f_{\sigma}^{3}(0)=0\) ("the airplane"). If \(\hat{c}=-3/4\), Case \(2^{-}\) holds when \(\mathbb{X}^{-}=\mathbb{X}_{1}\), and Case \(2^{+}\) holds when \(\mathbb{X}^{+}=\mathbb{X}_{2}\) (Example 2). Radial convergence and thick internal rays. Let \(A_{0}\) and \(T_{0}\) be constants with \(0\leq A_{0}<\pi/2\) and \(0<T_{0}<2\cos A_{0}\), and let \[\Delta=\Delta(A_{0},T_{0}):=\{t\in\mathbb{C}\,:\,0<|t|\leq T_{0},\ |\arg t\,|\leq A_{0}\}.\] The _Stolz angle_ at \(\hat{\lambda}\) with opening angle \(2A_{0}\) is given by \[S(\Delta):=\{\mu=(1-t)\hat{\lambda}\in\mathbb{D}\,:\,t\in\Delta\}\subset \mathbb{D}. \tag{2.1}\] Let \(\Phi:=\Phi_{\mathbb{X}}\). If the parameter \(c\in\mathbb{X}\) tends to \(\hat{c}\) satisfying \(\Phi^{-1}(c)\in S(\Delta)\), we say \(c\to\hat{c}\) _radially_ after McMullen [Mc2]. For a given \(\delta\)-thick internal ray \(\mathcal{I}(\theta,\delta)\subset\mathbb{X}\) of angle \(\theta\), one can easily check that \[\Phi^{-1}(\mathcal{I}(\theta,\delta))\cap\mathbb{D}(\hat{\lambda},T_{0}) \subset S(\Delta)\] if \(\delta\leq\delta(A_{0})\), which is given in (1.1). See Figure 4. Hence in what follows it is enough to consider the parameters of the form \(c=\Phi(\mu)\) with \(\mu\in S(\Delta)\cup\{\hat{\lambda}\}\). _Remark 2.1_.: Conversely, the Stolz angle \(S(\Delta)\) is contained in \(\Phi^{-1}(\mathcal{I}(\theta,\delta))\) with \(\delta=\delta(A_{0})+1\) by taking a sufficiently small \(T_{0}\). This implies that the convergence in a thick internal ray is equivalent to the radial convergence for some angle.
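The constant \(\delta(A_{0})\) of (1.1) can be read as the asymptotic Poincaré distance (in the normalization \(|dz|/(1-|z|^{2})\) fixed above) from the edge of the Stolz angle to the radius \([0,1)\). The following numerical sketch checks this interpretation in the model situation \(\hat{\mu}=1\) (an illustration only, not used in the arguments):

```python
import math, cmath

def hyp_dist(z, w):
    # Poincare distance in the unit disk for the metric |dz| / (1 - |z|^2)
    rho = abs((z - w) / (1 - w.conjugate() * z))
    return 0.5 * math.log((1 + rho) / (1 - rho))

A0 = 0.8                                          # half-opening angle of the Stolz angle
delta = 0.5 * math.log((1 + math.tan(A0 / 2)) / (1 - math.tan(A0 / 2)))   # (1.1)

t = 1e-4
mu = 1 - t * cmath.exp(1j * A0)                   # on the edge of the Stolz angle at 1
# hyperbolic distance from mu to the radius [0, 1), minimized over a fine grid:
d = min(hyp_dist(mu, complex(1 - 10.0 ** (-8 + 8 * k / 4000), 0))
        for k in range(4000))
assert abs(d - delta) < 1e-2                      # close to delta(A0) when t is small
```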
Parametrization and notation. For a technical reason, instead of (2.1), it is convenient to re-parametrize \(\mu\in S(\Delta)\) by \[\mu=\mu_{t}:=\left\{\begin{array}{ll}1-q\,t&\mbox{in Case 1 or Case 2}^{+}\\ (1-t/q)\hat{\lambda}&\mbox{in Case 2}^{-}\end{array}\right.\] for \(t\in\Delta=\Delta(A_{0},T_{0})\) with sufficiently small \(T_{0}\). The radial access condition. In what follows, by \[c\approx\hat{c}\quad\mbox{or}\quad c=c_{t}\approx\hat{c}\] we mean the parameter \(c\) is of the form \[c=c_{t}=\Phi(\mu_{t})\] for some \(t\in\Delta=\Delta(A_{0},T_{0})\), where we take a smaller \(T_{0}\) in the definition of \(\Delta\) if necessary. We say such a parameter \(c\) _satisfies the radial access condition_ or \(c\) _is in a thick internal ray._ Perturbation of parabolic points. Let \(c_{0}:=\hat{c}\) and \(\lambda_{0}:=\hat{\lambda}\). In Section 7, we will show the following two propositions under the radial access condition: **Proposition 2.2**.: _The function \(t\mapsto c=c_{t}\) is holomorphic on \(\Delta\), and there exists a constant \(B_{0}\neq 0\) such that_ * **Case 1 (\(q=1\)):** \(c_{t}=\hat{c}+B_{0}t^{2}+O(t^{3})\) * **Case 2\({}^{\pm}\) (\(q\geq 2\)):** \(c_{t}=\hat{c}+B_{0}t+O(t^{2})\) _as \(t\in\Delta\) tends to \(0\). In particular, we have \(\sqrt{|c_{t}-\hat{c}|}\asymp|t|\) or \(|c_{t}-\hat{c}|\asymp|t|\) for \(t\in\Delta\) according to Case 1 or Case 2\({}^{\pm}\)._ **Proposition 2.3**.: _There exists a continuous map \(t\mapsto b_{t}\) defined for \(t\in\Delta\cup\{0\}\) such that_ 1. \(b_{0}=\hat{b}\) _and_ \(b_{t}\) _is a periodic point of_ \(f_{c_{t}}\) _with the same period_ \(p\) _as_ \(\hat{b}\)_._ 2. _Let_ \(\lambda_{t}:=Df_{c_{t}}^{p}(b_{t})\)_.
Then there exist two families of holomorphic local coordinates_ \(\{\zeta=\varphi_{t}(z)\}_{t\,\in\,\Delta\cup\{0\}}\) _and_ \(\{w=\psi_{t}(z)\}_{t\,\in\,\Delta\cup\{0\}}\) _defined on a disk_ \(\hat{U}:=\mathbb{D}(\hat{b},\hat{R})\) _such that: For each_ \(z\in\hat{U}\)_,_ \(\varphi_{t}(z)\) _and_ \(\psi_{t}(z)\) _are holomorphic in_ \(t\in\Delta\) _and continuous at_ \(t=0\)_;_ \(\varphi_{t}(b_{t})=\psi_{t}(b_{t})=0\)_; and_ \[\varphi_{t}\circ f_{c_{t}}^{p}\circ\varphi_{t}^{-1}(\zeta) =\lambda_{t}\zeta+\zeta^{q+1}+O(\zeta^{2q+1}),\mbox{ and}\] (2.2) \[\psi_{t}\circ f_{c_{t}}^{pq}\circ\psi_{t}^{-1}(w) =\lambda_{t}^{q}w\,(1+w^{q}+O(w^{2q})).\] (2.3) _In particular, both \(D\varphi_{t}^{-1}(0)\) and \(D\psi_{t}^{-1}(0)\) are uniformly bounded away from zero._ 3. _In Case 1 or Case 2\({}^{+}\), \(b_{t}\) is repelling for \(t\in\Delta\), and the multiplier satisfies_ \[\lambda_{t}=\left(1+t/q+o(t)\right)\hat{\lambda}.\] _Moreover, there are_ \(q\) _distinct attracting fixed points_ \(\alpha_{t}^{1},\,\cdots,\,\alpha_{t}^{q}\) _of_ \(f_{c_{t}}^{pq}\) _satisfying_ \(Df_{c_{t}}^{pq}(\alpha_{t}^{j})=\mu_{t}=1-qt\) _and_ \(\left(\psi_{t}(\alpha_{t}^{j})\right)^{q}=-t+o(t)\) _for_ \(j=1,\cdots,q\)_._ 4. _In Case 2\({}^{-}\),_ \(b_{t}\) _is attracting for_ \(t\in\Delta\) _and the multiplier satisfies_ \[\lambda_{t}=\mu_{t}=\left(1-t/q\right)\hat{\lambda}.\] _Moreover, there are_ \(q\) _distinct repelling fixed points_ \(\beta_{t}^{1},\,\cdots,\,\beta_{t}^{q}\) _of_ \(f_{c_{t}}^{pq}\) _satisfying_ \(Df_{c_{t}}^{pq}(\beta_{t}^{j})=1+qt+o(t)\) _and_ \(\left(\psi_{t}(\beta_{t}^{j})\right)^{q}=t+o(t)\) _for_ \(j=1,\cdots,q\)_._ The local dynamics of \(f_{c_{t}}^{pq}\) observed as (2.3) behaves quite similarly to that of \(f_{\hat{c}}^{pq}\). See Figure 5.
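The fixed points described in (3) can be located on the truncated normal form (2.3) itself: take \(q=2\), \(\hat{\lambda}=-1\), and \(g(w)=\lambda_{t}^{q}w(1+w^{q})\) with the \(O(w^{2q})\) tail dropped. Its nonzero fixed points satisfy \(w^{q}=1/\lambda_{t}^{q}-1=-t+O(t^{2})\) with multiplier \(1-qt+O(t^{2})\), in accordance with \((\psi_{t}(\alpha_{t}^{j}))^{q}=-t+o(t)\) and \(Df_{c_{t}}^{pq}(\alpha_{t}^{j})=\mu_{t}\). A short numerical sketch (a toy model of the local dynamics, not the actual \(f_{c_{t}}\)):

```python
import cmath

q, t = 2, 1e-3
lam = -(1 + t / q)                    # lambda_t = (1 + t/q) hat-lambda, hat-lambda = -1

def g(w):
    # truncated normal form (2.3): the O(w^{2q}) tail is dropped
    return lam**q * w * (1 + w**q)

wq = 1 / lam**q - 1                   # common value of w^q at the nonzero fixed points
assert abs(wq - (-t)) < 5 * t**2      # (psi_t(alpha))^q = -t + o(t)

w = cmath.sqrt(wq)                    # one of the q = 2 symmetric fixed points
assert abs(g(w) - w) < 1e-15          # it is indeed fixed by g
mult = lam**q * (1 + (q + 1) * w**q)  # derivative g'(w)
assert abs(mult - (1 - q * t)) < 5 * t**2   # multiplier mu_t = 1 - qt, to first order
```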
In particular, in the domain \(\hat{U}=\mathbb{D}(\hat{b},\hat{R})\) of \(\psi_{t}\), the map \(f_{c_{t}}^{pq}\) has exactly * one repelling fixed point \(b_{t}\) in Case 1 or Case 2\({}^{+}\); and * \(q\) repelling fixed points in Case 2\({}^{-}\) that are symmetrically arrayed near \(b_{t}\). Definition of \(U_{0}\). We fix a small \(R_{0}\in(0,\hat{R})\) such that the disk \[U_{0}:=\mathbb{D}(\hat{b},R_{0})\quad(\Subset\hat{U})\] possesses the property that \(f_{\hat{c}}^{p}:U^{\prime}\to U_{0}\) is univalent for each connected component \(U^{\prime}\) of \(f_{\hat{c}}^{-p}(U_{0})\). Such an \(R_{0}\) exists because the orbit \(f_{\hat{c}}^{n}(0)\) of \(0\) (the critical point) keeps a definite distance from the parabolic cycle for \(0\leq n\leq p\). The next proposition will be proved in Section 9: **Proposition 2.4**.: _For each parameter \(c_{t}\) with \(t\in\Delta\), if \(z_{0}\in J(f_{c_{t}})\cap U_{0}\) is none of the repelling fixed points of \(f_{c_{t}}^{pq}\) described as above, then the orbit \(z_{kpq}:=f_{c_{t}}^{kpq}(z_{0})\) \((k\in\mathbb{N}_{0})\) leaves \(U_{0}\) for some \(k>0\)._

Figure 5: The middle frame depicts the local dynamics of \(f_{\hat{c}}^{pq}\) near \(\hat{U}\) (indicated by the big circle) that can be mildly perturbed in two ways: Case 2\({}^{-}\) on the left, and Case 2\({}^{+}\) on the right. The white, black, and gray dots indicate attracting, repelling, and parabolic fixed points respectively. The red spiky curves indicate the Julia sets.

We choose some \(\xi\in(0,1]\) such that \[\operatorname{dist}\left(0,f_{\hat{c}}^{k}(U_{0})\right)\geq\frac{3\xi}{4}\] for any integer \(k\) with \(-p\leq k\leq p\). Note that by continuity, we have \[\operatorname{dist}\left(0,f_{c}^{k}(U_{0})\right)\geq\frac{\xi}{2}\] for any \(c\approx\hat{c}\) (taking a smaller \(T_{0}\) in the definition of \(\Delta\) if necessary) and any \(k\) with \(-p\leq k\leq p\).
_Remark 2.5_.: We will frequently use the following property: _If \(z\in f_{c}^{k}(U_{0})\) for some \(c\approx\hat{c}\) and \(k\) with \(-p\leq k\leq p\), then \(|Df_{c}(z)|=2|z|\geq\xi\)._ Definition of \(V_{0}\) and \(\mathcal{V}(c)\). Now we take a small \(\nu\in(0,R_{0})\) and let \[V_{0}:=\mathbb{D}(\hat{b},\nu)\quad(\Subset U_{0}).\] We also define \[\mathcal{V}(c):=\bigcup_{j=0}^{p-1}f_{c}^{-j}(V_{0})\] for each \(c\approx\hat{c}\). By Remark 2.5 above, \(f_{c}(z)\in\mathcal{V}(c)\) implies \(|Df_{c}(z)|\geq\xi\) for \(c\approx\hat{c}\). S-cycles. For \(c\approx\hat{c}\), let \(z_{0}\) be any point in the Julia set \(J(f_{c})\). The orbit \(z_{n}:=f_{c}^{n}(z_{0})\) \((n\in\mathbb{N}_{0})\) may land on \(V_{0}\) \((\Subset U_{0})\), and leave \(U_{0}\) by Proposition 2.4 (unless it lands exactly on the repelling cycle), then it may come back to \(V_{0}\) again. To describe the behavior of such an orbit, we introduce the notion of "S-cycle" for the orbit of \(z_{0}\), where "S" indicates that the orbit stays near the "singularity" of the hyperbolic metric \(\rho(z)|dz|\) on the complement of the postcritical set of \(f_{\hat{c}}\) to be defined in Section 4. Definition (S-cycle). A _finite S-cycle_ of the orbit \(z_{n}=f_{c}^{n}(z_{0})\) \((n\in\mathbb{N}_{0})\) is a finite subset of \(\mathbb{N}_{0}\) of the form \[\mathsf{S}=\{n\in\mathbb{N}_{0}\,:\,M\leq n<M^{\prime}\}=[M,M^{\prime})\cap \mathbb{N}_{0}\] with the following properties: (S1) \(z_{M}\in V_{0}\), and if \(M>0\) then \(z_{M-1}\notin V_{0}\). (S2) There exists a minimal \(m\geq 1\) such that for \(n=M+mpq\), \(z_{n-pq}\in U_{0}\) but \(z_{n}\notin U_{0}\). (S3) \(M^{\prime}=M+mpq+L\) for some \(L\in[1,\infty)\) such that \(z_{n}\notin V_{0}\) for \(n=M+mpq+i\) \((0\leq i<L)\) and \(z_{M^{\prime}}\in V_{0}\).
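The bookkeeping in (S1)-(S3) is purely combinatorial. The following Python fragment (a toy illustration with synthetic membership flags, not an actual orbit computation) extracts \(m\) and \(L\) for a finite S-cycle starting at index \(M\):

```python
def finite_s_cycle(in_V, in_U, M, p, q):
    """Given membership flags in_V[n], in_U[n] for an orbit z_n (with V0 inside U0),
    return (m, L) for the finite S-cycle starting at M, assuming one exists."""
    assert in_V[M] and (M == 0 or not in_V[M - 1])           # (S1)
    m = 1                                                    # (S2): minimal m with
    while not (in_U[M + (m - 1) * p * q] and not in_U[M + m * p * q]):
        m += 1                                               #   z_{n-pq} in U0, z_n not in U0
    L = 1                                                    # (S3): first return to V0
    while not in_V[M + m * p * q + L]:
        L += 1
    return m, L                                              # so S = [M, M + m*p*q + L)

# toy orbit with p = q = 1: enters V0 at n = 2, leaves U0 at n = 5, returns at n = 7
in_V = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]
in_U = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
assert finite_s_cycle(in_V, in_U, M=2, p=1, q=1) == (3, 2)   # M' = 2 + 3 + 2 = 7
```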
An _infinite S-cycle_ \(\mathsf{S}\) of the orbit \(z_{n}=f_{c}^{n}(z_{0})\) (\(n\in\mathbb{N}_{0}\)) is an infinite subset of \(\mathbb{N}_{0}\) of the form \[\mathsf{S}=\{n\in\mathbb{N}_{0}\,:\,M\leq n<\infty\}=[M,\infty)\cap\mathbb{N}_{0},\] satisfying either * Type (I): (S1), (S2), and * \(z_{n}\notin V_{0}\) for all \(n\geq M+mpq\); or * Type (II): (S1) and * \(z_{M+kpq}\in V_{0}\) for any \(k\in\mathbb{N}\). Equivalently, \(z_{M}\) is a repelling periodic point of period \(p\) or period \(pq\) in \(V_{0}\) (by Proposition 2.4). By an _S-cycle_ we mean a finite or infinite S-cycle. In both cases, we denote them by \(\mathsf{S}=[M,M^{\prime})\) or \(\mathsf{S}=[M,\infty)\) for brevity. _Remark 2.6_.: We may assume without loss of generality that \(L\) of the finite S-cycle is at least \(p\) by shrinking the radius of the disk \(U_{0}\). Indeed, after the orbit \(z_{n}\) (\(n\in\mathbb{N}_{0}\)) leaves \(U_{0}\) when \(n=M+mpq\), the orbit follows the parabolic cycle for a while and it cannot land immediately on \(V_{0}\) by the local dynamics near the perturbed cycle. See Figure 6.

Figure 6: A finite S-cycle \([M,M^{\prime})\) with \(M^{\prime}=M+mpq+L\), \(p=3\). The dotted arrow indicates there is another connected component of \(f_{c}^{-p}(U_{0})\) intersecting with \(U_{0}\) which is not drawn.

Decomposition of the orbit by S-cycles. For a given orbit \(z_{n}=f_{c}^{n}(z_{0})\) (\(n\in\mathbb{N}_{0}\)) of \(z_{0}\in J(f_{c})\), the set \(\mathbb{N}_{0}\) of indices is uniquely decomposed by using finite or infinite S-cycles in one of the following three types: * The first type is of the form \[\mathbb{N}_{0}=[0,M_{1})\sqcup\mathsf{S}_{1}\sqcup\mathsf{S}_{2}\sqcup\cdots,\] (2.4) where \(z_{n}\notin V_{0}\) for \(n\in[0,M_{1})\) and \(\mathsf{S}_{k}:=[M_{k},M_{k+1})\) is a finite S-cycle for each \(k\geq 1\).
* The second type is of the form \[\mathbb{N}_{0}=[0,M_{1})\sqcup\mathsf{S}_{1}\sqcup\mathsf{S}_{2}\sqcup\cdots \sqcup\mathsf{S}_{k_{0}}\sqcup\emptyset\sqcup\emptyset\sqcup\cdots\] (2.5) with \(k_{0}\geq 1\), where \(z_{n}\notin V_{0}\) for \(n\in[0,M_{1})\); \(\mathsf{S}_{k}:=[M_{k},M_{k+1})\) is a finite S-cycle for each \(1\leq k\leq k_{0}-1\); \(\mathsf{S}_{k_{0}}=[M_{k_{0}},\infty)\) is an infinite S-cycle; and \(\mathsf{S}_{k}=\emptyset\) for \(k\geq k_{0}+1\). * The third type is \[\mathbb{N}_{0}=[0,M_{1})\sqcup\emptyset\sqcup\emptyset\sqcup\cdots\] (2.6) with \(M_{1}=\infty\) and \(\mathsf{S}_{k}=\emptyset\) for \(k\geq 1\), where \(z_{n}\notin V_{0}\) for all \(n\in\mathbb{N}\). In the first and second types it is possible that \(M_{1}=0\) and \([0,M_{1})\) is empty.

## 3 Proof of the main theorem assuming three lemmas

The derivative formula. Let \(z_{*}\) be any point in \(J(f_{\sigma})\) (where \(\sigma\) is the center of \(\mathbb{X}\)), and consider its motion \(z=z(c)=H(c,z_{*})\in J(f_{c})\) for \(c\in\mathbb{X}\). The estimate of the main theorem is based on the following formula: **Proposition 3.1** (The Derivative Formula).: _For any \(c\in\mathbb{X}\) and \(z=z(c)\in J(f_{c})\), we have_ \[\frac{d}{dc}z(c)=-\sum_{n=1}^{\infty}\frac{1}{Df_{c}^{n}(z(c))}.\] See [CK1, Proposition 3.2] for the proof. Since \(f_{c}\) is hyperbolic, the convergence of the series above is absolute and it is enough to show \[\left|\sum_{n=1}^{\infty}\frac{1}{Df_{c}^{n}(z(c))}\right|\leq\frac{K}{|c- \hat{c}|^{1-1/Q}}\] for some constant \(K>0\) independent of \(c\) in a thick internal ray, where \(Q:=\max\{2,\,q\}\) and \(q\) is the petal number of the parabolic periodic point \(\hat{b}\). Now we present three principal lemmas about S-cycles that are valid for sufficiently small \(\nu\) (the radius of \(V_{0}\)) under the radial access condition \(c=c_{t}\approx\hat{c}\). That is, we only consider \(c=c_{t}\) with \(t\in\Delta=\Delta(A_{0},T_{0})\) as in the previous section.
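As a quick sanity check of Proposition 3.1 before the lemmas (an illustration only, not part of the proofs): for real \(c\) in the main cardioid, the motion of the repelling fixed point is \(z(c)=\beta(c)=(1+\sqrt{1-4c})/2\), so \(Df_{c}^{n}(\beta)=(2\beta)^{n}\) and the series sums to \(-1/(2\beta-1)=-1/\sqrt{1-4c}\), which is exactly \(\beta^{\prime}(c)\).

```python
import math

c = -0.1                                    # a real parameter in the main cardioid
beta = (1 + math.sqrt(1 - 4 * c)) / 2       # repelling fixed point z(c)

# right-hand side of Proposition 3.1, truncated (the tail is geometrically small)
series = -sum((2 * beta) ** (-n) for n in range(1, 200))

exact = -1 / math.sqrt(1 - 4 * c)           # d(beta)/dc by direct differentiation
assert abs(series - exact) < 1e-10
```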
**Lemma A.** _There exists a constant \(K_{\rm A}>0\) such that for any \(c\approx\hat{c}\), any \(z_{0}\in J(f_{c})\), and for any S-cycle \({\sf S}=[M,M^{\prime})\) of the orbit \(z_{n}=f_{c}^{n}(z_{0})\) \((n\in{\mathbb{N}}_{0})\), we have_ \[\left|\sum_{i=1}^{M^{\prime}-M}\frac{1}{Df_{c}^{i}(z_{M})}\right|\leq\frac{K_{\rm A}}{|c-\hat{c}|^{1-1/Q}}, \tag{3.1}\] _where we set \(M^{\prime}-M:=\infty\) if \(M^{\prime}=\infty\)._ **Lemma B.** _There exists a constant \(K_{\rm B}>0\) such that for any \(c\approx\hat{c}\) and any \(M\leq\infty\), if \(z_{0}\in J(f_{c})\) satisfies \(z_{n}\notin V_{0}\) for any \(n\in[0,M)\), then we have_ \[\sum_{i=1}^{M}\frac{1}{|Df_{c}^{i}(z_{0})|}\leq K_{\rm B}. \tag{3.2}\] An immediate consequence of Lemma B is: **Corollary 3.2**.: _For \(c\approx\hat{c}\), if the orbit of \(z=z(c)\in J(f_{c})\) by \(f_{c}\) never lands on \(V_{0}\), then the derivative satisfies_ \[\left|\frac{d}{dc}z(c)\right|\leq\sum_{n=1}^{\infty}\frac{1}{|Df_{c}^{n}(z(c))|}\leq K_{\rm B}. \tag{3.3}\] **Lemma C** (S-cycles Expand Uniformly).: _There exists a constant \(\Lambda>1\) such that for any \(c\approx\hat{c}\), any \(z_{0}\in J(f_{c})\), and for any finite S-cycle \({\sf S}=[M,M^{\prime})\) of the orbit \(z_{n}=f_{c}^{n}(z_{0})\) \((n\in{\mathbb{N}}_{0})\), we have_ \[|Df_{c}^{M^{\prime}-M}(z_{M})|\geq\Lambda. \tag{3.4}\] The constants \(K_{\rm A}\), \(K_{\rm B}\), and \(\Lambda\) above depend only on the choice of \(\hat{c}\), \(\nu\), and the thickness \(\delta\) of \({\cal I}(\theta,\delta)\) (equivalently, the angle \(A_{0}\) of \(\Delta=\Delta(A_{0},T_{0})\)). The proofs of these lemmas will be given later. By assuming these three lemmas, we can give a proof of the main theorem: Proof of the main theorem assuming Lemmas A, B, and C. It is enough to show the theorem for \(c\approx\hat{c}\). (Indeed, if \(c\) stays a uniform distance away from \(\partial{\mathbb{X}}\), the derivative is bounded above by the inequality (1.4).)
For a given \(c\approx\hat{c}\) and \(z_{*}\in J(f_{\sigma})\), let \(z_{0}=z(c)=H(c,z_{*})\in J(f_{c})\). We consider the decomposition \({\mathbb{N}}_{0}=[0,M_{1})\sqcup{\sf S}_{1}\sqcup{\sf S}_{2}\sqcup\cdots\) as in (2.4), (2.5) or (2.6). Then we have \[\left|\frac{d}{dc}z(c)\right| =\left|\sum_{n=1}^{\infty}\frac{1}{Df_{c}^{n}(z_{0})}\right|\leq \sum_{n=1}^{M_{1}}\frac{1}{|Df_{c}^{n}(z_{0})|}+\left|\sum_{k\geq 1}\sum_{n\in{\sf S}_{k}}\frac{1}{Df_{c}^{n+1}(z_{0})}\right|\] \[\leq\sum_{n=1}^{M_{1}}\frac{1}{|Df_{c}^{n}(z_{0})|}+\sum_{k\geq 1,{\sf S}_{k}\neq\emptyset}\frac{1}{|Df_{c}^{M_{k}}(z_{0})|}\left|\sum_{i=1}^{M_{k+1}-M_{k}}\frac{1}{Df_{c}^{i}(z_{M_{k}})}\right|.\] By Lemma B, we obviously have \(1/|Df_{c}^{M_{1}}(z_{0})|\leq K_{\rm B}\). By Lemma C, we have \[|Df_{c}^{M_{k}}(z_{0})|=|Df_{c}^{M_{k}-M_{k-1}}(z_{M_{k-1}})|\cdots|Df_{c}^{M_{2}-M_{1}}(z_{M_{1}})|\ |Df_{c}^{M_{1}}(z_{0})|\geq\Lambda^{k-1}/K_{\rm B}\] as long as \({\sf S}_{k}\neq\emptyset\). Hence by Lemma A, we have \[\left|\sum_{n=1}^{\infty}\frac{1}{Df_{c}^{n}(z_{0})}\right|\leq K_{\rm B}+\sum_{k\geq 1}\frac{K_{\rm B}}{\Lambda^{k-1}}\cdot\frac{K_{\rm A}}{|c-\hat{c}|^{1-1/Q}}=K_{\rm B}+\frac{K_{\rm B}\Lambda}{\Lambda-1}\cdot\frac{K_{\rm A}}{|c-\hat{c}|^{1-1/Q}}.\] We may assume that \(|c-\hat{c}|\leq 1\) so that \(K_{\rm B}\leq K_{\rm B}/|c-\hat{c}|^{1-1/Q}\). Hence by setting \(K:=K_{\rm B}+\frac{K_{\rm B}K_{\rm A}\Lambda}{\Lambda-1}\), we have \(\left|\frac{dz}{dc}\right|\leq\frac{K}{|c-\hat{c}|^{1-1/Q}}\) for any \(c\approx\hat{c}\). \(\blacksquare\)

## 4 Postcritical set and hyperbolic metric

In this section we show some properties of the hyperbolic metric on the complement of the postcritical set of \(f_{\hat{c}}\).
Postcritical sets. The _postcritical set_ \(P(f_{c})\subset\mathbb{C}\) of the polynomial \(f_{c}(z)=z^{2}+c\) is defined by \[P(f_{c}):=\overline{\{f_{c}(0),\,f_{c}^{2}(0),\,f_{c}^{3}(0),\,\cdots\}}.\] In this paper we only consider \(P(f_{\hat{c}})\), which is a countable set and accumulates only on the parabolic cycle of \(f_{\hat{c}}\). In particular, the universal covering of \(\mathbb{C}-P(f_{\hat{c}})\) is the unit disk \(\mathbb{D}\). Let \({\rm dist}\,(z,P(f_{\hat{c}}))\) be the Euclidean distance between \(z\) and \(P(f_{\hat{c}})\). The following lemma provides an estimate of \({\rm dist}\,(z,P(f_{\hat{c}}))\) for \(z\in J(f_{c})-{\cal V}(c)\) for \(c\approx\hat{c}\), and plays a very important role. **Lemma D**.: _There exists a constant \(K_{\rm D}\in(0,1]\) such that for any sufficiently small \(\nu\in(0,R_{0})\), we have \({\rm dist}\,(z,P(f_{\hat{c}}))\geq K_{\rm D}\nu\) for any \(z\in J(f_{c})-{\cal V}(c)\) with \(c=c_{t}\approx\hat{c}\ (t\in\Delta)\) by taking a sufficiently small \(T_{0}\) (which depends on \(\nu\)) in the definition of \(\Delta=\Delta(A_{0},T_{0})\)._ The proof will be given later in Section 11. Hyperbolic metric. For \(c=\hat{c}\), let \(\rho(z)|dz|\) denote the hyperbolic metric of \(\mathbb{C}-P(f_{\hat{c}})\), which is induced by the metric \(|dz|/(1-|z|^{2})\) of constant curvature \(-4\) on the unit disk. The metric \(\rho(z)|dz|\) has the following properties: * \(\rho:\mathbb{C}-P(f_{\hat{c}})\to\mathbb{R}_{+}\) is real analytic and diverges on \(P(f_{\hat{c}})\cup\{\infty\}\). * If both \(z\) and \(f_{\hat{c}}(z)\) are in \(\mathbb{C}-P(f_{\hat{c}})\), we have \[\frac{\rho(f_{\hat{c}}(z))}{\rho(z)}|Df_{\hat{c}}(z)|>1.\] See [Mc1, §3.2] for example. There is an important observation: Fix any compact subset \(\Gamma\) of \(\mathbb{C}-P(f_{\hat{c}})\), and suppose that both \(z\) and \(f_{c}(z)\) are contained in \(\Gamma\) for some \(c\approx\hat{c}\).
Then by continuity of \(\rho(f_{c}(z))|Df_{c}(z)|\) with respect to \(c\), we have \[\frac{\rho(f_{c}(z))}{\rho(z)}|Df_{c}(z)|\geq A,\quad\text{or equivalently }|Df_{c}(z)|\geq\frac{\rho(z)}{\rho(f_{c}(z))}\cdot A,\] for some uniform constant \(A=A_{\Gamma}>1\) independent of \(c\approx\hat{c}\). To give some estimates of the function \(\rho(z)\), we need the following: **Proposition 4.1**.: _The hyperbolic metric \(\rho(z)|dz|\) of \(\mathbb{C}-P(f_{\hat{c}})\) satisfies_ \[\rho(z)\leq\frac{1}{\operatorname{dist}\left(z,P(f_{\hat{c}})\right)}.\] This is just an application of a standard fact on hyperbolic metrics. See [A, Theorems 1.10 & 1.11] for example. Now we are ready to show: **Lemma E.** _If the constant \(\nu\) is sufficiently small, there exists a constant \(K_{\mathrm{E}}\asymp\nu\) with the following property: For any \(c\approx\hat{c}\), we have_ \[\frac{\rho(z)}{\rho(\zeta)}\geq K_{\mathrm{E}}\] _if \(z,\zeta\in J(f_{c})-\mathcal{V}(c)\)._ **Proof.** Note that \(J(f_{c})\subset\overline{\mathbb{D}(2)}\) for \(c\in\mathbb{M}\). Since \(\rho\) diverges only at the postcritical set \(P(f_{\hat{c}})\) in \(\overline{\mathbb{D}(3)}\), there exists a constant \(C_{1}>0\) such that \(\rho(w)\geq C_{1}\) for any \(w\in\overline{\mathbb{D}(3)}-P(f_{\hat{c}})\). In particular, we have \(\rho(z)\geq C_{1}\). By Lemma D, for \(\nu\ll 1\) and \(c\approx\hat{c}\) we have \[\operatorname{dist}\left(\zeta,P(f_{\hat{c}})\right)\geq K_{\mathrm{D}}\nu\] and thus Proposition 4.1 implies \(\rho(\zeta)\leq 1/(K_{\mathrm{D}}\nu)\) for sufficiently small \(\nu\). Now we have \(\rho(z)/\rho(\zeta)\geq C_{1}K_{\mathrm{D}}\nu=:K_{\mathrm{E}}\).
\(\blacksquare\) By Lemma E we obtain a kind of uniform expansion of \(f_{c}\) with respect to \(\rho\): **Lemma F.** _There exists a constant \(A>1\) such that for \(c\approx\hat{c}\), if \(z,f_{c}(z),\dots,f_{c}^{n}(z)\) are all contained in \(J(f_{c})-\mathcal{V}(c)\), we have_ \[|Df_{c}^{n}(z)|\geq K_{\mathrm{E}}A^{n}.\] **Proof.** By Lemma D, \(J(f_{c})-{\cal V}(c)\) is contained in a compact set \(\Gamma\) in \({\mathbb{C}}-P(f_{\hat{c}})\) independent of \(c\approx\hat{c}\). As mentioned in the observation above, there exists a constant \(A=A_{\Gamma}>1\) such that for any \(c\approx\hat{c}\), \[\frac{\rho(f_{c}(w))}{\rho(w)}|Df_{c}(w)|\geq A\] if both \(w\), \(f_{c}(w)\in J(f_{c})-{\cal V}(c)\). By the chain rule, we have \[|Df_{c}^{n}(z)|=\prod_{i=0}^{n-1}|Df_{c}(f_{c}^{i}(z))|\geq\prod_{i=0}^{n-1} \frac{\rho(f_{c}^{i}(z))}{\rho(f_{c}^{i+1}(z))}A\geq\frac{\rho(z)}{\rho(f_{c}^ {n}(z))}A^{n}. \tag{4.1}\] By applying Lemma E with \(\zeta=f_{c}^{n}(z)\), we obtain the desired inequality. ## 5 Proof of Lemma A assuming Lemmas G and H For any \(c=c_{t}\approx\hat{c}\), we choose an arbitrary \(z_{0}\in J(f_{c})\) and let \(z_{n}:=f_{c}^{n}(z_{0})\) (\(n\in{\mathbb{N}}_{0}\)). For a given S-cycle \({\sf S}=[M,M^{\prime})\) of this orbit, we may assume that \(M=0\) without loss of generality. We divide the proof into two cases. First we suppose that \({\sf S}\) is either a finite S-cycle or an infinite S-cycle of type (I). Then there exist \(m\in{\mathbb{N}}\) and \(L\in{\mathbb{N}}\cup\{\infty\}\) such that * \(z=z_{0}\in V_{0}\); * \(z_{(m-1)pq}\in U_{0}\) but \(z_{mpq}\notin U_{0}\); * \(z_{mpq+i}\notin V_{0}\) if \(0\leq i<L\); and * \(M^{\prime}<\infty\) iff \(L<\infty\) and \(M^{\prime}=mpq+L\). As in Remark 2.6, by shrinking \(U_{0}\) (that is, by taking a smaller \(R_{0}\)) if necessary, each \(z_{mpq+i}\) with \(0\leq i\leq p\) remains near \(f^{i}(U_{0})\) and we may assume that \(L>p\) for any S-cycle. 
Recall that under the radial access condition \(c=c_{t}\approx\hat{c}\), we have a periodic point \(b_{t}\) of period \(p\) in \(U_{0}\) such that \(b_{t}\to\hat{b}\) as \(t\in\Delta\) tends to \(0\) (Proposition 2.3). For \(c=c_{t}\approx\hat{c}\), let \(b_{0}(c),b_{1}(c),\ldots,b_{p-1}(c)\) denote the periodic points \(b_{t},f_{c_{t}}(b_{t}),\cdots,f_{c_{t}}^{p-1}(b_{t})\) respectively. The next two lemmas rely on the local dynamics of the perturbed parabolic cycle. **Lemma G.** _Suppose that \(q\geq 2\). There exists a constant \(K_{\rm G}>0\) with the following property: For any \(c\approx\hat{c}\) and \(z_{0}\in J(f_{c})\cap U_{0}\) such that \(z_{p}\), \(z_{2p},\ldots,z_{qp}\in U_{0}\), we have_ \[\left|\sum_{l=0}^{q-1}\frac{1}{Df_{c}^{lp}(z_{j})}\right|\leq K_{\rm G}\big{(}|c-\hat{c}|+|z_{j}-b_{j}(c)|\big{)}\] _for each \(0\leq j\leq p-1\)._ **Lemma H.** _There exists a constant \(K_{\rm H}>0\) with the following property: For any \(c\approx\hat{c}\), \(z_{0}\in J(f_{c})\cap V_{0}\), and \(m\in\mathbb{N}\) such that \(z_{kpq}\in U_{0}\) for \(0\leq k\leq m-1\), we have for each \(0\leq j\leq p-1\) that_ \[\sum_{k=0}^{m-1}\frac{1}{|Df_{c}^{kpq}(z_{j})|}\leq\frac{K_{\rm H}}{|\,c-\hat{c}\,|^{1/2}}\qquad\mbox{if $q=1$},\] _and_ \[\sum_{k=0}^{m-1}\frac{|c-\hat{c}|+|f_{c}^{kpq}(z_{j})-b_{j}(c)|}{|Df_{c}^{kpq}(z_{j})|}\leq\frac{K_{\rm H}}{|\,c-\hat{c}\,|^{1-1/q}}\qquad\mbox{if $q\geq 2$}.\] The proofs of Lemmas G and H will be given later. Finite S-cycles. Set \(f=f_{c}\).
For the finite S-cycle \({\sf S}=[0,M^{\prime})\) with \(M^{\prime}=mpq+L\), we have \[\sum_{i=1}^{M^{\prime}}\frac{1}{Df^{i}(z_{0})}\] \[= \sum_{j=1}^{p}\frac{1}{Df^{j}(z_{0})}\sum_{k=0}^{m-1}\frac{1}{Df^ {kpq}(z_{j})}\sum_{l=0}^{q-1}\frac{1}{Df^{lp}(z_{kpq+j})} \tag{5.1}\] \[\qquad+\sum_{i=1}^{L-p}\frac{1}{Df^{mpq}(z_{0})\,Df^{i}(z_{mpq})} +\sum_{i=1}^{p}\frac{1}{Df^{mpq+(L-p)}(z_{0})\,Df^{i}(z_{mpq+(L-p)})}.\] When \(1\leq j\leq p\), we have \(z_{j}\in f^{j}(U_{0})\) and thus \[|Df^{j}(z_{0})|\geq\xi^{j}\] in (5.1) by Remark 2.5. Note that by the Koebe distortion theorem, \[|Df^{mpq}(z_{j})|\geq C_{2}\cdot\frac{\operatorname{diam}U_{0}}{\operatorname {diam}V_{0}}\geq\frac{C_{3}}{\nu}\] for some constants \(C_{2}\) and \(C_{3}\) independent of \(c\) and \(j\in\{0,1,\ldots,p-1\}\). Consequently, \[\sum_{j=1}^{p}\frac{1}{|Df^{j}(z_{0})|}\sum_{k=0}^{m-1}\frac{1}{ |Df^{kpq}(z_{j})|}\left|\sum_{l=0}^{q-1}\frac{1}{Df^{lp}(z_{kpq+j})}\right|\] \[\leq \begin{cases}\sum_{j=1}^{p}\frac{1}{\xi^{j}}\sum_{k=0}^{m-1}\frac {1}{|Df^{kpq}(z_{j})|}&\mbox{if $q=1$}\\ \sum_{j=1}^{p}\frac{1}{\xi^{j}}\sum_{k=0}^{m-1}\frac{K_{\rm G}\,(|c-\hat{c}|+| z_{kpq+j}-b_{j}(c)|)}{|Df^{kpq}(z_{j})|}&\mbox{if $q\geq 2$ (by using Lemma G)}\end{cases}\] \[\leq \frac{p}{\xi^{p}}\cdot\max\{1,K_{\rm G}\}\cdot\frac{K_{\rm H}}{|c -\hat{c}|^{1-1/Q}}\qquad\mbox{(by Lemma H)}\] where \(Q=\max\{2,\,q\}\). When \(n=mpq+i\) with \(1\leq i\leq L-p\), we have \(z_{mpq}\), \(z_{mpq+i}\notin{\cal V}(c)\) and thus \[|Df^{n}(z_{0})| =|Df^{mpq}(z_{0})|\,|Df^{i}(z_{mpq})|\] \[\geq|Df^{mpq}(z_{0})|\cdot K_{\rm E}\cdot A^{i}.\] Here the constant \(A\) above is the same as that of Lemma F. 
Consequently, \[\sum_{i=1}^{L-p}\frac{1}{|Df^{mpq}(z_{0})|\,|Df^{i}(z_{mpq})|} \leq \sum_{i=1}^{L-p}\frac{1}{|Df^{mpq}(z_{0})|\ K_{\rm E}\ A^{i}}\] \[\leq \frac{\nu}{C_{3}\ K_{\rm E}}\frac{1}{A-1}.\] When \(n=mpq+(L-p)+i\) with \(1\leq i\leq p\), \(f(z_{n})\in{\cal V}(c)\) and thus \[|Df^{n}(z_{0})| =|Df^{mpq+(L-p)}(z_{0})|\,|Df^{i}(z_{mpq+(L-p)})|\] \[\geq|Df^{mpq}(z_{0})|\cdot K_{\rm E}\cdot A^{L-p}\cdot\xi^{i}\] \[\geq|Df^{mpq}(z_{0})|\cdot K_{\rm E}\cdot 1\cdot\xi^{p}.\] Consequently, \[\sum_{i=1}^{p}\frac{1}{|Df^{mpq+(L-p)}(z_{0})|\,|Df^{i}(z_{mpq+(L-p)})|} \leq \sum_{i=1}^{p}\frac{1}{|Df^{mpq}(z_{0})|\ K_{\rm E}\ \xi^{p}}\] \[\leq \frac{p\nu}{C_{3}\ K_{\rm E}\ \xi^{p}}.\] By these estimates, when \(M^{\prime}<\infty\), we have: \[\left|\sum_{i=1}^{M^{\prime}}\frac{1}{Df^{i}(z_{0})}\right| \leq \sum_{j=1}^{p}\frac{1}{|Df^{j}(z_{0})|}\sum_{k=0}^{m-1}\frac{1}{|Df^{kpq}(z_{j})|}\left|\sum_{l=0}^{q-1}\frac{1}{Df^{lp}(z_{kpq+j})}\right|\] \[\ \ +\sum_{i=1}^{L-p}\frac{1}{|Df^{mpq}(z_{0})|\,|Df^{i}(z_{mpq})|}\] \[\ \ +\sum_{i=1}^{p}\frac{1}{|Df^{mpq+(L-p)}(z_{0})|\,|Df^{i}(z_{mpq+(L-p)})|}\] \[\leq \frac{p}{\xi^{p}}\cdot\max\{1,K_{\rm G}\}\cdot\frac{K_{\rm H}}{|c-\hat{c}|^{1-1/Q}}+\frac{\nu}{C_{3}\ K_{\rm E}}\bigg{(}\frac{1}{A-1}+\frac{p}{\xi^{p}}\bigg{)}.\] Hence by setting \[K_{\rm A}:=\max\left\{1,K_{\rm G}\right\}\cdot\frac{2p}{\xi^{p}}\cdot K_{\rm H},\] we obtain our desired estimate \[\left|\sum_{i=1}^{M^{\prime}}\frac{1}{Df^{i}(z_{0})}\right|\leq\frac{K_{\rm A}}{|\,c-\hat{c}\,|^{1-1/Q}}\] when \(|c-\hat{c}|\) is sufficiently small. Note that \(K_{\rm A}\) does not depend on \(m\) and \(L\).
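All of the derivative products and reciprocal-derivative sums estimated above are concretely computable: for \(f_{c}(z)=z^{2}+c\) one has \(Df_{c}(z)=2z\), so \(Df_{c}^{n}(z_{0})=\prod_{i=0}^{n-1}2z_{i}\) by the chain rule. The following Python sketch is our own illustration (not part of the proof, and the sample parameters are arbitrary); it accumulates the derivative along an orbit and cross-checks it against a finite difference.

```python
def orbit_and_derivatives(c, z0, n):
    """Iterate f_c(z) = z^2 + c, accumulating Df_c^i(z0) = prod_j 2*z_j by the
    chain rule, and return (z_n, Df_c^n(z0), partial sums of 1/Df_c^i(z0))."""
    z, D = z0, 1.0 + 0.0j
    S, partial_sums = 0.0 + 0.0j, []
    for _ in range(n):
        D *= 2 * z          # Df_c^{i}(z0) = Df_c(z_{i-1}) * Df_c^{i-1}(z0)
        z = z * z + c       # z_i = f_c(z_{i-1})
        S += 1 / D
        partial_sums.append(S)
    return z, D, partial_sums

# cross-check the chain-rule derivative against a central finite difference
c, z0, n = -1.0 + 0.1j, 0.3 + 0.2j, 8

def fn(z):
    for _ in range(n):
        z = z * z + c
    return z

zn, Dn, sums = orbit_and_derivatives(c, z0, n)
h = 1e-6
fd = (fn(z0 + h) - fn(z0 - h)) / (2 * h)
assert abs(fn(z0) - zn) < 1e-12
assert abs(fd - Dn) / abs(Dn) < 1e-4
```

The partial sums returned here are exactly the quantities \(\sum_{i=1}^{n}1/Df^{i}(z_{0})\) that Lemmas A and B bound; the telescoping factorization (5.1) is simply a regrouping of the chain-rule product along the sub-orbits \(z_{j},z_{kpq+j}\).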
Infinite S-cycles, Type (I). If \(M^{\prime}=\infty\), then \(L=\infty\) and one can easily check \[\left|\sum_{i=1}^{\infty}\frac{1}{Df^{i}(z_{0})}\right|\leq \frac{p}{\xi^{p}}\cdot\max\{1,K_{\mathrm{G}}\}\cdot\frac{K_{\mathrm{H}}}{|c-\hat{c}|^{1-1/Q}}+\frac{\nu}{C_{3}\;K_{\mathrm{E}}}\bigg{(}\frac{1}{A-1}\bigg{)}< \frac{K_{\mathrm{A}}}{|\,c-\hat{c}\,|^{1-1/Q}}\] by the same argument as the case of finite S-cycles. Infinite S-cycles, Type (II). Next we suppose that \(\mathsf{S}=[0,\infty)\) is an infinite S-cycle of Type (II). As we have seen in Section 2 (see in particular Propositions 2.3 and 2.4), \(z_{0}\) must be a repelling periodic point of period \(p\) or \(pq\) of \(f_{c}\) contained in \(V_{0}\) for \(c=c_{t}\approx\hat{c}\) (\(t\in\Delta\)) by taking a smaller \(T_{0}>0\) in the definition of \(\Delta\) if necessary. Note that Lemmas G and H are valid even in such a case. Case 1 (\(q=1\)). Since the assumption of Lemma H is valid for any large \(m\), we have \[\sum_{k=0}^{\infty}\frac{1}{|Df_{c}^{kp}(z_{j})|}\leq\frac{K_{\mathrm{H}}}{|\,c-\hat{c}\,|^{1/2}}\] for \(q=1\) and \(0\leq j\leq p-1\). Hence for \(c\approx\hat{c}\), we have \[\left|\sum_{i=1}^{\infty}\frac{1}{Df^{i}(z_{0})}\right| \leq\sum_{k=0}^{\infty}\sum_{j=1}^{p}\frac{1}{|Df^{kp}(z_{j})||Df^{j}(z_{0})|}\leq\sum_{k=0}^{\infty}\sum_{j=1}^{p}\frac{1}{|Df^{kp}(z_{j})|\cdot\xi^{j}}\leq\frac{pK_{\mathrm{H}}}{\xi^{p}|\,c-\hat{c}\,|^{1/2}}.\] Case \(2^{\pm}\). As in Case 1, Lemma H implies \[\sum_{k=0}^{\infty}\frac{|c-\hat{c}|+|f_{c}^{kpq}(z_{j})-b_{j}(c)|}{|Df_{c}^{kpq}(z_{j})|}\leq\frac{K_{\mathrm{H}}}{|\,c-\hat{c}\,|^{1-1/q}}\] for \(q\geq 2\) and \(0\leq j\leq p-1\). (In Case \(2^{+}\), \(b_{j}(c)\) is a repelling periodic point and we have \(f_{c}^{kpq}(z_{j})=b_{j}(c)\) for any \(k\geq 0\).)
Hence for \(c\approx\hat{c}\), we have \[\left|\sum_{i=1}^{\infty}\frac{1}{Df^{i}(z_{0})}\right| \leq\sum_{j=1}^{p}\frac{1}{|Df^{j}(z_{0})|}\sum_{k=0}^{\infty}\frac{1}{|Df^{kpq}(z_{j})|}\left|\sum_{l=0}^{q-1}\frac{1}{Df^{lp}(z_{kpq+j})}\right|\] \[\leq\sum_{j=1}^{p}\frac{1}{\xi^{j}}\sum_{k=0}^{\infty}\frac{1}{|Df^{kpq}(z_{j})|}\cdot K_{\mathrm{G}}\big{(}|c-\hat{c}|+|z_{kpq+j}-b_{j}(c)|\,\big{)}\] \[\leq\frac{p}{\xi^{p}}\cdot K_{\mathrm{G}}\cdot\frac{K_{\mathrm{H}}}{|c-\hat{c}|^{1-1/q}}\] by Lemma G. In both Case 1 and Case \(2^{\pm}\), we conclude that Lemma A is valid with the same constant \(K_{\mathrm{A}}>0\) defined as above.

## 6 Proof of Lemma B

Without loss of generality, we may assume that either * \(M<\infty\) and \(z_{M}\in V_{0}\); or * \(M=\infty\). For the first case, if \(M\leq p\), then \(z_{0}\in\bigcup_{k=0}^{p}f_{c}^{-k}(V_{0})\) since \(z_{M}\in V_{0}\). Hence \[|Df_{c}^{i}(z_{0})|\geq\xi^{i}\geq\xi^{p}\] for \(i\in[1,M]\). This implies \[\sum_{i=1}^{M}\frac{1}{|Df_{c}^{i}(z_{0})|}\leq\frac{M}{\xi^{p}}\leq\frac{p}{\xi^{p}}.\] If \(M>p\), then \(z_{M-p}\in f_{c}^{-p}(V_{0})\) and \(z_{i}\notin\bigcup_{k=0}^{p}f_{c}^{-k}(V_{0})\) for \(i\in[0,M-p)\). Then * \(|Df_{c}^{i}(z_{0})|\geq K_{\rm E}A^{i}\) for \(i\in[1,M-p]\) (by Lemma F) * \(|Df_{c}^{M-p+j}(z_{0})|\geq K_{\rm E}A^{M-p}\cdot\xi^{j}\geq K_{\rm E}\xi^{p}\) for \(j\in[1,p]\). This implies: \[\sum_{i=1}^{M}\frac{1}{|Df_{c}^{i}(z_{0})|}\leq\sum_{i=1}^{M-p}\frac{1}{K_{\rm E}A^{i}}+\sum_{j=1}^{p}\frac{1}{K_{\rm E}\xi^{p}}\leq\frac{1}{K_{\rm E}}\bigg{(}\frac{1}{A-1}+\frac{p}{\xi^{p}}\bigg{)}.\] By defining \[K_{\rm B}:=\frac{1}{K_{\rm E}}\bigg{(}\frac{1}{A-1}+\frac{p}{\xi^{p}}\bigg{)}\] we have the desired estimate. For the second case (\(M=\infty\)), the orbit never lands on \({\cal V}(c)\) and by Lemma F, we have \[|Df_{c}^{i}(z_{0})|\geq K_{\rm E}A^{i}\] for any \(i\in\mathbb{N}\).
Hence \[\sum_{i=1}^{\infty}\frac{1}{|Df_{c}^{i}(z_{0})|}\leq\sum_{i=1}^{\infty}\frac{1}{K_{\rm E}A^{i}}\leq\frac{1}{K_{\rm E}}\cdot\frac{1}{A-1}<K_{\rm B}.\] \(\blacksquare\)

## 7 Proofs of Propositions 2.2 and 2.3

This section is devoted to the proofs of Propositions 2.2 and 2.3, which describe the local properties of parabolic periodic points and their perturbations along thick internal rays. The argument here relies on some well-known facts originally due to Douady and Hubbard [DH, Exposé XIV] on perturbation of parabolic cycles. Here we will adopt Milnor's formulation in [Mi, §4]. (See also [T, Thm. 1.1 and Thm. A.1].) Let \(\hat{c}\) be a parabolic parameter and \(\hat{b}\) be a parabolic periodic point of period \(p\) of \(f_{\hat{c}}\). Let \(\hat{\lambda}\) be the multiplier \(Df_{\hat{c}}^{p}(\hat{b})\) of \(\hat{b}\), which is a primitive \(q\)-th root of unity. **Proposition 7.1** (Lemma 4.5 of [Mi]).: _There exist unique single valued functions \(c(\lambda)\) and \(z(\lambda)\) defined on a neighborhood of \(\hat{\lambda}\) so that \(z(\lambda)\) is a periodic point of period \(p\) and multiplier \(\lambda\) for the map \(f_{c(\lambda)}\), with \(\hat{c}=c(\hat{\lambda})\) and \(\hat{b}=z(\hat{\lambda})\).
This function \(c(\lambda)\) has a single critical point at \(\hat{\lambda}\) when \(q=1\) (Case 1) but is univalent when \(q\geq 2\) (Case 2\({}^{\pm}\))._ By applying the argument of [K2, Appendix A.4] one can find a holomorphic family of local coordinates \(\zeta=\varphi_{\lambda}(z)\) and \(w=\psi_{\lambda}(z)\) defined near \(z(\lambda)\) such that \(\varphi_{\lambda}(z(\lambda))=\psi_{\lambda}(z(\lambda))=0\), \[\varphi_{\lambda}\circ f_{c(\lambda)}^{p}\circ\varphi_{\lambda}^{-1}(\zeta)=\lambda\zeta+\zeta^{q+1}+O(\zeta^{2q+1}),\text{ and} \tag{7.1}\] \[\psi_{\lambda}\circ f_{c(\lambda)}^{pq}\circ\psi_{\lambda}^{-1}(w)=\lambda^{q}w\left(1+w^{q}+O(w^{2q})\right) \tag{7.2}\] where \[w=\left(\frac{1+\lambda^{q}+\lambda^{2q}+\cdots+\lambda^{(q-1)q}}{\lambda}\right)^{1/q}\zeta.\] (Note that the error terms in (7.1) and (7.2) are slightly refined compared with the similar coordinates given in [DH, Prop. 11.1] and [Mi, Lem. 4.2].) Then we take a sufficiently small \(\hat{R}>0\) such that the domains of these local coordinates contain \(\hat{U}=\mathbb{D}(\hat{b},\hat{R})\) for \(\lambda\) sufficiently close to \(\hat{\lambda}\). In particular, both \(D\varphi_{\lambda}^{-1}(0)\) and \(D\psi_{\lambda}^{-1}(0)\) are uniformly bounded away from zero. **Proposition 7.2**.: _For \(\lambda\neq\hat{\lambda}\), one can find exactly \(q\) non-zero fixed points of \(\psi_{\lambda}\circ f_{c(\lambda)}^{pq}\circ\psi_{\lambda}^{-1}(w)=\lambda^{q}w\left(1+w^{q}+O(w^{2q})\right)\) of the form \(w=w_{\lambda}(1+o(1))\) with multiplier \(1+q(1-\lambda^{q})+o(|\lambda-\hat{\lambda}|)\), where \(w_{\lambda}\) is a \(q\)-th root of \((1-\lambda^{q})/\lambda^{q}\)._ **Proof.** Since the equation of the form \(\lambda^{q}w\left(1+w^{q}+O(w^{2q})\right)=w\) is regarded as a perturbation of the \(\lambda=\hat{\lambda}\) case, it has exactly \(q\) non-zero roots by Hurwitz's theorem. To obtain the estimate of the solution, we apply Rouché's theorem\({}^{1}\).
The multiplier comes from \(D(\psi_{\lambda}\circ f_{c(\lambda)}^{pq}\circ\psi_{\lambda}^{-1})(w)=\lambda^{q}(1+(q+1)w^{q}+O(w^{2q}))\) and \(|w_{\lambda}|\asymp|\lambda-\hat{\lambda}|^{1/q}\). \(\blacksquare\) Footnote 1: Let \(F(w)=w^{q}-w_{\lambda}^{q}=w^{q}-(1-\lambda^{q})/\lambda^{q}\) and \(G(w)=O(w^{2q})\). Let \(\ell:=|\lambda-\hat{\lambda}|\). Then consider the circle \(|w-w_{\lambda}|=\ell^{s}\) with \(1/q<s<1+1/q\). For example, one can take \(s:=(2q+1)/(2q)\). Now one can check \(|w_{\lambda}|\asymp\ell^{1/q}\) and thus \(|F(w)|\asymp\ell^{1-1/q+s}\) and \(|G(w)|=O(\ell^{2})\) on this circle. Since \(|F(w)|>|G(w)|\) for \(\ell\ll 1\), \(F(w)+G(w)=0\) and \(F(w)=0\) have the same number of zeroes, which is one. Hence the solution is of the form \(w=w_{\lambda}+O(\ell^{s})=w_{\lambda}(1+o(1))\). Now we fix a hyperbolic component \(\mathbb{X}\) attached to \(\hat{c}\) with uniformization \(\Phi=\Phi_{\mathbb{X}}:\overline{\mathbb{D}}\to\overline{\mathbb{X}}\), and a Stolz angle \(S(\Delta)\subset\mathbb{D}\) at \(\hat{\lambda}\) with \(\Delta=\Delta(A_{0},T_{0})\). In what follows we will take a sufficiently small \(T_{0}\) if necessary. Let us start with Case 2\({}^{-}\). Proofs of Propositions 2.2 and 2.3 for Case 2\({}^{-}\). We obtain Case 2\({}^{-}\) by letting \(\lambda=\lambda_{t}:=(1-t/q)\hat{\lambda}\) with \(t\in\Delta\) in Proposition 7.1. Indeed, the hyperbolic component \(\mathbb{X}\) is specified by the family of attracting fixed points \(z(\lambda)\) of \(f_{c(\lambda)}^{p}\) with \(\lambda=\lambda_{t}\). Proposition 7.1 implies that \(|c_{t}-\hat{c}|\asymp|\lambda_{t}-\hat{\lambda}|\asymp|t|\) by setting \(c_{t}=c(\lambda_{t})\). This proves Proposition 2.2 for this case. Items (1), (2) and (4) of Proposition 2.3 are straightforward by applying Proposition 7.2 with \(b_{t}:=z(\lambda_{t})\), \(\varphi_{t}:=\varphi_{\lambda_{t}}\), and \(\psi_{t}:=\psi_{\lambda_{t}}\) for \(t\in\Delta\).
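If the \(O(w^{2q})\) error term of (7.2) is dropped, Proposition 7.2 can be verified exactly on the truncated model \(g(w)=\lambda^{q}w(1+w^{q})\): solving \(g(w)=w\) gives \(w^{q}=(1-\lambda^{q})/\lambda^{q}\) on the nose, and the common multiplier is exactly \(1+q(1-\lambda^{q})\), with no need for Rouché's theorem. The following numerical sanity check is our own illustration (the function names and sample perturbation are ours, not the paper's):

```python
import cmath

def model_fixed_points(lam, q):
    """Nonzero fixed points of the truncated model g(w) = lam^q * w * (1 + w^q):
    solving g(w) = w gives the q roots of w^q = (1 - lam^q) / lam^q."""
    rhs = (1 - lam**q) / lam**q
    r, theta = abs(rhs) ** (1.0 / q), cmath.phase(rhs) / q
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k / q)) for k in range(q)]

q = 3
lam_hat = cmath.exp(2j * cmath.pi / q)      # primitive q-th root of unity
lam = (1 + 0.01) * lam_hat                  # small perturbation of the multiplier
g = lambda w: lam**q * w * (1 + w**q)
Dg = lambda w: lam**q * (1 + (q + 1) * w**q)

for w in model_fixed_points(lam, q):
    assert abs(g(w) - w) < 1e-12            # fixed point of the truncated model
    mult = 1 + q * (1 - lam**q)             # exact multiplier on the model
    assert abs(Dg(w) - mult) < 1e-12
```

Restoring the \(O(w^{2q})\) term perturbs both the fixed points and the multiplier only by the higher-order corrections recorded in Proposition 7.2.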
Proofs of Propositions 2.2 and 2.3 for Case 1 and Case 2\({}^{+}\). In these cases, \(q\) non-zero fixed points of \(f_{c(\lambda)}^{pq}\) given in Proposition 7.2 should be attracting with multiplier \(1-qt\) for \(t\in\Delta\), and this specifies the hyperbolic component \(\mathbb{X}\) in Case 1 or Case 2\({}^{+}\). Hence by the equality \(1-qt=1+q(1-\lambda^{q})+o(|\lambda-\hat{\lambda}|)\) we obtain \(\lambda=\lambda_{t}:=(1+t/q+o(t))\hat{\lambda}\) for \(t\in\Delta\). By setting \(c_{t}:=c(\lambda_{t})\), Proposition 7.1 implies that \(|c_{t}-\hat{c}|\asymp|\lambda_{t}-\hat{\lambda}|^{2}\asymp|t|^{2}\) in Case 1, and \(|c_{t}-\hat{c}|\asymp|\lambda_{t}-\hat{\lambda}|\asymp|t|\) in Case 2\({}^{+}\). This proves Proposition 2.2 for these cases. Items (1), (2) and (3) of Proposition 2.3 are straightforward by applying Proposition 7.2 with \(b_{t}:=z(\lambda_{t})\), \(\varphi_{t}:=\varphi_{\lambda_{t}}\), and \(\psi_{t}:=\psi_{\lambda_{t}}\) for \(t\in\Delta\). \(\blacksquare\)

## 8 Proof of Lemma G

Preliminary. Lemma G comes from the symmetric behavior of the local dynamics of \(f_{c}^{p}\) near the periodic point \(b(c)\). Indeed, it holds without the radial access condition and the assumption that \(z_{0}\in J(f_{c})\). To describe its principle in a generalized form, we adopt Milnor's formulation as above. Let \(\lambda\) range over a neighborhood \(\hat{S}\) of \(\hat{\lambda}\) such that the holomorphic maps \(c(\lambda)\) and \(z(\lambda)\) in Proposition 7.1 are defined. Note that the map \(\lambda\mapsto c(\lambda)\) gives an isomorphism between \(\hat{S}\) and a neighborhood of \(\hat{c}=c(\hat{\lambda})\) since we are in the case of \(q\geq 2\) (Case 2\({}^{\pm}\)). We claim: **Proposition 8.1**.: _Suppose that \(q\geq 2\). For any \(\lambda\in\hat{S}\) and \(z_{0}\in\hat{U}=\mathbb{D}(\hat{b},\hat{R})\), suppose that \(z_{lp}:=f_{c(\lambda)}^{lp}(z_{0})\in\hat{U}\) for \(0\leq l\leq q\).
Then we have_ \[\left|\sum_{l=0}^{q-1}\frac{1}{Df_{c(\lambda)}^{lp}(z_{0})}\right|=O(|c(\lambda)-\hat{c}|)+O(|z_{0}-z(\lambda)|)\] _by taking smaller \(\hat{S}\) and \(\hat{U}\) if necessary._ Proof of Proposition 8.1. Let \(\epsilon:=\lambda/\hat{\lambda}-1\) (\(\lambda\in\hat{S}\)) be an alternative parameter such that \[\lambda=(1+\epsilon)\hat{\lambda}\in\hat{S}.\] Since we are in the case of \(q\geq 2\) (Case 2\({}^{\pm}\)), we have \(\hat{\lambda}\neq 1\) and \(\lambda\mapsto c(\lambda)\) is univalent (Proposition 7.1). Hence we obtain \(|c(\lambda)-\hat{c}|\asymp|\lambda-\hat{\lambda}|\asymp|\epsilon|\) for \(\lambda\in\hat{S}\). Recall that we have a local coordinate \(\zeta=\varphi_{\lambda}(z)\) defined on \(\hat{U}\) such that \(\varphi_{\lambda}(z(\lambda))=0\) and \[F_{\lambda}(\zeta):=\varphi_{\lambda}\circ f_{c(\lambda)}^{p}\circ\varphi_{\lambda}^{-1}(\zeta)=\lambda\zeta+\zeta^{q+1}+O(\zeta^{2q+1})\] as in (7.1). For a given \(z_{0}\in\hat{U}\), let \(\varphi_{\lambda}^{-1}(\zeta_{0})=z_{0}\) and \(\zeta_{lp}:=\varphi_{\lambda}(z_{lp})\). Then it is easy to check that \[\zeta_{lp}=F^{l}_{\lambda}(\zeta_{0})=\lambda^{l}\zeta_{0}+\lambda^{l-1}\left(1+\lambda^{q}+\cdots+\lambda^{(l-1)q}\right)\zeta_{0}^{q+1}+O(\zeta_{0}^{2q+1}) \tag{8.1}\] for any \(1\leq l\leq q\). Since the local coordinate \(\varphi_{\lambda}(z)\) depends holomorphically on \(\lambda\), we have \[\varphi_{\lambda}^{-1}(\zeta)=z(\lambda)+C_{4}\zeta+C_{5}\zeta^{2}+O(\zeta^{3}) \tag{8.2}\] for some \(C_{4}=C_{4}(\lambda)\) and \(C_{5}=C_{5}(\lambda)\) with \(C_{4}\) bounded away from zero for \(\lambda\in\hat{S}\).
Let \[{\cal A}(z_{0}):=\sum_{l=0}^{q-1}\frac{1}{Df^{lp}_{c(\lambda)}(z_{0})}.\] Now, \[Df^{lp}_{c(\lambda)}(z_{0})=D\varphi_{\lambda}^{-1}(\zeta_{lp})\ DF^{l}_{ \lambda}(\zeta_{0})\ D\varphi_{\lambda}(z_{0}),\] and \[{\cal A}(z_{0})=1+\frac{D\varphi_{\lambda}^{-1}(\zeta_{0})}{D\varphi_{\lambda }^{-1}(\zeta_{p})}\frac{1}{DF_{\lambda}(\zeta_{0})}+\frac{D\varphi_{\lambda}^{ -1}(\zeta_{0})}{D\varphi_{\lambda}^{-1}(\zeta_{2p})}\frac{1}{DF_{\lambda}^{2} (\zeta_{0})}+\cdots+\frac{D\varphi_{\lambda}^{-1}(\zeta_{0})}{D\varphi_{ \lambda}^{-1}(\zeta_{(q-1)p})}\frac{1}{DF_{\lambda}^{q-1}(\zeta_{0})}.\] By (8.1) and (8.2), \[D\varphi_{\lambda}^{-1}(\zeta_{lp}) = C_{4}+2C_{5}\zeta_{lp}+O(\zeta_{lp}^{2})\] \[= C_{4}+2C_{5}\lambda^{l}\zeta_{0}+O(\zeta_{0}^{2}),\] thus \[\frac{D\varphi_{\lambda}^{-1}(\zeta_{0})}{D\varphi_{\lambda}^{-1 }(\zeta_{lp})} = \frac{C_{4}+2C_{5}\zeta_{0}+O(\zeta_{0}^{2})}{C_{4}+2C_{5}\zeta_{ lp}+O(\zeta_{lp}^{2})}\] \[= 1+\frac{2C_{5}}{C_{4}}(1-\lambda^{l})\ \zeta_{0}+O(\zeta_{0}^{2}).\] Formula (8.1) also gives \[\frac{1}{DF_{\lambda}^{l}(\zeta_{0})}=\frac{1}{\lambda^{l}}\left(1+O(\zeta_{0} ^{2})\right).\] Therefore, \[{\cal A}(z_{0}) = 1+\sum_{l=1}^{q-1}\bigg{\{}\left(1+\frac{2C_{5}}{C_{4}}(1-\lambda ^{l})\zeta_{0}+O(\zeta_{0}^{2})\right)\cdot\frac{1}{\lambda^{l}}\left(1+O( \zeta_{0}^{2})\right)\bigg{\}}\] \[= 1+\sum_{l=1}^{q-1}\frac{1}{\lambda^{l}}\left(1+\frac{2C_{5}}{C_{4 }}(1-\lambda^{l})\zeta_{0}+O(\zeta_{0}^{2})\right)\] \[= \frac{1}{\lambda^{q}}\left(\lambda+\lambda^{2}+\cdots+\lambda^{q }+\frac{2C_{5}}{C_{4}}(\lambda+\lambda^{2}+\cdots+\lambda^{q-1}+\lambda^{q}-q \lambda^{q})\zeta_{0}\right)+O(\zeta_{0}^{2})\] \[= \frac{\lambda+\lambda^{2}+\cdots+\lambda^{q}}{\lambda^{q}}\left(1 +\frac{2C_{5}}{C_{4}}\zeta_{0}\right)-\frac{2qC_{5}}{C_{4}}\zeta_{0}+O(\zeta_{0 }^{2}).\] Clearly, \[\hat{\lambda}+\hat{\lambda}^{2}+\cdots+\hat{\lambda}^{q}=0.\] Thus, by using the alternative parameter \(\epsilon\), we have \[\lambda+\lambda^{2}+\cdots+\lambda^{q} = 
(1+\epsilon)\hat{\lambda}+(1+\epsilon)^{2}\hat{\lambda}^{2}+\cdots+(1+\epsilon)^{q}\hat{\lambda}^{q}\] \[= (\hat{\lambda}+2\hat{\lambda}^{2}+\cdots+q\hat{\lambda}^{q})\ \epsilon+O(\epsilon^{2})\] \[= \frac{q\hat{\lambda}}{\hat{\lambda}-1}\epsilon+O(\epsilon^{2}).\] Hence, \[\mathcal{A}(z_{0}) = \frac{1}{(1+\epsilon)^{q}}\Bigg{\{}\frac{q\hat{\lambda}}{\hat{\lambda}-1}\epsilon+O(\epsilon^{2})\Bigg{\}}\left(1+\frac{2C_{5}}{C_{4}}\zeta_{0}\right)-\frac{2q\,C_{5}}{C_{4}}\zeta_{0}(1+O(\zeta_{0}))\] \[= O(\epsilon)+O(\zeta_{0})\] where the implicit constants are independent of \(\lambda=(1+\epsilon)\hat{\lambda}\in\hat{S}\) and \(\zeta_{0}\in\varphi_{\lambda}(\hat{U})\). Since \(C_{4}\) is uniformly bounded away from zero for \(\lambda\in\hat{S}\), (8.2) implies \(|\zeta_{0}|\asymp|z_{0}-z(\lambda)|.\) Since \(|\epsilon|\asymp|c(\lambda)-\hat{c}|\), we conclude that \[|\mathcal{A}(z_{0})|=O(|c(\lambda)-\hat{c}|)+O(|z_{0}-z(\lambda)|).\] \(\blacksquare\) Proof of Lemma G. When \(j=0\), we apply Proposition 8.1 by letting \(\lambda:=\lambda_{t}\), \(c=c_{t}:=c(\lambda_{t})\), \(b_{0}(c)=b_{t}:=z(\lambda_{t})\) as in the proof of statements (3) and (4) of Proposition 2.3 given in the previous section. (Note that we have \(U_{0}\subset\hat{U}\) and \(S(\Delta)\subset\hat{S}\) by taking a smaller \(T_{0}\).) Then we immediately obtain \[\left|\sum_{l=0}^{q-1}\frac{1}{Df_{c}^{lp}(z_{0})}\right|=O(|c-\hat{c}|)+O(|z_{0}-b_{0}(c)|).\] When \(1\leq j\leq p-1\), we can apply the same argument as Proposition 8.1 by replacing \(\hat{U}\) with \(\hat{U}_{j}:=f_{c}^{j}(\hat{U})\) and \(b_{0}(c)\) with \(b_{j}(c)\). More precisely, having the local coordinate \(\varphi_{0}=\varphi_{\lambda_{t}}\) on \(\hat{U}\) for \(c=c_{t}=c(\lambda_{t})\), we can define a local coordinate \(\varphi_{j}:=\varphi_{0}\circ f_{c}^{-j}|_{\hat{U}_{j}}\) on \(\hat{U}_{j}\) for each \(1\leq j\leq p-1\) by a branch such that \(\varphi_{j}(z_{lp+j})=\zeta_{lp}\) for \(0\leq l\leq q\).
Since \(\varphi_{j}^{-1}\) is of the form \[\varphi_{j}^{-1}(\zeta)=b_{j}(c)+C_{4,j}\zeta+C_{5,j}\zeta^{2}+O(\zeta^{3})\] where \(C_{4,j}\) is a constant bounded away from zero, we have \(|z_{j}-b_{j}(c)|\asymp|\zeta_{0}|\) and the same argument as Proposition 8.1 yields \[\left|\sum_{l=0}^{q-1}\frac{1}{Df_{c}^{lp}(z_{j})}\right|=O(|c-\hat{c}|)+O(|z_{j}-b_{j}(c)|).\] Hence there exists a constant \(K_{\rm G}>0\) independent of \(c=c_{t}\approx\hat{c}\) and \(z_{0}\in U_{0}\) such that \[\left|\sum_{l=0}^{q-1}\frac{1}{Df_{c}^{lp}(z_{j})}\right|\leq K_{\rm G}\big{(}|c-\hat{c}|+|z_{j}-b_{j}(c)|\big{)}\] for \(0\leq j\leq p-1\). \(\blacksquare\)

## 9 Proof of Proposition 2.4

This section is devoted to the proof of Proposition 2.4. Branched coordinates near infinity. Let \(c=c_{t}=c(\lambda_{t})\in\mathbb{X}\cup\{\hat{c}\}\) with \(\lambda_{t}\in S(\Delta)\cup\{\hat{\lambda}\}\) and \(t\in\Delta\cup\{0\}\). It is convenient to use a branched coordinate \(\Psi_{t}:\psi_{t}(\hat{U})\to\overline{\mathbb{C}}_{W}\) given by \(W=\Psi_{t}(w):=-\lambda_{t}^{q^{2}}/(q\,w^{q})\) where \(\overline{\mathbb{C}}_{W}\) denotes the Riemann sphere in \(W\)-coordinate. Let \(W=\Phi_{t}(z):=\Psi_{t}\circ\psi_{t}(z)\). We may assume that \(\Phi_{t}(\hat{U})\) is always contained in \(\big{\{}W\in\overline{\mathbb{C}}_{W}\,:\,|W|\geq\hat{r}\big{\}}\) for some constant \(\hat{r}>0\) independent of \(t\in\Delta\), with \(\hat{r}\hat{R}^{q}\asymp 1\) by taking a smaller \(\hat{R}\) if necessary. In this coordinate we observe the map \(f_{c_{t}}^{pq}\) as \(G:=\Phi_{t}\circ f_{c_{t}}^{pq}\circ\Phi_{t}^{-1}\) (taking an appropriate branch of \(\Phi_{t}^{-1}\)) of the form \[G(W)=G_{t}(W)=\tau\,W+1+O(1/W),\] where \[\tau:=\lambda_{t}^{-q^{2}}=\left\{\begin{array}{ll}1-qt+O(t^{2})&\mbox{for Case 1 and Case 2${}^{+}$},\\ 1+qt+O(t^{2})&\mbox{for Case 2${}^{-}$}\end{array}\right.\] by Proposition 2.3.
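The conjugacy to the near-translation \(W\mapsto\tau W+1+O(1/W)\) can be checked numerically on the truncated local model \(w\mapsto\lambda^{q}w(1+w^{q})\) (the \(O(w^{2q})\) term of (7.2) dropped), using the same branched coordinate \(W=-\lambda^{q^{2}}/(q\,w^{q})\). The sketch below is our own illustration, with an arbitrary sample multiplier near a primitive cube root of unity:

```python
import cmath

q = 3
lam = (1 - 0.01 / q) * cmath.exp(2j * cmath.pi / q)   # a multiplier close to a primitive q-th root of unity
tau = lam ** (-q * q)

g = lambda w: lam**q * w * (1 + w**q)                 # truncated local model of f^{pq}
W = lambda w: -lam**(q * q) / (q * w**q)              # branched coordinate near infinity

for r in (0.05, 0.1, 0.2):
    w = r * cmath.exp(0.7j)
    err = W(g(w)) - (tau * W(w) + 1)
    # the conjugated dynamics is tau*W + 1 up to an O(1/W) error:
    assert abs(err) < 2 / abs(W(w))
```

The exact computation behind this check: \(W(g(w))=\tau W(w)(1+w^{q})^{-q}\), and since \(\tau W\cdot(-q w^{q})=1\) identically, expanding \((1+w^{q})^{-q}\) gives \(W(g(w))=\tau W+1+O(w^{q})=\tau W+1+O(1/W)\).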
For \(t\in\Delta\), let \[B:=\frac{1}{1-\tau}=\pm\frac{1}{qt}+O(1).\] Note that \(B\) is a fixed point of \(W\mapsto\tau W+1\) and \(G(W)-B=\tau(W-B)+O(1/W)\). By Rouché's theorem, there is a fixed point \(B^{*}\) of \(G\) of the form \(B^{*}=B+O(1)=O(1/t)\) with multiplier \(\tau^{*}:=DG(B^{*})=\tau+O(t^{2})\). Figure 7 illustrates the Julia set in \(\overline{\mathbb{C}}_{W}\). Dynamics of \(G\) near infinity. Let \[d_{0}:=\frac{\cos A_{0}}{4}>0,\] where \(A_{0}\in[0,\pi/2)\) is given in the definition of \(\Delta\) (hence it is determined by the given thickness of the thick internal ray). **Proposition 9.1**.: _When \(|W|>1/(qd_{0}|t|)\), we have_ \[|G(W)|<|W|\quad\mbox{in Cases 1 and 2${}^{+}$}\mbox{, and}\] \[|G(W)|>|W|\quad\mbox{in Case 2${}^{-}$}\] _by taking sufficiently small \(\hat{R}\) and \(T_{0}\) in the definitions of \(\hat{U}\) and \(\Delta\)._ Proof. Since \(G(W)/W=\tau+W^{-1}+O(W^{-2})=1\mp qt+O(t^{2})+W^{-1}+O(W^{-2})\), we have \[\log\left|\frac{G(W)}{W}\right|=\mbox{Re}\,\log\frac{G(W)}{W}=\mbox{Re}\,\big{(}\mp qt+O(t^{2})+W^{-1}+O(W^{-2})\big{)}.\] Now suppose that \(|W|>1/(qd_{0}|t|)\). By taking a smaller \(T_{0}\) if necessary, for any \(t\in\Delta=\Delta(A_{0},T_{0})\) we have \[\operatorname{Re}\left(qt\right)\geq q|t|\cos A_{0}=4qd_{0}|t|,\] \[\operatorname{Re}\left(W^{-1}\right)\leq|W^{-1}|\leq qd_{0}|t|,\quad\text{and}\quad\operatorname{Re}\left(O(t^{2})+O(W^{-2})\right)\leq qd_{0}|t|.\] Hence \[\log\left|\frac{G(W)}{W}\right|\leq(-4+1+1)qd_{0}|t|<0\] in Cases 1 and \(2^{+}\), and \[\log\left|\frac{G(W)}{W}\right|\geq(4-1-1)qd_{0}|t|>0\] in Case \(2^{-}\). \(\blacksquare\) We first prove Proposition 2.4 for Case \(2^{-}\), then prove it for Case 1 and Case \(2^{+}\). Proof of Proposition 2.4: Case \(2^{-}\). In this case \(\tau=1+qt+O(t^{2})\) and thus \(B=-1/(qt)+O(1)\). Note that we may assume \(|B|>7\hat{r}\) for any \(t\in\Delta\) by taking a smaller \(T_{0}\) if necessary.
We first claim: **Proposition 9.2**.: _In Case \(2^{-}\), for any \(W\) with \(|W|\geq\hat{r}\) and \(|W-B|\geq|B|/2\), we have_ \[|G(W)-B|\geq|W-B|+d_{0}\] _by taking sufficiently small \(\hat{R}\) and \(T_{0}\) in the definitions of \(\hat{U}\) and \(\Delta\)._

Figure 7: The image of the Julia set (red) by \(\Phi_{t}\) in \(W\)-coordinate for Cases 1 and \(2^{+}\) (left) and Case \(2^{-}\) (right). The origin of the \(W\)-plane is the center of the dotted (or black) circle. The radius of the black circle equals \(\hat{r}\). The annulus between the dotted circle and the black one encloses the region where the dynamics by \(G\) is relatively close to the translation \(W\mapsto W+1\) (see Proposition 10.1).

Proof. Since \(G(W)-B=\tau(W-B)+O(W^{-1})\) and \(|W-B|\geq|B|/2\), \[|G(W)-B|\geq|\tau|\,|W-B|-|O(W^{-1})|\geq|W-B|+\frac{|\tau|-1}{2|\tau-1|}-|O(W^{-1})|.\] Taking smaller \(\hat{R}\) (hence a larger \(\hat{r}\)) and \(T_{0}\) if necessary, we obtain \[\frac{|\tau|-1}{2|\tau-1|}=\frac{\operatorname{Re}\left(qt\right)+O(|t^{2}|)}{2|qt+O(|t^{2}|)|}\geq\frac{q|t|\cos A_{0}\cdot(1+O(|t|))}{2q|t|\cdot(1+O(|t|))}\geq\frac{\cos A_{0}}{2}\cdot\frac{3}{4}\] and \(|O(W^{-1})|\leq(\cos A_{0})/8\). Hence \[|G(W)-B|\geq|W-B|+\left(\frac{3}{8}-\frac{1}{8}\right)\cos A_{0}=|W-B|+d_{0}.\] \(\blacksquare\) **Proposition 9.3**.: _In Case \(2^{-}\), there exists a unique linearizing coordinate \(L:\mathbb{D}(B,3|B|/4)\to\mathbb{C}\) of the repelling fixed point \(B^{*}\) of \(G\) such that \(L(B^{*})=0\), \(DL(B^{*})=1\), and if \(G(W)\in\mathbb{D}(B,3|B|/4)\),_ \[L\circ G(W)=\tau^{*}L(W)\] _where \(\tau^{*}=DG(B^{*})\)._ Proof. By the previous proposition, the disk \(\mathbb{D}(B,3|B|/4)\) is compactly contained in \(G(\mathbb{D}(B,3|B|/4))\). Hence the univalent branch of \(G^{-1}\) on \(G(\mathbb{D}(B,3|B|/4))\) is strictly contracting.
By the Riemann mapping theorem and the Schwarz lemma, there exists a unique attracting fixed point of \(G^{-1}\) in \(\mathbb{D}(B,3|B|/4)\), which must be \(B^{*}\). Hence there is a unique linearizing coordinate \(L\) near \(B^{*}\) that satisfies \(L(B^{*})=0\), \(DL(B^{*})=1\), and the relation \(L\circ G^{-1}(W)=(\tau^{*})^{-1}L(W)\). We obtain the desired linearizing coordinate by extending \(L\) to \(\mathbb{D}(B,3|B|/4)\) by this relation. \(\blacksquare\) Now we are ready to finish the proof of Proposition 2.4 in Case \(2^{-}\). Take any \(z_{0}\in J(f_{c_{t}})\cap U_{0}\) for a given \(t\in\Delta\). Let \(W_{0}:=\Phi_{t}(z_{0})\) and \(W_{k}:=G^{k}(W_{0})\) for \(k\geq 0\). If \(|W_{0}-B|\leq|B|/2\), Proposition 9.3 implies that \(|W_{k}-B|>|B|/2\) for some \(k\geq 1\) unless \(W_{0}=B^{*}\). If \(W_{0}=B^{*}\), then \(z_{0}\) is a repelling fixed point of \(f_{c}^{pq}\) in \(U_{0}\). If \(W_{0}\neq B^{*}\), by Proposition 9.2, either \(|W_{k+j}-B|\geq|W_{k}-B|+jd_{0}\to\infty\) as \(j\to\infty\) or \(|W_{k+j}|\leq\hat{r}\) for some \(j\geq 0\). The former implies that the orbit of \(z_{0}\) is attracted to an attracting fixed point of \(f_{c}^{pq}\) (see Proposition 9.1), which is a contradiction. The latter implies that \(z_{pq(k+j)}\) is not contained in \(U_{0}\) any more. \(\blacksquare\) Proof of Proposition 2.4: Cases 1 and \(2^{+}\). In this case \(\tau=1-qt+O(t^{2})\) and thus \(B=1/(qt)+O(1)\). The proof is analogous to Case \(2^{-}\) and we only give an outline. Since \(G^{-1}(W)-B=\tau^{-1}(W-B)+O(W^{-1})\), we have \[|G^{-1}(W)-B|\geq|W-B|+d_{0} \tag{9.1}\] when \(|W|\geq\hat{r}\) and \(|W-B|\geq|B|/2\). The fixed point \(B^{*}\) of \(G^{-1}\) is repelling and we can find a linearizing coordinate \(L\) of \(G^{-1}\) in \(\mathbb{D}(B,3|B|/4)\). Let \(z_{0}\in J(f_{c_{t}})\cap U_{0}\) for a given \(t\in\Delta\). Let \(W_{0}:=\Phi_{t}(z_{0})\) and \(W_{k}:=G^{k}(W_{0})\) for \(k\geq 0\).
If \(W_{0}=\infty\), then \(z_{0}\) is the repelling fixed point of \(f_{c_{t}}^{pq}\) (see Proposition 9.1). Suppose that \(W_{0}\neq\infty\). Then \(W_{0}\) is not contained in \(\mathbb{D}(B,|B|/2)\), otherwise \(W_{k}\) tends to \(B^{*}\) as \(k\to\infty\). By (9.1), we have \(|W_{0}-B|\geq|W_{k}-B|+kd_{0}\) and thus either \(|W_{k}-B|<3|B|/4\) or \(|W_{k}|\leq\hat{r}\) for some \(k\geq 0\). The former implies that \(z_{pqk}\) is contained in the attracting basin of an attracting fixed point of \(f_{c}^{pq}\), and thus a contradiction. The latter implies that \(z_{pqk}\) is not contained in \(U_{0}\) any more. \(\blacksquare\)

## 10 Proof of Lemma H

In this section we give a proof of Lemma H. The proof employs the branched coordinates and the dynamics of the form \(G(W)=\tau W+1+O(1/W)\) as in the previous section. Dynamics of \(G\) near the origin. Since \(\tau\to 1\) as \(t\in\Delta\) tends to \(0\), the dynamics of \(G\) (relatively) near the origin becomes closer to the translation \(W\mapsto W+1\): **Proposition 10.1**.: _For any \(t\in\Delta\) and any \(W\) with \(\hat{r}\leq|W|\leq 1/(5q|t|)\), we have \(|G(W)-(W+1)|\leq 1/2\) by taking sufficiently small \(\hat{R}\) and \(T_{0}\) in the definitions of \(\hat{U}\) and \(\Delta\). In particular, we have_ \[1/2\leq\operatorname{Re}G(W)-\operatorname{Re}W\leq 3/2\quad\text{and}\quad|\operatorname{arg}(G(W)-W)|\leq\pi/6. \tag{10.1}\] Proof. We have \(|(\tau-1)W|=|(\mp qt+O(t^{2}))W|\leq 1/4\) when \(|W|\leq(5q|t|)^{-1}\) by taking a smaller \(T_{0}\). We also have \(|G(W)-(\tau W+1)|=|O(1/W)|\leq 1/4\) when \(|W|\geq\hat{r}\) by taking a smaller \(\hat{R}\) (hence a larger \(\hat{r}\)). It follows that \(|G(W)-(W+1)|=|(\tau-1)W+O(1/W)|\leq 1/2\) and (10.1) is an immediate consequence.
\(\blacksquare\)

Some additional conditions on \(U_{0}\) in \(W\)-coordinate. In what follows we suppose that \(z_{0}\in J(f_{c_{t}})\cap U_{0}\), and there exists an \(m\in\mathbb{N}\) such that \(z_{(m-1)pq}\in U_{0}\) but \(z_{mpq}\notin U_{0}\). Let \(W_{k}:=\Phi_{t}(z_{kpq})\). Then we may assume that \(W_{m-1}\in\Phi_{t}(U_{0})\) but \(W_{m}\notin\Phi_{t}(U_{0})\). By taking smaller and appropriate \(\hat{R}\) and \(R_{0}\) in the definitions of \(\hat{U}\) and \(U_{0}\) (and taking a smaller \(T_{0}\) if necessary), we may assume that the set \(\Phi_{t}(\partial U_{0})\) is contained in the annulus \[\hat{A}:=\left\{W\in\overline{\mathbb{C}}_{W}\,:\,5\hat{r}<|W|<6\hat{r}\right\}\] (where \(\hat{r}\asymp\hat{R}^{-q}\) is introduced in the previous section) for any \(t\in\Delta\). Indeed, we can apply the Koebe distortion theorem to the family of local coordinates \(\{\psi_{t}\}\) to control the shape (eccentricity) of the image \(\Phi_{t}(\partial U_{0})\). Because \(\operatorname{Re}W_{m}-\operatorname{Re}W_{m-1}\leq 3/2\) by Proposition 10.1, we may assume in addition that \(W_{m}\) is contained in this \(\hat{A}\). Now we claim that \(3\pi/4\leq\operatorname{arg}W_{m}\leq 5\pi/4\), equivalently, \(|\operatorname{arg}(-W_{m})|\leq\pi/4\). Indeed, since \(z_{0}\) is contained in the Julia set, the same argument as in the proof of Proposition 2.4 yields that the orbit of \(W_{0}\) under \(G\) must leave the domain of \(G\) (by taking a larger \(\hat{r}\) if necessary). In other words, there exists an integer \(k>m\) such that \(|W_{k}|>\hat{r}\) and \(|W_{k+1}|\leq\hat{r}\). Since \(|\operatorname{arg}(W_{k}-W_{m})|\leq\pi/6\) by the proposition above, the condition \(|W_{m}|>5\hat{r}\) implies \(|\operatorname{arg}(-W_{m})|\leq\pi/4\). (Here we have used \(\cos(5\pi/12)>1/5\). See Figure 8.)
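The role of the bound \(\cos(5\pi/12)>1/5\) can be spelled out as follows. This is only a sketch: write \(\delta:=|\operatorname{arg}(-W_{m})|\) (shorthand introduced here), and assume, as the argument above does implicitly, that Proposition 10.1 applies along the orbit segment from \(W_{m}\) to \(W_{k+1}\), so that \(W_{k+1}-W_{m}\) lies in the sector \(\{re^{i\theta}\,:\,r>0,\,|\theta|\leq\pi/6\}\). If \(\delta\geq\pi/4\), then every point of the translated sector \(W_{m}+\{re^{i\theta}\,:\,r>0,\,|\theta|\leq\pi/6\}\) has modulus at least \[|W_{m}|\sin\Big{(}\delta-\frac{\pi}{6}\Big{)}\geq|W_{m}|\sin\frac{\pi}{12}=|W_{m}|\cos\frac{5\pi}{12}>5\hat{r}\cdot\frac{1}{5}=\hat{r},\] which contradicts \(|W_{k+1}|\leq\hat{r}\). Hence \(\delta<\pi/4\).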
The lemma below is a very important conclusion from the radial access condition: **Lemma I.** _There exists a constant \(K_{\rm I}\in(0,1/(5q)]\) such that for any \(t\in\Delta\), \(z_{0}\in J(f_{c_{t}})\cap U_{0}\) and \(k\geq 0\), the condition \(|W_{k}|\leq K_{\rm I}/|t|\) implies \(|\arg(-W_{k})|\leq\pi/4\). Moreover, for such a \(W_{k}\), we have \(|W_{k+1}|<|W_{k}|\)._ Hence we can only find \(W_{k}\) near the negative real axis in the disk \(\mathbb{D}(0,K_{\rm I}/|t|)\).

Figure 8: The orbit leaving the domain \(\Phi_{t}(U_{0})\) of \(G\).

**Proof.** By the assumption on \(|W_{m}|\) as above, we may assume that there exists \(j\in\mathbb{N}\) such that \[W_{m-j},\,W_{m-j+1},\cdots,W_{m}\in\big{\{}W\in\overline{\mathbb{C}}_{W}\,:\,|W|\leq(5q|t|)^{-1},\,|\arg(-W)|\leq\pi/4\big{\}}\] and that \(|W_{m-j}|\geq(6q|t|)^{-1}\). Note that the condition \(|\arg(-W)|\leq\pi/4\) implies \(|W|/\sqrt{2}\leq-{\rm Re}\,W\leq|W|\). By Proposition 10.1, we have \({\rm Re}\,W_{m}-{\rm Re}\,W_{m-j}\leq 3j/2\) and thus \[3j/2\geq-|W_{m}|+|W_{m-j}|/\sqrt{2}\geq-|W_{m}|+(6\sqrt{2}q|t|)^{-1}.\] Since the assumption \(5\hat{r}<|W_{m}|<6\hat{r}\) implies \(|W_{m}|\asymp 1\), we conclude that \(j\geq C_{6}/|t|\) for some constant \(C_{6}\asymp 1\) independent of \(t\in\Delta\) (taking a smaller \(T_{0}\) if necessary). In Case \(2^{-}\), as in the proof of Proposition 2.4, there is \(k^{\prime}>j\geq 1\) for which \(|W_{m-k^{\prime}}-B|>|B|/2\) (otherwise \(W_{0}=B^{*}\)). We have \[|W_{m}-B|\geq|W_{m-k^{\prime}}-B|+k^{\prime}d_{0}>|W_{m-k^{\prime}}-B|+C_{6}d_{0}/|t|\] by Proposition 9.2. Hence \[|W_{m-k^{\prime}}|\geq|B|-|W_{m-k^{\prime}}-B|>|B|-|W_{m}-B|+C_{6}d_{0}/|t|\geq-|W_{m}|+C_{6}d_{0}/|t|.\] Since \(|W_{m}|\asymp 1\), there exists a constant \(K_{\rm I}\asymp 1\) such that \(|W_{m-k^{\prime}}|>K_{\rm I}/|t|\) for any such \(k^{\prime}\) and \(t\in\Delta\) (again taking a smaller \(T_{0}\) if necessary). Hence if we have \(|W_{k}|\leq K_{\rm I}/|t|\)
for some \(k\), then \(m-j\leq k\leq m\) and thus \(|\arg(-W_{k})|\leq\pi/4\). Since \(|W_{k}|\leq K_{\rm I}/|t|\) also implies \(|W_{k}|\leq 1/(5q|t|)\), we conclude \(K_{\rm I}\leq 1/(5q)\). The proof for Cases 1 and 2\({}^{+}\) is analogous: By (9.1), we have \[|W_{m-k^{\prime}}-B|\geq|W_{m}-B|+k^{\prime}d_{0}>|W_{m}-B|+C_{6}d_{0}/|t|\] instead, and this implies \[|W_{m-k^{\prime}}|\geq|W_{m-k^{\prime}}-B|-|B|>|W_{m}-B|-|B|+C_{6}d_{0}/|t|\geq-|W_{m}|+C_{6}d_{0}/|t|.\] Then we repeat the same argument as above. Now, we show the condition \(|W_{k}|\leq K_{\rm I}/|t|\) implies \(|W_{k+1}|<|W_{k}|\). By Proposition 10.1, we get \(|W_{k+1}-(W_{k}+1)|\leq 1/2\) and thus \[\left|\frac{W_{k+1}}{W_{k}}\right|\leq\left|1+\frac{1}{W_{k}}\right|+\frac{1}{2|W_{k}|}.\] Since \(|\arg(-W_{k})|\leq\pi/4\) implies \(3\pi/4\leq\arg(1/W_{k})\leq 5\pi/4\), we obtain \[\left|1+\frac{1}{W_{k}}\right|\leq\sqrt{\left(1-\frac{1}{\sqrt{2}\,|W_{k}|}\right)^{2}+\left(\frac{1}{\sqrt{2}\,|W_{k}|}\right)^{2}}\] and thus \[\left|\frac{W_{k+1}}{W_{k}}\right|\leq 1-\frac{1}{\sqrt{2}\,|W_{k}|}+\frac{1}{2\,|W_{k}|}+O\left(\frac{1}{|W_{k}|^{2}}\right)=1-\frac{\sqrt{2}-1}{2\,|W_{k}|}+O\left(\frac{1}{|W_{k}|^{2}}\right)<1\] by taking a sufficiently large \(\hat{r}\) if necessary. \(\blacksquare\)

_Remark 10.2_.: Once we have \(|W_{k}|\leq K_{\rm I}/|t|\), then \(|W_{j}|\leq K_{\rm I}/|t|\) for \(k\leq j\leq m\).

**Proposition 10.3**.: _For any \(t\in\Delta\), any \(z_{0}\in J(f_{c_{t}})\cap U_{0}\) that is not a fixed point of \(f_{c_{t}}^{pq}\), and any \(k\) with \(0\leq k\leq m\), we have_ \[|DG^{k}(W_{0})|\asymp|\tau|^{k}\] _in Case 1 or Case 2\({}^{+}\), and_ \[|DG^{k}(W_{0})|\asymp|\tau^{*}|^{k}\] _in Case 2\({}^{-}\), where \(W_{0}=\Phi_{t}(z_{0})\). The implicit constants are independent of \(t,z_{0}\), and \(k\)._

Proof for Case 1 and Case 2\({}^{+}\). Since \(DG(W)=\tau+O(W^{-2})\), \[DG^{k}(W_{0})=\tau^{k}\prod_{j=0}^{k-1}\big{(}1+O(W_{j}^{-2})\big{)}.
\tag{10.2}\] Hence it is enough to show that \(\sum_{j=0}^{k-1}|W_{j}|^{-2}\) is uniformly bounded in \(t\), \(W_{0}\), and \(k\). As in the proof of Proposition 2.4, \(\mathbb{D}(B,3|B|/4)\) is contained in the attracting basin of \(B^{*}\). Therefore, we may always assume \(|W_{j}|\geq\hat{r}\) and \(|W_{j}-B|>|B|/2\) for \(j=0,1,\ldots,k\). By (9.1), \(|W_{j}-B|\) is strictly decreasing in \(j\). This implies that if \(|W_{k_{1}}-B|<4|B|\) for some \(0\leq k_{1}\leq k\), then \(|W_{j}-B|<4|B|\) for all \(k_{1}\leq j\leq k\). On the other hand, by Lemma I, if \(|W_{k_{2}}|\leq K_{\rm I}/|t|\) for some \(0\leq k_{2}\leq k\), then \(|W_{j}|\leq K_{\rm I}/|t|\) and \(|\arg(-W_{j})|\leq\pi/4\) for all \(k_{2}\leq j\leq k\) (\(\leq m\)). Consequently, we can assume the location of \(W_{0},\ldots,W_{k}\) to be * (i\({}^{+}\)) \(|W_{j}-B|\geq 4|B|\) for \(j=0,\ldots,k_{1}-1\), * (ii\({}^{+}\)) \(|W_{j}-B|<4|B|\) and \(|W_{j}|>K_{\rm I}/|t|\) for \(j=k_{1},\ldots,k_{2}-1\), * (iii\({}^{+}\)) \(|W_{j}-B|<4|B|\) and \(|W_{j}|\leq K_{\rm I}/|t|\) for \(j=k_{2},\ldots,k\). For the case (i\({}^{+}\)), by (9.1) again we have \[|W_{j}|\geq|W_{j}-B|-|B|\geq|W_{k_{1}-1}-B|-|B|+(k_{1}-1-j)d_{0}\geq|B|+(k_{1}-1-j)d_{0}.\] Hence \[\sum_{j=0}^{k_{1}-1}|W_{j}|^{-2}=O\Bigg{(}\sum_{j=0}^{k_{1}-2}\frac{1}{(k_{1}-1-j)^{2}}\Bigg{)}+\frac{1}{|B|^{2}}=O\Bigg{(}\sum_{j=1}^{\infty}\frac{1}{j^{2}}\Bigg{)}+O\big{(}|t|^{2}\big{)}=O(1).\] The condition \(|W_{j}-B|<4|B|\) implies \(|W_{j}|<5|B|=5/(q|t|)+O(1)\). Hence the case (ii\({}^{+}\)) implies \(|W_{j}|\asymp|t|^{-1}\) and thus \(|W_{j}|^{-2}\asymp|t|^{2}\). From (9.1), we obtain \[4|B|>|W_{k_{1}}-B|\geq|W_{k_{2}-1}-B|+(k_{2}-1-k_{1})d_{0}\geq(k_{2}-1-k_{1})d_{0}\] and thus \(k_{2}-1-k_{1}=O(|t|^{-1})\). This gives \[\sum_{j=k_{1}}^{k_{2}-1}|W_{j}|^{-2}=O(|t|).\] In the (iii\({}^{+}\)) case, we have \(|\arg(-W_{j})|\leq\pi/4\) and thus \(|W_{j}|/\sqrt{2}\leq-\mbox{Re}\,W_{j}\leq|W_{j}|\).
By Proposition 10.1, we have \(-\mbox{Re}\,W_{j}\geq-\mbox{Re}\,W_{k}+(k-j)/2.\) Hence, we get \(|W_{j}|\geq|W_{k}|/\sqrt{2}+(k-j)/2\geq(k-j)/2\), and \[\sum_{j=k_{2}}^{k-1}|W_{j}|^{-2}=O\Bigg{(}\sum_{j=k_{2}}^{k-1}\frac{1}{(k-j)^{2}}\Bigg{)}=O\Bigg{(}\sum_{j=1}^{\infty}\frac{1}{j^{2}}\Bigg{)}=O(1).\]

Proof for Case 2\({}^{-}\). First note by Proposition 9.1 that \(|W_{j}|\leq 1/(qd_{0}|t|)\) for \(0\leq j\leq k\) (otherwise \(W_{j}\) must be attracted by \(\infty\)). Second, by Proposition 9.2, if \(|W_{k_{1}}-B|\geq|B|/2\) for some least \(k_{1}\leq k\), then \(|W_{j}-B|\) is strictly increasing for \(j\) with \(k_{1}\leq j\leq k\). Hence, \(|W_{j}-B|\geq|B|/2\) for such a \(j\). Furthermore, if \(|W_{k_{2}}|\leq K_{\rm I}/|t|\) for some \(0\leq k_{2}\leq k\), then as in the Case 1 and Case 2\({}^{+}\), the location of \(W_{0},\ldots,W_{k}\) must be * (i\({}^{-}\)) \(|W_{j}-B|<|B|/2\) for \(j=0,\ldots,k_{1}-1\), * (ii\({}^{-}\)) \(|W_{j}-B|\geq|B|/2\) and \(|W_{j}|>K_{\rm I}/|t|\) for \(j=k_{1},\ldots,k_{2}-1\), * (iii\({}^{-}\)) \(|W_{j}-B|\geq|B|/2\) and \(|W_{j}|\leq K_{\rm I}/|t|\) for \(j=k_{2},\ldots,k\). Now, \[\frac{DG^{k}(W_{0})}{(\tau^{*})^{k}}\] \[= \frac{DG^{k_{1}-1}(W_{0})}{(\tau^{*})^{k_{1}-1}}\cdot\frac{DG^{k-k_{1}+1}(W_{k_{1}-1})}{(\tau^{*})^{k-k_{1}+1}}\] \[\asymp 1\cdot\frac{DG^{k-k_{1}+1}(W_{k_{1}-1})}{(\tau^{*})^{k-k_{1}+1}}\qquad\mbox{(by the linearization of \(G^{-1}\) near \(B^{*}\))}\] \[\asymp \Big{(}\frac{\tau}{\tau^{*}}\Big{)}^{k-k_{1}+1}\prod_{j=k_{1}-1}^{k-1}\big{(}1+O({W_{j}}^{-2})\big{)}\qquad\mbox{(as in (10.2))}\] \[\asymp \big{(}1+O(t^{2})\big{)}^{k-k_{1}+1}\cdot\prod_{j=k_{1}-1}^{k-1}\big{(}1+O({W_{j}}^{-2})\big{)}\qquad\mbox{(since \(\frac{\tau}{\tau^{*}}=1+O(t^{2})\))}\] Because \(|W_{k_{1}-1}|\geq|B|-|W_{k_{1}-1}-B|>|B|-|B|/2=|B|/2\asymp 1/|t|\) and because \(|W_{k_{1}-1}|\leq 1/(qd_{0}|t|)\), we have \(|W_{k_{1}-1}^{-2}|=O(|t|^{2})\). By Proposition 9.2, we know \[|W_{k}-B|\geq|W_{k_{1}}-B|+(k-k_{1})d_{0}\geq(k-k_{1})d_{0}\] and thus \(k-k_{1}\leq(|W_{k}|+|B|)/d_{0}=O(|t|^{-1})\).
This implies that \((1+O(t^{2}))^{k-k_{1}+1}=1+O(t)\). It remains to obtain an estimate of \(\prod_{j=k_{1}}^{k-1}\big{(}1+O({W_{j}^{-2}})\big{)}\). Again, it is enough to show that \(\sum_{j=k_{1}}^{k-1}|W_{j}|^{-2}\) is uniformly bounded in \(k\) and \(W_{0}\). For the (ii\({}^{-}\)) case, since \(|W_{j}|<1/(qd_{0}|t|)\), we obtain \(|W_{j}|\asymp|t|^{-1}\). Therefore, \[\sum_{j=k_{1}}^{k_{2}-1}|W_{j}|^{-2}=O(|t|)\qquad\mbox{(since \(k-k_{1}=O(|t|^{-1})\), so is \(k_{2}-k_{1}\))}.\] The (iii\({}^{-}\)) case follows exactly as the (iii\({}^{+}\)) case. The proof of Proposition 10.3 is complete.

Lemma H in the branched coordinates. To give estimates for the sums of the form \[\sum_{k=0}^{m-1}\frac{1}{|Df_{c}^{kpq}(z_{j})|}\quad\text{and}\quad\sum_{k=0}^{m-1}\frac{|c-\hat{c}|+|f_{c}^{kpq}(z_{j})-b_{j}(c)|}{|Df_{c}^{kpq}(z_{j})|}\] with \(0\leq j\leq p-1\), we rewrite them in \(W\)-coordinates. It is enough to consider the case of \(j=0\), since we may apply the same argument as in the proof of Lemma G. Let \(W=\Phi_{t}(z)=\Psi_{t}\circ\psi_{t}(z)\) for \(z\in\hat{U}\) and set \(W_{k}:=\Phi_{t}(z_{kpq})\) for each \(k\). Note that \(|D\psi_{t}(z)|\asymp 1\) for any \(z\in\hat{U}\), where the implicit constant is independent of \(t\in\Delta\).
Since \(D\Psi_{t}(w)=\lambda_{t}^{q^{2}}/w^{q+1}\), we have \[|D\Phi_{t}(z)|=|D\Psi_{t}(w)\cdot D\psi_{t}(z)|\asymp|w|^{-(q+1)}\asymp|W|^{1+1/q}.\] By the chain rule, we obtain \[|Df_{c_{t}}^{kpq}(z_{0})| =|D(\Phi_{t}^{-1}\circ G^{k}\circ\Phi_{t})(z_{0})|=|D\Phi_{t}^{-1}(W_{k})\cdot DG^{k}(W_{0})\cdot D\Phi_{t}(z_{0})|\] \[\asymp|W_{k}|^{-1-1/q}\cdot|DG^{k}(W_{0})|\cdot|W_{0}|^{1+1/q}\] \[=\left|\frac{W_{0}}{W_{k}}\right|^{1+1/q}\cdot|DG^{k}(W_{0})|.\] We let \[\mathcal{S}_{1}:=\sum_{k=0}^{m-1}\left|\frac{W_{k}}{W_{0}}\right|^{1+1/q}\cdot\frac{1}{|DG^{k}(W_{0})|}\asymp\sum_{k=0}^{m-1}\frac{1}{|Df_{c_{t}}^{kpq}(z_{0})|},\] and \[\mathcal{S}_{2}:=\frac{1}{|W_{0}|^{1/q}}\sum_{k=0}^{m-1}\left|\frac{W_{k}}{W_{0}}\right|\cdot\frac{1}{|DG^{k}(W_{0})|}\asymp\sum_{k=0}^{m-1}\frac{|f_{c_{t}}^{kpq}(z_{0})-b_{t}|}{|Df_{c_{t}}^{kpq}(z_{0})|},\] where we used the fact that \(|f_{c}^{kpq}(z_{0})-b_{0}(c)|=|z_{kpq}-b_{t}|\asymp|W_{k}|^{-1/q}\). Now Lemma H is reduced to the estimates \[\mathcal{S}_{1}=O\bigg{(}\frac{1}{|t|}\bigg{)}\quad\text{and}\quad\mathcal{S}_{2}=O\bigg{(}\frac{1}{|t|^{1-1/q}}\bigg{)}.\] Indeed, Proposition 2.2 implies \[\sum_{k=0}^{m-1}\frac{1}{|Df_{c_{t}}^{kpq}(z_{0})|}\asymp\mathcal{S}_{1}=O\bigg{(}\frac{1}{|t|}\bigg{)}=O\Bigg{(}\frac{1}{\sqrt{|c_{t}-\hat{c}|}}\Bigg{)}\] in Case 1, and \[\sum_{k=0}^{m-1}\frac{|c_{t}-\hat{c}|+|f_{c_{t}}^{kpq}(z_{0})-b_{t}|}{|Df_{c_{t}}^{kpq}(z_{0})|} \asymp|c_{t}-\hat{c}|\mathcal{S}_{1}+\mathcal{S}_{2}\] \[=O\bigg{(}|t|\cdot\frac{1}{|t|}+\frac{1}{|t|^{1-1/q}}\bigg{)}=O\bigg{(}\frac{1}{|c_{t}-\hat{c}|^{1-1/q}}\bigg{)}\] in Case 2.

Proof of Lemma H for Case \(2^{-}\). We will use the fact that \(\tau^{*}=1+qt+O(t^{2})\) and \[|\tau^{*}|=1+\operatorname{Re}\left(qt\right)+O(|t|^{2})\geq 1+qd_{0}|t|\] for \(t\in\Delta\) if we take a smaller \(T_{0}\) if necessary. First, suppose that \(|W_{0}|\geq K_{\mathrm{I}}/|t|\).
Since it must be \(|W_{k}|\leq 1/(qd_{0}|t|)\) for any \(k=0,1,\ldots,m\) by Proposition 9.1 (otherwise \(W_{k}\) is attracted by \(\infty\)), we have \(|W_{0}|\asymp|t|^{-1}\) and \(|W_{k}/W_{0}|=O(1)\). Hence by Proposition 10.3, we have \[\mathcal{S}_{1}=\sum_{k=0}^{m-1}\left|\frac{W_{k}}{W_{0}}\right|^{1+1/q}\cdot\frac{1}{|DG^{k}(W_{0})|}=\sum_{k=0}^{m-1}O(1)\cdot\frac{1}{|\tau^{*}|^{k}}=O\Bigg{(}\sum_{k=0}^{m-1}\frac{1}{(1+qd_{0}|t|)^{k}}\Bigg{)}=O\bigg{(}\frac{1}{|t|}\bigg{)}\] and \[\mathcal{S}_{2}=\frac{1}{|W_{0}|^{1/q}}\sum_{k=0}^{m-1}\left|\frac{W_{k}}{W_{0}}\right|\cdot\frac{1}{|DG^{k}(W_{0})|}=|t|^{1/q}\sum_{k=0}^{m-1}O(1)\cdot\frac{1}{|\tau^{*}|^{k}}=O\bigg{(}\frac{1}{|t|^{1-1/q}}\bigg{)}.\] Next, suppose that \(|W_{0}|\leq K_{\mathrm{I}}/|t|\). By Proposition 10.1 and Lemma I, we have \(|\arg(-W_{k})|\leq\pi/4\) and thus \(|W_{k}|/\sqrt{2}\leq-\operatorname{Re}W_{k}\leq|W_{k}|\) for any \(k=0,1,\ldots,m\). By Proposition 10.1 again we have \(\operatorname{Re}W_{0}+k/2\leq\operatorname{Re}W_{k}\). Hence \[0\leq|W_{k}|/\sqrt{2}\leq-\operatorname{Re}W_{k}\leq-\operatorname{Re}W_{0}-k/2\leq|W_{0}|-k/2 \tag{10.3}\] and this implies \[\left|\frac{W_{k}}{W_{0}}\right|\leq\sqrt{2}\cdot\frac{|W_{0}|-k/2}{|W_{0}|}=O(1).\] In particular, by letting \(k=m\) in (10.3) we have \(m\leq 2|W_{0}|=O(|W_{0}|)\). Since \(|W_{0}|\leq K_{\mathrm{I}}/|t|\), we have \[\mathcal{S}_{1}=\sum_{k=0}^{m-1}O(1)\cdot\frac{1}{|\tau^{*}|^{k}}\leq\sum_{k=0}^{m-1}O(1)\cdot 1^{k}=O(m)=O(|W_{0}|)=O\bigg{(}\frac{1}{|t|}\bigg{)}\] and \[\mathcal{S}_{2} =\frac{1}{|W_{0}|^{1/q}}\sum_{k=0}^{m-1}O(1)\cdot\frac{1}{|\tau^{*}|^{k}}\leq\frac{1}{|W_{0}|^{1/q}}\sum_{k=0}^{m-1}O(1)\cdot 1^{k}\] \[=\frac{O(m)}{|W_{0}|^{1/q}}=O\big{(}|W_{0}|^{1-1/q}\big{)}=O\bigg{(}\frac{1}{|t|^{1-1/q}}\bigg{)}.\] This completes the proof for Case \(2^{-}\).
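For completeness, the elementary geometric series computation behind the \(O(1/|t|)\)-type bounds in the first case above is the following: since \(|\tau^{*}|\geq 1+qd_{0}|t|\), \[\sum_{k=0}^{m-1}\frac{1}{|\tau^{*}|^{k}}\leq\sum_{k=0}^{\infty}\frac{1}{(1+qd_{0}|t|)^{k}}=\frac{1+qd_{0}|t|}{qd_{0}|t|}=O\bigg{(}\frac{1}{|t|}\bigg{)}.\]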
Preliminary for Case 1 and Case \(2^{+}\). We will use the fact that \(\tau^{-1}=1+qt+O(t^{2})\) and \[|\tau|^{-1}=1+\operatorname{Re}\left(qt\right)+O(|t|^{2})\geq 1+qd_{0}|t|\] for \(t\in\Delta\) if we take a smaller \(T_{0}\) if necessary. It is very convenient to use _Ueda's modulus_ defined as follows (see [U]): For \(W\neq\infty\), \[N(W):=|W-B|-|B|.\] Then one can easily check that * \(|W|\geq N(W)\); and * \(N(W)\geq|W|/3\) when \(|W-B|\geq 4|B|\). Indeed, \(|W|\geq N(W)\) is just the triangle inequality; and when \(|W-B|\geq 4|B|\), we have \(|W|\geq 3|B|\) and thus \(N(W)/|W|=|1-B/W|-|B/W|\geq 1/3\). Here is another useful fact:

**Proposition 10.4**.: _In Case 1 and Case 2\({}^{+}\), we have_ \[N(G^{-1}(W))\geq|\tau|^{-1}N(W)+d_{0}\] _by taking sufficiently small \(\hat{R}\) and \(T_{0}\) in the definitions of \(\hat{U}\) and \(\Delta\)._

Proof. Since \(|G^{-1}(W)-B|=|\tau|^{-1}|W-B|+O(|W^{-1}|)\), we have \[N(G^{-1}(W))=|\tau|^{-1}N(W)+\frac{|\tau|^{-1}-1}{|1-\tau|}+O(|W^{-1}|).\] By taking a smaller \(T_{0}\) if necessary, we have \[\frac{|\tau|^{-1}-1}{|1-\tau|}=\frac{\operatorname{Re}t}{|t|}(1+O(|t|))\geq\frac{\cos A_{0}}{2}=2d_{0}\] for any \(t\in\Delta\). By taking a smaller \(\hat{R}\) (hence a larger \(\hat{r}\)) if necessary, we have \(O(|W^{-1}|)\leq(\cos A_{0})/4=d_{0}\) for any \(|W|\geq\hat{r}\). Hence we have the desired inequality. \(\blacksquare\)

**Proposition 10.5**.: _In Case 1 and Case 2\({}^{+}\), we have_ \[\left|\frac{W_{k}}{W_{0}}\right|=O(|\tau|^{k})\] _for any \(k=0,1,\ldots,m\)._

Proof. We adopt the argument of Proposition 10.3 and assume that * (i\({}^{+}\)) \(|W_{j}-B|\geq 4|B|\) for \(j=0,\ldots,k_{1}-1\), * (ii\({}^{+}\)) \(|W_{j}-B|<4|B|\) and \(|W_{j}|>K_{\mathrm{I}}/|t|\) for \(j=k_{1},\ldots,k_{2}-1\), * (iii\({}^{+}\)) \(|W_{j}-B|<4|B|\) and \(|W_{j}|\leq K_{\mathrm{I}}/|t|\) for \(j=k_{2},\ldots,k\) for some \(k_{1}\) and \(k_{2}\).
Now we consider the product \[\left|\frac{W_{k}}{W_{0}}\right|=\left|\frac{W_{k_{1}-1}}{W_{0}}\right|\cdot\left|\frac{W_{k_{2}}}{W_{k_{1}-1}}\right|\cdot\left|\frac{W_{k}}{W_{k_{2}}}\right|\!. \tag{10.4}\] In the case of (i\({}^{+}\)), we can apply Proposition 10.4 and it follows that \[|W_{0}|\geq N(W_{0})\geq|\tau|^{-(k_{1}-1)}N(W_{k_{1}-1})\geq|\tau|^{-k_{1}+1}|W_{k_{1}-1}|/3.\] Hence \(|W_{k_{1}-1}/W_{0}|\leq 3\,|\tau|^{k_{1}-1}\). Since \(|W_{k_{1}-1}|\geq 3|B|=3/(q|t|)+O(1)\) and \(|W_{k_{2}}|\leq K_{\rm I}/|t|\), we have \(|W_{k_{2}}/W_{k_{1}-1}|=O(1)\). Since \(|W_{j}|\) is decreasing in the case of (iii\({}^{+}\)) by Lemma I, we have \(|W_{k}/W_{k_{2}}|\leq 1\). Hence by (10.4), we obtain \(|W_{k}/W_{0}|=O(|\tau|^{k_{1}-1})\). To show the estimate \(|W_{k}/W_{0}|=O(|\tau|^{k})\), it is enough to show that \(|\tau|^{k-(k_{1}-1)}=O(1)\). Because \(|W_{k_{1}}-B|\leq 4|B|\) and hence \(|W_{k_{1}}|\leq 5|B|\), by (9.1) we have \[|W_{k_{1}}-B|\geq|W_{k}-B|+(k-k_{1})\,d_{0}\geq(k-k_{1})\,d_{0}\] and thus \(k-k_{1}\leq(|W_{k_{1}}|+|B|)/d_{0}=O(|t|^{-1})\). Hence \[|\tau|^{k-(k_{1}-1)}=(1-{\rm Re}\,(qt)+O(t^{2}))^{k-k_{1}}\cdot|\tau|=O(1)\] and this completes the proof. \(\blacksquare\)

Proof of Lemma H for Case 1 and Case \(2^{+}\). By Propositions 10.3 and 10.5, we have \[{\cal S}_{1}=\sum_{k=0}^{m-1}\left|\frac{W_{k}}{W_{0}}\right|^{1+1/q}\cdot\frac{1}{|DG^{k}(W_{0})|}=\sum_{k=0}^{m-1}O(|\tau|^{k(1+1/q)})\cdot\frac{1}{|\tau|^{k}}=O\!\left(\sum_{k=0}^{m-1}|\tau|^{k/q}\right)\!.\] Since \(|\tau|^{1/q}\leq 1-d_{0}|t|\) for \(t\in\Delta\) (by taking a smaller \(T_{0}\) if necessary), we obtain \({\cal S}_{1}=O(|t|^{-1})\). Similarly we have \[{\cal S}_{2}=\frac{1}{|W_{0}|^{1/q}}\sum_{k=0}^{m-1}\left|\frac{W_{k}}{W_{0}}\right|\cdot\frac{1}{|DG^{k}(W_{0})|}=\frac{1}{|W_{0}|^{1/q}}\sum_{k=0}^{m-1}O(|\tau|^{k})\cdot\frac{1}{|\tau|^{k}}=\frac{O(m)}{|W_{0}|^{1/q}}.\] First we suppose that \(|W_{0}-B|<4|B|\). Then we have \(|W_{0}|<5|B|=O(|t|^{-1})\).
By (9.1), \[|W_{0}|\geq|W_{0}-B|-|B|\geq|W_{m}-B|-|B|+md_{0}\geq-|W_{m}|+md_{0}.\] By the assumption that \(W_{m}\in\hat{A}\), we have \(|W_{m}|<6\hat{r}\) and thus \(md_{0}\leq|W_{0}|+6\hat{r}\). We obtain \(m=O(|W_{0}|)\) since \(5\hat{r}<|W_{0}|=O(|t|^{-1})\). Hence \[{\cal S}_{2}=\frac{O(m)}{|W_{0}|^{1/q}}=O(|W_{0}|^{1-1/q})=O\!\left(\frac{1}{|t|^{1-1/q}}\right)\!.\] Next we suppose that \(|W_{0}-B|\geq 4|B|\). There exists a \(k\geq 0\) such that * \(|W_{j}-B|\geq 4|B|\) for \(0\leq j\leq k\); and * \(|W_{j}-B|<4|B|\) for \(k\leq j\leq m\). (We may regard \(k\) and \(m\) as \(k_{1}-1\) and \(k\) in the proof of Proposition 10.5 respectively.) Since \(|W_{k}/W_{0}|\leq 3|\tau|^{k}\) as in the proof of Proposition 10.5, and \(|W_{k}|\geq 3|B|>1/(q|t|)\) for \(t\in\Delta\), we have \(3|\tau|^{k}\geq 1/(q|tW_{0}|)\). Hence \[1<|\tau|^{-k}\leq 3q|tW_{0}|,\quad\text{or equivalently}\quad 0<k\log|\tau|^{-1}\leq\log(3q|tW_{0}|).\] This implies that \[0<k(\operatorname{Re}\left(qt\right)+O(t^{2}))\leq\log(3q|tW_{0}|).\] Since \(\operatorname{Re}\left(qt\right)+O(t^{2})\geq(q|t|\cos A_{0})/2\) for \(t\in\Delta\) (by taking a smaller \(T_{0}\) if necessary), we conclude \[k=O\bigg{(}\frac{\log(3q|tW_{0}|)}{|t|}\bigg{)}.\] As in the proof of Proposition 10.5, we have \(m-k=O(1/|t|)\). Since \(|W_{0}|\geq 3|B|\geq 2/(q|t|)\) (for \(t\in\Delta\) by taking a smaller \(T_{0}\) if necessary), we obtain \(\log(3q|tW_{0}|)\geq\log 6>1\) and thus \[m=k+(m-k)=O\bigg{(}\frac{\log(3q|tW_{0}|)}{|t|}\bigg{)}+O\bigg{(}\frac{1}{|t|}\bigg{)}=O\bigg{(}\frac{\log(3q|tW_{0}|)}{|t|}\bigg{)}.\] Hence we conclude that \[\mathcal{S}_{2}=\frac{O(m)}{|W_{0}|^{1/q}}=O\bigg{(}\frac{\log(3q|tW_{0}|)}{|tW_{0}|^{1/q}}\bigg{)}\cdot\frac{1}{|t|^{1-1/q}}=O\bigg{(}\frac{1}{|t|^{1-1/q}}\bigg{)},\] where we used the fact that \(x^{-1/q}\log(3qx)\) is bounded if \(3qx\geq 6\). \(\blacksquare\)

## 11 Proof of Lemma D

In this section we give a proof of Lemma D.
Since the proof heavily relies on Lemma I, we restate Lemma D in terms of the parameter \(t\in\Delta\cup\{0\}\). (Note that we have \(\hat{c}=c_{0}\).)

**Proposition 11.1**.: _There exist constants \(K_{\mathrm{D}}>0\) and \(\nu_{0}\in(0,R_{0})\) such that for any \(\nu\in(0,\nu_{0})\), there exists a constant \(T_{0}=T_{0}(\nu)\in(0,2\cos A_{0})\) such that for any \(t\in\Delta=\Delta(A_{0},T_{0})\), \(\zeta\in P(f_{c_{0}})\), and \(z\in J(f_{c_{t}})-\mathcal{V}(c_{t})\),_ \[|\zeta-z|\geq K_{\mathrm{D}}\,\nu.\] Without loss of generality, we may assume that \(K_{\mathrm{D}}\leq 1\).

Proof. We prove it by contradiction. Suppose that for any \(K_{\mathrm{D}}>0\) and \(\nu_{0}\in(0,R_{0})\), there exists a \(\nu\in(0,\nu_{0})\) such that for any \(T_{0}\in(0,2\cos A_{0})\), there exist \(t\in\Delta=\Delta(A_{0},T_{0})\), \(\zeta\in P(f_{c_{0}})\), and \(z\in J(f_{c_{t}})-\mathcal{V}(c_{t})\) such that \[|\zeta-z|<K_{\mathrm{D}}\,\nu.\] For instance, let \(K_{\mathrm{D}}:=1/n\) and \(\nu_{0}:=R_{0}/n\) for each integer \(n\geq 1\). Then for any sufficiently large \(n\), we may assume that \(R_{0}/n<1\) and there exists a \(\nu_{n}\in(0,R_{0}/n)\) such that for \(T_{0}:=2\nu_{n}^{2q}\cos A_{0}\) we can find sequences \(t_{n}\in\Delta=\Delta(A_{0},T_{0})\), \(\zeta_{n}\in P(f_{c_{0}})\), and \(z_{n}\in J(f_{c_{n}})-\mathcal{V}(c_{n})\) with \(c_{n}=c_{t_{n}}\) such that \[|\zeta_{n}-z_{n}|<\frac{1}{n}\cdot\nu_{n}.\] (Now, \(\mathcal{V}(c_{n})=\bigcup_{k=0}^{p-1}f_{c_{n}}^{-k}(V_{0})\) and \(V_{0}=\mathbb{D}(\hat{b},\nu_{n})\).) We may assume that \(\zeta_{n}\) and \(z_{n}\) tend to the same limit \(\hat{\zeta}\in P(f_{c_{0}})\) by passing to a subsequence, since \(P(f_{c_{0}})\) is compact and the Julia sets \(J(f_{c_{n}})\) are uniformly bounded. Moreover, there exists a \(j\geq 0\) such that \(f_{c_{0}}^{j}(\hat{\zeta})\) is contained in \(\mathbb{D}(\hat{b},R_{0}/2)\subset U_{0}=\mathbb{D}(\hat{b},R_{0})\).
By replacing \(\zeta_{n}\) and \(z_{n}\) by \(f_{c_{0}}^{j}(\zeta_{n})\) and \(f_{c_{n}}^{j}(z_{n})\) respectively, we may assume that \(\zeta_{n}\in U_{0}\) and \(z_{n}\in U_{0}-V_{0}\) for sufficiently large \(n\). In \(w\)-coordinate \(w=\psi_{0}(z)\) given in Proposition 2.3 for \(t=0\), the local dynamics of \(f_{c_{0}}^{pq}\) in \(U_{0}\) can be observed as \(w\mapsto w(1+w^{q}+O(w^{2q}))\) and the orbit of \(\psi_{0}(\zeta_{n})\) by this map tends to \(0\) tangentially to the attracting direction (that is, the set of \(w\)'s with \(\operatorname{Im}\left(w^{q}\right)=0\) and \(\operatorname{Re}\left(w^{q}\right)<0\)). This implies that (by taking sufficiently large \(j\) if necessary) \(\psi_{0}(\zeta_{n})\) is contained in the set \(X:=\{w\in\psi_{0}(U_{0})\,:\,|\arg(-w^{q})|\leq\pi/4\}\). Next we consider \(z_{n}\in U_{0}-V_{0}\) in the local coordinate \(w=\psi_{t_{n}}(z)\). Since \(\psi_{t_{n}}\) is univalent and \(\nu_{n}<|z_{n}-\hat{b}|<R_{0}\), there exists a constant \(C_{7}>0\) independent of \(n\) such that \(w_{n}:=\psi_{t_{n}}(z_{n})\) satisfies \(|w_{n}|\geq C_{7}\nu_{n}\). In \(W\)-coordinate \(W=\Psi_{t_{n}}(w)=-\lambda_{t_{n}}^{q^{2}}/(qw^{q})\), we have \(\arg(-W)=-\arg(w^{q})+O(t_{n})\) since \(\lambda_{t_{n}}^{q^{2}}=1+O(t_{n})\). Now, for \(W_{n}=\Psi_{t_{n}}(w_{n})=\Psi_{t_{n}}\circ\psi_{t_{n}}(z_{n})\) with sufficiently large \(n\), we obtain \[|W_{n}|\leq\left|\frac{1+O(t_{n})}{qC_{7}^{q}\nu_{n}^{q}}\right|\leq\left| \frac{\sqrt{2\cos A_{0}}}{qC_{7}^{q}}\frac{1+O(|t_{n}|)}{\sqrt{|t_{n}|}} \right|\leq\frac{K_{\rm I}}{|t_{n}|}.\] Therefore, by Lemma I, we have \(|\arg(-W_{n})|\leq\pi/4\) since \(z_{n}\in J(f_{c_{n}})\cap U_{0}\). Hence in \(w\)-coordinate we have \(|\arg(w_{n}^{q})|=|\arg(-W_{n})|+O(|t_{n}|)\leq\pi/4+O(\nu_{n}^{2q})\) for \(w_{n}=\psi_{t_{n}}(z_{n})\). We also have \(|\psi_{t_{n}}(z_{n})-\psi_{0}(z_{n})|=O(|t_{n}|)=O(\nu_{n}^{2q})\), since the map \((t,z)\mapsto\psi_{t}(z)\) is holomorphic in both \(t\) and \(z\). 
Hence \(\psi_{0}(z_{n})\) is contained in \(Y:=\{w\in\psi_{0}(U_{0})\,:\,|\arg(w^{q})|\leq\pi/3,\,|w|\geq C_{7}\nu_{n}\cdot(3/4)\}\) for sufficiently large \(n\). It follows that \(|\psi_{0}(\zeta_{n})-\psi_{0}(z_{n})|\) is larger than the distance between the sets \(X\) and \(Y\), which is comparable with the radius \(\nu_{n}\) of \(V_{0}\) (see Figure 9). However, the univalence of \(\psi_{0}\) implies that \(|\psi_{0}(\zeta_{n})-\psi_{0}(z_{n})|\) is not comparable with \(\nu_{n}\) by \(|\psi_{0}(\zeta_{n})-\psi_{0}(z_{n})|\asymp|\zeta_{n}-z_{n}|<\nu_{n}/n\). This is a contradiction. \(\blacksquare\)

Figure 9: The sets \(X\) and \(Y\) in \(w\)-coordinate for \(q=3\).

## 12 Proof of Lemma C

We will show that \(|Df_{c}^{M^{\prime}-M}(z_{M})|\geq\kappa_{\mathrm{C}}/\nu\) for some constant \(\kappa_{\mathrm{C}}\) that does not depend on \(c=c_{t}\) with \(t\in\Delta\). By choosing \(\nu\) sufficiently small, we have \(\Lambda:=\kappa_{\mathrm{C}}/\nu>1\). As in the proof of Lemma A, we assume that \(M=0\) and \(M^{\prime}=mpq+L\) for which \(z_{0}\in V_{0}\) (equivalently, \(|z_{0}-\hat{b}|<\nu\)), \(z_{(m-1)pq}\in U_{0}\), \(z_{mpq}\notin U_{0}\) (equivalently, \(|z_{mpq}-\hat{b}|\geq R_{0}\)), and \(z_{M^{\prime}}\in V_{0}\). By the chain rule we have \[|Df_{c}^{M^{\prime}}(z_{0})|=|Df_{c}^{mpq}(z_{0})|\cdot|Df_{c}^{L}(z_{mpq})|. \tag{12.1}\] First let us give an estimate of \(|Df_{c}^{mpq}(z_{0})|\). For \(c=c_{t}\) (\(t\in\Delta\)), let \(b(c)\) be the periodic point \(b_{t}\) of period \(p\) given in Proposition 2.3 with \(b(c)\to\hat{b}\) as \(c=c_{t}\) tends to \(\hat{c}\).
We may assume that \(|b(c)-\hat{b}|\leq\nu\) for \(c\approx\hat{c}\), and by the Koebe distortion theorem, we have \[|Df_{c}^{mpq}(z_{0})|\asymp\frac{|z_{mpq}-b(c)|}{|z_{0}-b(c)|}\geq\frac{|z_{mpq}-\hat{b}|-|b(c)-\hat{b}|}{|z_{0}-\hat{b}|+|b(c)-\hat{b}|}\geq\frac{R_{0}-\nu}{\nu+\nu}.\] This implies that \(|Df_{c}^{mpq}(z_{0})|\geq C_{8}/\nu\) for some constant \(C_{8}>0\) independent of \(c\approx\hat{c}\), \(z_{0}\) and \(\nu\ll 1\). Next we give an estimate of the form \(|Df_{c}^{L}(z_{mpq})|\geq C_{9}\), where \(C_{9}\) is a constant independent of \(c\approx\hat{c}\), \(\nu\ll 1\), and \(z_{0}\in J(f_{c})\). (Then by (12.1) the proof is done by setting \(\kappa_{\mathrm{C}}:=C_{8}C_{9}\).) As in the proof of Lemma A, by shrinking \(U_{0}\) if necessary, we may always assume that \(L>p\). Since \(z_{M^{\prime}}\in V_{0}\), we have \(z_{M^{\prime}-p}\in f_{c}^{-p}(V_{0})\) and thus \(|Df_{c}^{p}(z_{M^{\prime}-p})|\geq\xi^{p}\). By Proposition 4.1 and inequality (4.1) in Lemma F we obtain \[|Df_{c}^{L}(z_{mpq})|= |Df_{c}^{L-p}(z_{mpq})||Df_{c}^{p}(z_{M^{\prime}-p})|\] \[\geq \frac{\rho(z_{mpq})}{\rho(z_{M^{\prime}-p})}A^{L-p}\cdot\xi^{p}\] \[\geq \rho(z_{mpq})\cdot\mathrm{dist}\,(z_{M^{\prime}-p},P(f_{\hat{c}}))\cdot\xi^{p}.\] Since \(z_{mpq}\notin U_{0}\) and \(z_{mpq}\) has a definite distance from the parabolic cycle, the distance between \(z_{mpq}\) and the postcritical set \(P(f_{\hat{c}})\) is larger than a positive constant independent of \(c\approx\hat{c}\), \(\nu\ll 1\), and \(z_{0}\in J(f_{c})\). (Indeed, we may apply the proof of Lemma D to obtain \(\mathrm{dist}\,(z_{mpq},P(f_{\hat{c}}))\geq K_{\mathrm{D}}R_{0}\) by taking a smaller \(R_{0}\) if necessary.) Hence we always have \(\rho(z_{mpq})\asymp 1\). Finally we show that \(\mathrm{dist}\,(z_{M^{\prime}-p},P(f_{\hat{c}}))\) is uniformly bounded away from zero.
Since \(z_{M^{\prime}}\in V_{0}\), \(\mathrm{dist}\,(z_{M^{\prime}-p},P(f_{\hat{c}}))\) can be close to \(0\) only if \(z_{M^{\prime}-p}\) belongs to the connected component \(V_{0}^{\prime}\) of \(f_{c}^{-p}(V_{0})\) contained in \(U_{0}\). (See Figure 6.) However, the local dynamics of \(f_{c}^{p}\) on \(U_{0}\) does not allow any point \(z\in J(f_{c})\) satisfying both \(z\in V_{0}^{\prime}-V_{0}\) and \(f_{c}^{p}(z)\in V_{0}\). (More precisely, we may apply Proposition 10.1 and Lemma I to exclude such a \(z\) in the Julia set.) Hence \(z_{M^{\prime}-p}\) belongs to a connected component of \(f_{c}^{-p}(V_{0})\) that has a definite distance from \(P(f_{\hat{c}})\), and \(\mathrm{dist}\,(z_{M^{\prime}-p},P(f_{\hat{c}}))\) is bounded below by a positive constant independent of \(c\approx\hat{c}\), \(\nu\ll 1\), and \(z_{0}\in J(f_{c})\). In conclusion, \(|Df_{c}^{L}(z_{mpq})|\) is bounded away from zero by a constant independent of \(c\approx\hat{c}\), \(\nu\ll 1\), and \(z_{0}\in J(f_{c})\). \(\blacksquare\)

## 13 Proofs of Theorems 1.2 and 1.3

In this section we prove Theorem 1.2 and Theorem 1.3 together.

Proofs of Theorems 1.2 and 1.3. Let \(\sigma_{0}\) be any point in the thick internal ray \(\mathcal{I}(\theta,\delta)\) and \(\ell:=|\sigma_{0}-\hat{c}|\). For each \(n=0,1,\ldots\), let \[E_{n}:=\bigg{\{}c\in\mathcal{I}(\theta,\delta)\,:\,\frac{\ell}{2^{n+1}}\leq|c-\hat{c}|\leq\frac{\ell}{2^{n}}\bigg{\}}.\] Note that \(\operatorname{diam}E_{n}\asymp\ell/2^{n}\) and that for any \(c\in E_{n}\) we have \(|c-\hat{c}|\asymp\ell/2^{n}\). We take a \(\sigma_{n}\in E_{n}\) for each \(n\geq 1\). Then we can join \(\sigma_{n}\) and \(\sigma_{n+1}\) by a piecewise smooth path \(\gamma_{n}\) of length comparable with \(\ell/2^{n}\) contained in \(E_{n}\cup E_{n+1}\). By the Main Theorem we have \[\bigg{|}\frac{d}{dc}z(c)\bigg{|}\leq\frac{K}{|c-\hat{c}|^{1-1/Q}}\asymp\bigg{(}\frac{\ell}{2^{n}}\bigg{)}^{-1+1/Q}\] for any \(c\in\gamma_{n}\) and \(n\geq 0\).
Hence \[|z(\sigma_{n+1})-z(\sigma_{n})|=\bigg{|}\int_{\gamma_{n}}\frac{d}{dc}z(c)\,dc\bigg{|}=O\bigg{(}\bigg{(}\frac{\ell}{2^{n}}\bigg{)}^{-1+1/Q}\cdot\frac{\ell}{2^{n}}\bigg{)}=O\bigg{(}\frac{1}{2^{n/Q}}\bigg{)}\cdot\ell^{1/Q}.\] It follows that the sequence \(\{z(\sigma_{n})\}_{n\geq 0}\) is Cauchy, and we denote the limit by \(z(\hat{c})\). (One can easily check that the limit does not depend on the choice of the sequences \(\{\sigma_{n}\}\) and \(\{\gamma_{n}\}\).) Moreover, we have \[|z(\sigma_{0})-z(\sigma_{n})|\leq O\bigg{(}1+\frac{1}{2^{1/Q}}+\cdots+\frac{1}{2^{(n-1)/Q}}\bigg{)}\cdot\ell^{1/Q}=O(\ell^{1/Q})\] and thus \(|z(\sigma_{0})-z(\hat{c})|=O(\ell^{1/Q})=O(|\sigma_{0}-\hat{c}|^{1/Q})\), where the implicit constants depend only on \(\hat{c}\) and the thickness \(\delta\) of the thick internal ray. This proves (1.2) of Theorem 1.2.

Semiconjugacy. Next we show that \(z(\hat{c})\) belongs to the Julia set \(J(f_{\hat{c}})\). For each \(z_{*}\in J(f_{\sigma})\) and its motion \(z(c)=h_{c}(z_{*})=H(c,z_{*})\) along the thick internal ray \(\mathcal{I}(\theta,\delta)\), we define \(h_{\hat{c}}(z_{*})\) by the limit \(z(\hat{c})\) given as above. Since \(h_{c}:J(f_{\sigma})\to J(f_{c})\) is continuous and the convergence of \(h_{c}\) to \(h_{\hat{c}}\) as \(c\in\mathcal{I}(\theta,\delta)\) tends to \(\hat{c}\) is uniform, \(h_{\hat{c}}:J(f_{\sigma})\to h_{\hat{c}}(J(f_{\sigma}))\) is continuous as well. Hence by \(f_{c}\circ h_{c}=h_{c}\circ f_{\sigma}\) we obtain \(f_{\hat{c}}\circ h_{\hat{c}}=h_{\hat{c}}\circ f_{\sigma}\). In particular, this implies that \(f_{\hat{c}}\circ h_{\hat{c}}(J(f_{\sigma}))=h_{\hat{c}}(J(f_{\sigma}))\) and thus the image \(h_{\hat{c}}(J(f_{\sigma}))\) is a compact forward invariant set contained in the filled Julia set of \(f_{\hat{c}}\). Suppose that there exists a \(z_{*}\in J(f_{\sigma})\) such that \(h_{\hat{c}}(z_{*})=z(\hat{c})\in h_{\hat{c}}(J(f_{\sigma}))\) belongs to the Fatou set of \(f_{\hat{c}}\).
It actually belongs to the basin of attraction of \(\hat{b}\), and thus there exists an \(l\) such that \(f_{\hat{c}}^{l+kpq}(z(\hat{c}))\) is contained in \(V_{0}\) for any \(k\geq 0\). Since \(h_{\hat{c}}\) is continuous, the image of any nearby point of \(z_{*}\) under \(h_{\hat{c}}\) also belongs to the same basin. In addition, since the points that are not eventually periodic in the dynamics of \(f_{\sigma}\) are dense in \(J(f_{\sigma})\), we may assume that the orbit \(f_{\sigma}^{n}(z_{*})\) (\(n\in\mathbb{N}_{0}\)) of \(z_{*}\) never lands on the periodic points. By \(f_{\hat{c}}^{l+kpq}(z(\hat{c}))=h_{\hat{c}}(f_{\sigma}^{l+kpq}(z_{*}))\) and uniform convergence of \(h_{c}\) to \(h_{\hat{c}}\), we have \(h_{c}(f_{\sigma}^{l+kpq}(z_{*}))\in V_{0}\) for any \(k\) and \(c\approx\hat{c}\). Since \(h_{c}(f_{\sigma}^{l+kpq}(z_{*}))\in J(f_{c})\), Proposition 2.4 implies that \(h_{c}(f_{\sigma}^{l+kpq}(z_{*}))\) is a repelling periodic point contained in \(U_{0}\). This is impossible because \(z_{*}\) is not eventually periodic. This completes the proof of Theorem 1.2. To confirm that \(h_{\hat{c}}\) is a semiconjugacy, we show surjectivity of \(h_{\hat{c}}:J(f_{\sigma})\to J(f_{\hat{c}})\): First we take any repelling periodic point \(\hat{x}\in J(f_{\hat{c}})\). Since there is a holomorphic family \(x(c)\) of repelling periodic points for \(c\) sufficiently close to \(\hat{c}\) such that \(\hat{x}=x(\hat{c})\), we have some \(z_{0}\in J(f_{\sigma})\) with \(h_{c}(z_{0})=x(c)\) for any \(c\in\mathcal{I}(\theta,\delta)\). In particular, we have \(h_{\hat{c}}(z_{0})=\hat{x}\). Next we take any \(w\in J(f_{\hat{c}})\) and a sequence of repelling periodic points \(\hat{x}_{n}\) of \(f_{\hat{c}}\) that converges to \(w\) as \(n\to\infty\). (Such a sequence exists since repelling periodic points are dense in the Julia set.) Let \(z_{n}\in J(f_{\sigma})\) be the repelling periodic point with \(h_{\hat{c}}(z_{n})=\hat{x}_{n}\). 
Then any accumulation point \(y\) of the sequence \(z_{n}\) satisfies \(h_{\hat{c}}(y)=w\) by continuity. Finally we check properties (1) - (3) of Theorem 1.3. Property (3) is an immediate consequence of (1.2) in Theorem 1.2. (In particular, we obtain Corollary 1.4.) To show (1) and (2), let \(\eta_{c}:=h_{\hat{c}}\circ h_{c}^{-1}:J(f_{c})\to J(f_{\hat{c}})\) for \(c\approx\hat{c}\) such that \(\eta_{c}\) is a semiconjugacy between \(f_{c}\) and \(f_{\hat{c}}\) and satisfies \(|\eta_{c}(z)-z|\leq K^{\prime}|c-\hat{c}|^{1/Q}\) by (3). We assume that \(c\) is sufficiently close to \(\hat{c}\) such that \(K^{\prime}|c-\hat{c}|^{1/Q}<K_{\rm D}\,\nu/4\), where the constant \(K_{\rm D}\leq 1\) is given in Lemma D. Suppose that \(\eta_{c}(z)=\eta_{c}(z^{\prime})\) for some distinct points \(z\), \(z^{\prime}\in J(f_{c})\). Let \(z_{n}:=f_{c}^{n}(z)\) and \(z_{n}^{\prime}:=f_{c}^{n}(z^{\prime})\) for \(n\in\mathbb{N}\). Then \(\eta_{c}\circ f_{c}=f_{\hat{c}}\circ\eta_{c}\) implies \(\eta_{c}(z_{n})=\eta_{c}(z_{n}^{\prime})\) for any \(n\in\mathbb{N}\), and thus \[|z_{n}-z_{n}^{\prime}|\leq|\eta_{c}(z_{n})-z_{n}|+|\eta_{c}(z_{n}^{\prime})-z_ {n}^{\prime}|=2K^{\prime}|c-\hat{c}|^{1/Q}<K_{\rm D}\nu/2<\nu/2. \tag{13.1}\] Suppose that the orbit \(z_{n}\) (\(n\in\mathbb{N}\)) never lands on repelling periodic points of \(f_{c}\) in \(\hat{U}=\mathbb{D}(\hat{b},\hat{R})\) described in assertion (3) or (4) of Proposition 2.3. By Proposition 2.4, the orbit \(z_{n}^{\prime}\) (\(n\in\mathbb{N}\)) must behave in the same way. Let \(\{n_{k}\}_{k\in\mathbb{N}}\) be a subsequence such that \(z_{n},z_{n}^{\prime}\in J(f_{c})-\mathcal{V}(c)\) when \(n=n_{k}\) for each \(k\). By Lemma D, \(\mathbb{D}(z_{n},K_{\rm D}\nu)\) does not contain any point in the postcritical set \(P(f_{\hat{c}})\) when \(n\) ranges over the subsequence \(\{n_{k}\}_{k\in\mathbb{N}}\). 
Hence we have a univalent branch \(g_{n}\) of \(f_{c}^{-n}\) defined on \(\mathbb{D}(z_{n},K_{\rm D}\nu)\) that sends \(z_{n}\) and \(z_{n}^{\prime}\) to \(z\) and \(z^{\prime}\). However, since \(f_{c}\) is hyperbolic, the Koebe distortion theorem (see [A, Theorem 5-3]) implies \[|z-z^{\prime}|\leq\operatorname{diam}g_{n}(\mathbb{D}(z_{n},K_{\rm D}\nu/2))\asymp\frac{\nu}{|Df_{c}^{n}(z)|}\to 0\] as \(n=n_{k}\to\infty\). This contradicts the assumption \(z\neq z^{\prime}\). Hence we may assume that the orbit \(z_{n}\) (\(n\in\mathbb{N}\)) lands on a repelling fixed point of \(f_{c}^{pq}\) in \(V_{0}=\mathbb{D}(\hat{b},\nu)\). Let \(z_{N}\) be such a repelling fixed point. Case 1 or Case 2\({}^{+}\). By (13.1) and Proposition 2.4, \(z_{N}^{\prime}\) must be the only repelling fixed point of \(f_{c}^{pq}\) in \(V_{0}\) given in assertion (3) of Proposition 2.3, and thus \(z_{N}=z_{N}^{\prime}\). Since \(z\neq z^{\prime}\), we must have \(z_{n}\neq z_{n}^{\prime}=-z_{n}\) for some \(n<N\). However, this is impossible because the Julia set \(J(f_{\hat{c}})\) and the critical point \(0\) have a definite distance, and the same holds for \(J(f_{c})\) by (1.3) for \(c\approx\hat{c}\). (Indeed, \(J(f_{c})\) converges to \(J(f_{\hat{c}})\) as \(c\) tends to \(\hat{c}\) along a thick internal ray. See Corollary 1.4.) Hence we conclude that \(\eta_{c}\) is injective. Since \(h_{\hat{c}}=\eta_{c}\circ h_{c}\) and \(h_{c}\) is a conjugacy, property (1) holds. Case 2\({}^{-}\). By the same argument as above, \(z_{N}^{\prime}\) must be one of the \(q\) repelling fixed points of \(f_{c}^{pq}\) in \(V_{0}\) given in assertion (4) of Proposition 2.3. Hence we conclude that \(\eta_{c}(z_{N})\) is the parabolic periodic point \(\hat{b}\) and property (2) holds by the relation \(h_{\hat{c}}=\eta_{c}\circ h_{c}\). \(\blacksquare\) ## Acknowledgments Chen was partly supported by MOST 108-2115-M-001-005 and 109-2115-M-001-006.
Kawahira was partly supported by JSPS KAKENHI Grant Numbers 16K05193 and 19K03535. The authors thank Academia Sinica, Nagoya University, RIMS at Kyoto University, and Tokyo Institute of Technology, where parts of this research were carried out, for their hospitality.
2307.11761
Fairness of ChatGPT and the Role Of Explainable-Guided Prompts
Our research investigates the potential of Large-scale Language Models (LLMs), specifically OpenAI's GPT, in credit risk assessment-a binary classification task. Our findings suggest that LLMs, when directed by judiciously designed prompts and supplemented with domain-specific knowledge, can parallel the performance of traditional Machine Learning (ML) models. Intriguingly, they achieve this with significantly less data-40 times less, utilizing merely 20 data points compared to the ML's 800. LLMs particularly excel in minimizing false positives and enhancing fairness, both being vital aspects of risk analysis. While our results did not surpass those of classical ML models, they underscore the potential of LLMs in analogous tasks, laying a groundwork for future explorations into harnessing the capabilities of LLMs in diverse ML tasks.
Yashar Deldjoo
2023-07-14T09:20:16Z
http://arxiv.org/abs/2307.11761v1
# Fairness of ChatGPT and the Role Of Explainable-Guided Prompts+ ###### Abstract Our research investigates the potential of Large-scale Language Models (LLMs), specifically OpenAI's GPT, in credit risk assessment--a binary classification task. Our findings suggest that LLMs, when directed by judiciously designed prompts and supplemented with domain-specific knowledge, can parallel the performance of traditional Machine Learning (ML) models. Intriguingly, they achieve this with significantly less data--40 times less, utilizing merely 20 data points compared to the ML's 800. LLMs particularly excel in minimizing false positives and enhancing fairness, both being vital aspects of risk analysis. While our results did not surpass those of classical ML models, they underscore the potential of LLMs in analogous tasks, laying a groundwork for future explorations into harnessing the capabilities of LLMs in diverse ML tasks. ## 1 Introduction and Context **Motivation.** Recent advancements in large language models such as OpenAI's GPT [3], Google's PALM [5], and Facebook's LaMDA [13] have redefined the landscape of Artificial Intelligence (AI). These behemoth models utilize billions of parameters and capitalize on the vastness of the internet data for training, leading to the generation of accurate and high-quality content. Large-scale Language Models (LLMs) have shown outstanding performance across tasks such as health diagnostics, job-seeking, and risk assessment, among others [4, 6, 11, 12]. Given the transformative potential of these systems in decision-making across various contexts, their trustworthiness has drawn substantial attention. Unlike conventional ML models, LLMs leverage immense data scales, far surpassing those typically used for pretraining smaller or mid-scale models. This data, often sourced from the internet, mirrors societal norms but can also propagate prevalent societal biases. 
If unchecked, these biases can amplify, leading to biased outcomes that unfairly affect certain individuals or demographics. A crucial aspect of harnessing these systems is through _prompt engineering_, highlighted in this work. This technique mitigates the need for extensive dedicated training and offers system designers a measure of 'control' over the model's behavior by enabling direct infusion of their insights into the learning process. While our focus is a case study on ChatGPT, the insights, and methodologies could potentially extend to other LLM types, an avenue we plan to explore in future work. **Contributions.** This paper provides preliminary insights from an ongoing larger project aiming to address the challenges associated with the use of LLMs, particularly in decision-making processes through _prompt engineering_. We demonstrate the ability to harness the potential of utilizing pre-trained models for downstream ML tasks, thereby eliminating the need for dedicated model training. By meticulously designing prompts that embody problem-specific instructions and contexts, we direct these models toward achieving desired objectives, such as enhancing prediction accuracy and minimizing risk (and/or mitigating unfair outcomes). Moreover, we underscore the significance of incorporating _domain-specific knowledge_ from experts, obtained through an apriori ML phase, as a powerful tool to improve the quality and effectiveness of the prediction tasks to a considerable degree. Our contributions can be summarized as follows: * **OpenAI-ML Application:** We exemplify the application of OpenAI's GPT for specific ML tasks, focusing on credit risk assessment as a case study; * **Prompt Engineering:** We investigate the impact of different prompts and their parameters on the outcomes of ML tasks; * **Domain Knowledge Integration:** We propose a method for enhancing openAI-ML model accuracy by integrating optimal features, as identified by the ML models employed a priori. 
This demonstrates how leveraging feature importance can boost model performance when used as domain knowledge; * **Bias of Classical-ML vs. OpenAI-ML:** Contrary to the approach of [9] that focuses on aggregate metrics, we scrutinize biases in OpenAI ML models, using gender as a case study. We assess gender fairness not through aggregate metrics but by comparing distributions via bootstrap sampling and evaluating results with statistical significance tests. Figure 1: Flowchart illustrating the conceptual framework of the paper Our research is aimed at providing a guide for utilizing LLMs in ML tasks, with a primary focus on enhancing accuracy through prompt engineering and assessing its impact on fairness outcomes. We expand on previous work by Li et al.[9], where fairness-based prompt engineering was conducted across several datasets. We have demonstrated how the accuracy of these systems can be substantially enhanced and supplemented this with a detailed fairness analysis using statistical measures. 1 Footnote 1: The link to the system developed can be found in [https://github.com/yasdel/ChatGPT-FairXAI](https://github.com/yasdel/ChatGPT-FairXAI). ## 2 OpenAI-ML Framework for Credit Assessment Task We utilize ChatGPT-3.5-Turbo via the _chat-completion_ API, chosen for its outstanding text generation, speed, and cost-effectiveness. Converting the ML task into a "chat conversation" with ChatGPT is pivotal for prediction, ensuring the model responds in binary format - yes or no. Constructing prompts, which link the model and task, demands understanding of both context and capabilities. Effective prompts harness the model's comprehension skills for complex tasks, though their creation is challenging. Fig. 2 visually depicts the process for designing prompts for downstream ML tasks. **Prompt Construction.** This technique provides task-oriented instruction for the OpenAI model, delivering the necessary context. Our method starts with "Part 1. 
Task Instruction", where we guide the model on its task, then "Part 2. In-context Examples" to boost predictions. In "Part 3. Attribute Description", we detail task-specific features. This is followed by "Part 4. Integration of Domain Knowledge", strategically incorporated to improve model comprehension and accuracy. The final stage is "Part 5. Formulation of a Question/Problem", framing the task or query at hand. Note that the _Integration of Domain Knowledge_ is strategically included to enhance the model's understanding and prediction accuracy (cf. Section 2.1). Figure 2: Diagram of the prompt creation and credit evaluation process using OpenAI. Note that we plan to incorporate an explanation service via the API. **Part 1: Task Instruction** Evaluate the credit risk based on given attributes. If good, respond with '1', if bad, respond with '0'. **Part 2: In-Context Example** Here's an example: a customer with a good checking account history, a credit duration of 12 months, no history of bad credit, and a purpose of car loan, requested a credit amount of $5000. The system evaluated the risk as good and responded with '1'. **Part 3: Attribute Description** Consider each attribute: Checking-account (existing status), Duration (credit duration in months), Credit-history, Purpose (car, furniture, etc.), Credit-amount. **Part 4: Domain Knowledge Integration** Domain Knowledge: **dk2**: Important features in evaluating credit risk often include Checking-account, Foreign-worker, Other-installment, Other-debtors, Credit-history, Credit-amount, and Savings-account. **dk3**: The order of features is important in evaluating credit risk. Important features in evaluating credit risk involve assessing each feature sequentially, starting with the Checking-account, then moving to Foreign-worker, Other-installment, Other-debtors, Credit-history, Credit-amount, Savings-account, Age, Purpose, and finally Duration.
**Part 5: Final Task Question** Based on the provided inputs and domain knowledge, is the credit risk good ('1') or bad ('0')? ### Domain Knowledge Integration In the context of ML tasks, domain knowledge is typically provided by the domain expert, such as a bank expert in the case of credit risk assessment. However, this domain knowledge can also be simulated by an ML model, which learns not only the relevance of individual features but also their interconnections. To evaluate the impact of this domain knowledge on task performance, a wide range of ML models was utilized as part of the domain knowledge for the OpenAI-ML prediction task. In particular, we introduced three categories of domain knowledge, as detailed in Table 1: 'dk0' (prompt-0) represents a base case with no domain knowledge where learning is purely data-driven, 'Odd dk' (prompt-1, prompt-3, prompt-5, prompt-7, prompt-9) or Machine Learning Feature Importance (MLFI) refers to the scenario where important features are identified by ML algorithms such as XGB, RF, Ada, LR, Ensemble, and 'Even dk' (prompt-2, prompt-4, prompt-6, prompt-8, prompt-10) or MLFI-ord considers the order of features in addition to their importance. ## 3 Experimental Setup **Task.** This work focuses on binary classification within a credit assessment context in machine learning (ML). The task involves learning a function \(f:\mathcal{X}\to\{0,1\}\), predicting a binary outcome \(y\in\{0,1\}\) for each feature instance \(x\in\mathcal{X}\). The feature space, \(\mathcal{X}\), comprises a protected attribute \(G\) (e.g., age, race, sex) and all other attributes, \(\mathcal{X}^{\prime}\). Together, they form the feature vector for each instance, \(\mathcal{X}=(G,\mathcal{X}^{\prime})\). The outcome, \(y\), denotes an individual's creditworthiness. **Hyperparameters and Models.** We employed six ML models, each with a distinct set of hyperparameters.
These were optimized using a randomized search cross-validation (CV) strategy, experimenting with 25 unique hyperparameters. This led to an extensive model tuning process involving numerous model iterations. We used a 5-fold CV (0.8, 0.2), with _RandomizedSearchCV_ over 20 iterations. The exact hyperparameters depended on the specific model:

* RF : 'n-estimators', 'max-depth', 'min-samples-split', 'min-samples-leaf', 'bootstrap'. (Total = 5)
* LR : 'C', 'penalty', 'solver'. (Total = 3)
* MLP : 'hidden-layer-sizes', 'activation', 'solver', 'alpha', 'learning-rate', 'maxiter'. (Total = 6)
* KNN : 'n-neighbors', 'weights', 'algorithm', 'leaf-size', 'p'. (Total = 5)
* XGB : 'n-estimators', 'l-rate', 'max-depth', 'colsample-bytree'. (Total = 4)
* AdaBoost : 'n-estimators', 'learning-rate'. (Total = 2)

\begin{table} \begin{tabular}{l l l l l} \hline \hline **DK** & **Name** & **Description** & **Focus** & **Implementation** \\ \hline dk0 & N/A & No extra domain knowledge & N/A & Solely data-driven \\ Odd dk & MLFI & ML defines feature importance & Feature Attribution & XGB, RF, Ada, LR, Ensemble \\ Even dk & MLFI-ord & Similar to MLFI, includes feature order & Feature Attribution & XGB, RF, Ada, LR, Ensemble \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of Domain Knowledge Types

**Dataset.** We used the German Credit dataset, a space-efficient choice with 1,000 individuals and 21 attributes for creditworthiness classification. Cleaned by Le et al. [8], this dataset aids banks in lending decisions, using gender as a fairness-sensitive feature. **Bootstrap sampling.** To address imbalances and distribution disparities between groups (e.g., Male vs. Female), we employed bootstrapping with 1000 resamples. Bootstrapping is a robust statistical technique that estimates sampling distributions through data resampling.
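A minimal sketch of such a bootstrap disparity estimate on the true-positive rate (TPR) is given below. The function and variable names are our own illustration, not the authors' code; we assume per-individual true and predicted labels are available separately for each group.

```python
import numpy as np

def tpr(y_true, y_pred):
    # True-positive rate: P(pred = 1 | true = 1).
    pos = y_true == 1
    return (y_pred[pos] == 1).mean()

def bootstrap_tpr_gap(y_true_m, y_pred_m, y_true_f, y_pred_f,
                      n_resamples=1000, seed=0):
    """Resample each group with replacement and collect the TPR gap."""
    rng = np.random.default_rng(seed)
    gaps = np.empty(n_resamples)
    for i in range(n_resamples):
        im = rng.integers(0, len(y_true_m), len(y_true_m))
        i_f = rng.integers(0, len(y_true_f), len(y_true_f))
        gaps[i] = (tpr(y_true_m[im], y_pred_m[im])
                   - tpr(y_true_f[i_f], y_pred_f[i_f]))
    return gaps

def reject_h0(gaps, alpha=0.05):
    # Two-sided check: reject "no disparity" if 0 lies outside the
    # central (1 - alpha) mass of the bootstrap gap distribution.
    lo, hi = np.quantile(gaps, [alpha / 2, 1 - alpha / 2])
    return not (lo <= 0.0 <= hi)
```

A percentile interval is only one of several bootstrap significance checks; the paper does not specify which statistical test was used, so this is an assumed variant.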
By generating resampled datasets, calculating the mean disparity (here TPR) for each, and analyzing the resulting distributions, we assessed the statistical significance of the observed difference. ## 4 Results **Accuracy.** Table 2 presents a comparative analysis of the performance of various models equipped with different types of domain knowledge and machine learning algorithms. The performance metrics under consideration include precision (Pre.), recall (Rec.), F1 score (F1), false-positive cost (FP Cost), and false-negative cost (FN Cost). Given the context of credit risk assessment, the latter two metrics bear particular importance, with the false-positive cost being assigned a weight of 5 to reflect the higher financial risk associated with erroneously granting credit to an unworthy applicant. OpenAI-based models are tested under different scenarios using Machine Learning Feature Importance (MLFI) and its ordered variant (MLFI-ord), which can be seen as an attempt to incorporate domain knowledge into the model. Notably, \(\texttt{Prompt}-5\), using AdaBoost model under the MLFI scenario, achieves the highest precision, recall, and F1 score among all OpenAI-based models, which are 0.7305 in each metric. This suggests that using a combination of domain knowledge (MLFI) and the AdaBoost model, the OpenAI-based model can achieve balanced and competitive results (the results are comparable with LR, XGB, and AdaBoost in terms of Pre). Overall, when utilizing AdaBoost and RF as domain knowledge, we observe relatively high performance. It is noteworthy that these models performed exceptionally well in the classical ML part, especially considering their F1 scores. To our surprise, we did not observe a significant advantage when using an ordered feature introduction. Instructing ChatGPT to use ordered feature values often led to poorer performance in many cases. 
However, when we compare the average values of classical models with those of OpenAI-based models, we see that the former outperforms the latter in all accuracy metrics: precision (0.7792 vs. 0.7129), recall (0.8822 vs. 0.6078), and F1 score (0.8302 vs. 0.6528). This suggests that, for this specific task, classical models are generally more effective. Importantly, it's worth noting that the classical machine learning models used approximately 80% of the available data (i.e., '800' samples) for training, while the OpenAI-ML models only utilized 20 samples as training examples. This means that the classical models had a significant data advantage, using the information at a rate 40 times greater than that of OpenAI-ML. Despite this disparity in data, the OpenAI-ML still produced competitive results, highlighting their potential and efficiency. Additionally, it is worth considering that OpenAI-ML offers the unique advantage of producing human-controllable outputs in the form of prompts that can be generated through a predefined course of action. An intriguing observation is the reduced False Positive (FP) cost associated with OpenAI models compared to traditional models--an aspect of utmost importance in credit risk assessment. This suggests that OpenAI models demonstrate a cautious approach in averting certain false alarms, a trait that could potentially be amplified through targeted instruction. However, this cautious stance might also make them more susceptible to overlooking true instances. **Fairness.** The fairness analysis of the provided results, which leverages "odd dk" due to its superior preceding task performance, offers insightful conclusions. 
In contrast to machine learning models, certain prompts could achieve fair outcomes, i.e., a non-significant difference in the efforts required by different genders. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Model} & DK & ML & \multirow{2}{*}{Pre.\(\uparrow\)} & \multirow{2}{*}{Rec.\(\uparrow\)} & \multirow{2}{*}{F1\(\uparrow\)} & FP & FN \\ & Type & model & & & & Cost\({}^{\downarrow}\) & Cost\({}^{\downarrow}\) \\ \hline RF & - & - & **0.8153** & 0.9078 & **0.8591** & 145.0 & 13.0 \\ LR & - & - & 0.7368 & 0.8936 & 0.8077 & 225.0 & 15.0 \\ MLP & - & - & 0.7654 & 0.8794 & 0.8185 & 190.0 & 17.0 \\ KNN & - & - & 0.7707 & 0.8582 & 0.8121 & 180.0 & 20.0 \\ XGB & - & - & 0.8077 & 0.8936 & 0.8485 & 150.0 & 15.0 \\ AdaBoost & - & - & 0.7875 & 0.8936 & 0.8372 & 170.0 & 15.0 \\ random & - & - & 0.7625 & 0.4326 & 0.5520 & 95.0 & 80.0 \\ \hline _Avg._ & - & - & 0.7792 & 0.8822 & 0.8302 & 172.5 & 15.6 \\ \hline prompt-0 & N/A & - & 0.7625 & 0.4326 & 0.5520 & 95.0 & 80.0 \\ prompt-1 & MLFI & XGB & 0.7083 & 0.7234 & 0.7158 & 210.0 & 39.0 \\ prompt-2 & MLFI-ord & XGB & 0.6842 & 0.5532 & 0.6118 & 180.0 & 63.0 \\ prompt-3 & MLFI & RF & 0.7206 & 0.6950 & 0.7076 & 190.0 & 43.0 \\ prompt-4 & MLFI-ord & RF & 0.7087 & 0.5177 & 0.5984 & 150.0 & 68.0 \\ prompt-5 & MLFI & Ada & **0.7305** & **0.7305** & **0.7305** & 190.0 & 38.0 \\ prompt-6 & MLFI-ord & Ada & 0.7404 & 0.5461 & 0.6286 & 135.0 & 64.0 \\ prompt-7 & MLFI & LR & 0.7154 & 0.6596 & 0.6863 & 185.0 & 48.0 \\ prompt-8 & MLFI-ord & LR & 0.6957 & 0.4539 & 0.5494 & 140.0 & 77.0 \\ prompt-9 & MLFI & ensemble & 0.7209 & 0.6596 & 0.6889 & 180.0 & 48.0 \\ prompt-10 & MLFI-ord & ensemble & 0.7037 & 0.5390 & 0.6104 & 160.0 & 65.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Performance Comparison of Models on the German Credit Dataset. Average accuracy results in the classical-ML part are computed excluding random.
Despite comparable performance to prompts, machine learning models, including RF, LR, MLP, KNN, XGB, and AdaBoost, consistently rejected the null hypothesis, implying significant gender-based effort disparity and lack of fairness. Prompts, however, showcased more diverse results. For instance, Prompt-5 and Prompt-7 suggested relative fairness, whereas Prompt-1, Prompt-3, and Prompt-9 indicated significant effort differences. Interestingly, some prompts like Prompt-9 even reversed the disparity direction, favoring the less privileged group. This underlines the potential of prompts to promote fairness, even if they do not always yield statistically significant results. Our analysis emphasizes the need for a holistic model evaluation approach, going beyond statistical metrics, to encompass performance and fairness implications for different demographics. ## 5 Conclusion The study exhibits the benefits of Large-scale Language Models (LLMs), in particular OpenAI's ChatGPT, in Machine Learning tasks. It uses prompt engineering to optimize behavior and prediction accuracy, suggesting that these models could potentially perform comparably to, if not better than, traditional ML models. Integration of domain knowledge shows an interesting impact on accuracy and gender fairness enhancement, setting the basis for broader future investigations. Future exploration requires prompt design optimization, providing ample reasoning time for the system through concepts such as Chain-of-Thought Prompting [15], and fine-tuning methodologies. The incorporation of methods for system decision explanation, as well as the application of GPT-based systems to recommender systems while mitigating biases, are critical considerations for further exploration [16, 14, 10, 2, 7].
\begin{table} \begin{tabular}{l c c c} \hline \hline clf & \multicolumn{2}{c}{Sex} & \multicolumn{2}{c}{\(\Delta\) and \(H_{0}\)} \\ \cline{2-4} & \(\mathbb{E}[\mathcal{E}]_{M}\) & \(\mathbb{E}[\mathcal{E}]_{F}\) & \(\Delta_{g}\) & reject \(H_{0}\) \\ \hline RF & 0.6611 & 0.6325 & 0.0286 & **True** \\ LR & 0.6419 & 0.6240 & 0.0178 & **True** \\ MLP & 0.6254 & 0.6181 & 0.0073 & **True** \\ KNN & 0.5709 & 0.6190 & -0.0486 & **True** \\ XGB & 0.6382 & 0.6267 & 0.0116 & **True** \\ AdaBoost & 0.6057 & 0.6401 & -0.0344 & **True** \\ Prompt-1 & 0.5156 & 0.5070 & 0.0085 & **True** \\ Prompt-3 & 0.5709 & 0.4587 & 0.1122 & **True** \\ Prompt-5 & 0.5200 & 0.5143 & 0.0063 & False \\ Prompt-7 & 0.4670 & 0.4641 & 0.0031 & False \\ Prompt-9 & 0.4485 & 0.4706 & -0.0221 & **True** \\ \hline \hline \end{tabular} \end{table} Table 3: Gender fairness results based on true-positive-rate. Note that the TPR values for each group, obtained via bootstrap sampling, are represented by \(\mathbb{E}[\mathcal{E}]\). The disparity in the true positive rate is denoted by \(\Delta\).
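To make the prompting pipeline of Section 2 concrete, the following is a minimal sketch of the five-part prompt assembly. The function name, the abbreviated part texts, and the example attributes are our own illustration, not the authors' released code.

```python
def build_prompt(attributes: dict, domain_knowledge: str = "") -> str:
    """Assemble the five-part credit-risk prompt described in Section 2."""
    parts = [
        # Part 1: task instruction
        "Evaluate the credit risk based on given attributes. "
        "If good, respond with '1', if bad, respond with '0'.",
        # Part 2: in-context example
        "Example: a customer with a good checking account history, "
        "a 12-month credit duration, no bad credit history, and a car-loan "
        "purpose requested $5000; the risk was evaluated as good ('1').",
        # Part 3: attribute description for the instance to classify
        "Attributes: " + "; ".join(f"{k}: {v}" for k, v in attributes.items()),
    ]
    # Part 4: optional domain knowledge (e.g., ML feature importances)
    if domain_knowledge:
        parts.append("Domain Knowledge: " + domain_knowledge)
    # Part 5: final binary question
    parts.append("Based on the provided inputs and domain knowledge, "
                 "is the credit risk good ('1') or bad ('0')?")
    return "\n\n".join(parts)

prompt = build_prompt(
    {"Checking-account": "no account", "Duration": 36,
     "Credit-amount": 9000, "Purpose": "furniture"},
    domain_knowledge="Important features: Checking-account, Credit-history.",
)
```

The assembled string would then be sent as the user message of a chat-completion request (the paper uses ChatGPT-3.5-Turbo) and the single-character reply parsed as '0'/'1'.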
2303.12947
Deep Attention Recognition for Attack Identification in 5G UAV scenarios: Novel Architecture and End-to-End Evaluation
Despite the robust security features inherent in the 5G framework, attackers will still discover ways to disrupt 5G unmanned aerial vehicle (UAV) operations and decrease UAV control communication performance in Air-to-Ground (A2G) links. Operating under the assumption that the 5G UAV communications infrastructure will never be entirely secure, we propose Deep Attention Recognition (DAtR) as a solution to identify attacks based on a small deep network embedded in authenticated UAVs. Our proposed solution uses two observable parameters: the Signal-to-Interference-plus-Noise Ratio (SINR) and the Reference Signal Received Power (RSSI) to recognize attacks under Line-of-Sight (LoS), Non-Line-of-Sight (NLoS), and a probabilistic combination of the two conditions. In the tested scenarios, a number of attackers are located in random positions, while their power is varied in each simulation. Moreover, terrestrial users are included in the network to impose additional complexity on attack detection. To improve the system's overall performance in the attack scenarios, we propose complementing the deep network decision with two mechanisms based on data manipulation and majority voting techniques. We compare several performance parameters in our proposed Deep Network. For example, we evaluate the impact of Long Short-Term Memory (LSTM) and Attention layers on overall accuracy, study the window size effect, and test the accuracy when only partial data is available in the training process. Finally, we benchmark our deep network with six widely used classifiers regarding classification accuracy. Our algorithm's accuracy exceeds that of the eXtreme Gradient Boosting (XGB) classifier by 4% in the LoS condition and by around 3% in the short-distance NLoS condition. Considering the proposed deep network, all other classifiers present lower accuracy than XGB.
Joseanne Viana, Hamed Farkhari, Pedro Sebastiao, Luis Miguel Campos, Katerina Koutlia, Biljana Bojovic, Sandra Lagen, Rui Dinis
2023-03-03T17:10:35Z
http://arxiv.org/abs/2303.12947v1
Deep Attention Recognition for Attack Identification in 5G UAV scenarios: Novel Architecture and End-to-End Evaluation ###### Abstract Despite the robust security features inherent in the 5G framework, attackers will still discover ways to disrupt 5G unmanned aerial vehicle (UAV) operations and decrease UAV control communication performance in Air-to-Ground (A2G) links. Operating under the assumption that the 5G UAV communications infrastructure will never be entirely secure, we propose Deep Attention Recognition (DAtR) as a solution to identify attacks based on a small deep network embedded in authenticated UAVs. Our proposed solution uses two observable parameters: the Signal-to-Interference-plus-Noise Ratio (SINR) and the Reference Signal Received Power (RSSI) to recognize attacks under Line-of-Sight (LoS), Non-Line-of-Sight (NLoS), and a probabilistic combination of the two conditions. In the tested scenarios, a number of attackers are located in random positions, while their power is varied in each simulation. Moreover, terrestrial users are included in the network to impose additional complexity on attack detection. To improve the system's overall performance in the attack scenarios, we propose complementing the deep network decision with two mechanisms based on data manipulation and majority voting techniques. We compare several performance parameters in our proposed Deep Network. For example, we evaluate the impact of Long Short-Term Memory (LSTM) and Attention layers on overall accuracy, study the window size effect, and test the accuracy when only partial data is available in the training process. Finally, we benchmark our deep network with six widely used classifiers regarding classification accuracy. Our algorithm's accuracy exceeds that of the eXtreme Gradient Boosting (XGB) classifier by 4% in the LoS condition and by around 3% in the short-distance NLoS condition. Considering the proposed deep network, all other classifiers present lower accuracy than XGB.
Security, Convolutional Neural Networks (CNNs), Deep Learning, Jamming Detection, Jamming Identification, UAV, Unmanned Aerial Vehicles, 4G, 5G. ## I Introduction Unmanned aerial vehicles (UAVs) have the potential to bring revolutionary changes that will fulfill consumer demands in several industry verticals. UAVs will play a crucial role in emergency response [1, 2], package delivery in the logistics industry, and temporary events [2]. UAVs are becoming more common and reliable [3] thanks to technological advancements [4, 5], as well as improvements in energy-efficient UAV trajectory-optimization algorithms that are feasible in practice and take the dynamics of the UAV into account as a parametrized method [6, 7, 8]; thus, integrating UAVs into 5G and 6G networks will increase telecommunication coverage and reduce costs for businesses willing to invest in this technology. However, UAVs can easily be hacked by malicious users [9] through their wireless communication channels, which might divert delivery packets from their destinations. This can have disastrous consequences in unfortunate climate events where UAVs are transporting people to hospitals, or in cases of criminal investigations. A jamming attack can lead to loss of UAV communication control, UAV robbery, UAV destruction, and property damage in urban areas, which would generate problems for business leaders. The authors in [10, 11, 12, 13] emphasize the need for research on new robust methods for attack detection and the associated challenges in 5G UAV communications. Obviously, the ability to recognize different patterns in communication connectivity plays an important role in the UAV security paradigm. Therefore, a Self-Identifying Solution against Attacks (SISA) becomes a basic requirement for UAV communication control. According to [14], identification of interference must serve as the basis for selecting anti-jamming solutions.
Statistical models have recently been recognized as a viable way for monitoring network activity in wireless communications and detecting suspicious attacks through the use of wireless parameters. Cheng et al. [15] offer a Bayesian technique for detecting jamming. The authors of [16] present a jamming detection approach based on a Naive Bayes classifier that is trained on a small sample of data and addresses only noise effects. The authors in [17] employ a sequential change-point detection algorithm to detect the state changes in the time series using Bayesian estimators. Lu et al. [18] propose the message invalidation ratio as a new metric to evaluate performance while under jamming attacks in time-critical applications. In [19], the authors offer a jamming detection strategy for Global Navigation Satellite System (GNSS)-based localization that makes use of Singular Value Decomposition (SVD). However, the majority of research does not account for the effects of the wireless propagation channel in their solutions. With respect to machine learning, Krayani et al. use a Bayesian network to identify jammers in [20]. Youness et al. [21] create a dataset based on signal property observations and use Random Forest (RF), Support Vector Machines (SVM), and a neural network algorithm to classify the features extracted from the jamming signal. The authors of [22] also use an SVM and a self-taught learning method to identify attacks in UAV networks. In [23], the authors utilize a Machine Learning Intrusion Detection System (ML-IDS) based on SVM to identify jamming in the Cloud Radio Access Network (C-RAN). Deep Learning (DL) has been used to create models with high-level data abstraction by utilizing numerous layers with activation-function processing. In DL, Deep Neural Networks (DNNs) such as Convolutional Neural Networks (CNNs) [24, 25] are able to define trends and seasonality in time-series data [25, 26].
These characteristics make deep network-based algorithms useful for discovering patterns in wireless networks by analyzing time series and spatial information [27]. The authors in [28] also identify jamming samples using signal-extracted features, but they add another way to detect attacks that employs 2D samples and pre-trained networks, such as AlexNet, VGG-16, and ResNet-50. In [29], the authors also use pre-trained deep networks to develop a three-step framework to identify jamming in radar scenarios. In [30], the features of the signal in the time domain, frequency domain, and fractal dimensions, as well as deep networks, are used to recognize jamming attacks. Nevertheless, Deep Learning (DL) presents its own challenges when applied in the wireless context:

* It is challenging to collect network parameters for DL input layers. All deep learning algorithms need training and testing. In each phase, the DNN's input layer is made up of the parameters of the data samples. The greater the sample coverage in terms of data attributes, the better the DL can identify network features. However, some wireless data may be missing due to the stochastic nature of the communication paths. As a consequence, DL models should be built to tolerate missing parameters, data errors, and out-of-range values in their input layers.
* UAVs have constraints in memory, CPU capabilities, and available battery. Complex algorithms cannot be programmed into their current protocols because DL is iterative in nature, which may prolong system response time. To save memory space, DL algorithms should use techniques that do not rely on increasing the number of layers, nodes, or trainable parameters. To minimize execution time, the algorithms should be optimized.
* DL needs complete or nearly complete training samples to effectively detect network patterns. However, because of the difficulty of collecting so many data points for each potential network condition, the training samples may be relatively restricted. This dictates that DL should be capable of adding additional samples after failing to recognize a new pattern. The fresh samples may help to increase the accuracy of the DL models.
* Furthermore, network engineers/programmers are required to carefully design the DL data formats, since various network parameters have very distinct data properties and formatting requirements. The correct numerical representations and data normalization algorithms must be explicitly stated to combine numerous network parameters into the same DL input layer.

### _Objectives and contributions_

In this paper, we study the attack identification problem for authenticated UAVs in 5G communications. To enable UAVs to cope with jamming recognition, we propose a deep network called DAtR (Deep Attention Recognition) that uses only two observable parameters: the Signal-to-Interference-plus-Noise Ratio (SINR) and the Received Signal Strength Indicator (RSSI).
\begin{table} \begin{tabular}{l l l l} \hline \hline Abbreviation & Definition & Abbreviation & Definition \\ \hline ASD & Azimuth spread of departure & LR & Logistic Regression \\ ASA & Azimuth spread of arrival & LSTM & Long Short-Term Memory \\ A2G & Air to ground & MVA & Majority Voting Algorithm \\ CAT & CatBoost & NLoS & Non-Line-of-Sight \\ CDL & Clustered Delay Line & RF & Random Forest \\ CNN & Convolutional Neural Network & SINR & Signal-to-Interference-plus-Noise Ratio \\ CPU & Central Processing Unit & SISA & Self-Identifying Solution against Attacks \\ C-RAN & Cloud Radio Access Network & SVD & Singular Value Decomposition \\ DAtR & Deep Attention Recognition & SVM & Support Vector Machines \\ DL & Deep Learning & RSSI & Received Signal Strength Indicator \\ DNN & Deep Neural Networks & TSA & Time-Series Augmentation \\ GNB & Gaussian Naive Bayes & UAV & Unmanned Aerial Vehicle \\ GNSS & Global Navigation Satellite System & UMi & Urban Micro Scenario \\ MH-DNN & Multi-Headed Deep Neural Network & XGB & eXtreme Gradient Boosting \\ ML-IDS & Machine Learning Intrusion Detection System & ZSD & Zenith spread of departure \\ LoS & Line-of-Sight & ZSA & Zenith spread of arrival \\ \hline \hline \end{tabular} \end{table} TABLE I: Abbreviation list.

5G communication networks provide these measurements at the receivers in LoS conditions. We add NLoS, as well as probabilistic LoS and NLoS conditions, in the deep network and compare the accuracy for each channel-condition case. We use a neural network that includes attention layers with optimized parameters to decrease the chances of low accuracy when adding users and attackers to the network. We demonstrate that DAtR is able to recognize jamming attacks from other malicious aerial agents in complex urban environments, where terrestrial users are connected to the network.
The final goal is to demonstrate that it is possible to identify attacks in the UAV's receiver using learning techniques, such as deep network architectures, which have significantly fewer layers than well-known pre-trained networks. Moreover, the deep network does not rely on transfer-learning techniques, and it can provide better accuracy than other well-known classifiers. Taking these into account, the main contributions of this work are highlighted in the following:

1. A novel, robust, and effective convolutional-attention deep network for UAVs, named DAtR, that detects jamming in complex environments under LoS and NLoS conditions and that tolerates incomplete raw-data inputs. To the best of the authors' knowledge, this is the first time that an attention model is proposed to detect jamming in LoS, NLoS, and hybrid conditions.
2. A study of deep network architectures for UAVs considering Long Short-Term Memory (LSTM) and Attention layers for 5G UAV communication data.
3. Two new complementary methods, named Time-Series Augmentation (TSA) and Majority Voting Algorithm (MVA), to improve classification accuracy and detect false alarms for deep networks.
4. An accuracy comparison with six other state-of-the-art machine learning classifiers.
5. An analysis of the tradeoffs between accuracy and the latency added by the model while identifying attacks.

The remaining parts of this paper are organized as follows. Section II presents the preliminaries and the attack identification problem for authenticated UAVs; additionally, it describes the transmission and channel models, as well as the observable parameters SINR and RSSI. Section III illustrates the proposed deep network for jamming identification, and Section IV introduces the techniques that improve its robustness. Section V focuses on the accuracy analysis of the network simulation results, on comparisons of parameter configurations, and on comparisons between the proposed deep network and six different classifiers, before presenting our conclusions.
Table I summarizes the abbreviations used in this paper.

## II Preliminaries and Problem Formulation

### _Scenarios_

Fig. 1 illustrates the UAV simulation environment and identifies the adopted X-Y-Z Cartesian coordinates. We consider a scenario where authenticated UAVs fly in a 1 km \(\times\) 1 km square area, while they are connected to a serving small cell through Air-to-Ground (A2G) 5G wireless data links. In this environment, we include authenticated terrestrial users placed on the ground. UAV attackers are placed at randomly assigned predetermined spots. They fly towards the authenticated UAVs inside the coverage area of the small cell. To create our model, we assume that the authenticated UAV transmission power is fixed during each simulation, and we use Clustered Delay Line (CDL) channels including slow and fast fading components to model their propagation conditions. UAV attackers use the same propagation models as the authenticated UAVs [31], [32]. For the terrestrial users, we follow the 5G wireless terrestrial propagation models defined in [32] instead. Fig. 1 shows a configuration example with two authenticated UAVs, three terrestrial users, three UAV attackers, and one small cell. For the sake of simplicity, we model each UAV as a flying antenna; that is, we assume that the antenna position on the UAV is ideal, so that it yields the best possible performance in the simulation results, and that the UAV's size and mechanical components contribute little to the overall experiment. When UAV attackers move, their speed is kept constant, and they head in the direction of the authenticated UAVs, getting closer to them as the simulation time evolves. The attackers' and authenticated UAVs' positions are at higher altitudes and follow the losses according to the standards in [31] and [32]. Our research presumes that terrestrial users may likewise be in fixed locations or have the ability to change their positions according to the mobility models in [33].
The small cells are configured with an antenna height of 10 m, typically seen in urban environments. Table II displays the four different experimental setups we created, which consider multiple combinations of mobility for the UAV attackers and/or terrestrial users. During the simulations, as further explained in Section V, we vary the scenarios to account for different mobility/speed options, as well as different distances between the small cells and authenticated UAVs, UAV attacker power, number of UAV attackers, and number of terrestrial users. The authenticated UAVs try to identify whether any attackers are attempting to disrupt the communication link by using the proposed DAtR mechanism, which is fed with the RSSI and SINR measurements available at the receiver. For each scenario listed in Table II, we create a dataset with 600 files, including up to four attackers and thirty terrestrial users connected at the same time. We group them together to form a complete dataset, composed of 2400 files split into RSSI and SINR parameters in constant LoS condition. Then, we change the channel condition in the dataset and check if it is possible to identify the attackers in persistent NLoS condition, and in randomly combined LoS and NLoS conditions through the 3rd Generation Partnership Project (3GPP) stochastic models in [31] and [32]. In the end, we have 3 datasets with 2400 files each, corresponding to LoS, NLoS, and hybrid LoS/NLoS conditions. Additional information on the dataset's development and possible applications is available in [34, 35]. This is a challenging problem because, in LoS cases, channel variations and terrestrial users increase the difficulty of identifying attacks, while under the NLoS condition, the lower received power makes it more challenging to recognize the UAV attackers.
Finally, let us note that the connection link between the authenticated UAV and the small cell exists during the entire simulation, even in low-SINR circumstances.

### _Communication model_

We consider an A2G connection between the small cell and the authenticated UAVs, as depicted in Fig. 1. The scenario consists of an urban environment where buildings, trees, and other structures may cause significant path loss and shadowing degradation. We define the A2G large-scale effect with two components, i.e., path loss and shadowing, as follows: \[L^{\alpha}(d,f)=PL^{\alpha}(d,f)+\eta^{\alpha}\ [\text{dB}], \tag{1}\] where \(PL^{\alpha}(d,f)\) is the path loss at distance \(d\) from the authenticated UAV to the respective small cell (in km), when transmitting over the carrier frequency \(f\) (in MHz), \(\eta^{\alpha}\) is the shadowing (in dB), and \(\alpha\) reflects the LoS and NLoS conditions, i.e., \(\alpha\in\{\text{LoS},\text{NLoS}\}\). In A2G communications, the path loss \(PL^{\alpha}(d,f)\) in Eq. (1) depends on the high/low altitude configurations and the LoS/NLoS conditions. We compute it as follows: \[PL^{\alpha}(d,f)=\begin{cases}PL^{\text{LoS}}(d,f)&\text{if LoS}\\ PL^{\text{NLoS}}(d,f)&\text{if NLoS}.\end{cases} \tag{2}\] For urban UAV scenarios, the path loss in the LoS condition is given by the maximum between the high- and low-altitude path loss computations: \[\begin{split} PL^{\text{LoS}}(d,f)&=\max(PL_{h}(d,f),PL_{l}(d,f)),\\ PL_{h}(d,f)&=20\log(d)+20\log(f)+20\log(4\pi/c),\\ PL_{l}(d,f)&=30.9+(22.25-0.5\log(h))\log(d)+20\log(f),\end{split} \tag{3}\] where \(c\) is the speed of light (in m/s), \(h\) is the altitude (in m), \(PL_{h}(d,f)\) is the free-space path loss for high altitudes, and \(PL_{l}(d,f)\) is the low-altitude path loss.
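To make the computation concrete, the LoS branch of Eqs. (2)-(3) can be sketched as below. The base-10 logarithm and the unit handling (distance in metres, frequency in Hz) are our assumptions, since the formulas above are written unit-free:

```python
import math

C = 3.0e8  # speed of light (m/s)

def pl_high(d, f):
    """Free-space path loss PL_h of Eq. (3); d in m, f in Hz (assumed units)."""
    return 20 * math.log10(d) + 20 * math.log10(f) + 20 * math.log10(4 * math.pi / C)

def pl_low(d, f, h):
    """Low-altitude path loss PL_l of Eq. (3), with UAV altitude h in m."""
    return 30.9 + (22.25 - 0.5 * math.log10(h)) * math.log10(d) + 20 * math.log10(f)

def pl_los(d, f, h):
    """LoS path loss: the maximum of the high- and low-altitude expressions."""
    return max(pl_high(d, f), pl_low(d, f, h))
```

As a sanity check, at d = 100 m and f = 2.4 GHz, `pl_high` evaluates to the familiar free-space figure of about 80 dB.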
Under the NLoS condition, the path loss is given by the maximum between the LoS path loss and the NLoS path loss expression: \[\begin{split} PL^{\text{NLoS}}(d,f)&=\max(PL^{\text{LoS}}(d,f),PL_{m}(d,f)),\\ PL_{m}(d,f)&=32.4+(43.2-7.6\log(h))\log(d)+20\log(f).\end{split} \tag{4}\]

\begin{table} \begin{tabular}{l c c} \hline \hline Scenario & Attackers configured & Users configured \\ & with speed & with speed \\ \hline None Speed & N & N \\ Attackers Speed & Y & N \\ Users Speed & N & Y \\ Both Speed & Y & Y \\ \hline \hline \end{tabular} \end{table} TABLE II: Speed configuration scenario.

Fig. 1: Simulation scenario.

In our scenario, we assume that all the UAVs fly at a height within the margin \(22.5\,\text{m}<h<300\,\text{m}\). With that in mind, the remaining shadowing component \(\eta^{\alpha}\) in Eq. (1) is defined by 3GPP as an additional variation over the path loss with a certain standard deviation, depending on the LoS/NLoS condition as well. Table III includes the shadowing characterization for LoS and NLoS. To determine the LoS or NLoS condition for each communication link, 3GPP uses a stochastic model. The probability of being in LoS (\(p_{\text{LoS}}\)) is given by: \[p_{\text{LoS}}=\frac{d_{1}}{d_{2D}}+\exp\Big{(}\frac{-d_{2D}}{p}\Big{)}\Big{(}1-\frac{d_{1}}{d_{2D}}\Big{)}, \tag{5}\] where \(p=233.98\log_{10}(h)-0.95\), \(h\) is the height of the UAV, \(d_{1}=\max(294.05\log_{10}(h)-432.94,\,18)\), and \(d_{2D}\) is the 2D distance between the UAV and the small cell. Accordingly, the probability of being in NLoS is \(p_{\text{NLoS}}=1-p_{\text{LoS}}\). For the small-scale fading, we adopt Clustered Delay Line (CDL) models, as in [32] and [31]. 3GPP defines in tabular form the parameters that model the fading, including the powers, delays, and angles of arrival and departure (AoA, AoD), which contain spreads in both the azimuth (ASA, ASD) and zenith (ZSA, ZSD) of each cluster for the UAV scenario.
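The LoS/NLoS draw of Eq. (5) can be sketched as follows. We take a positive decay parameter \(p=233.98\log_{10}(h)-0.95\) (a negative \(p\) would make the exponential term diverge) and assume the usual 3GPP convention that \(p_{\text{LoS}}=1\) whenever \(d_{2D}\leq d_{1}\):

```python
import math

def p_los(h, d2d):
    """LoS probability per Eq. (5); h = UAV height (m), d2d = 2D distance (m).
    Sketch assuming the 3GPP convention p_LoS = 1 for d2d <= d1."""
    p = 233.98 * math.log10(h) - 0.95
    d1 = max(294.05 * math.log10(h) - 432.94, 18.0)
    if d2d <= d1:
        return 1.0
    return d1 / d2d + math.exp(-d2d / p) * (1.0 - d1 / d2d)
```

As expected, the LoS probability decays monotonically with the 2D distance for a fixed UAV height.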
The scenario assumes large- and small-scale fading in the link between the UAVs and the small cells. Given this model, the received power at the UAV with no jammers or interferers can be expressed as: \[P_{uav}=P+G-L^{\alpha}(d,f)-S(n,m), \tag{6}\] where \(P\) is the transmission power, \(G\) is the overall antenna gain in the link considering the UAV and small cell antenna gains, i.e., \(G=(G_{uav}+G_{\text{sc}})\), and \(S(n,m)\) is the small-scale fading effect, which corresponds to the superposition of \(n\) clusters with \(m\) rays in the communication link, as per [32, 31]. Our model considers single antenna elements in the small cell and the UAVs. The simulation in this work uses the CDL-A and CDL-D models for small-scale fading in the NLoS and LoS conditions, respectively. In this case, each CDL comprises 23 clusters with 20 multipath components (rays) each. Each cluster has an AoD and an AoA. These values are used to create the rays' AoAs/AoDs according to the azimuth/zenith arrival/departure spreads (ASA/ASD, ZSA/ZSD), respectively.

\begin{table} \begin{tabular}{l l l} \hline \hline & Std. deviation (dB) & Altitude (m) \\ \hline LoS & \(\max(5\ \times\ \exp(-0.01h),2)\) & \(22.5<h<300\) \\ NLoS & \(8\) & \(22.5<h<300\) \\ \hline \hline \end{tabular} \end{table} TABLE III: Shadowing for UAVs in UMi [32, 31].

Fig. 2: Multi-headed deep neural network (MH-DNN) architecture. Note the switch from LSTM to Attention layers.
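Because Eq. (6) is expressed in dB, the link budget is purely additive; power ratios such as the SINR, however, must be formed in linear units. A minimal sketch (function names are ours):

```python
def received_power_dbm(p_tx_dbm, g_uav_db, g_sc_db, loss_db, fading_db):
    """Received power at the UAV per Eq. (6): gains add, the large-scale
    loss L^alpha and the small-scale term S(n,m) subtract (all in dB/dBm)."""
    return p_tx_dbm + (g_uav_db + g_sc_db) - loss_db - fading_db

def dbm_to_mw(p_dbm):
    """Convert dBm to linear mW before forming power ratios (e.g., SINR)."""
    return 10 ** (p_dbm / 10.0)
```

For example, a 2 dBm transmission with 3 dB gain at each end, a 90 dB large-scale loss, and 4 dB of fading yields a received power of -86 dBm.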
The Signal-to-Interference-plus-Noise Ratio (SINR) between the authenticated UAV and the small cell at distance \(d\), in the presence of interference coming from jammers and terrestrial users, is given by: \[\Gamma_{uav}=\frac{P_{uav}}{\zeta^{2}+\sum_{i=1}^{U}P_{\text{user}}^{i}+\sum_{j=1}^{J}P_{\text{jammer}}^{j}}, \tag{7}\] where \(P_{\text{user}}^{i}\) and \(P_{\text{jammer}}^{j}\) represent the received power at the UAV coming from the \(i\)-th user and the \(j\)-th jammer, respectively, which act as interfering signals (including the channel gain with the authenticated UAV), \(\zeta^{2}\) is the noise power, \(U\) is the total number of terrestrial users transmitting at the same time as the authenticated UAV, and \(J\) is the number of jammers transmitting in the scenario. Next, the RSSI includes the linear average of the total received power in Watts from all sources, including co-channel serving and non-serving cells, adjacent channel interference, thermal noise, etc. Considering \(\Lambda_{0}\) as the RSSI value at a reference distance, we have \[\Lambda=\Lambda_{0}-10\rho\log(d), \tag{8}\] where \(\rho=L^{\alpha}(d,f)+S(n,m)\) includes the path loss and fast fading components, and \(d\) is the link distance.

### _Problem formulation and dataset_

The SISA goal for the authenticated UAV is to quickly identify malicious changes in the received power caused by UAV jammers in the environment. For that, we use a small deep network, where the number of trainable parameters \(T\) is smaller than 100K (\(T<100000\)), composed of a combination of layers including CNNs, Attention, drop-out, and flatten, among others. The details of the DNN architecture are provided in Section III. First, we study the case where UAV attackers try to disrupt the communication when the UAV and the small cell can directly see each other (LoS condition). Then, we simulate the NLoS condition, where buildings and other elements in the city may block the direct communication between the UAV and the small cell.
Finally, we study a probabilistic combination of LoS and NLoS conditions. As such, we assume the following in the three datasets we create for the experiment:

* LoS: The UAV is always in LoS condition throughout all the simulations available in the dataset;
* NLoS: The UAV is in NLoS condition for the entire time during all the simulations included in the dataset;
* LoS and NLoS: The link between the UAV and the small cell is in either LoS or NLoS condition with probabilities \(p_{\text{LoS}}\) and \(p_{\text{NLoS}}=1-p_{\text{LoS}}\), respectively (according to Eq. (5)), for all the simulations in the dataset.

Table II describes the four scenarios in each dataset. The differences between the scenarios inside the dataset relate to the following parameters: the UAVs' and terrestrial users' mobility and speed, the distance between the small cell and the authenticated UAVs, the number of attackers and their power, and the number of terrestrial users in the network. It is important to note that the scenarios in the dataset, such as Attackers Speed, Users Speed, Both Speed, and None Speed, are unbalanced, meaning that the proportion between attack and no-attack samples in the raw data differs. For example, the dataset contains data for 1, 2, 3, and 4 attackers, but only a single configuration (0 attackers) for the no-attack case. Therefore, in order to avoid bias in the classification, it is necessary to implement countermeasures to balance the data during the pre-processing phase. Our deep network design aims to achieve maximum performance. To this end, we compare the use of LSTM and Attention layers. We improve the capabilities of the Multi-Headed Deep Neural Network (MH-DNN) by integrating the TSA and MVA techniques, which results in the proposed DAtR. We benchmark our DAtR against six other well-known ML algorithms and analyze other parameters, such as the optimum window size, the attack accuracy when the deep network sees the data for the first time during the test, and the latency added due to the DAtR processing time.
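As one possible countermeasure for the class imbalance mentioned above, the majority class can be undersampled before training. The paper does not fix a balancing method, so the sketch below (undersampling to the minority-class size) is an assumption:

```python
import random

def balance_binary(samples, labels, seed=0):
    """Undersample the majority class so that attack (label 1) and
    no-attack (label 0) samples appear in equal numbers."""
    rng = random.Random(seed)
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    n = min(len(pos), len(neg))
    out = [(s, 1) for s in rng.sample(pos, n)] + [(s, 0) for s in rng.sample(neg, n)]
    rng.shuffle(out)  # avoid ordering bias when batching
    return out
```

With 7 attack and 3 no-attack samples, for instance, the result contains 3 of each class.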
## III Convolutional-Attention-Based Attack Detection

Our SISA model is based on an MH-DNN; the proposed architecture is shown in Fig. 2. It contains (i) three CNN layers, (ii) an Attention or an LSTM layer, and (iii) a drop-out layer in each head. The body of the deep network consists of: (i) a flatten layer, (ii) a concatenate layer, (iii) a reshape layer, (iv) three CNN layers, (v) a flatten layer, (vi) a drop-out layer, (vii) a fully connected layer, and (viii) the output layer for two-class classification. Although RSSI and SINR measure different parameters from the telecommunication perspective, both values may be related to each other; for example, when RSSI increases, SINR may decrease. Using our proposed MH-DNN, we can extract features from both parameters simultaneously in each head at each window size. The window size defines the amount of data the deep network algorithm receives as input in each head. First, we convolve both signals in each head through the three CNN layers, as Fig. 2 indicates. This operation creates a convolution kernel that is convolved with the layer input over a single temporal dimension to produce a tensor of outputs. Thanks to the configuration of strides and kernels, this operation returns a single vector with several channels (i.e., \(1\times channel\)). The result from the convolutional layers is processed in parallel in the Attention layer, where it is possible to check the states before the current state in the window sequence to understand the attack pattern. The layer uses an auxiliary vector that stores the previous hidden states and increases or decreases the weights of the layer by the sum of the row vectors that hold the information using both the previous and the current states [36]. In our case, we use 8 heads in the Attention layer to capture different contextual information.
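As a sanity check on the head dimensions, one can track how the three CNN layers (kernel sizes 8/4/3 and strides 2/2/1, cf. Table IV) shrink the input window; "valid" padding is assumed, since the padding mode is not stated:

```python
def conv1d_out_len(length, kernel, stride):
    """Output length of a 1D convolution with 'valid' padding (assumption)."""
    return (length - kernel) // stride + 1

def head_out_len(w, layers=((8, 2), (4, 2), (3, 1))):
    """Sequence length after the three Conv layers of Table IV
    for an input window of w samples."""
    for kernel, stride in layers:
        w = conv1d_out_len(w, kernel, stride)
    return w
```

Under these assumptions, a window of w = 100 samples per head enters the Attention/LSTM layer as a 20-step sequence, and w = 300 as a 70-step sequence.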
The drop-out layer removes some of the features from the attention layer to avoid over-fitting; in other words, it prevents the deep network from memorizing the input parameters instead of learning the patterns in the sequences. In the deep network body illustrated in Fig. 2, after the drop-out layer, the remaining features from RSSI and SINR are flattened, concatenated, and reshaped. The concatenation procedure merges the features extracted from both RSSI and SINR in each head, while the flatten and reshape methods keep the tensor sizes consistent. Next, we use the three CNN layers again, but invert the positions of the flatten and drop-out layers. Then, we apply dense and softmax layers. The classification happens in the softmax layer after feature extraction and the learning of representations of the input data. A fair comparison between the overall performance of the LSTM and the Attention layer requires that the input and the output of both layers have the same size and shape. To guarantee that, we add a global-average layer after the Attention layer, and we adopt an LSTM with 16 units. Table IV shows the main parameters of the deep network.

## IV Improvements in MH-DNN Robustness

In this section, we introduce the TSA method combined with the MVA to improve the performance of our deep neural network under the NLoS condition, which tends to present lower total received power compared to the LoS condition. Fig. 3 summarizes the main additions to the MH-DNN needed to include these two new features. Notice that the MH-DNN combined with the TSA and MVA results in the proposed DAtR.

### _Time-series augmentation technique_

TSA aims to supplement the original dataset with additional and unrelated samples for the MH-DNN to process further.
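In code, the flipping scheme (summarized later in Table V) can be sketched as follows: each original (RSSI, SINR) pair yields four augmented samples by reversing neither, one, or both sequences.

```python
def tsa_augment(rssi, sinr):
    """Generate the four TSA samples of Table V from one (RSSI, SINR) pair."""
    flip = lambda seq: list(reversed(seq))
    return [
        (list(rssi), list(sinr)),  # Sample 1: same / same
        (list(rssi), flip(sinr)),  # Sample 2: same / flipped
        (flip(rssi), list(sinr)),  # Sample 3: flipped / same
        (flip(rssi), flip(sinr)),  # Sample 4: flipped / flipped
    ]
```

Since flipping preserves the attack/no-attack label, the four samples inherit the label of the original occurrence.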
We create the additional data using data augmentation and flipping techniques that are applied to the training set to increase data diversity, to prevent over-fitting on the test set, and to convert the binary classification into 3 classes in the majority voting calculation of Section IV-B. As Fig. 3 shows, we convert the input samples into four augmented samples. In Table V, we display an example of how to generate the four new augmented samples according to TSA. By randomly inverting each RSSI and SINR sequence, we are able to generate four distinct augmented samples from each occurrence. Let us point out that other data augmentation strategies could be considered to generate the data as well. After preprocessing the dataset, which results in the conversion of the data to augmented samples with an appropriate rolling window, each augmented sample has two data sequences representing the RSSI and the SINR. Then, we feed the augmented samples to the MH-DNN, as in Fig. 3.

\begin{table} \begin{tabular}{l l} \hline \hline Deep network Parameters & Value \\ \hline Base learning rate & 2.5x10\({}^{-2}\) \\ Base batch size & 32 \\ \hline Conv-1 filters, kernel size, strides & 8, 8, 2 \\ Conv-2 filters, kernel size, strides & 8, 4, 2 \\ Conv-3 filters, kernel size, strides & 8, 3, 1 \\ Self-Attention head-number, key-dimensions & 8, 8 \\ (or LSTM) & 16 \\ \hline Conv-1 filters, kernel size, strides & 8, 8, 2 \\ Conv-2 filters, kernel size, strides & 8, 4, 2 \\ Conv-3 filters, kernel size, strides & 8, 3, 1 \\ Fully connected layer & 100 \\ Drop-out & 0.4 \\ Softmax & 2 \\ \hline \hline \end{tabular} \end{table} TABLE IV: Deep Network Configuration Parameters.

Fig. 3: Deep Attention Recognition (DAtR), including the TSA and MVA techniques (Methods 1 and 2).

### _Proposed majority voting algorithm_

DAtR uses TSA and MVA as preprocessing and postprocessing techniques, respectively.
After feature classification is done in the softmax layer, we use the MVA to reclassify the features in order to obtain better accuracy. The MVA is divided into 2 methods (see Fig. 3). Initially, in Method 1, the MVA takes as input the one-hot-encoded probability values between 0 and 1 from the MH-DNN classification prediction, rounds them, then calculates the mean over the samples created by the previously explained TSA method, and uses one-hot encoding to classify again. If the feature is classified in class 1 (attack) or 2 (no attack), the procedure finishes; this classification achieves high accuracy and minimal false alarms, and the amount of features in class 3 (no decision) is low. However, if the feature is classified in class 3, we try to reclassify it using other ML algorithms. In Method 2, we try to classify the features as class 1 or 2 by inverting the order of the operations: instead of rounding first and then calculating the mean, we calculate the mean and then round it. Calculating the mean first decreases the precision of the encoded features and consequently the overall accuracy, which increases the chances of a false alarm and yields a higher degree of unclassified data (ud). If, after Method 2, the feature cannot be classified in class 1 or 2, we apply other well-known ML algorithms to classify the features that Methods 1 and 2 could not classify. Notice that although the proposed DAtR is efficient under LoS channel conditions (as will be demonstrated in Section V), the motivation for using preprocessing and postprocessing techniques in the MH-DNN arises from the fact that the attack detection accuracy might decrease in cases of extremely low received power, as happens under NLoS channel conditions. As such, we target increased accuracy by applying TSA and MVA. In the end, DAtR proves to be efficient in LoS conditions as well.
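A minimal sketch of Methods 1 and 2 for one original sample, given the four "attack" probabilities produced by the MH-DNN for its TSA-augmented copies. The class encoding (1 = attack, 2 = no attack, 3 = no decision) follows the text, while the exact tie handling is our assumption:

```python
def method1(probs):
    """Method 1: round each augmented prediction first, then average.
    probs: 'attack' probabilities for the four TSA samples of one input."""
    votes = [round(p) for p in probs]
    m = sum(votes) / len(votes)
    if m > 0.5:
        return 1  # attack
    if m < 0.5:
        return 2  # no attack
    return 3      # 2-2 draw -> no decision

def method2(probs):
    """Method 2: average first, then round -- less precise, per the text."""
    m = sum(probs) / len(probs)
    if abs(m - 0.5) < 1e-9:
        return 3  # exact tie -> no decision (assumed handling)
    return 1 if m > 0.5 else 2
```

With these rules, a 3-1 vote among the augmented samples decides the class in Method 1, and a 2-2 draw falls into class 3.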
Algorithm 1 illustrates the details of Methods 1 and 2, where \(\tau\) is the main sample and \(T\) represents the four augmented samples for the \(\tau\) sample. The algorithm tries to classify the deep network features when the categorization into the classes is not possible in the softmax layer. Basically, if 3 out of the 4 augmented samples classify a feature into a specific class, then this feature is added to that class. In the case of a draw, the feature goes into class 3.

## V Simulation Results

In this section, we present the performance evaluation of the proposed DAtR. In particular, we provide five experimental outcomes related to the robustness of the DAtR. As a first step, we conduct a comparative study on the efficacy of different layers, such as Attention and LSTM, in the MH-DNN architecture. Then, we study the effect of the window size on the DAtR's accuracy. In addition, we examine the performance of the proposed DAtR when we remove parts of the dataset from training, and we benchmark the DAtR's accuracy against six machine learning alternatives. All these experiments evaluate LoS and NLoS channel conditions separately. For the evaluation of the DAtR's performance, we compare the overall accuracy based on the various parameters available in the dataset. Initially, we analyze the accuracy as a function of the number of attackers and the attackers' power. After that, we analyze the accuracy as a function of the attackers' distance and power. These simulations place the three conditions presented in the paper side by side: LoS, NLoS, and a combination of LoS/NLoS.

\begin{table} \begin{tabular}{c c c} \hline \hline & RSSI & SINR \\ & Sequence & Sequence \\ \hline Sample 1 & Same & Same \\ Sample 2 & Same & Flipped \\ Sample 3 & Flipped & Same \\ Sample 4 & Flipped & Flipped \\ \hline \hline \end{tabular} \end{table} TABLE V: Output of the TSA.

### _The window size impact_

Fig.
4 (a) and (b) show the window size impact on the final accuracy for LoS and NLoS conditions, using the MH-DNN (no improvements, i.e., no TSA and no MVA). Fig. 4 (a) indicates that the accuracy range for w = 100 is roughly 65% to 90%, whereas the range for w = 300 is approximately 75% to 95%. In the NLoS case (see Fig. 4 (b)), the MH-DNN achieves a range of about 67% to 85% when w = 100, and the percentage ranges from 70% to 87% when w = 300. Both figures demonstrate that the accuracy is directly proportional to the window size, independently of the channel condition. It is worth noting that there is a small tradeoff between the time that it takes to calculate the estimate for each class and the available resources (i.e., the window size), as will be demonstrated later in Fig. 11.

### _Attention vs LSTM_

Both the LSTM and Attention layers try to solve the same problem: they keep track of the old input sequences in the current node or state. For example, the information flowing from \(t_{0}\) to \((t-n)\) is available in a modified/partial form in the state at time \(t\). The algorithm uses the modified form to establish a relationship with the incoming data.

\begin{table} \begin{tabular}{l l} \hline \hline Scenario Parameters & Value \\ \hline Terrestrial Users & \(0,3,5,10,20,30\) \\ Authenticated UAVs & 1 \\ Small Cells & 10 \\ Small cell height & 10 m \\ Attackers & \(0,1,2,3,4\) \\ Speeds & 10 m/s \\ Modulation scheme & OFDM \\ Small cell power & 4 dBm \\ Authenticated UAV power & 2 dBm \\ Attackers power & 0, 2, 5, 10, 20 dBm \\ Authenticated UAV position & URD* \\ Attackers position & URD* \\ Small cells position & URD* \\ Scenario & UMi \\ Distance & \(100,200,500,1000\) m \\ Simulation time & 30 s \\ \hline \hline \end{tabular} * Uniformly Random Distributed \end{table} TABLE VI: Network Parameters.

Fig. 4: Impact of the window size w=100 and w=300: (a) in LoS, (b) in NLoS.
We opt to compare both LSTM and Attention in terms of window size and final accuracy improvements in LoS and NLoS conditions for each proposed algorithm in the paper. The trainable parameters do not change between the different window sizes or conditions. In our example, the MH-DNN configured with LSTM has 59984 trainable parameters, compared to 64368 for the one with Attention. The majority of well-known pre-trained deep neural networks, such as VGG [37] and ResNet [38], employ more than one million trainable parameters in their architectures, which increases the overall training time and requires more computation capabilities. During the test, we only interchange the Attention and LSTM layers, using the settings in Table IV and the proposed DAtR. Table VII shows the differences in the overall accuracy between the Attention and LSTM layers for different window sizes (ranging from w=50 to w=300), different channel conditions (LoS, NLoS, and both) and the three proposed methods (DNN, DNN+Method 1, DNN+Method 2). Results are compared to the reference XGB algorithm, for different window sizes and channel conditions, available in Table VIII.

\begin{table} \begin{tabular}{l|c c c c} & w=50 & w=100 & w=200 & w=300 \\ \hline LoS & 83.27 & 83.69 & 85.57 & 86.33 \\ \hline NLoS & 83.04 & 82.58 & 83.41 & 80.58 \\ \hline Both & 79.65 & 79.47 & 78.40 & 78.85 \\ \hline \end{tabular} \end{table} TABLE VIII: Accuracy measurements using XGB for each condition and for each window size.

Fig. 5: Comparison between Attention and LSTM algorithms for w=50 and w=300, users \(=\) 20, number of attackers \(=\) 2, attacker power \(=\) 5 dBm (a) In LoS, (b) In NLoS.

\begin{table} \begin{tabular}{l l l c c c c} \hline w & & & 50 & 100 & 200 & 300 \\ \hline \multirow{6}{*}{DNN} & \multirow{2}{*}{LoS} & **Attention** & **82.26** & 83.04 & **88.35** & **89.59** \\ & & LSTM & 79.62 & **84.67** & 86.51 & 88.06 \\ & \multirow{2}{*}{NLoS} & **Attention** & **72.58** & **73.00** & **74.12** & **75.60** \\ & & LSTM & 69.43 & 71.46 & 65.76 & 68.67 \\ & \multirow{2}{*}{Both} & **Attention** & **76.31** & **79.59** & **79.19** & **82.77** \\ & & LSTM & 76.07 & 78.19 & 77.10 & 77.29 \\ \hline \multirow{6}{*}{DNN+Method 1} & \multirow{2}{*}{LoS} & **Attention** & **83.88** & 84.31 & **88.48** & **89.98** \\ & & LSTM & 83.65 & **84.38** & 87.10 & 88.34 \\ & \multirow{2}{*}{NLoS} & **Attention** & **82.81** & 82.53 & **82.94** & **83.07** \\ & & LSTM & 81.87 & **83.05** & 81.27 & 80.19 \\ & \multirow{2}{*}{Both} & **Attention** & **80.50** & **81.27** & **79.13** & **83.66** \\ & & LSTM & 79.82 & 79.67 & 78.95 & 79.02 \\ \hline \multirow{6}{*}{DNN+Method 2} & \multirow{2}{*}{LoS} & **Attention** & **84.10** & 84.77 & **89.99** & **90.80** \\ & & LSTM & 81.34 & **86.26** & 88.47 & 89.49 \\ & \multirow{2}{*}{NLoS} & **Attention** & **75.66** & **76.07** & **77.13** & **79.00** \\ & & LSTM & 72.20 & 73.85 & 68.60 & 73.10 \\ & \multirow{2}{*}{Both} & **Attention** & **78.61** & **81.52** & **80.51** & **84.65** \\ & & LSTM & 78.28 & 80.11 & 79.22 & 79.59 \\ \hline \end{tabular} \end{table} TABLE VII: Differences in the overall accuracy for each condition and for each window size.
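The trainable-parameter counts quoted above (59984 vs. 64368) depend on the full MH-DNN configuration, which is not reproduced here, but the standard formula for the parameters of a single LSTM layer is easy to sanity-check. A minimal sketch with illustrative layer sizes (not the exact MH-DNN dimensions):

```python
def lstm_param_count(input_dim, units):
    """Trainable parameters of a single LSTM layer: 4 gates, each with an
    input kernel (input_dim x units), a recurrent kernel (units x units)
    and a bias vector (units)."""
    return 4 * ((input_dim + units) * units + units)

# Illustrative sizes, not the exact MH-DNN configuration:
print(lstm_param_count(10, 20))  # 2480
```

Counts of this order, tens of thousands rather than the millions used by VGG or ResNet, are what keep the training time and compute requirements of the MH-DNN small.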
The XGB performs poorly when the hybrid dataset is applied to the algorithm, in contrast to the results obtained with the DNN and with the DNN combined with Methods 1 and 2. Considering the same condition, better results are seen with the Attention layer, except for DNN+Method 1 in the NLoS condition with w=100, where the difference is around 0.5 in favor of LSTM. Moreover, we notice that an increase in the window size has a positive impact on the overall accuracy when using Attention layers, but for LSTM in NLoS conditions it has the opposite effect when w \(>\) 100. Patterns in NLoS are generally harder to extract due to the low power received by the authenticated UAV, and in this particular case increasing w beyond 100 decreases the overall accuracy. With respect to the LoS, NLoS and Both conditions, LoS presents the best accuracy because there is no decrease in the received power due to obstacles and objects between the authenticated UAV and the small cell. Therefore, the deep network could learn the attacker pattern even in cases with channel variations and more users in the network. The combined condition presents the second-best results and, as expected, NLoS presents the worst. Notice that by adding more nodes and layers the deep network can learn this pattern; however, there is a tradeoff in terms of memory and energy consumption, which is not within the scope of this work. The greatest impact of the MVA and TSA on the DNN is in NLoS conditions: Method 1 increases the overall accuracy by more than 10% when using LSTM and by approximately 10% with Attention. Among the methods in the study, MH-DNN \(+\) Method 2 performs better for LoS, whereas MH-DNN \(+\) Method 1 performs better for NLoS conditions. Fig. 5 depicts the accuracy against the distance between the authenticated UAV and the small cell in the network for two different window sizes using Attention and LSTM layers, for (a) LoS and (b) NLoS channel conditions.
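The MVA mentioned above smooths per-window decisions by voting over consecutive predictions. A minimal sketch, under the assumption that it is a sliding majority vote over the last k window-level outputs (the paper's exact MVA, and the TSA, may differ in detail):

```python
from collections import Counter

def majority_vote(preds, k):
    """Smooth a stream of per-window class decisions with a sliding
    majority vote over the last k predictions (assumed form of the MVA)."""
    out = []
    for i in range(len(preds)):
        recent = preds[max(0, i - k + 1) : i + 1]
        out.append(Counter(recent).most_common(1)[0][0])
    return out

# A single spurious 'attacker present' flag (1) is voted away:
print(majority_vote([0, 0, 1, 0, 0, 0], k=3))  # [0, 0, 0, 0, 0, 0]
```

Filtering isolated mispredictions this way is one plausible reason the post-processing helps most in NLoS, where individual window decisions are noisiest.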
For each condition, we present the results for the MH-DNN with no additional methods. Fig. 5(a) shows that, for LoS, both Attention and LSTM configurations with window size 300 (w=300) outperform the configurations with window size 50 (w=50). In the NLoS condition, see Fig. 5(b), the DNN embedded with the Attention layer has the best performance independently of the window size.

### _Comparison with other machine learning classifiers_

Fig. 6 compares the proposed DAtR (composed of the MH-DNN, Method 1 and Method 2) with three other machine learning methods, namely RF, CAT and XGB, over the distances between the small cell and the authenticated UAV available in the dataset, in LoS and NLoS conditions separately. We eliminate GNB and LR from the charts because they fail to achieve 70% accuracy across the range of distances, and SVM because its performance is comparable to the other ML algorithms for shorter distances but drops to 75% accuracy for d \(>\) 200 in LoS conditions. In Fig. 6 (a) we show that even our worst classifier, the MH-DNN embedded with the Attention layer alone, consistently outperforms well-known classifiers such as RF, CAT, and XGB, while Methods 1 and 2 provide an additional improvement, especially for large distances. CAT and XGB perform similarly, while RF decreases its overall accuracy for large distances. In general, compared to all the accuracies obtained from the other algorithms, the proposed DAtR achieves an accuracy ranging from 80% up to 95% over all distance ranges. The mean accuracy the DAtR achieves is 89.97%, while RF, CAT and XGB achieve 83.24%, 85.60%, and 86.33%, respectively. Fig. 6 (b) presents the results for the NLoS channel condition. This figure shows that Method 1 is in this case more effective at short distances. However, note that the DAtR and Method 2 outperform the benchmark schemes for short distances, but they lose accuracy for higher distances.
As such, Method 1 appears to achieve a good compromise between small and large distances.

Fig. 6: Comparison between the proposed DNN, DNN \(+\) Method 1, DNN \(+\) Methods 1 and 2, RF, CAT, XGB. w=300, users \(=\) 20, number of attackers \(=\) 2, attacker power \(=\) 5 dBm. (a) In LoS, (b) In NLoS.

Overall, comparing both charts, it is clear that DAtR can more easily identify attackers in LoS, but it can also be deployed in NLoS, or in mixed conditions, depending on the link distance.

### _Confusion matrices_

Fig. 7 (a) and (b) illustrate the confusion matrices resulting from the proposed algorithms (MH-DNN, MH-DNN+Method 1 with an ML algorithm, and MH-DNN+Method 2 with an ML algorithm) for LoS and NLoS, respectively. We utilize XGB as the ML algorithm for Methods 1 and 2, and we compare the results of Method 1 and Method 2 with those of the MH-DNN alone. We notice that MH-DNN+Method 2+XGB increases the accuracy in LoS scenarios, while MH-DNN+Method 1+XGB is more suitable for NLoS settings. For example, Fig. 7(a) highlights the difference in True Negative (True Neg) rates when we subtract the Method 1 and Method 2 values from those of the Deep Network (MH-DNN): Method 1+XGB results in 0.64% lower accuracy, while Method 2+XGB gives 0.38% higher accuracy. Also, Method 1 increases the chances of a False Positive (False Pos) by 0.63%, while Method 2 decreases the likelihood of a False Pos by 0.39%. In Fig. 7(b), we see the opposite effect: Method 1+XGB has better values for True Neg and False Pos than Method 2+XGB when comparing both to the Deep Network. When it comes to LoS, MH-DNN+Method 2 performs better than the other approaches in this study, but MH-DNN+Method 1 is the clear winner for NLoS.
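The True Neg and False Pos deltas discussed above can be reproduced from raw predictions with a few lines; the labels and predictions below are toy values, not the paper's data.

```python
import numpy as np

def tn_fp_rates(y_true, y_pred):
    """True Negative and False Positive percentages over the samples
    whose ground truth is the negative class (no attacker)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    neg = y_true == 0
    tn = 100.0 * np.mean(y_pred[neg] == 0)
    return tn, 100.0 - tn

# Toy labels: 0 = no attacker, 1 = attacker present (not the paper's data)
y_true   = [0, 0, 0, 0, 1, 1]
method_a = [0, 0, 0, 1, 1, 1]  # one false positive
method_b = [0, 0, 0, 0, 1, 0]  # no false positives
delta_tn = tn_fp_rates(y_true, method_b)[0] - tn_fp_rates(y_true, method_a)[0]
print(delta_tn)  # method_b gains 25.0 points of True Neg rate
```

Subtracting one method's rates from another's in this way is exactly the comparison made between the confusion matrices of Fig. 7, only there it is averaged over all scenarios in the dataset.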
Taking into account the best outcomes obtained so far, namely the MH-DNN configured with Attention plus Method 2 for LoS, or plus Method 1 for NLoS, together with the XGB algorithm, we use this configuration, except when explicitly mentioned, to show a detailed performance evaluation of DAtR considering all cases and parameters available in the dataset. In the combined condition, we use the MH-DNN configured with Attention plus Method 1 for NLoS and the XGB algorithm. The accuracy presented in the confusion matrices is the average accuracy over all the scenarios in the dataset, and the specific cases can deviate from it significantly, as will be presented in the next sections.

Fig. 7: Overall Confusion Matrices of the proposed MH-DNN, MH-DNN+Method 1 and ML alg, and MH-DNN+Method 2 and ML alg, w\(=\)300, (a) In LoS, (b) In NLoS.

### _Attacker number and power_

Fig. 8 presents the accuracy over the number of attackers and their power, in (a) LoS, (b) Combined and (c) NLoS conditions. If we take a close look at the individual charts, we see that the accuracy increases with more attackers and more power for the LoS and Combined conditions. In the NLoS case, the low accuracy is centered on the scenario with 2 attackers when both of them are configured with power lower than 5 dBm; it increases for both more and fewer attackers, and as the attacker power rises. In the LoS case, the scenario with 1 attacker is the hardest for the proposed algorithms to learn. In the Combined condition, the 0- and 1-attacker scenarios are complicated for the algorithms to learn, and for the NLoS condition the most complicated scenario is with 2 attackers. In the LoS and Combined cases, the changes in power yield improvements in the accuracy of around 5%. The low accuracy when there are fewer than 3 attackers in the scenario might be explained by the stochastic channel models used in 5G UAV scenarios, where the channel adjustments experienced by the UAV can change by approximately 30 dB from one channel update to another.
The number of users affects the total received power, reducing the DAtR's overall accuracy. In the NLoS case, the fact that no direct rays reach the receiver impacts the overall received power and decreases the accuracy results. Comparing all the results, the NLoS simulation presents the lowest overall accuracy of all conditions, but the best accuracy it can achieve is 93%, with 4 attackers configured with 20 dBm power.

### _Comparison with data that is not in the training_

Fig. 9 (a) and (b) depict the accuracy results as a function of the attacker power when the number of users in the network is U=20, for distance \(=500\) m and number of attackers \(=2\). We remove the data related to attacker powers of 2 and 10 dBm from the training; therefore, the deep network sees both these pieces of data for the first time during testing. We executed this simulation for LoS and NLoS conditions. Fig. 9(a) demonstrates the outcomes for LoS. We notice a proportional decrease in all samples when we compare training with all samples against training with the removed samples; this difference is around 1.5%. For the NLoS case, illustrated in Fig. 9(b), there is a difference larger than 0.5% only when the attacker is set up with 20 dBm power. There were no significant differences for the other cases, which shows the robustness of our proposed algorithm.

### _Attacker power and distance_

Fig. 10 shows the accuracy over the distance and attacker power ratios during training for the three conditions: LoS, Combined, and NLoS. In all three conditions, attackers with lower power are harder for the deep network to recognize. In the LoS condition, the deep network can identify attacks with 96% accuracy even though the base station is positioned 1000 m away from the authenticated UAV and the attacker power is lower than 5 dBm. There are improvements when the power increases, but we achieve better results when increasing the distance.
We believe that the interference from the users decreases at this position, which is why the deep network could achieve high accuracy. In the Combined condition, we see the impact of power on accuracy more clearly than in LoS. For example, when the attacker power is set to 15 dBm, the accuracy is 85% when the distance between the authenticated UAV and the base station is 100 m. We see a peak accuracy when the distance is 500 m and the attacker power is 15 dBm. While it is easier to identify attackers in the other conditions when the attacker power is higher than 5 dBm, in the NLoS condition the attacker power needs to reach 15 dBm for the deep network to achieve approximately 84% accuracy.

Fig. 8: Accuracy vs Attackers Number and Attacker Power data during the test, users \(=\) 20, distance \(=\) 100, w \(=\) 300, (a) LoS only, (b) LoS and NLoS, (c) NLoS only.

Fig. 9: Comparison with data that is not in the training (a) In LoS, (b) In NLoS.

Fig. 10: Accuracy vs Attackers Power and Attacker Distance test data, window_size=300, number of attackers=2, users=20 (a) LoS only, (b) LoS and NLoS, (c) NLoS only.

### _Average processing time_

Fig. 11 compares the average prediction time after training, i.e., the time to classify each sample, for the three baseline classifiers (RF, CAT and XGB) and the proposed MH-DNN configured with Attention or LSTM, for different window sizes. Table IX shows the average values with their respective standard deviations. The prediction time is an important metric because it shows the latency in discovering attacks when using such algorithms in the UAVs. All timing tests were done on a system with an Nvidia RTX 3090 GPU. In Fig. 11, we can see that the window size has a small effect for XGB and for the MH-DNN configured with Attention or LSTM. However, it has a bigger impact on CAT and RF: the prediction time for CAT increases fourfold when the window size is 300.
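The per-sample prediction latency compared in Fig. 11 can be measured with a simple wall-clock harness; the code below is a sketch with a stand-in classifier, not the authors' benchmarking setup (which ran on an RTX 3090 GPU).

```python
import statistics
import time

def avg_prediction_time(predict, samples):
    """Mean and standard deviation of the per-sample prediction latency,
    mirroring how the average processing time of Fig. 11 is obtained."""
    times = []
    for s in samples:
        t0 = time.perf_counter()
        predict(s)
        times.append(time.perf_counter() - t0)
    return statistics.mean(times), statistics.stdev(times)

# Stand-in classifier (a real model's predict call goes here)
mean_t, std_t = avg_prediction_time(sum, [range(300)] * 100)
print(f"{mean_t * 1e6:.2f} us +/- {std_t * 1e6:.2f} us per sample")
```

Reporting the mean together with its standard deviation, as in Table IX, matters here because per-call latencies on a shared system fluctuate with scheduling and memory effects.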
For RF, the impact of the window size is smaller than for CAT, but the prediction time still increases by approximately 10% for the same window size (w=300). There is a minor difference between the LSTM and Attention prediction times. The RF algorithm displays the highest prediction time. Our proposed method offers a good tradeoff between accuracy and prediction time.

## VI Conclusion

This paper studied the attack self-identification problem in 5G UAV networks, assuming scenarios with LoS, NLoS and a probabilistic combination of both conditions. Specifically, we proposed a small deep network system, named DAtR, that can cope with this problem, and we verified its accuracy through extensive simulation campaigns. Our research examined five major implementation issues related to the deep network: how key parameters, such as the window size, impact the accuracy; the impact of different layers (i.e., Attention vs. LSTM) on the overall performance; its performance compared to other machine learning alternatives for classification; the robustness of the deep network on data that is not available during training; and the prediction timing of the proposed DAtR. We showed that the proposed system, compared to six popular classifiers available in the literature, is a competitive option for attack classification for all distance ranges in LoS conditions and for short-range distances in NLoS conditions. The comparison between LSTM and Attention shows that increasing the window size in the LSTM setup reduced the performance, while with Attention it boosted it. Finally, we presented performance graphs for each case study. Results demonstrated that our deep network reliably identifies attacks across all possible configurations.
It was simpler to identify attacks in simulations with three or more attackers, fewer users, and a power of 10 dBm or higher. The identification accuracy was also affected by the three-dimensional distance between the small cell and the authenticated UAV. Here, the chances of identification improved with increasing distances since there was less interference to contend with.
2305.00015
Determination of the neutron skin of $^{208}$Pb from ultrarelativistic nuclear collisions
Emergent bulk properties of matter governed by the strong nuclear force give rise to physical phenomena across vastly different scales, ranging from the shape of atomic nuclei to the masses and radii of neutron stars. They can be accessed on Earth by measuring the spatial extent of the outer skin made of neutrons that characterises the surface of heavy nuclei. The isotope $^{208}$Pb, owing to its simple structure and neutron excess, has been in this context the target of many dedicated efforts. Here, we determine the neutron skin from measurements of particle distributions and their collective flow in $^{208}$Pb+$^{208}$Pb collisions at ultrarelativistic energy performed at the Large Hadron Collider, which are sensitive to the overall size of the colliding $^{208}$Pb ions. By means of state-of-the-art global analysis tools within the hydrodynamic model of heavy-ion collisions, we infer a neutron skin $\Delta r_{np}=0.217\pm0.058$ fm, consistent with nuclear theory predictions, and competitive in accuracy with a recent determination from parity-violating asymmetries in polarised electron scattering. We establish thus a new experimental method to systematically measure neutron distributions in the ground state of atomic nuclei.
Giuliano Giacalone, Govert Nijs, Wilke van der Schee
2023-04-28T18:00:00Z
http://arxiv.org/abs/2305.00015v2
# Determination of the neutron skin of \({}^{208}\)Pb from ultrarelativistic nuclear collisions ###### Abstract Emergent bulk properties of matter governed by the strong nuclear force give rise to physical phenomena across vastly different scales, ranging from the shape of atomic nuclei to the masses and radii of neutron stars. They can be accessed on Earth by measuring the spatial extent of the outer skin made of neutrons that characterises the surface of heavy nuclei. The isotope \({}^{208}\)Pb, owing to its simple structure and neutron excess, has been in this context the target of many dedicated efforts. Here, we determine the neutron skin from measurements of particle distributions and their collective flow in \({}^{208}\)Pb+\({}^{208}\)Pb collisions at ultrarelativistic energy performed at the Large Hadron Collider, which are sensitive to the overall size of the colliding \({}^{208}\)Pb ions. By means of state-of-the-art global analysis tools within the hydrodynamic model of heavy-ion collisions, we infer a neutron skin \(\Delta r_{np}=0.217\pm 0.058\) fm, consistent with nuclear theory predictions, and competitive in accuracy with a recent determination from parity-violating asymmetries in polarised electron scattering. We establish thus a new experimental method to systematically measure neutron distributions in the ground state of atomic nuclei. + Footnote †: preprint: CERN-TH-2023-069/MIT-CTP/5558 Understanding the distribution of neutrons within heavy atomic nuclei has profound implications for our knowledge of the neutron-rich matter that shapes exotic astrophysical objects such as neutron stars. The neutron skin that forms on the surface of heavy nuclei, whereby neutrons are located more diffusely and more on the outside [1; 2], represents in particular a sensitive probe of the equation of state (EOS) of neutron matter, whose pressure determines the spatial extent of the neutron distributions. 
Indeed, nuclear models predict a strong correlation between the neutron skins of heavy nuclei and the masses and radii of neutron stars [3; 4]. While proton distributions in nuclei can be determined in a model-independent way from electron scattering experiments [5], accessing neutron distributions poses a far greater challenge. As a consequence, we have only limited experimental constraints on the neutron skin of nuclei, \(\Delta r_{np}\), defined as the difference in root mean square (rms) radii between protons and neutrons. The doubly-magic nucleus \({}^{208}\)Pb (\(Z=82\), \(N=126\)) has both protons and neutrons filling up their respective shells and represents an optimal study subject in this context. A recent, precise deduction of the neutron skin of \({}^{208}\)Pb has been achieved by the PREX collaboration [6] from the measurement of parity-violating asymmetries in polarised electron scattering. On the side of theory, the first calculation of \({}^{208}\)Pb and its neutron skin in the context of _ab initio_ nuclear theory was also recently performed [7]. These results, along with information coming from pulsar and gravitational wave observations, portray a picture of nuclear matter that hints at potential tensions [8; 9; 10]. In this article, we determine the neutron skin of \({}^{208}\)Pb from a new type of probe. We use data collected in \({}^{208}\)Pb+\({}^{208}\)Pb collisions performed at ultrarelativistic energy at the CERN Large Hadron Collider (LHC). These collisions produce short-lived quark-gluon plasma [11; 12; 13] (QGP), the hot phase of quantum chromodynamics (QCD), which behaves like a near-ideal relativistic fluid [14; 15] before fragmenting into observable particles. In high-energy scattering, interactions are mediated by gluons, such that the combined distribution of protons and neutrons (altogether called nucleons) within the colliding \({}^{208}\)Pb ions determines the shape and the size of the created QGP. 
Employing the latest advances in simulation and Bayesian inference tools within the hydrodynamic framework of heavy-ion collisions we reconstruct the geometry of the QGP by using the detected particle distributions. In conjunction with the precise knowledge of the proton density this enables us to place a tight constraint on the neutron skin of \({}^{208}\)Pb. **The neutron skin and the quark-gluon plasma -** Our understanding of the QGP formed in \({}^{208}\)Pb+\({}^{208}\)Pb collisions is highly developed thanks to the wealth of experimental data collected in the past decade by all LHC experiments, and in particular by the ALICE experiment dedicated to nuclear physics [16]. Following Fig. 1, in an ultrarelativistic heavy-ion collision in the lab frame (Fig. 1a), interactions of gluons deposit energy density in the area of overlap in the so-called _transverse plane_, perpendicular to the beam direction (Fig. 1b). The deposition of energy density depends on the collision's impact parameter \(b\), on the structure of the colliding nuclei and on the dynamics of the interaction itself. Phenomenological studies have established a picture where the colliding ions are treated, in each collision (or _event_), as a superposition of nucleons that participate in the interaction. Both boosted nuclei are thus associated with a profile of matter in the transverse plane, \(\mathcal{T}_{\mathcal{L},\mathcal{R}}(x_{\perp})\), given as the sum of their participant nucleon profiles, typically taken as Gaussians with a width denoted by \(w\). The interaction process and the subsequent energy depositions are then parameterised following some flexible prescription which can be fine-tuned directly from experimental data. 
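The participant thickness functions \(\mathcal{T}_{L,R}\) described above can be sketched numerically by summing Gaussian nucleon profiles of width \(w\); the positions and parameter values below are illustrative, and this is not the actual T\({}_{\text{R}}\)ENTo implementation.

```python
import numpy as np

def thickness(grid_x, grid_y, nucleon_xy, w):
    """Participant thickness T(x_perp): a sum of unit-normalised 2-D
    Gaussians of width w centred on the participant nucleon positions."""
    X, Y = np.meshgrid(grid_x, grid_y)
    T = np.zeros_like(X)
    for x0, y0 in nucleon_xy:
        T += np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * w ** 2))
    return T / (2.0 * np.pi * w ** 2)

# Two participants at +-1 fm, nucleon width w = 0.5 fm (illustrative values)
x = y = np.linspace(-10.0, 10.0, 401)
T = thickness(x, y, [(1.0, 0.0), (-1.0, 0.0)], w=0.5)
dx = x[1] - x[0]
print(T.sum() * dx * dx)  # integrates to ~2, one unit per participant
```

In an event-by-event simulation the nucleon positions would be sampled from the nuclear densities of Eqn. (2), so a more diffuse neutron distribution directly smears the resulting thickness profiles.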
Here we use a T\({}_{\text{R}}\)ENTo-type Ansatz for the energy density of the QGP [17; 18], \[e(x_{\perp})\propto\left(\frac{\mathcal{T}_{L}(x_{\perp}-b/2)^{p}+\mathcal{T}_{ R}(x_{\perp}+b/2)^{p}}{2}\right)^{q/p}, \tag{1}\] where \(L,R\) denote the two colliding ions, while \(p\) and \(q\) are model parameters. As the positions of the participant nucleons shaping the functions \(\mathcal{T}_{L,R}\) are sampled in each collision from the neutron and proton densities in the ground state of the scattering ions, the energy density \(e(x_{\perp})\) is sensitive to their spatial distribution. This can be seen by eye in the density plot of Fig. 1b, representing an average energy density over many collisions. The scenario where the colliding \({}^{208}\)Pb nuclei have a narrower neutron skin leads to a QGP with a sharper profile over the plane and a higher density peak. Starting from the initial condition discussed in Fig. 1b, the QGP then evolves as a relativistic viscous fluid (with transport properties, such as shear and bulk viscosities, that are also model parameters). For a single event, snapshots of the hydrodynamic expansion obtained using our hydrodynamic code are depicted in Fig. 1c. Cooling of the QGP lasts until the confinement crossover is reached, after which at a fixed switching temperature the fluid is converted into a gas of QCD resonance states that can further re-scatter or decay to stable particles. Out of this process, experiments can only detect final event-by-event stable particle spectra, typically denoted by: \[\frac{d^{3}N_{\text{ch}}}{d^{2}\mathbf{p}_{T}\,d\eta}=\frac{d^{2}N_{\text{ch}}}{dp_ {T}\,d\eta}\frac{1}{2\pi}\left(1+2\sum_{n=1}^{\infty}v_{n}\cos n(\phi-\phi_{n })\right),\] where \(\mathbf{p}_{T}\) is the transverse momentum, \(\eta\) is the particle pseudorapidity (\(\eta\equiv-\ln\tan(\theta/2)\) with \(\theta\) the polar angle in the (\(x_{\perp}\), \(z\)) plane of Fig. 
1a), and the subscript ch indicates that only charged particles are included. We have conveniently decoupled the spectrum into a distribution of transverse momenta, \(p_{T}\equiv|\mathbf{p}_{T}|\), which quantifies the explosiveness of the QGP expansion, and an azimuthal component developed in Fourier modes, where \(v_{n}\) are the so-called anisotropic flow coefficients that quantify the anisotropy of the particle emission. Experimentally the first step is to sort the collisions in centrality classes based on the number of particles that they produce, where \(0\%\) centrality corresponds to events with the highest number of particles at almost zero impact parameter. As a function of centrality one can then measure among others the distributions of \(p_{T}\) and \(v_{n}\) coefficients for different particle species (pions, kaons, protons and more). This generates a wealth of experimental information from which the hydrodynamic model parameters (here, we have 26 in total) can be inferred. The central idea of this manuscript is that of promoting the neutron skin of \({}^{208}\)Pb to a model parameter that we constrain from LHC data.

Figure 1: Neutron skin and collective flow in relativistic nuclear collisions. **a**: Two ions collide with impact parameter \(b=8\,\text{fm}\). Both ions are Lorentz-contracted by a factor \(\gamma\approx 2500\), and the relevant dynamics hence effectively takes place in the transverse plane, \(x_{\perp}=(x,\,y)\). **b**: The collision deposits energy in the interaction region depending on the extent of the neutron skin of the \({}^{208}\)Pb nuclei. We consider \(\Delta r_{np}=0.086\,\text{fm}\) (top) and \(\Delta r_{np}=0.384\,\text{fm}\) (bottom). The neutron skin is varied by keeping the half-width neutron radius, \(R_{n}\), constant while changing the neutron diffuseness, as displayed by the dotted lines (see also Eqn. (2) below).
A larger neutron skin leads to a considerably larger total hadronic cross section, \(\sigma_{\text{tot}}\), and the resulting QGP is in addition more diffuse and less elliptical. **c**: We show a single QGP evolving hydrodynamically and being converted into particles (marked in the figure with their respective symbols) as it cools, while expanding both in \(z\) and in the transverse plane. The observation of millions such events leads to characteristic azimuthal anisotropies in the momentum distribution of the produced particles, the most important of which is quantified by the rms value of its second Fourier component, the elliptic flow \(v_{2}\{2\}\), which reflects the ellipticity of the QGP. The neutron skin is introduced by considering variations in the neutron diffuseness, \(a_{n}\), in the two-parameter Fermi distributions that model the point-neutron and point-proton densities in the colliding \({}^{208}\)Pb nuclei: \[\rho_{n,p}(r)\propto\left[1+\exp\left(\frac{r-R_{n,p}}{a_{n,p}}\right)\right]^{- 1}. \tag{2}\] We take \(a_{p}=0.448\,\mathrm{fm}\), \(R_{p}=6.680\,\mathrm{fm}\) (corresponding to an rms proton radius \(r_{p}=5.436\,\mathrm{fm}\)), and \(R_{n}=6.690\,\mathrm{fm}\), which is motivated by the experimental result that the neutron skin is caused by a more diffuse profile rather than a larger half-width radius [1, 2]. Our results are however expected to be mostly sensitive to the neutron skin \(\Delta r_{np}=r_{n}-r_{p}\), which involves only the rms neutron radius, \(r_{n}\), and is relatively insensitive to varying \(R_{n}\), \(a_{n}\) or both. Before proceeding with a full Bayesian analysis we simulate the QGP formation and evolution for three different values of \(\Delta r_{np}\) while keeping all other model parameters fixed. First, a larger neutron skin leads to a larger total hadronic cross section, \(\sigma_{\mathrm{tot}}\) (see Fig. 
1b for an increase from 7.75 to 8.67 b), because it increases the overall number of events occurring at higher impact parameters. We follow now Fig. 2, showing experimental and model results for quantities that characterise the bulk of particle production from the measured spectra. The larger \(\sigma_{\mathrm{tot}}\) for the larger neutron skin induces larger impact parameters at the same centrality. As a consequence, fewer particles are produced for larger values of \(\Delta r_{np}\), as clearly visible in the total multiplicity in Fig. 2 (left panel). A second effect of a larger skin, highlighted in Fig. 1b, is that it leads to more diffuse QGP droplets, which leads to weaker pressure gradients and a slower hydrodynamic expansion. This translates into a lower average momentum of the detected particles, as seen in the middle panel of Fig. 2. In addition Fig. 1 shows that a larger neutron skin reduces the ellipticity of the QGP. This leads to a reduction of the elliptic flow, measured in experiment as a two-particle azimuthal correlation (\(v_{2}\{2\}\), the rms value of the distribution of \(v_{2}\)) or as a four-particle correlation (\(v_{2}\{4\}\)). Indeed Fig. 2 (right) shows the expected reduction and moreover we find that a larger neutron skin enhances the difference between \(v_{2}\{2\}\) and \(v_{2}\{4\}\), which corresponds to larger elliptic flow fluctuations. **Bayesian inference of the \({}^{208}\)Pb neutron skin -** Due to the interplay and cross-correlations among parameters and observables, constraining the model from experiment requires advanced Bayesian analysis tools as pioneered in earlier works [15, 21]. Our analysis makes use of over 600 data points in \({}^{208}\)Pb+\({}^{208}\)Pb collisions and a single data point (the total cross section) of proton-nucleus (\(p+^{208}\)Pb) collisions. We use 3000 design points for the Gaussian Processes to emulate our collisions as a function of the 26-dimensional parameter space. 
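The two-particle correlation \(v_{2}\{2\}\) discussed above is conventionally estimated from the second flow vector \(Q_{2}\). A minimal single-event sketch of this standard Q-cumulant construction (a textbook estimator, not the analysis code used in the paper):

```python
import numpy as np

def v2_two_particle(phis):
    """Single-event two-particle cumulant <2> = <cos 2(phi_i - phi_j)>
    over distinct pairs, computed from the flow vector Q_2; v2{2} is
    the square root of the event-averaged <2>."""
    phis = np.asarray(phis, dtype=float)
    M = len(phis)
    Q2 = np.sum(np.exp(2j * phis))
    # |Q2|^2 sums over all pairs including i == j; subtract the M self-pairs
    return (np.abs(Q2) ** 2 - M) / (M * (M - 1))

# Sample particles from dN/dphi ~ 1 + 2*v2*cos(2*phi) with v2 = 0.1
rng = np.random.default_rng(1)
v2_true = 0.1
phi = rng.uniform(0.0, 2.0 * np.pi, 400_000)
keep = rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) < 1.0 + 2.0 * v2_true * np.cos(2.0 * phi)
v2_est = np.sqrt(v2_two_particle(phi[keep]))
print(v2_est)  # close to v2_true = 0.1
```

Higher-order cumulants such as \(v_{2}\{4\}\) follow the same logic with four-particle correlators, which is what makes the \(v_{2}\{2\}\)-\(v_{2}\{4\}\) difference a measure of flow fluctuations.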
See Methods for a specification of all data, parameters and their inferred distributions. The posterior distributions are displayed in Fig. 3 for a subset of parameters that correlate highly with \(\Delta r_{np}\). These are the parameters appearing in the energy deposition formula, Eqn. (1), namely, the energy deposition parameters \(p\) and \(q\), as well as the nucleon size, \(w\). In fact, the \(p\) parameter and \(\Delta r_{np}\) are the most negatively correlated across our entire parameter space. This is not surprising, as both parameters strongly influence the centrality dependence of observables, whereby a larger neutron skin in particular affects off-central collisions by increasing the total cross section.

Figure 2: Signature of the neutron skin on bulk particle production in ultrarelativistic \({}^{208}\)Pb+\({}^{208}\)Pb collisions. Varying only the neutron skin size at our optimal parameter settings we show the charged particle multiplicity (left), the mean transverse momentum (middle) and the elliptic flow as measured by \(v_{2}\{k\}\) (right) with a comparison to ALICE data [19; 20]. A larger neutron skin leads to more collisions, but per collision the multiplicity is lower at larger centralities. The larger size of the QGP leads to a reduced transverse momentum on average. Smearing of the elliptical shape leads to reduced elliptic flow as measured by \(v_{2}\{2\}\) and \(v_{2}\{4\}\). Theoretical error bars are statistical only, experimental uncertainties include systematics as well.

In Fig. 4 we put our new result in context of other state-of-the-art determinations of the skin of \({}^{208}\)Pb. From the posterior distribution we obtain \(\Delta r_{np}=0.217\pm 0.058\,\mathrm{fm}\), corresponding to a point-like rms neutron radius \(r_{n}=5.653\pm 0.058\,\mathrm{fm}\). Our result is compatible with both the _ab initio_ determination [7] and the PREX result [6], and is competitive in precision with the latter.
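The rms radii quoted above follow directly from the two-parameter Fermi profiles of Eqn. (2). A quick numerical check (pure-Python quadrature; the neutron diffuseness \(a_{n}\) recovered below is the one implied by the quoted \(r_{n}\), not a number stated in the text):

```python
import math

def rms_radius(R, a, rmax=20.0, n=4000):
    """rms radius of a two-parameter Fermi density
    rho(r) ~ 1 / (1 + exp((r - R)/a)), i.e.
    sqrt( int r^4 rho dr / int r^2 rho dr ), by trapezoidal quadrature."""
    h = rmax / n
    num = den = 0.0
    for i in range(n + 1):
        r = i * h
        wgt = 0.5 if i in (0, n) else 1.0
        f = 1.0 / (1.0 + math.exp((r - R) / a))
        num += wgt * r ** 4 * f
        den += wgt * r ** 2 * f
    return math.sqrt(num / den)

r_p = rms_radius(6.680, 0.448)  # reproduces the quoted r_p ~ 5.436 fm
# Bisect for the neutron diffuseness a_n giving r_n = 5.653 fm at R_n = 6.690 fm
lo, hi = 0.3, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rms_radius(6.690, mid) < 5.653:
        lo = mid
    else:
        hi = mid
a_n = 0.5 * (lo + hi)
print(r_p, a_n, rms_radius(6.690, a_n) - r_p)  # skin ~ 0.217 fm
```

This is the sense in which varying only the diffuseness at fixed half-width radius \(R_{n}\) maps one-to-one onto the inferred skin \(\Delta r_{np}=r_{n}-r_{p}\).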
With regard to the EOS of neutron matter, from the relation between \(\Delta r_{np}\) and the slope parameter, \(L\), of the symmetry energy around the nuclear saturation density [22], we obtain \(L=79\pm 39\,\mathrm{MeV}\), representing the first collider-based constraint on this parameter from high-energy data. We now comment on the robustness of this result. The total \({}^{208}\)Pb+\({}^{208}\)Pb and \(p\)+\({}^{208}\)Pb cross sections [23; 24] pose important constraints on the neutron skin. Indeed, excluding these two measurements we obtain \(\Delta r_{np}=0.31\pm 0.10\,\mathrm{fm}\), whereas using exclusively these two data points results in \(\Delta r_{np}=0.03\pm 0.12\,\mathrm{fm}\). Our result hence comes from a combination of constraints, where the cross section prefers a smaller neutron skin, while other observables prefer a larger value (this is similar for \(w\) [25]). For the first time in Bayesian analyses we include the \(\rho_{2}\) observable [26; 27], a sensitive probe of the initial conditions [28; 29; 30; 25; 31] which measures the correlation between \(v_{2}\{2\}\) and \(\langle p_{T}\rangle\). Without this observable, the analysis yields a consistent result, \(\Delta r_{np}=0.243\pm 0.059\,\mathrm{fm}\). Also, as introduced in Ref. [25], we weight the targeted observables according to a prescription that models unknown theoretical uncertainty with respect to \(p_{T}\)-differential observables in particular. Turning this weighting off, we find a consistent albeit slightly smaller neutron skin, \(\Delta r_{np}=0.160\pm 0.057\,\mathrm{fm}\). Further indication of the robustness of our finding comes from the fact that targeting a subset of \(p_{T}\)-integrated-only observables, corresponding to 233 ALICE data points, we obtain \(\Delta r_{np}=0.216\pm 0.057\,\mathrm{fm}\).
This suggests that the extraction of \(\Delta r_{np}\) is likely insensitive to theoretical uncertainties in the particlisation of the QGP at the switching temperature [32]. Lastly, we note that our TRENTo Ansatz of Eqn. (1) is very versatile, and may lead to a relatively conservative estimate of the uncertainty on \(\Delta r_{np}\). Implementing in the future a model of initial conditions motivated by first-principles arguments and with fewer parameters [33] may lead to stronger constraints than presented here.

**Future skin determinations at the LHC -** We expect our analysis to trigger a program of complementary measurements of skin effects at the LHC. A method pioneered by the STAR collaboration utilises the photoproduction of vector mesons in ultra-peripheral nucleus-nucleus collisions to infer the average gluon density in the colliding nuclei, and hence the neutron skins [34]. The extracted skin of \({}^{197}\)Au is in good agreement with nuclear theory predictions [35]. Therefore, the same method could be exploited at the LHC to perform an independent extraction of the skin of \({}^{208}\)Pb. In addition, the global analysis presented here uses so-called _soft_ observables that depend on particles with transverse momentum of the order of the QCD deconfinement temperature, around \(150\,\mathrm{MeV}\). With high-luminosity LHC runs it may be possible to constrain the neutron skin as well via _hard_ observables, such as high transverse momentum electroweak bosons [36]. The charge of the produced electroweak bosons can serve as a direct probe of the number of neutron-neutron interactions. By selecting collisions at relatively large impact parameter, it is then possible to determine the dominance of neutrons at the outer edges of the \({}^{208}\)Pb nucleus. It is likely that the nucleus \({}^{48}\)Ca and other ions will be collided at the high-luminosity LHC in the next decade [37]. This will enable an extended analysis that in particular can be compared with the dedicated CREX measurement of the neutron skin of \({}^{48}\)Ca [38]. Comparing many different collision systems will furthermore permit us to study ratios of observables that cancel most of the systematic theoretical uncertainties [39; 40; 41; 42], leading to improved determinations of \(\Delta r_{np}\) across the nuclear chart.

Figure 3: Inferred neutron skin and energy-deposition parameters. We show the posterior distribution of the neutron skin \(\Delta r_{np}\), the nucleon width \(w\) and the energy deposition parameters \(p\) and \(q\), together with their expectation values (see top) and correlations. Uncertainties correspond to the standard deviations of the posterior distributions. Especially the \(p\) parameter (see Eqn. (1)) is highly anti-correlated with \(\Delta r_{np}\), as both have a strong effect on the centrality dependence of observables (see also Fig. 2).

Figure 4: State-of-the-art determinations of the neutron skin of \({}^{208}\)Pb. We show the final likelihood distribution of the neutron skin as determined from the LHC data as compared to the values obtained experimentally by the PREX collaboration (including both experimental and theoretical uncertainties in the extraction) [6] and the estimate of _ab initio_ nuclear theory (with an error bar corresponding to a 68% credibility interval) [7].

**Acknowledgments -** We acknowledge discussions with the participants of the INT Program INT-23-1a, "Intersection of nuclear structure and high-energy nuclear collisions", and the hospitality of the Institute for Nuclear Theory, Seattle. We thank in particular Rituparna Kanungo for interesting discussions. G.G. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2181/1 - 390900948 (the Heidelberg STRUCTURES Excellence Cluster), SFB 1225 (ISOQUANT) and FL 736/3-1. G.N. is supported by the U.S.
Department of Energy, Office of Science, Office of Nuclear Physics under grant Contract Number DE-SC0011090. ## Methods ### Bayesian analysis in the _Trajectum_ framework The _Trajectum_ framework consists of an initial stage, a hydrodynamic stage and finally a freeze-out to hadrons that are then evolved by the SMASH code [43; 44; 45]. Here we give a brief summary of all parameters involved, how they are constrained, and by which experimental data. All parameters are displayed in boldface. Full details can be found in Ref. [18]. For the initial stage the nucleons are distributed within the colliding nuclei according to Eqn. (2) with parameter \(\mathbf{a_{n}}\), whereby each sampled nucleon pair has a minimum distance of \(\mathbf{d_{\text{min}}}\). In addition, in Eqn. (2), we consider the half-width radius expanded in spherical harmonics up to the axial quadrupole deformation, \(R(\theta)=R(1+\beta_{2}Y_{2}^{0}(\theta))\), where \(\beta_{2}\) quantifies the magnitude of the ellipsoidal deformation. Given the structure of \({}^{208}\)Pb [46], we let \(\beta_{2}\) fluctuate around zero with a standard deviation \(\sqrt{\langle\mathbf{\beta_{2}^{2}}\rangle-\langle\mathbf{\beta_{2}}\rangle^{2}}\). The nucleons that collide are determined by the measured \(\sigma_{\text{NN}}\) cross section as in [17; 25] and are called participants. Each participant then consists of \(\mathbf{n_{c}}\) constituents, each associated with a transverse Gaussian profile with width \(v=0.2\,\text{fm}+\mathbf{\chi_{\text{struct}}}(\mathbf{w}-0.2\,\text{fm})\). The center coordinates of the nucleon constituents are distributed according to a Gaussian distribution. This leads to an average Gaussian nucleon profile with a width \(\mathbf{w}\). Superimposition of the nucleon constituent profiles then leads to the thickness functions \(\mathcal{T}\) in Eqn. (1). 
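The dependence of the neutron skin on the diffuseness parameter \(\mathbf{a_{n}}\) can be made explicit with a small numerical sketch of the spherical two-parameter Fermi profile underlying Eqn. (2). The half-width radius and the proton diffuseness below are illustrative values only, not the calibrated posterior values:

```python
import numpy as np

def _trapezoid(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rms_radius(R, a, r_max=20.0, n=20001):
    """rms radius of a spherical two-parameter Fermi density
    rho(r) proportional to 1 / (1 + exp((r - R)/a))."""
    r = np.linspace(0.0, r_max, n)
    rho = 1.0 / (1.0 + np.exp((r - R) / a))
    return np.sqrt(_trapezoid(r**4 * rho, r) / _trapezoid(r**2 * rho, r))

R, a_p = 6.62, 0.45  # half-width radius and proton diffuseness in fm (illustrative)
for a_n in (0.45, 0.55, 0.65):
    skin = rms_radius(R, a_n) - rms_radius(R, a_p)
    print(f"a_n = {a_n:.2f} fm  ->  neutron skin = {skin:.3f} fm")
```

Increasing \(a_{n}\) at fixed half-width radius pushes the neutron rms radius outward, so the skin \(\Delta r_{np}=r_{n}-r_{p}\) grows with the diffuseness difference, which is the handle the analysis varies.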
The normalisation of each Gaussian profile fluctuates and is equal to \(\mathbf{N}\gamma/\mathbf{n_{c}}\), where \(\gamma\) is sampled from a gamma distribution with mean 1 and width \(\mathbf{\sigma_{\mathrm{fluct}}}\). We also include the integrated anisotropic flow coefficients \(v_{2}\{2\}\), \(v_{2}\{4\}\), \(v_{3}\{2\}\) and \(v_{4}\{2\}\) at both \(2.76\) and \(5.02\,\mathrm{TeV}\) [54] and \(p_{T}\)-differential observables with bin boundaries at \((0.25,0.5,0.75,1.0,1.4,1.8,2.2,3.0)\,\mathrm{GeV}\). In particular, this includes spectra for pions, kaons and protons at \(2.76\,\mathrm{TeV}\) [51], as well as \(v_{2}\{2\}(p_{T})\) for pions, kaons and protons, and \(v_{3}\{2\}(p_{T})\) for pions (these data are only available for \(p_{T}>0.5\,\mathrm{GeV}\)) [55]. Finally, we also include the "statistically difficult" \(\rho_{2}(v_{2}\{2\},\langle p_{T}\rangle)\), \(NSC(2,3)\) and \(NSC(2,4)\) correlators, where \(NSC(i,j)\) measures the normalised correlations between \(v_{i}^{2}\) and \(v_{j}^{2}\) [56; 27]. The posterior for our parameters is then given by Bayes' formula

\[\mathcal{P}(\mathbf{x}|\mathbf{y}_{\mathrm{exp}})=\frac{e^{-\Delta^{2}/2}}{\sqrt{(2\pi)^{n}\det\left(\Sigma(\mathbf{x})\right)}}\mathcal{P}(\mathbf{x}) \tag{3}\]

with \(\mathcal{P}(\mathbf{x})\) the (flat) prior probability density and where

\[\Delta^{2}=\left(\mathbf{y}(\mathbf{x})-\mathbf{y}_{\mathrm{exp}}\right)\cdot\Sigma(\mathbf{x})^{-1}\cdot\left(\mathbf{y}(\mathbf{x})-\mathbf{y}_{\mathrm{exp}}\right),\]

with \(\mathbf{y}(\mathbf{x})\) the predicted data for parameters \(\mathbf{x}\), \(\mathbf{y}_{\mathrm{exp}}\) the \(n\) experimental data points and \(\Sigma(\mathbf{x})\) the sum of the experimental and theoretical covariance matrices. The covariance matrices are constructed as in [15]. The standard procedure is then to run the model at a number of design points in the parameter space as determined by a Latin hypercube and use those design points to construct an emulator for \(\mathbf{y}(\mathbf{x})\) to evaluate Eqn. (3) using the parallel tempered emcee code [57, 58]. In this work we used 3000 design points, whereby each design point has 1M initial stages of which we evolve 18k using hydrodynamics and finally simulate about 100k SMASH events to get high statistics even for ultracentral collisions. Every 1 in 15 design points uses 10 times more statistics to allow for the "statistically difficult" observables, which are then emulated using 200 design points. The final posterior distributions and their correlations for our standard settings for the fitting procedure are displayed in Fig. 5. See the main text for details on variations in these settings.

Figure 5: Complete correlation matrix among all 26 model parameters. For detailed information about the prior ranges, we refer the reader to Ref. [18]. The only new parameters of this analysis are \(\mathbf{a}_{n}\), whose prior can be inferred from Fig. 4, \(\mathbf{a}_{\mathbf{EOS}}\), whose prior range is between \(-9.5\) and \(-8.4\), and \(\sqrt{\langle\mathbf{\beta_{2}^{2}}\rangle-\langle\mathbf{\beta_{2}}\rangle^{2}}\), whose prior is between \(0\) and \(0.1\).

In Fig. 6 we compare the resulting posterior likelihood distributions for the proton and neutron densities to measurements as presented in [2]. Unlike [2], we vary only \(a_{n}\), so consequently our neutron density is much more restricted and the proton density is in fact fixed. Nevertheless, in the region where our variation is relevant (the diffuseness of the neutron skin) the two methods agree remarkably well, both in value and in uncertainty.
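The posterior evaluation in Eqn. (3) can be sketched for a toy model with two observables. In the real analysis \(\mathbf{y}(\mathbf{x})\) comes from Gaussian-process emulators over the 26-dimensional parameter space; everything below, including the covariance values, is invented for illustration:

```python
import numpy as np

def log_posterior(y_model, y_exp, cov, log_prior=0.0):
    """log P(x | y_exp) up to the evidence, following Eqn. (3):
    -Delta^2/2 - log sqrt((2 pi)^n det Sigma) + log prior."""
    resid = np.asarray(y_model, float) - np.asarray(y_exp, float)
    delta2 = resid @ np.linalg.solve(cov, resid)   # Delta^2 with full covariance
    n = len(resid)
    _, logdet = np.linalg.slogdet(cov)             # stable log det(Sigma)
    return -0.5 * delta2 - 0.5 * (n * np.log(2.0 * np.pi) + logdet) + log_prior

# Toy example: two correlated observables (experimental + theory covariance).
y_exp = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
print(log_posterior([1.1, 1.9], y_exp, cov))
```

In practice this log-posterior is what the parallel-tempered MCMC explores, with the emulator supplying \(\mathbf{y}(\mathbf{x})\) cheaply at each proposed parameter point.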
2308.06498
Latent Emission-Augmented Perspective-Taking (LEAPT) for Human-Robot Interaction
Perspective-taking is the ability to perceive or understand a situation or concept from another individual's point of view, and is crucial in daily human interactions. Enabling robots to perform perspective-taking remains an unsolved problem; existing approaches that use deterministic or handcrafted methods are unable to accurately account for uncertainty in partially-observable settings. This work proposes to address this limitation via a deep world model that enables a robot to perform both perception and conceptual perspective taking, i.e., the robot is able to infer what a human sees and believes. The key innovation is a decomposed multi-modal latent state space model able to generate and augment fictitious observations/emissions. Optimizing the ELBO that arises from this probabilistic graphical model enables the learning of uncertainty in latent space, which facilitates uncertainty estimation from high-dimensional observations. We tasked our model to predict human observations and beliefs on three partially-observable HRI tasks. Experiments show that our method significantly outperforms existing baselines and is able to infer the visual observations available to the other agent and their internal beliefs.
Kaiqi Chen, Jing Yu Lim, Kingsley Kuan, Harold Soh
2023-08-12T08:22:11Z
http://arxiv.org/abs/2308.06498v1
# Latent Emission-Augmented Perspective-Taking (LEAPT)

###### Abstract

Perspective-taking is the ability to perceive or understand a situation or concept from another individual's point of view, and is crucial in daily human interactions. Enabling robots to perform perspective-taking remains an unsolved problem; existing approaches that use deterministic or handcrafted methods are unable to accurately account for uncertainty in partially-observable settings. This work proposes to address this limitation via a deep world model that enables a robot to perform both perception and conceptual perspective taking, i.e., the robot is able to infer what a human sees and believes. The key innovation is a decomposed multi-modal latent state space model able to generate and augment fictitious observations/emissions. Optimizing the ELBO that arises from this probabilistic graphical model enables the learning of uncertainty in latent space, which facilitates uncertainty estimation from high-dimensional observations. We tasked our model to predict human observations and beliefs on three partially-observable HRI tasks. Experiments show that our method significantly outperforms existing baselines and is able to infer the visual observations available to the other agent and their internal beliefs.

## I Introduction

This work focuses on the problem of perspective taking, which is the ability to take another agent's point of view, either visually or conceptually. For example, consider the scenario in Fig. 1 where a robot and a human are aligning a peg and hole to assemble a table. As the robot is holding the table, it cannot see the peg and hole. To collaborate effectively, the robot should reason about the human's visual perspective and infer that the human is able to see the peg and hole (despite not knowing what he actually sees). It can then query the human for relevant information.
Enabling robots to perform such perspective-taking is challenging, especially when the environment is partially-observable. Prior works on perspective-taking in Human-Robot Interaction (HRI) focus mainly on hand-crafted models (e.g., [1, 2, 3, 4, 5]) or learning deterministic models in fully-observable environments [6]. Unfortunately, handcrafted models do not easily scale to complex real-world environments with high-dimensional observations, and learnt deterministic models do not capture uncertainty in the beliefs or world state. In this paper, we take a step towards addressing this gap and propose a perspective-taking approach based on deep latent state-space world models. Given only partial observations (e.g., camera images), our **L**atent **E**mission-**A**ugmented **P**erspective-**T**aking (LEAPT) is able to infer what another agent observes and knows. This is achieved by sampling a set of possible world states from the robot's underlying world or "self" model (Fig. 1B). Using these sampled states along with human pose information, we predict potential human observations, which are then used to infer human beliefs using a modified self-model [7].

Fig. 1: Perspective-Taking Example. (**A**) As the robot (green) is holding the table, it cannot see the peg and hole (purple), which is clearly visible to the human (light blue). The robot considers the perspective of the human and reasons about what the human can observe, despite not knowing what he actually sees. (**B**) Human belief inference. In brief, the robot's world model (self-model) is used to sample possible environment states and predict human observations, and the human model is used to infer human belief. By leveraging the inference network and perspective-taking model, we can infer human belief under each sampled environment state.
We find that the self-model requires careful design to be effective -- a standard latent state-space model with a single time-dependent latent variable did not properly capture uncertainty in the world state (resulting in samples that were blurry "averages" of training observations). To address this issue, we impose additional model structure by decomposing the latent state into two variables that represent (i) information that the robot can fully observe (\(s_{t}^{R}\)) and (ii) information that is not revealed by the robot's observations (\(h_{t}^{R}\)). By conditioning \(h_{t}^{R}\) on \(s_{t}^{R}\), we explicitly train the model to generate possible task states given only partial information. Moreover, by minimizing the Kullback-Leibler (KL) divergence between task-complete observations (available during training) and the robot's estimation in latent space, the model learns to properly estimate uncertainty about the state. Experiments on various tasks in a simulated domain show that our model can better estimate the visual perceptions and beliefs of other agents compared to a deterministic approach and a standard latent state-space model. Specifically, we show our model enables robots to model false beliefs and better estimate another agent's (and its own) beliefs during interaction and communication. We also report on a human-subject study using Virtual Reality (VR), which shows that LEAPT is better at predicting the human's belief about the world. In summary, this paper makes the following contributions:

* LEAPT -- a perspective-taking approach based on a latent state-space model -- for partially-observable environments;
* A decomposed model structure and corresponding ELBO training loss that influences the model to better capture uncertainty over possible world states;
* Empirical results that show that LEAPT outperforms competing methods and ablated variants.
## II Background & Related Work

Perspective-taking [8, 9, 10] is considered essential for many useful social skills in human-human interaction, such as understanding social dynamics and relationships. Within the context of robotics, perspective-taking can also occur during human-robot interaction (HRI) [11, 1]. In the following, we briefly mention prior works on enabling artificial agents to perform visuospatial perspective taking and mentalizing (reasoning about another agent's mental states). **Cognitive Architectures.** To enable efficient HRI, earlier works proposed cognitive architectures using hand-crafted rules and perspective-taking models [1, 2, 3, 4, 5]. Although these models can be effective, they are generally time-consuming to create and do not lend themselves easily to scenarios with high-dimensional observations (e.g., images). Potentially, modern physics simulators [12, 13, 14] could be embedded within the cognitive architecture of the robot to perform perspective-taking. However, uncertainty estimation within such a setup remains computationally expensive -- one would need to simulate multiple possible observations and track beliefs over time. **ML-based Models.** An alternative approach is to leverage machine learning (ML) to predict other agents' perspectives [6, 15] and apply planning [6] (or reinforcement learning [15]) to generate robot behaviors. For example, in a hide-and-seek task [6], a robot hider predicts what the seeker sees using deterministic deep neural networks and then applies planning to avoid the seeker. Recent work learned a Q function conditioned on both ego-agent and other agents' observations in a dominant-and-subordinate-monkey task [15]. Unlike LEAPT, these works use deterministic models or assume access to other agents' observations. **Agent Modeling and Communication in Multi-Agent Systems.** Our work is also related to opponent modeling in a multi-agent system.
In this line of research, one approach is to optimize policies using a latent embedding that encodes the (predicted) trajectories of other agents [16, 17, 18]. These methods either employ deterministic approaches when deployed [17, 18] or model uncertainty in the observation space [16]. In the field of human-robot communication, recent works modify generated observations and track human belief using learned models [7, 19]. Unlike LEAPT, these methods assume that the robot is able to fully observe the environment at test time. LEAPT is designed to operate in _partially-observable environments_ for tasks where visual and conceptual (belief) perspective-taking is helpful.

## III Latent Emission-Augmented Perspective-Taking (LEAPT)

**Objective.** We aim to enable a robot to perform:

* _Visual perspective-taking_: estimate what another agent observes at a given time;
* _Conceptual perspective-taking_: infer what the agent believes given the observations they have made so far, i.e., a belief over a set of latent states.

We assume that the robot has access to its own on-board sensors and knows the other agent's pose. In addition, during training, the robot has access to a world-view that can provide task-complete information; this information is _not_ available during test time. In this section, we describe our main contribution: the Latent Emission-Augmented Perspective-Taking (LEAPT) model. At its core, LEAPT uses a multi-modal latent state-space model (MSSM) [7, 20] that is modified by decomposing the latent state variable. We begin with a discussion of the standard MSSM and its limitations, which we then address by factorizing the latent state representation. This enables the robot to (i) better track its own uncertainty over the world state, and (ii) track the beliefs of other agents. We then illustrate how this model can be used for both visual and conceptual (belief) perspective taking via sampling.
### _LEAPT Latent State-Space Model_

**Standard MSSM World Model.** Our robot maintains a world model -- called the _self-model_ -- on which it can perform inference and planning. We build upon the MSSM [20, 7], which captures how the environment changes over time given actions taken by the robot. In general, the robot is unable to directly perceive the entire world state and we distinguish between two kinds of observations:

* Ego observations \(x_{t}^{m}\) at time \(t\) for \(m=1,\ldots,M\) sensory modalities, which are the agent's observations from its own perspective (e.g., a camera mounted on the robot's arm) and can be accessed in both training and testing;
* Task-Complete observations \(y_{t}^{k}\) at time \(t\) for \(k=1,\ldots,K\) modalities, which contain sufficient information about the world state to complete the task and _can only be accessed during training_.

Similar to standard state-space models, the MSSM assumes Markovian transitions that are only dependent on the current state and the action \(a_{t}\) taken by the agent. **Limitations of the standard MSSM.** One can train the MSSM using variational inference by optimizing the evidence lower bound (ELBO): \[\mathcal{L}_{e}=\sum_{t=1}^{T}\mathop{\mathbb{E}}_{q_{\phi}(z_{t})}\left[\sum_{m=1}^{M}\log p_{\theta}(x_{t}^{m}|z_{t})+\sum_{k=1}^{K}\log p_{\theta}(y_{t}^{k}|z_{t})\right]\] \[-\sum_{t=2}^{T}\mathop{\mathbb{E}}_{q_{\phi}(z_{t-1})}\left[\mathbb{D}_{\mathrm{KL}}\left[q_{\phi}(z_{t})\|p_{\theta}(z_{t}|z_{t-1},a_{t-1})\right]\right]\] \[-\mathop{\mathbb{E}}_{q_{\phi}(z_{1})}\left[\mathbb{D}_{\mathrm{KL}}\left[q_{\phi}(z_{1})\|p_{\theta}(z_{1})\right]\right] \tag{1}\] where the variational distribution \(q_{\phi}(z_{t})\) is shorthand for \(q(z_{t}|u_{\phi}(x_{1:t}^{1:M},y_{1:t}^{1:K}))\).
This variational distribution can be modeled using an inference network \(u_{\phi}\) that takes as input observations (_both_ ego observations \(x_{1:t}^{1:M}\) and task-complete observations \(y_{1:t}^{1:K}\)) and outputs a distribution over \(z_{t}\). In theory, once trained, \(q_{\phi}(z_{t})\) can be used to sample state \(z_{t}\) given only the agent's _ego_ observations \(x_{1:t}^{1:M}\), i.e., the \(z_{t}\)'s will be used to enable perspective taking. The problem lies in using the inference network at test time, when the task-complete observations are missing. One simple solution is to randomly drop the task-complete observations during training [20, 7, 21] by zero-ing out \(y_{1:t}^{1:K}\). Unfortunately, our preliminary experimental results showed that this approach led to poor variational distributions: \(q_{\phi}(z_{t})\) was 'narrow' (over-confident) when \(x_{1:t}^{1:M}\) did not contain sufficient information about the world state. The image samples from \(q_{\phi}(z_{t})\) tended to be blurry "averages" of the data the model was trained with (see Fig. 7 for examples). One possible reason for this undesirable behavior is that the MSSM is trained to encode \(y_{t}^{1:K}\) into \(z_{t}\) via the reconstruction term \(\mathop{\mathbb{E}}_{q_{\phi}(z_{t})}\left[\log p_{\theta}(y_{t}^{k}|z_{t})\right]\). If we drop \(y_{1:t}^{1:K}\), \(q(z_{t}|u_{\phi}(x_{1:t}^{1:M}))\) may lack sufficient information to generate a sample representative of the world state. In such cases, the model learns to "hedge" by generating (decoding) the mean observation, which corresponds to a narrow \(q_{\phi}(z_{t})\) centered around a specific latent state. **Decomposed Latent State.** As a remedy, we propose to restructure the MSSM to have a more explicit generation process. This modified structure is illustrated in Fig. 2. Intuitively, we would like the model to learn how to generate task-complete observations _only_ using ego observations.
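The "hedging" failure mode just described is easy to see in a toy setting: when the unobserved content is bimodal, a point predictor trained with squared error collapses to the conditional mean, which lies in neither mode, whereas sampling a latent hypothesis first yields plausible outcomes. This numpy sketch is purely illustrative and not the authors' experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unobserved content y is bimodal: an object sits at -1 or +1 with equal probability.
y = rng.choice([-1.0, 1.0], size=10_000)

# A deterministic predictor trained with squared error converges to the mean...
y_hat = y.mean()  # near 0.0: a blurry "average" that matches neither mode

# ...whereas sampling a latent hypothesis h first recovers sharp, plausible values.
h = rng.choice([-1.0, 1.0], size=10_000)  # stand-in for sampling from p(h | s)
print(round(y_hat, 2), np.unique(h))
```

This is the intuition behind conditioning the hidden part of the state on the observed part: the model is asked to sample a hypothesis rather than regress to the average.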
As such, we decompose the latent state into two disjoint parts, \(z_{t}=[s_{t},h_{t}]\), where \(s_{t}\) represents the observable part of the world and \(h_{t}\) represents information that the robot cannot observe directly. By conditioning \(h_{t}\) on \(s_{t}\) in \(p(h_{t}|g_{\theta}([s,h,a]_{t-1},s_{t}))\), the model explicitly represents our desired computation using a neural network \(g_{\theta}\). Given this decomposed latent structure, the joint distribution factorizes as:

\[p_{\theta}(x_{1:T}^{1:M},y_{1:T}^{1:K},s_{1:T},h_{1:T},a_{1:T-1})=\] \[p_{\theta}(h_{1}|s_{1})p(s_{1})\prod_{t=2}^{T}\left[\prod_{m=1}^{M}p_{\theta}(x_{t}^{m}|s_{t})\right]\left[\prod_{k=1}^{K}p_{\theta}(y_{t}^{k}|s_{t},h_{t})\right]\] \[p_{\theta}(s_{t}|s_{t-1},h_{t-1},a_{t-1})p_{\theta}(h_{t}|s_{t-1,t},h_{t-1},a_{t-1}) \tag{2}\]

where each of the factorized distributions is modeled using deep neural networks parameterized by \(\theta\):

\[\text{Transition:}\ p_{\theta}([s,h]_{t}|[s,h,a]_{t-1})=p(s_{t}|f_{\theta}([s,h,a]_{t-1}))p(h_{t}|g_{\theta}([s,h,a]_{t-1},s_{t})) \tag{3}\] \[\text{Ego Observations:}\ p_{\theta}(x_{t}^{m}|s_{t})=p(x_{t}^{m}|d_{x_{\theta}}^{m}(s_{t})) \tag{4}\] \[\text{Task-complete Observations:}\ p_{\theta}(y_{t}^{k}|[s,h]_{t})=p(y_{t}^{k}|d_{y_{\theta}}^{k}([s,h]_{t})) \tag{5}\]

Fig. 2: The LEAPT Decomposed Latent State-Space Model. Circle nodes represent random variables. The latent variables are decomposed into two parts, \(z_{t}=[h_{t},s_{t}]\). Inference networks \(q_{\phi}\) are shown using blue dotted line arrows; we see that \(q(s_{t}|x_{t}^{1:M})\) computes \(s_{t}\) (the part of the world that the robot can fully observe), whilst \(p(h_{t}|s_{1})\) and \(p(h_{t}|s_{t-1,t},h_{t-1})\) generate the part that the robot cannot observe.

**Model Training.** Similar to the basic MSSM, we optimize the ELBO, but with two variational distributions \(q(s_{t}|v_{\psi}(x_{1:t}^{1:M}))\) and \(q(h_{t}|l_{\eta}(y_{1:t}^{1:K}))\) over the latent state variables \(s_{t}\) and \(h_{t}\). Note that \(v_{\psi}\) and \(l_{\eta}\) are two different inference networks. For notational simplicity, we will drop the explicit dependence on the conditioning variables and write the ELBO as

\[\mathcal{L}_{e}=\sum_{t=1}^{T}\Big{(}\mathop{\mathbb{E}}_{q_{\psi}(s_{t})}\left[\sum_{m=1}^{M}\log p_{\theta}(x_{t}^{m}|s_{t})\right]+\mathop{\mathbb{E}}_{q_{\psi}(s_{t})q_{\eta}(h_{t})}\left[\sum_{k=1}^{K}\log p_{\theta}(y_{t}^{k}|[s,h]_{t})\right]\Big{)}\] \[-\sum_{t=2}^{T}\Big{(}\mathop{\mathbb{E}}_{q_{\psi}(s_{t-1})q_{\eta}(h_{t-1})}[\mathbb{D}_{\mathrm{KL}}[q_{\psi}(s_{t})\|p_{\theta}(s_{t}|[s,h,a]_{t-1})]]+\mathop{\mathbb{E}}_{q_{\psi}(s_{t-1})q_{\eta}(h_{t-1})}[\mathbb{D}_{\mathrm{KL}}[q_{\eta}(h_{t})\|p_{\theta}(h_{t}|[s,h,a]_{t-1},s_{t})]]\Big{)}\] \[-\mathop{\mathbb{E}}_{q_{\psi}(s_{1})}[\mathbb{D}_{\mathrm{KL}}[q_{\eta}(h_{1})\|p_{\theta}(h_{1}|s_{1})]]-\mathbb{D}_{\mathrm{KL}}[q_{\psi}(s_{1})\|p_{\theta}(s_{1})] \tag{6}\]

In contrast to the MSSM, we do not need to discard \(y_{1:t}^{1:K}\) during the training process. Instead, we substitute \(q_{\eta}(h_{t})\) with \(p_{\theta}(h_{t}|[s,h,a]_{t-1},s_{t})\) during the testing phase (or with \(p_{\theta}(h_{1}|s_{1})\) when \(t=1\)). This ELBO variant enables \(q_{\eta}(h_{t})\) to convert \(y_{1:t}^{1:K}\) into a latent distribution, such as a Gaussian or Categorical distribution.
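When the variational and prior distributions are diagonal Gaussians, the KL terms in the ELBO have the standard closed form. A minimal numpy sketch follows; it is illustrative only and does not reflect the authors' implementation details:

```python
import numpy as np

def kl_diag_gaussians(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, diag sigma_q^2) || N(mu_p, diag sigma_p^2) ), summed over dims."""
    mu_q, sigma_q = np.asarray(mu_q, float), np.asarray(sigma_q, float)
    mu_p, sigma_p = np.asarray(mu_p, float), np.asarray(sigma_p, float)
    return np.sum(np.log(sigma_p / sigma_q)
                  + (sigma_q**2 + (mu_q - mu_p)**2) / (2.0 * sigma_p**2)
                  - 0.5)

print(kl_diag_gaussians([0.0], [1.0], [0.0], [1.0]))  # identical distributions -> 0.0
print(kl_diag_gaussians([0.0], [1.0], [1.0], [1.0]))  # unit mean shift -> 0.5
```

Because these terms are exact, the gradient signal that pulls \(q_{\eta}(h_{t})\) towards \(p_{\theta}(h_{t}|[s,h,a]_{t-1},s_{t})\) needs no Monte Carlo estimate of the divergence itself.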
By minimizing \(\operatorname*{\mathbb{D}}_{\operatorname*{\mathrm{KL}}}\left[q_{\eta}(h_{t}) \|p_{\theta}(h_{t}|[s,h,a]_{t-1},s_{t})\right]\) and \(\operatorname*{\mathbb{D}}_{\operatorname*{\mathrm{KL}}}\left[q_{\eta}(h_{1}) \|p_{\theta}(h_{1}|s_{1})\right]\), \(p_{\theta}(h_{t}|[s,h,a]_{t-1},s_{t})\) and \(p_{\theta}(h_{1}|s_{1})\) are trained to generate potential \(h_{t}\) using observations \(s_{1:t}\). In a nutshell, this latent state-space model decomposition captures task-complete uncertainty at the latent level. By carefully setting \(q\) and \(p\), e.g., Gaussian, we can calculate these KL divergence terms exactly. We found that this decomposition resolves the issue of suboptimal latent state estimations and reconstructions, enabling us to create a self-model that produces credible samples. Please refer to Fig. 7 and 6 for examples. ### _Visual Perspective-Taking_ We use sampled task-complete observations \(y_{t}^{1:K}\) to generate relevant observations from different perspectives. Specifically, we train a visual perspective-taking model \(\hat{x}_{t}^{1:M}=d_{\chi}(y_{t}^{1:K},\omega_{t})\), parameterized by \(\chi\), that produces observations given an agent's pose \(\omega_{t}\) at time \(t\) and \(y_{t}^{1:K}\). Training can be performed with data gathered by the robot R during roll-outs in the environment. Given trajectories of the form \(\left\{(x_{t}^{1:M,\text{R}},y_{t}^{1:K},\omega_{t}^{\text{R}})\right\}_{t=1}^ {T}\) collected by the robot, we learn \(\chi\) by minimizing the following loss: \[\operatorname*{arg\,min}_{\chi}\;\mathcal{L}(\chi)=\sum_{t=1}^{T}\sum_{m=1}^{M }\left(\hat{x}_{t}^{m}-x_{t}^{m}\right)^{2}\] Once trained, we can use the decomposed latent state space model and \(d_{\chi}\) to obtain samples of another agent's observations. The process is straightforward: given robot observations \(x_{t}^{1:M,\text{R}}\) and the agent's pose \(\omega_{t}^{\text{H}}\), we sample a task-complete observation \(y_{t}^{1:K}\) via 1. 
Sample the robot's belief state \(s_{t}^{\text{R}}\sim q_{\psi}(s_{t}|x_{1:t}^{1:M,\text{R}})\) and \(h_{t}^{\text{R}}\sim p_{\theta}(h_{t}|[s^{\text{R}},h^{\text{R}},a]_{t-1},s_{t}^{\text{R}})\) with \(h_{1}^{\text{R}}\sim p_{\theta}(h_{1}|s_{1}^{\text{R}})\). 2. Sample the task-complete observation \(\hat{y}_{t}^{1:K}\sim p_{\theta}(y_{t}^{1:K}|s_{t}^{\text{R}},h_{t}^{\text{R}})\). 3. Given the human pose, generate the human observations \(\hat{x}_{t}^{1:M,\text{H}}=d_{\chi}(\hat{y}_{t}^{1:K},\omega_{t}^{\text{H}})\). As an aside, one may be curious why we did not directly predict \(x_{t}^{m}\) using \(h_{t}^{\text{R}}\) and \(s_{t}^{\text{R}}\). Our preliminary experiments indicated that this direct approach caused the model to disregard the poses \(\omega_{t}^{\text{H}}\), resulting in poor samples. This behavior stems from training the perspective-taking model on tuples of _robot_ pose and _robot_ ego observations \(x_{t}^{1:M,\text{R}}\). By including the robot's latent states \(h_{t}^{\text{R}}\) and \(s_{t}^{\text{R}}\) (derived from \(x_{t}^{1:M,\text{R}}\)) the model is able to make predictions directly from them, overshadowing the importance of the agent poses. As a result, substituting \(\omega_{t}^{\text{R}}\) with the human pose \(\omega_{t}^{\text{H}}\) during test time leads to the failure of the perspective-taking model in generating accurate observations from the other agent's perspective. ### _Conceptual (Belief) Perspective-Taking_ Next, we turn our attention to how the self-model can be used to estimate human belief, which is a form of conceptual perspective taking. We follow prior work [7] and use a variant of the robot's self-model. By coupling the human model and robot's self-model together as shown in Fig. 1B, we sample belief states in the human model: 1. Generate the human observations \(\hat{x}_{1:t}^{1:M,\text{H}}\) by visual perspective-taking. 2.
Sample the human's belief state \(s_{t}^{\text{H}}\sim q_{\psi}(s_{t}|(\hat{x}_{1:t}^{1:M,\text{H}}))\) and \(h_{t}^{\text{H}}\sim p_{\theta}(h_{t}|[s^{\text{H}},h^{\text{H}},a]_{t-1},s_{t} ^{\text{H}})\) with \(h_{1}^{\text{H}}\sim p_{\theta}(h_{1}|s_{1}^{\text{H}})\). Unlike existing approaches, LEAPT can capture a (sample-based) distribution over possible observations and beliefs. This enables the robot to perform epistemic reasoning, i.e., the robot can infer that the other agent is aware of some information even though the robot doesn't know exactly what that information is. We will see examples of this ability in the experiments. ## IV Simulated Human Experiments In the following section, we describe experiments with simulated humans on a False-Belief test, a Fetch-Tool task, and a Table-Assembly task (Fig. 3.). Using simulated humans allowed us to compare the learned models against ground truth observations and beliefs. Experiments with real human subjects are described in the next section. ### _Experimental Setup_ **Domains.** Fig. 3. summarizes the three domains used in our experiments. All domains involve two agents: a robot and a human. The observations of the environment (both ego and task-complete) are images. We assume the agents can communicate via specific speech symbols in the Fetch-Tool and Table-Assembly tasks (these are simply modeled as observations as in [7]). We assume the robot can always observe the human's pose. **Simulated Humans.** We assume that the human is a Bayes-rational agent that updates their beliefs using ego-centric observations. For example, in the False-Belief test, the simulated human will believe that the drill remains in the same position when they walk away. Specifically, the simulated human's belief is a Bernoulli distribution representing where the drill is (left or right box). In the Fetch-Tool task, the simulated human belief is a Categorical distribution over which object they see on the screen (six classes). 
In the Table-Assembly task, we use a Gaussian distribution over the relative distance between the peg and hole. **Compared Methods.** In total, we compare four different perspective-taking methods: * Baseline-S: This baseline uses the standard MSSM and is trained with the ELBO in Eq. 1. We use a bi-directional GRU as the main component of the inference network. * Baseline-D: This baseline is similar to Baseline-S except that the latent space is deterministic. * LEAPT-GRU: Our decomposed latent state-space model with a GRU-based inference network. The model is trained with the ELBO in Eq. 6. * LEAPT-Transformer: This model is the same as the LEAPT-GRU, except we use a Transformer-based [22] inference network. All the models are trained on the same data and epochs within each domain (False-Belief: 30 trajectories with length 5, 100 epochs; Fetch-Tool: 100 trajectories with length 5, 400 epochs; Table-Assembly: 300 trajectories with length 10, 700 epochs). The trajectories used for training are generated by taking random actions. At every time step \(t\), the inference time is less than 0.05s for all models on an NVIDIA GeForce RTX 2080 Ti. Source code is available at [https://github.com/clear-nus/perspective-taking-dmssm](https://github.com/clear-nus/perspective-taking-dmssm). **Evaluation.** We first evaluate whether the robot maintains a correct belief about the world. Then, we evaluate the robot's ability to perform visual and belief perspective-taking, i.e., infer what the other agent sees and believes. As the direct evaluation of latent states is challenging, we utilize pre-trained classifiers or regression models on the samples \(x\) and \(y\) that are decoded from the latent states. Denote the predicted labels as \(c(x)\) or \(c(y)\). The exact predicted quantities were task-dependent: i) In the False-Belief test, we predict where the drill is; ii) In the Fetch-Tool task, we predict the object present on the screen (Fig.
3.B.); iii) In the Table-Assembly task, we predict the relative position between human and table and the relative distance between peg and hole (Fig. 3.C.). We then compare the distribution of these predictions to a specified ground-truth distribution (assuming a Bayes-rational agent) via KL divergence (lower is better). _Evaluation of robot's belief:_ We first decode \(\hat{y}^{\text{R}}\) from a set of the robot's latent states. Then, we apply classifiers/regression models on \(\hat{y}^{\text{R}}\) to obtain predictions \(c(\hat{y}^{\text{R}}_{t})\). The quality of the robot's belief is measured through the KL divergence \(\mathbb{D}_{\mathrm{KL}}\left[\hat{p}(c(\hat{y}^{\text{R}}_{t})|x^{\text{R}}_{1:t})\|p(c(y^{\text{R}}_{t})|x^{\text{R}}_{1:t})\right]\) between the empirical distribution, \(\hat{p}\), and the ground truth distribution \(p\) (representing what the robot should know about the world given our experiment parameters). _Evaluation of visual perspective-taking:_ We generate human observations \(\hat{x}^{\text{H}}_{t}\) via our visual perspective-taking method (Sec. III-B). Then, we predict labels \(c(\hat{x}^{\text{H}}_{t})\) and compute the KL divergence \(\mathbb{D}_{\mathrm{KL}}\left[\hat{p}(c(\hat{x}^{\text{H}}_{t})|x^{\text{R}}_{1:t})\|p(c(x^{\text{H}}_{t})|x^{\text{R}}_{1:t})\right]\). The distribution \(p\) varies according to the amount of information available, e.g., in the Fetch-Tool task, if the robot has yet to communicate with the human, \(p(c(x^{\text{H}}_{t})|x^{\text{R}}_{1:t})\) is uniform over all possible objects. After receiving a message that the object is brown, the ground truth distribution shrinks to equal probabilities across brown objects.

Fig. 3: Experimental domains. (**A**) A False-Belief test, where the robot _cannot_ see what is inside the box but the human can. The human first observes the position of the drill (in the left or right box) and then walks away. Then, unobserved by the human, the robot will either switch the box or not. The robot's task is to infer whether the human would believe that the objects remain in their original position. (**B**) In the Fetch-Tool task, a human is sitting in front of his workstation and observes a randomly chosen object on his computer screen. There are six different types of objects, which are characterized by their color and type. The robot's task is to reason about human uncertainty and what he sees without observing the object, using only communication information from the human. However, communication of the object has three levels of ambiguity: type, color, and both the type and color of the object. Once the robot figures out what the human sees, it can subsequently fetch the appropriate tool to assemble the object. (**C**) In the Table-Assembly task, the robot's task is to fit the peg of a table piece into a hole with guidance from the human. However, the hole may be occluded depending on the agent's position. As the robot is holding on to the table, only the human is able to observe the relative distance between the peg and hole. The robot's task is to reason whether the human can see the hole so that the robot can move the table to match it. (**D**) Our VR setup in the human-subject experiments used an Oculus Quest headset.

_Evaluation of belief perspective-taking:_ We begin by sampling from the human belief distribution (via belief perspective-taking in Sec. III-C). Then, we generate labels \(c(y_{t}^{\text{H}})\) and compute the Conditional KL (Cond KL) measure via \(\mathbb{E}_{x_{1:t}^{\text{H}}}\left[\mathbb{D}_{\text{KL}}\left[\hat{p}(c(y_{t}^{\text{H}})|x_{1:t}^{\text{H}})\|p(c(y_{t}^{\text{H}})|x_{1:t}^{\text{H}})\right]\right]\), where \(x_{1:t}^{\text{H}}\) is sampled using visual perspective-taking. ### _Results and Analysis_ Due to space considerations, we highlight key results from our experiments.
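As a concrete reference for the KL measures used in this evaluation: each one compares an empirical categorical distribution, built from classifier labels on sampled observations, against a ground-truth distribution. A minimal illustrative sketch (the data and function names here are ours, not from the paper):

```python
import numpy as np

def empirical_dist(labels, n_classes):
    # Empirical distribution over predicted labels c(y) from sampled observations
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts / counts.sum()

def kl_divergence(p_hat, p, eps=1e-12):
    # D_KL[p_hat || p] for categorical distributions (natural log)
    p_hat = np.clip(p_hat, eps, None)
    p = np.clip(p, eps, None)
    return float(np.sum(p_hat * np.log(p_hat / p)))

# Toy Fetch-Tool-style example: 6 object classes, hypothetical classifier
# labels obtained on K = 10 sampled task-complete observations
labels = np.array([0, 0, 1, 2, 3, 4, 5, 1, 2, 3])
p_hat = empirical_dist(labels, n_classes=6)
p_truth = np.full(6, 1 / 6)   # uniform ground truth before any communication
score = kl_divergence(p_hat, p_truth)
```

A lower `score` indicates that the sampled beliefs match the Bayes-rational ground truth more closely; as communication arrives, `p_truth` would shrink to the consistent classes.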
Additional plots are available in our online appendix: [https://github.com/clear-nus/perspective-taking-dmssm](https://github.com/clear-nus/perspective-taking-dmssm). **Overall, LEAPT-Transformer maintains more accurate beliefs about the world compared to the baselines when given only ego observations.** We see in the first row of Fig. 4 that the KL divergence tends to be smallest for the LEAPT variants in the False-Belief and Table-Assembly tasks. The sampled images from the baselines tended to be blurry, leading to poor identification of task-relevant features. For Fetch-Tool, the LEAPT-Transformer achieved better scores in the initial time-steps, but the baselines performed well when all needed information was available (at time-step 3). The LEAPT-Transformer state distribution was overly broad at the last time-step, but it generated the best samples overall over the course of the interaction. **The LEAPT-Transformer achieves better visual perspective-taking given the robot's beliefs.** The second row of Fig. 4 shows better scores by the LEAPT-Transformer, except (again) for the last time step of the Fetch-Tool task. In general, the quality of the generated samples (both diversity and visual clarity) was positively correlated with better robot beliefs1. The baselines tended to produce poor-quality images, with sharp images only appearing when all task information was provided (see Fig. 7 for examples). Footnote 1: For the False-Belief task, note the human doesn't return to the boxes so the generated observations are of a different area. **The LEAPT models better infer the other agent's beliefs compared to the baselines.** This can be seen in the last row of Fig. 4, where the KL scores are overall better than the baselines. For the False-Belief test, a more interpretable result is shown in Fig. 5, where we ask the robot to classify where the human thinks the object to be.
When the boxes are switched, the baselines fail to capture that the human holds a false belief. For the Fetch-Tool task, a 2D Principal Components Analysis (PCA) plot (Fig. 6) suggests that the baseline models do not learn an appropriate latent space. They incorrectly map different partial observations to the same cluster of latent points. On the other hand, LEAPT-Transformer correctly captures the distribution of latent states: with more communication coming from the human, the states converge. Fig. 7 illustrates the ability of the LEAPT-Transformer to predict human belief given partial information. At the start of the interaction, without any information, the LEAPT-Transformer predicts a broad distribution over possible human observations and beliefs over all objects. After receiving communication that the object's color is brown, the uncertainty over human observations and human belief shrinks to the brown cabinet and table. Thereafter, it receives communication that the object type is a table, allowing it to determine the object in question, and the distributions shrink to what the human actually sees. In contrast, the Baseline-S model consistently outputs a single image (Fig. 7) which is often blurry, failing to correctly model uncertainty under partial observations (similar results were obtained for Baseline-D). Fig. 4: Evaluation of (visual & conceptual) perspective-taking in three tasks using KL divergence as a measure (lower is better). In most cases, LEAPT-Transformer predicts the (simulated) human's observation and belief more accurately compared to the other methods. Fig. 5: Accuracy of human belief prediction on the False-Belief Test. (A) boxes not switched by robot (B) boxes switched by robot. ## V Human-Subject Experiments In this section, we describe preliminary human-subject experiments designed to test the viability of LEAPT when interacting with real humans.
A total of 12 participants (mean age = 22.2, 3 females, 9 males) were recruited from the university community. The participants were asked to perform the Fetch-Tool and the Table-Assembly tasks using a VR setup (Fig. 3.D). In the Fetch-Tool task, the participants sat in front of the (simulated) workstation, and the robot was situated on the other side. The robot queried about the type or color of the item shown on the screen, and the participants would respond accordingly. In the Table-Assembly task, the participants stood in front of the table, where they could see the peg and hole. The robot was on the other side, gripping the table, which visually occluded the peg and hole. The robot queried the participants on the relative distance of the hole from the peg and moved the table accordingly until the participants confirmed that the peg was aligned with the hole. **Evaluation.** Our evaluations are similar to the simulated human experiments described in the previous section. From the robot observations and human communication collected during the human-subject experiments, we use our models to derive the estimated empirical human belief. We then evaluate it against the ground truth distribution set based on the information provided in human communication. For instance, in the Fetch-Tool task, when a human communicated that they saw a table, the ground truth distribution was set as uniform over all kinds of tables. We measure the KL divergence \(\mathbb{D}_{\mathrm{KL}}\left[\hat{p}(c(\hat{y}_{t}^{\mathrm{H}})|x_{1:t}^{\mathrm{R}})\|p(c(\hat{y}_{t}^{\mathrm{H}})|x_{1:t}^{\mathrm{R}})\right]\), where \(c\) is a classifier/regression model. Paired t-tests indicated significant differences between LEAPT-Transformer and both baselines (LEAPT-Transformer vs. Baseline-D: \(t(11)=140.874,p<0.001\) and LEAPT-Transformer vs. Baseline-S: \(t(11)=52.906,p<0.001\)). ## VI Conclusions This paper introduces LEAPT, a framework that empowers robots to conduct visual and conceptual perspective-taking in partially-observable environments.
The key innovation is a specially-designed latent multi-modal state-space model, which enables more consistent beliefs to be maintained when observations are limited. Our experiments on three tasks show that LEAPT leads to better performance compared to deterministic approaches and standard latent state-space models when estimating the visual perceptions and beliefs of other agents. Moving forward, we plan to apply LEAPT to other robot tasks and in real-world environments. In addition, LEAPT currently exhibits several limitations that would make for interesting future work. First, LEAPT requires human poses during testing. To address this, an alternative approach could involve an additional distribution over the poses given partial observations of the human and a learnt dynamics model. Second, the model needs to be trained with sufficient views of the world to be effective. Inadequate training data may impede the model's ability to handle complex environments. Third, the human model in LEAPT is based on the robot's self-model and thus falls short in capturing certain aspects of the human, such as trust [23, 24, 25] and emotions [26]. Lastly, communication in LEAPT is based on simple, pre-defined sentence templates, and integrating large language models (e.g., [27, 28]) could enhance LEAPT's communication ability.
2307.01572
Efficient computation of optical excitations in two-dimensional materials with the Xatu code
Here we describe an efficient numerical implementation of the Bethe-Salpeter equation to obtain the excitonic spectrum of semiconductors. This is done on the electronic structure calculated either at the simplest tight-binding level or through density functional theory calculations based on local orbitals. We use a simplified model for the electron-electron interactions which considers atomic orbitals as point-like orbitals and a phenomenological screening. The optical conductivity can then be optionally computed within the Kubo formalism. Our results for paradigmatic two-dimensional materials such as hBN and MoS2, when compared with those of more sophisticated first-principles methods, are excellent and envision a practical use of our implementation beyond the computational limitations of such methods.
Alejandro José Uría-Álvarez, Juan José Esteve-Paredes, Manuel Antonio García-BlÑzquez, Juan José Palacios
2023-07-04T08:56:01Z
http://arxiv.org/abs/2307.01572v1
# Efficient computation of optical excitations in two-dimensional materials with the Xatu code ###### Abstract Here we describe an efficient numerical implementation of the Bethe-Salpeter equation to obtain the excitonic spectrum of semiconductors. This is done on the electronic structure calculated either at the simplest tight-binding level or through density functional theory calculations based on local orbitals. We use a simplified model for the electron-electron interactions which considers atomic orbitals as point-like orbitals and a phenomenological screening. The optical conductivity can then be optionally computed within the Kubo formalism. Our results for paradigmatic two-dimensional materials such as hBN and MoS\({}_{2}\), when compared with those of more sophisticated first-principles methods, are excellent and envision a practical use of our implementation beyond the computational limitations of such methods. keywords: Exciton, Bethe-Salpeter Equation, Optics, Many-Body Physics, Localized Orbitals, Tight-Binding + Footnote †: journal: Computer Physics Communications ## 1 Introduction Bound electron-hole pairs, namely excitons, are known to be largely responsible for the most prominent features of the optical response of semiconductors near the band edge[1], particularly for low dimensional materials[2; 3]. This includes, of course, absorption and photoluminescence, but also the photovoltaic response, where substantial efforts, both experimentally and theoretically, are being made for energy-harvesting real-life applications [4]. On the theory side, multiple ways to describe excitons have been developed varying in accuracy and sophistication [5], from an effective two-body description [6; 7] and configuration interaction [8; 9; 10] to many-body perturbation theory (MBPT) [11; 12; 13] or time-dependent techniques [14; 15; 16; 17]. MBPT itself can be purely electronic or include electron-phonon interactions [18; 19; 20]. 
The current standard for exciton calculations is GW-BSE: the GW approximation [21] is used to correct the density functional theory (DFT) electronic band structure, specifically the gap in insulators and semiconductors [22]. The resulting band structure is then used to compute the exciton spectrum with the Bethe-Salpeter equation (BSE). The accuracy of the MBPT approach has prompted the development of several software applications for the calculation of first-principles many-body excitations [23; 24; 25; 26]. With first-principles calculations, one can seek quantitative agreement with experiments, at the cost of computational time. Alternatively, one can seek a less costly, qualitative comparison through an effective description of the interactions. Our code Xatu is intended for such a purpose. While the base electronic structure can be computed at any degree of fidelity, from the simplest tight-binding (TB) model to the more sophisticated GW approximation, electron-hole interactions are taken into consideration through a simplified model where orbitals are considered to be point-like, along with phenomenological models for screening. In principle, any band structure can be used as the starting point as long as it comes from a localized-orbitals basis code; plane-wave-based calculations are out of scope since they go against the nature of the approximation used for the interaction. Ultimately, these two approximations result in a considerable reduction of the computational cost. Beyond the intrinsic speed-up coming from the calculation scheme itself, Xatu has been written mainly in C++, and is designed to be as efficient and general as possible while keeping its usability relatively simple.
It targets a wide range of systems in the landscape of computational tools for optical excitations, from those that simply are out of the range of first-principles ones because of the complexity of the unit cell, to those that require a quick iteration, while obtaining qualitative and sometimes even quantitative agreement with experiments. ## 2 Exciton theory ### The Bethe-Salpeter equation Here we review the basic aspects of our theoretical approach, highlighting the simplifications and analogies with respect to the standardized GW-BSE method (see e.g. [27]). From a quantum chemistry perspective, for the description of excitons we consider the exact, non-relativistic electronic Hamiltonian of the solid of interest: \[H=H_{0}+V=\sum_{i,j}t_{ij}c_{i}^{\dagger}c_{j}+\frac{1}{2}\sum_{i,j,k,l}V_{ijkl}c_{i}^{\dagger}c_{j}^{\dagger}c_{l}c_{k}, \tag{1}\] where the indices include orbital and position degrees of freedom; we restrict ourselves to bases of localized orbitals. \(H_{0}\) describes the kinetic and ion-electron interaction terms and \(V\) is the electrostatic interaction between electrons. Diagonalization of \(H_{0}\) yields a Bloch eigenbasis \(|n\mathbf{k}\rangle\) with energies \(\varepsilon_{n\mathbf{k}}\), which here will correspond to insulating or semi-conducting materials. The interaction term in (1) contains \[V_{ijkl}=\langle i,j|V|k,l\rangle=\int d\mathbf{r}d\mathbf{r}^{\prime}\varphi_{i}^{*}(\mathbf{r})\varphi_{j}^{*}(\mathbf{r}^{\prime})V(\mathbf{r},\mathbf{r}^{\prime})\varphi_{k}(\mathbf{r})\varphi_{l}(\mathbf{r}^{\prime}) \tag{2}\] where \(V(\mathbf{r},\mathbf{r}^{\prime})\) is the two-body interaction. This can be the bare Coulomb interaction or some alternative interaction to take into account dimensionality or screening.
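The structure of the second-quantized Hamiltonian (1) can be made concrete on a tiny Fock space. The sketch below (our own toy parameters, two spinless orbitals represented via a Jordan-Wigner construction) builds \(H\) from hopping and density-density interaction terms and lets one check Hermiticity and particle-number conservation explicitly:

```python
import numpy as np

# Two spinless orbitals; Fock space dimension 2^2 = 4
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation operator

c1 = np.kron(a, I2)                      # Jordan-Wigner fermionic operators
c2 = np.kron(Z, a)
n1 = c1.conj().T @ c1                    # number operators
n2 = c2.conj().T @ c2

t12, V12 = 0.7, 1.3                      # hypothetical hopping and interaction
H = t12 * (c1.conj().T @ c2 + c2.conj().T @ c1) + V12 * (n1 @ n2)
Ntot = n1 + n2                           # total particle number
```

Both the hopping and the interaction conserve the total particle number, which is why the Hamiltonian block-diagonalizes over charge sectors, as exploited in the text below.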
Since the non-interacting Hamiltonian \(H_{0}\) describes insulating materials, it is usually a good approximation to take the ground state for the interacting Hamiltonian \(H\) as the Fermi sea: \[|GS\rangle=\prod_{n,\mathbf{k}}^{\varepsilon_{n\mathbf{k}}\leq\varepsilon_{ F}}c_{n\mathbf{k}}^{\dagger}\left|0\right\rangle \tag{3}\] where \(\left|0\right\rangle\) denotes the state with zero electrons, and \(\varepsilon_{F}\) is the Fermi energy. Then, an electron-hole pair of center-of-mass momentum \(\mathbf{Q}\) between the conduction band \(c\) and the valence band \(v\), and located at momentum \(\mathbf{k}\) is defined as: \[\left|v,c,\mathbf{k},\mathbf{Q}\right\rangle=c_{c\mathbf{k}+\mathbf{Q}}^{ \dagger}c_{v\mathbf{k}}\left|GS\right\rangle \tag{4}\] meaning that one electron of momentum \(\mathbf{k}\) from the valence bands is promoted to the conduction bands with momentum \(\mathbf{k}+\mathbf{Q}\). Note that even though we denote these states as electron-hole pairs, we are not actually using hole quasiparticle operators, but simply refer to the hole as the absence of an electron in the Fermi sea. We will stick to the electron picture throughout this work, unless specified otherwise. These electron-hole pairs will serve as the basis for the exciton states, \(\left|X\right\rangle_{\mathbf{Q}}\): \[\left|X\right\rangle_{\mathbf{Q}} =\sum_{v,c,\mathbf{k}}A_{vc}^{\mathbf{Q}}(\mathbf{k})\left|v,c, \mathbf{k},\mathbf{Q}\right\rangle\] \[=\sum_{v,c,\mathbf{k}}A_{vc}^{\mathbf{Q}}(\mathbf{k})c_{c\mathbf{ k}+\mathbf{Q}}^{\dagger}c_{v\mathbf{k}}\left|GS\right\rangle \tag{5}\] Therefore, the exciton is expressed as a linear combination of electron-hole pairs over different bands and momenta. Note that \(\mathbf{Q}\) serves as a good quantum number for the exciton states, since the interaction is momentum-conserving. The interaction only mixes electron-hole pairs with the same net momentum, which is \(\mathbf{Q}\). 
This can be seen by computing explicitly a general interaction matrix element, \(V_{ijkl}\). Next, we determine the \(A_{vc}^{\mathbf{Q}}(\mathbf{k})\) coefficients that minimize the expectation value \(\left\langle X|H|X\right\rangle_{\mathbf{Q}}\): \[\frac{\delta E[X]}{\delta X}=\frac{\delta}{\delta X}\left[\frac{\left\langle X |H|X\right\rangle_{\mathbf{Q}}}{\left\langle X|X\right\rangle_{\mathbf{Q}}} \right]=0 \tag{6}\] Performing this derivative explicitly is equivalent to the problem of diagonalizing the Hamiltonian represented in the basis of electron-hole pairs: \[\sum_{v^{\prime},c^{\prime},\mathbf{k}^{\prime}}H_{vc,v^{\prime}c^{\prime}}( \mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})A_{v^{\prime}c^{\prime}}^{\mathbf{Q }}(\mathbf{k}^{\prime})=E_{X}A_{vc}^{\mathbf{Q}}(\mathbf{k}) \tag{7}\] where \(H_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\left\langle v,c,\mathbf{k},\mathbf{Q}|H|v^{\prime},c^{\prime},\mathbf{k}^{\prime},\mathbf{ Q}\right\rangle\). The expansion in electron-hole pairs of the exciton is actually an ansatz: we obtain exact eigenstates of the Hamiltonian restricted to a partition of the Hilbert space, \(PHP\), where \(P\) is a projector over the single electron-hole pairs. 
\[PHP=\sum_{\begin{subarray}{c}v,c,\mathbf{k}\\ v^{\prime},c^{\prime},\mathbf{k}^{\prime}\end{subarray}}H_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})\left|v,c,\mathbf{k},\mathbf{Q}\right\rangle\left\langle v^{\prime},c^{\prime},\mathbf{k}^{\prime},\mathbf{Q}\right| \tag{8}\] In fact, if we only consider charge-conserving excitations, we could represent the Hamiltonian in the following way: \[H=\bigoplus_{n=0}^{N_{e}}P_{n}HP_{n}+C \tag{9}\] where \[P_{n}=\sum_{\begin{subarray}{c}\{c_{i}\},\{v_{i}\}\\ \{c^{\prime}_{i}\},\{v^{\prime}_{i}\}\end{subarray}}\left|\{c_{i}\},\{v_{i}\}\right\rangle\left\langle\{c^{\prime}_{i}\},\{v^{\prime}_{i}\}\right|\] and \[\left|\{c_{i}\},\{v_{i}\}\right\rangle=\prod_{i=1}^{n}c^{\dagger}_{c_{i}}\prod_{i=1}^{n}c_{v_{i}}\left|GS\right\rangle \tag{10}\] Here \(N_{e}\) is the total number of electrons, \(C\) the coupling between the different excitation sectors, and \(P_{n}\) is the projector over the \(n\)-th electron-hole pair sector. If instead of using the Bloch states from \(H_{0}\) we formulate the problem in terms of the Hartree-Fock (HF) solution to (1), then the coupling between the Fermi sea and the single-pair sector, \(P_{0}HP_{1}\), is exactly zero according to Brillouin's theorem [28, 29]. As we will mention later, we will assume that this always holds even when the ground state has not been calculated in the HF approximation. The same, however, is not true for \(P_{0}HP_{2}\) or \(P_{1}HP_{2}\), i.e., the interaction couples the ground state and the one electron-hole pair sector with the two electron-hole pair sector. Thus, the proposed ground state and the exciton states are never exact but approximate eigenstates. Given that the material is insulating, we expect the coupling to be weak due to the energy differences, which justifies the ansatz. Keeping with the exact diagonalization approach, one could try to diagonalize the Hamiltonian including more excitation sectors.
Although possible in principle, it quickly becomes infeasible since the Hilbert space in many-body systems grows exponentially and, in this case, the eigenstates would involve a mixture of excitations, losing the interpretation as a bound electron-hole pair. Going back to (7), we compute next the Hamiltonian matrix elements in the \(H_{0}\) basis, which are given in terms of the single-particle energies and the interaction matrix elements: \[\begin{split}& H_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\\ &\delta_{\mathbf{k}\mathbf{k}^{\prime}}\delta_{vv^{\prime}}[\varepsilon_{c\mathbf{k}+\mathbf{Q}}\delta_{cc^{\prime}}+\Sigma_{cc^{\prime}}(\mathbf{k}+\mathbf{Q},\mathbf{k}^{\prime}+\mathbf{Q})]\\ &-\delta_{\mathbf{k}\mathbf{k}^{\prime}}\delta_{cc^{\prime}}\big{[}\varepsilon_{v\mathbf{k}}\delta_{vv^{\prime}}+\Sigma_{v^{\prime}v}(\mathbf{k}^{\prime},\mathbf{k})\big{]}-(D-X)_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})\end{split} \tag{11}\] where \[\begin{split}& D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=V_{c\mathbf{k}+\mathbf{Q},v^{\prime}\mathbf{k}^{\prime},c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q},v\mathbf{k}}\\ & X_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=V_{c\mathbf{k}+\mathbf{Q},v^{\prime}\mathbf{k}^{\prime},v\mathbf{k},c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q}}\end{split} \tag{12}\] and \[\Sigma_{nm}(\mathbf{k},\mathbf{k}^{\prime})=\sum_{j,\mathbf{k}^{\prime\prime}}^{\mathrm{occ}}\left(V_{n\mathbf{k},j\mathbf{k}^{\prime\prime},m\mathbf{k}^{\prime},j\mathbf{k}^{\prime\prime}}-V_{n\mathbf{k},j\mathbf{k}^{\prime\prime},j\mathbf{k}^{\prime\prime},m\mathbf{k}^{\prime}}\right) \tag{13}\] \(D\), \(X\) correspond to the direct and exchange interactions between the electron-hole pair, whereas \(\Sigma\) is the self-energy coming from the interaction of the electron/hole with the Fermi sea. At this point we could obtain the exciton spectrum by diagonalizing (11).
Instead, it is more convenient to solve first for the ground state of (1) at the mean-field level, i.e. in the HF approximation [30]. If we now write (11) in the HF band basis, we obtain: \[(\varepsilon_{c\mathbf{k}+\mathbf{Q}}-\varepsilon_{v\mathbf{k}})A^{\mathbf{Q}}_{vc}(\mathbf{k})+\sum_{v^{\prime},c^{\prime},\mathbf{k}^{\prime}}K_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})A^{\mathbf{Q}}_{v^{\prime}c^{\prime}}(\mathbf{k}^{\prime})=E_{X}A^{\mathbf{Q}}_{vc}(\mathbf{k}) \tag{14}\] where \(\varepsilon_{n\mathbf{k}}\) are now the HF quasiparticle energies, and \(K=-(D-X)\) is the interaction kernel. Thus, the self-energies are now incorporated into the quasiparticle energies instead. Note that the Fermi sea energy has been set to zero, so that exciton energies can be compared directly with the gap of the system. This is the standard form of the Bethe-Salpeter equation for excitons using the Tamm-Dancoff approximation (TDA) [31, 32], and it defines the starting point for any exciton calculation. The main difference with MBPT comes from the interaction kernel, which there involves a dynamically screened interaction, usually in the random-phase approximation [33, 34, 35]. The determination of the dielectric constant is a computationally intensive task [23], which we avoid by setting instead an effective static screening. So far we have seen that it is more convenient to pose the exciton problem in terms of the HF basis, as it simplifies the problem and allows us to decouple excitation sectors. In practice, we do not address the problem of determining the mean-field solution to (1). Instead, we start directly from equation (14) assuming that the initial band structure, which is already known, verifies it. Namely, for tight-binding band structures we drop the self-energy terms assuming that we are using a HF solution. Alternatively, if the band structure comes from DFT or MBPT (e.g.
GW approximation), then we also remove the self-energy terms since the quasiparticle energies already include self-energy corrections (although they do not cancel exactly with those from (11)). Thus, from now on we regard the starting band structure as the non-interacting Hamiltonian \(H_{0}\). ### Interaction matrix elements With Eq. (14) established, a practical expression for the interaction matrix elements (2) remains to be obtained. The single-particle states, using a basis of localized orbitals, can be written as: \[\varphi_{n\mathbf{k}}(\mathbf{r})=\frac{1}{\sqrt{N}}\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\sum_{i,\alpha}C^{n\mathbf{k}}_{i\alpha}\phi_{\alpha}(\mathbf{r}-\mathbf{R}-\mathbf{t}_{i}) \tag{15}\] where \(\{\phi_{\alpha}\}\) denote the orbitals located at the atom \(i\) of the motif and \(N\) is the number of unit cells of the system. As mentioned before, this wavefunction may correspond to that of a tight-binding model (meaning that the spatial nature of the orbitals is ignored and they are typically considered orthonormal), or a DFT calculation with a local orbital basis set, which are in general non-orthogonal. While the origin of the single-particle states can be different, for the actual calculation of the interactions we will treat them on the same footing, approximating them as point-like orthonormal orbitals. Depending on how we treat the interaction, different working expressions for the matrix elements can be obtained. For instance, we address first the direct term, which is given by: \[D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\int d\mathbf{r}d\mathbf{r}^{\prime}\varphi^{*}_{c\mathbf{k}+\mathbf{Q}}(\mathbf{r})\varphi^{*}_{v^{\prime}\mathbf{k}^{\prime}}(\mathbf{r}^{\prime})V(\mathbf{r},\mathbf{r}^{\prime})\varphi_{c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q}}(\mathbf{r})\varphi_{v\mathbf{k}}(\mathbf{r}^{\prime}) \tag{16}\] We substitute the single-particle Bloch states (15) in Eq. (16).
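To make Eq. (15) concrete: for a tight-binding input, the coefficients \(C^{n\mathbf{k}}_{i\alpha}\) are simply the eigenvector components of the Bloch Hamiltonian at each \(\mathbf{k}\). A minimal sketch for a hypothetical two-orbital 1D chain (parameters are ours for illustration, not Xatu's):

```python
import numpy as np

def bloch_hamiltonian(k, delta=1.0, t=0.5):
    # Two orbitals per cell: staggered on-site energies +/-delta and
    # nearest-neighbour hopping t (1D lattice, lattice constant a = 1)
    h01 = t * (1.0 + np.exp(-1j * k))
    return np.array([[delta, h01],
                     [np.conj(h01), -delta]])

nk = 8
ks = 2 * np.pi * np.arange(nk) / nk           # uniform BZ mesh
bands = np.empty((nk, 2))
coeffs = np.empty((nk, 2, 2), dtype=complex)  # coeffs[a][:, n] = C^{nk}_alpha
for a, k in enumerate(ks):
    bands[a], coeffs[a] = np.linalg.eigh(bloch_hamiltonian(k))
```

The columns of each eigenvector matrix play the role of the \(C^{n\mathbf{k}}_{i\alpha}\) entering the interaction matrix elements below; for this model the bands are \(\pm\sqrt{\delta^{2}+|h_{01}(k)|^{2}}\), with the direct gap \(2\delta\) at \(k=\pi\).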
Expanding each term, we end up having to evaluate the same four-body integral, but now between the orbitals that compose each state: \[\int d\mathbf{r}d\mathbf{r}^{\prime}\phi^{*}_{\alpha}(\mathbf{r})\phi^{*}_{\beta}(\mathbf{r}^{\prime})V(\mathbf{r},\mathbf{r}^{\prime})\phi_{\gamma}(\mathbf{r})\phi_{\delta}(\mathbf{r}^{\prime}) \tag{17}\] At this point, there are two ways to compute this four-body integral: we can evaluate directly the interaction in real space, or, instead, use its Fourier series to work in reciprocal space. In both cases we consider point-like orbitals centered at \(\mathbf{R}+\mathbf{t}_{i}\): \[\phi_{\alpha}(\mathbf{r}-\mathbf{R}-\mathbf{t}_{i})\phi_{\beta}(\mathbf{r}-\mathbf{R}^{\prime}-\mathbf{t}_{j})\approx\delta_{\alpha\beta}\delta(\mathbf{r}-\mathbf{R}-\mathbf{t}_{i})\delta_{ij}\delta_{\mathbf{R},\mathbf{R}^{\prime}}. \tag{18}\] Integrating in real space, after simplifying the resulting deltas, we obtain the following expression for the direct term \(D\): \[D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\frac{1}{N}\sum_{ij}\sum_{\alpha\beta}(C^{c\mathbf{k}+\mathbf{Q}}_{i\alpha})^{*}(C^{v^{\prime}\mathbf{k}^{\prime}}_{j\beta})^{*}C^{c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q}}_{i\alpha}C^{v\mathbf{k}}_{j\beta}V_{ij}(\mathbf{k}^{\prime}-\mathbf{k}) \tag{19}\] where \[V_{ij}(\mathbf{k}^{\prime}-\mathbf{k})=\sum_{\mathbf{R}}e^{i(\mathbf{k}^{\prime}-\mathbf{k})\cdot\mathbf{R}}V(\mathbf{R}-(\mathbf{t}_{j}-\mathbf{t}_{i})). \tag{20}\] Here \(V_{ij}(\mathbf{k}^{\prime}-\mathbf{k})\) can be regarded as a lattice Fourier transform centered at \(\mathbf{t}_{j}-\mathbf{t}_{i}\). Since it is defined as a sum over lattice vectors and not an integral, one cannot use the shift property of the Fourier transform. Attempting to do so would break the spatial symmetries of the Hamiltonian.
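As a small numerical illustration of Eq. (20), the sketch below (Python/numpy, with a toy exponentially screened potential rather than the Keldysh interaction used by the code; all names and sizes are illustrative) evaluates the truncated lattice sum on a square lattice and verifies that \(V_{ij}(\mathbf{q})\) is indeed periodic under \(\mathbf{q}\to\mathbf{q}+\mathbf{G}\):

```python
import numpy as np

# Minimal sketch of the lattice Fourier transform of eq. (20) on a square
# lattice, with a toy screened potential (an assumption; the code uses Keldysh).
def lattice_fourier(q, t_ij, cells, V):
    """V_ij(q) = sum_R e^{i q.R} V(|R - t_ij|), truncated to the given cells."""
    phases = np.exp(1j * cells @ q)
    return np.sum(phases * V(np.linalg.norm(cells - t_ij, axis=1)))

V = lambda r: np.exp(-r) / np.maximum(r, 1.0)       # toy short-ranged interaction

n = np.arange(-10, 11)
cells = np.array([(i, j) for i in n for j in n], dtype=float)  # lattice R, a = 1
t_ij = np.array([0.3, 0.1])                          # motif offset t_j - t_i

q = np.array([0.7, -0.2])
G = 2 * np.pi * np.array([1.0, 0.0])                 # reciprocal lattice vector
assert np.allclose(lattice_fourier(q, t_ij, cells, V),
                   lattice_fourier(q + G, t_ij, cells, V))
```

Since the phases involve only the lattice vectors \(\mathbf{R}\) and not \(\mathbf{t}_{j}-\mathbf{t}_{i}\), the periodicity holds exactly; attaching the motif offset to the phase (the naive shift property) would break it.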
Then, the direct term can be interpreted as the weighted average of the Fourier transform of the interaction between the electron and the hole, over all positions and orbitals. The exchange term is computed analogously: \[X_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\frac{1}{N}\sum_{ij}\sum_{\alpha\beta}(C^{c\mathbf{k}+\mathbf{Q}}_{i\alpha})^{*}(C^{v^{\prime}\mathbf{k}^{\prime}}_{j\beta})^{*}C^{v\mathbf{k}}_{i\alpha}C^{c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q}}_{j\beta}V_{ij}(\mathbf{Q}) \tag{21}\] If there is only one atom in the motif, expressions (19), (21) simplify even further, since the interaction decouples from the tight-binding coefficients, yielding: \[D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\frac{1}{N}V(\mathbf{k}^{\prime}-\mathbf{k})(U^{\dagger}_{\mathbf{k}+\mathbf{Q}}U_{\mathbf{k}^{\prime}+\mathbf{Q}})_{cc^{\prime}}(U^{\dagger}_{\mathbf{k}^{\prime}}U_{\mathbf{k}})_{v^{\prime}v}\] \[X_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\frac{1}{N}V(\mathbf{Q})(U^{\dagger}_{\mathbf{k}+\mathbf{Q}}U_{\mathbf{k}})_{cv}(U^{\dagger}_{\mathbf{k}^{\prime}}U_{\mathbf{k}^{\prime}+\mathbf{Q}})_{v^{\prime}c^{\prime}} \tag{22}\] where \(U_{\mathbf{k}}\) is the unitary matrix that diagonalizes the Bloch Hamiltonian \(H(\mathbf{k})\) [36]. The evaluation of these expressions is much faster than that of the corresponding general ones, (19) and (21). Additionally, for \(\mathbf{Q}=0\), the exchange term in (22) becomes exactly zero, which is not true in general, although it is usually neglected. As mentioned before, for DFT band structures we evaluate the interaction using the same point-like approximation, performing first a Löwdin orthogonalization of the basis. This allows us to improve the TB description, incorporating fine details of the quasiparticle dispersion along the BZ. In such treatments, our interaction matrix elements are an approximation to the true ones involving ab-initio orbitals.
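The equivalence between (19) and (22) for a one-atom motif can be checked numerically; below is a minimal sketch where random Hermitian matrices stand in for the Bloch Hamiltonians (band indices, sizes and the value of \(V\) are illustrative):

```python
import numpy as np

# Check that the compact form (22) reproduces the general expression (19)
# for a single-atom motif (t_i = 0). U matrices are eigenvector matrices of
# random Hermitian "Bloch Hamiltonians"; all values are illustrative.
rng = np.random.default_rng(0)

def eigvecs():
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    return np.linalg.eigh(A + A.conj().T)[1]

U_kQ, U_kpQ, U_k, U_kp = (eigvecs() for _ in range(4))  # U_{k+Q}, U_{k'+Q}, U_k, U_{k'}
V, N = 0.8, 100.0                                       # V(k'-k) and number of cells
v, vp, c, cp = 0, 1, 2, 3                               # band indices

# general form (19), one atom: explicit sums over the orbital index
D19 = (V / N) * sum(U_kQ[a, c].conj() * U_kpQ[a, cp] for a in range(4)) \
              * sum(U_kp[b, vp].conj() * U_k[b, v] for b in range(4))
# compact form (22)
D22 = (V / N) * (U_kQ.conj().T @ U_kpQ)[c, cp] * (U_kp.conj().T @ U_k)[vp, v]
assert np.allclose(D19, D22)
```

Unitarity of \(U_{\mathbf{k}}\) also makes the \(\mathbf{Q}=0\) exchange factor \((U^{\dagger}_{\mathbf{k}}U_{\mathbf{k}})_{cv}=\delta_{cv}\) vanish for \(c\neq v\), as remarked above.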
Given that in DFT the orbitals are known (e.g. the Gaussian-type basis in the CRYSTAL [37] code), one could, in principle, evaluate the integrals (17) exactly for a closer ab-initio calculation of excitons. The previous calculation corresponds to the evaluation of the interaction matrix elements in real space. An alternative approach consists of writing the interaction as its Fourier series before evaluating (17) [17, 16]: \[V(\mathbf{r}-\mathbf{r}^{\prime})=\frac{1}{N}\sum_{\mathbf{q}}V(\mathbf{q})e^{i\mathbf{q}(\mathbf{r}-\mathbf{r}^{\prime})} \tag{23}\] where \[V(\mathbf{q})=\frac{1}{V_{cell}}\int_{\Omega}V(\mathbf{r})e^{-i\mathbf{q}\cdot\mathbf{r}}d\mathbf{r} \tag{24}\] and \(\Omega=NV_{cell}\) denotes the volume of the crystal. Usually, one takes \(\Omega\rightarrow\infty\), so that the integral can be evaluated analytically; that is, \(V(\mathbf{q})\) becomes the Fourier transform of the potential. Note, however, that \(\mathbf{q}\) is not restricted to the first Brillouin Zone (BZ), and \(V(\mathbf{q})\) is not periodic in the BZ. Therefore, in principle one has to sum over \(\mathbf{q}\in\) BZ, but also over reciprocal vectors \(\mathbf{G}\), i.e.: \[V(\mathbf{r}-\mathbf{r}^{\prime})=\frac{1}{N}\sum_{\mathbf{q}\in\text{BZ}}\sum_{\mathbf{G}}V(\mathbf{q}+\mathbf{G})e^{i(\mathbf{q}+\mathbf{G})(\mathbf{r}-\mathbf{r}^{\prime})} \tag{25}\] The evaluation of the integral is done in the same way, although in this case there is a plane wave instead of the electrostatic interaction. This approach is particularly useful when using a plane wave basis, since it allows evaluating the four-body integrals exactly, without need for approximation (18).
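A quick 1D sanity check of the resummation (25): for a toy potential \(V(x)=e^{-|x|}\), whose Fourier transform is \(2/(1+q^{2})\), summing \(V(q+G)\) over a BZ mesh and reciprocal vectors \(G\) recovers the (periodized) real-space potential. All sizes below are illustrative:

```python
import numpy as np

# 1D check of eq. (25): resumming V(q+G) over a BZ mesh and reciprocal
# vectors G recovers the periodized potential. Toy example with a = 1,
# V(x) = exp(-|x|) and Fourier transform V(q) = 2/(1+q^2) (an assumption).
a, N = 1.0, 64
x = 0.5
q_bz = 2 * np.pi * np.arange(N) / (N * a)            # q points in the BZ mesh
G = 2 * np.pi * np.arange(-1000, 1001) / a           # reciprocal vectors (truncated)
Q = q_bz[:, None] + G[None, :]
Vq = 2.0 / (1.0 + Q ** 2) / a                        # V(q) per eq. (24), Omega -> inf
V_real = np.real(np.sum(Vq * np.exp(1j * Q * x))) / N
assert np.isclose(V_real, np.exp(-abs(x)), atol=1e-3)
```

The small residual comes from truncating the \(G\) sum and from the periodic images at distance \(Na\), which is exactly the point made above: in practice the sum over \(\mathbf{G}\) must be carried until convergence.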
The interaction matrix elements \(D\), \(X\) are now given by: \[D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\frac{1}{N}\sum_{\mathbf{G}}V(\mathbf{k}-\mathbf{k}^{\prime}+\mathbf{G})I^{\mathbf{G}}_{c\mathbf{k}+\mathbf{Q},c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q}}(I^{\mathbf{G}}_{v\mathbf{k},v^{\prime}\mathbf{k}^{\prime}})^{*}\] \[X_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\frac{1}{N}\sum_{\mathbf{G}}V(\mathbf{Q}+\mathbf{G})I^{\mathbf{G}}_{c\mathbf{k}+\mathbf{Q},v\mathbf{k}}(I^{\mathbf{G}}_{c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q},v^{\prime}\mathbf{k}^{\prime}})^{*} \tag{26}\] where \[I^{\mathbf{G}}_{n\mathbf{k},m\mathbf{k}^{\prime}}=\sum_{i\alpha}(C^{n\mathbf{k}}_{i\alpha})^{*}C^{m\mathbf{k}^{\prime}}_{i\alpha}e^{i(\mathbf{k}-\mathbf{k}^{\prime}+\mathbf{G})\cdot\mathbf{t}_{i}} \tag{27}\] Usually \(V(\mathbf{q})\) decays fast enough that it suffices to sum only over \(\mathbf{G}=\mathbf{0}\) for the excitons to converge in energy. Xatu allows using the interactions evaluated in real space (expressions (19), (21)) or in reciprocal space (expressions (26)). They are benchmarked in section 4; by default we use the interactions in real space, since the calculation converges faster with the number of points in the BZ mesh, it can be used rigorously for finite systems such as ribbons, and the current implementation performs on par with the reciprocal one. Once the interaction kernel is determined, Eq. (14) can be solved to obtain the exciton energies and wavefunctions, i.e., the coefficients \(A^{\mathbf{Q}}_{vc}(\mathbf{k})\). These can be used to compute different quantities.
For instance, given that the exciton is written as a linear combination of electron-hole pairs with well-defined \(\mathbf{k}\) quantum number, we can define the probability density of finding the exciton in a specific pair in \(\mathbf{k}\)-space as: \[|\psi_{X}(\mathbf{k})|^{2}=\sum_{v,c}|A^{\mathbf{Q}}_{vc}(\mathbf{k})|^{2} \tag{28}\] which is the straightforward definition, since all electron-hole pairs are orthonormal to each other.

### Spinful excitons

If the single-particle basis includes spin, one can also compute the expected value of the total spin of the exciton, \(S^{T}_{z}=S^{e}_{z}+S^{h}_{z}\). Given that we are using a fully electronic description of the exciton, we need to specify the electrons whose spin we want to measure. For this purpose, we write the total spin operator in second quantization as: \[S^{T}_{z}=\sum_{c^{\prime},c,\mathbf{k}}\sigma_{c^{\prime}\mathbf{k}+\mathbf{Q},c\mathbf{k}+\mathbf{Q}}c^{\dagger}_{c^{\prime}\mathbf{k}+\mathbf{Q}}c_{c\mathbf{k}+\mathbf{Q}}-\sum_{v^{\prime},v,\mathbf{k}}\sigma_{v^{\prime}\mathbf{k},v\mathbf{k}}c_{v^{\prime}\mathbf{k}}c^{\dagger}_{v\mathbf{k}} \tag{29}\] where \(\sigma_{mn}=\langle n|S_{z}|m\rangle\). The labels \(c,c^{\prime},v,v^{\prime}\) refer exclusively to the conduction and valence bands used in the definition of the excitons. Note that the second term, which corresponds to the spin of the hole, has a minus sign. This is because holes, when described as quasiparticles, have the opposite momentum and spin of the corresponding electronic state, i.e. \(h^{\dagger}_{n,-\mathbf{k},-\sigma}=(-1)^{\sigma}c_{n\mathbf{k}\sigma}\), for states below the Fermi energy, \(\varepsilon_{n\mathbf{k}}<\varepsilon_{F}\) [38]. These \(h\) operators describe creation/annihilation of holes in terms of their electronic counterpart. Although we keep \(\mathbf{k}\) the same (since we are still in the electronic picture), we already incorporate this minus sign to give a correct description of the total spin of the exciton.
As we will see later, this sign change is also necessary to retrieve the known singlet and triplet states when summing angular momenta. The two pictures are equivalent, and all the previous calculations can be reproduced in the electron-hole picture. The expected value of the total spin is then given by: \[\langle X|S^{T}_{z}|X\rangle=\sum_{v,c,\mathbf{k}}\left[\sum_{c^{\prime}}A^{\mathbf{Q}}_{vc}(\mathbf{k})(A^{\mathbf{Q}}_{vc^{\prime}}(\mathbf{k}))^{*}\sigma_{c\mathbf{k}+\mathbf{Q},c^{\prime}\mathbf{k}+\mathbf{Q}}-\sum_{v^{\prime}}A^{\mathbf{Q}}_{vc}(\mathbf{k})(A^{\mathbf{Q}}_{v^{\prime}c}(\mathbf{k}))^{*}\sigma_{v\mathbf{k},v^{\prime}\mathbf{k}}\right] \tag{30}\] If \([H_{0},S_{z}]=0\), then the spin projection \(S_{z}\) is also a good quantum number for the Bloch states. Therefore, they can be written as \(|n\mathbf{k}\sigma\rangle\), or in real space as \(\varphi_{n\mathbf{k}}(\mathbf{r})\chi_{\sigma}\), where \(\chi_{\sigma}\) denotes the spin part of the state. This means that the spin operator \(S_{z}\) is diagonal, \(\sigma_{mn}=\sigma_{n}\delta_{mn}\), which allows us to simplify expression (30): \[\langle S^{T}_{z}\rangle=\sum_{v,c,\mathbf{k}}|A^{\mathbf{Q}}_{vc}(\mathbf{k})|^{2}(\sigma_{c}-\sigma_{v}) \tag{31}\] Another consequence of having the spin well-defined is that it also propagates to the electron-hole pairs that serve as a basis for the exciton states, i.e. \(|\tilde{v},\tilde{c},\mathbf{k},\mathbf{Q}\rangle=c^{\dagger}_{\tilde{c}\mathbf{k}+\mathbf{Q}}c_{\tilde{v}\mathbf{k}}|GS\rangle\), where \(\tilde{v}=(v,\sigma_{v})\), \(\tilde{c}=(c,\sigma_{c})\). In principle, we allow the spin of the electron and the hole to be different, \(\sigma_{c}\neq\sigma_{v}\). Taking into account the spin in the computation of the interaction matrix elements, we arrive at constraints on which electron-hole pairs interact.
Then the direct and exchange terms read: \[D_{\tilde{v}\tilde{c},\tilde{v}^{\prime}\tilde{c}^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\delta_{\sigma_{c},\sigma_{c^{\prime}}}\delta_{\sigma_{v},\sigma_{v^{\prime}}}D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q}) \tag{32}\] \[X_{\tilde{v}\tilde{c},\tilde{v}^{\prime}\tilde{c}^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=\delta_{\sigma_{c},\sigma_{v}}\delta_{\sigma_{c^{\prime}},\sigma_{v^{\prime}}}X_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})\] which can be directly obtained by substituting the single-particle states, since the spin part does not mix with the orbital part of the states (i.e. \(|n\mathbf{k}\sigma\rangle=|n\mathbf{k}\rangle\otimes|\sigma\rangle\)). At this point, we can arrange the electron-hole pairs into four groups depending on their spin: \[\{|++\rangle,|--\rangle,|+-\rangle,|-+\rangle\}_{e}=\{|\sigma_{c}\sigma_{v}\rangle\}_{e}\] The \(e\) subindex denotes that this corresponds to the electronic picture. The Hamiltonian represented in terms of the spin groups, taking into account (32), then becomes: \[H=\begin{pmatrix}H_{0}-D+X&X&0&0\\ X&H_{0}-D+X&0&0\\ 0&0&H_{0}-D&0\\ 0&0&0&H_{0}-D\end{pmatrix} \tag{33}\] where \(H_{0}\), \(D\), \(X\) are blocks whose matrix elements correspond to different electron-hole pairs within the same spin group. If we now take into account that the hole in its quasiparticle representation must have spin opposite to that of the electron vacancy, then our states are \(\{\ket{+-},\ket{-+},\ket{++},\ket{--}\}_{eh}\), where \(eh\) denotes the electron-hole picture. Therefore, the exciton spectrum is composed of groups of three triplet states and one singlet state, as when adding angular momenta. If instead we turn off the exchange interaction, then every state should have at least a four-fold degeneracy.
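A minimal numerical illustration of the block structure (33): taking each block to be a single scalar matrix element (a toy assumption of one electron-hole pair per spin sector, with made-up values), diagonalization yields three degenerate triplet states at \(h-d\) and one singlet pushed up by twice the exchange:

```python
import numpy as np

# Toy version of the spin-block Hamiltonian (33): each block is a scalar,
# i.e. a single electron-hole pair per spin sector (illustrative values).
h, d, x = 1.0, 0.3, 0.05   # kinetic, direct and exchange matrix elements
H = np.array([[h - d + x, x,         0.0,   0.0],
              [x,         h - d + x, 0.0,   0.0],
              [0.0,       0.0,       h - d, 0.0],
              [0.0,       0.0,       0.0,   h - d]])
E = np.sort(np.linalg.eigvalsh(H))
assert np.allclose(E[:3], h - d)          # three-fold degenerate triplet
assert np.isclose(E[3], h - d + 2 * x)    # singlet, pushed up by the exchange
```

Setting \(x=0\) collapses the spectrum into a single four-fold degenerate level, in line with the remark above.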
Any additional degeneracy would come from spatial symmetries of the Hamiltonian, in particular from the irreducible representations of the little group at \(\mathbf{Q}\) (see A).

### Real-space wavefunction

Plotting the probability density (28) is useful to extract some information about the exciton, such as the wavefunction type (\(s\), \(p\), etc., following the hydrogenic model). The same can be argued for its real-space wavefunction, \(\psi_{X}(\mathbf{r}_{e},\mathbf{r}_{h})\). However, obtaining it is not as straightforward as the \(\mathbf{k}\)-space wavefunction. To do so, first we define the field operators as: \[\psi^{\dagger}(\mathbf{r})=\sum_{n\mathbf{k}}\varphi_{n\mathbf{k}}^{*}(\mathbf{r})c_{n\mathbf{k}}^{\dagger},\ \psi(\mathbf{r})=\sum_{n\mathbf{k}}\varphi_{n\mathbf{k}}(\mathbf{r})c_{n\mathbf{k}} \tag{34}\] where \(\varphi_{n\mathbf{k}}(\mathbf{r})\) are the single-particle states in coordinate representation. Then, we can define the amplitude or real-space wavefunction of the exciton in the following way: \[\psi_{X}(\mathbf{r}_{e},\mathbf{r}_{h})=\langle GS|\psi(\mathbf{r}_{e})\psi^{\dagger}(\mathbf{r}_{h})|X\rangle \tag{35}\] This definition is motivated by the fact that \(\varphi_{n\mathbf{k}}(\mathbf{r})=\langle GS|\psi(\mathbf{r})|n\mathbf{k}\rangle\). Before computing the amplitude, it is convenient to switch to the electron-hole picture. The field operator written in terms of electron and hole operators is: \[\psi(\mathbf{r})=\sum_{c\mathbf{k}}\varphi_{c\mathbf{k}}(\mathbf{r})c_{c\mathbf{k}}+\sum_{v\mathbf{k}}\varphi_{v\mathbf{k}}(\mathbf{r})h_{v,-\mathbf{k}}^{\dagger}\equiv\psi_{e}(\mathbf{r})+\psi_{h}^{\dagger}(\mathbf{r}) \tag{36}\] where \(\psi_{e}(\mathbf{r})\), \(\psi_{h}(\mathbf{r})\) are the annihilation field operators for electrons and holes, respectively.
Since we are switching from the electronic to the electron-hole picture, the same has to be done for the exciton state, \(|X\rangle=\sum_{v,c,\mathbf{k}}A_{vc}^{\mathbf{Q}}(\mathbf{k})c_{c\mathbf{k}+\mathbf{Q}}^{\dagger}h_{v,-\mathbf{k}}^{\dagger}|0\rangle\). Evaluating the exciton amplitude in terms of the electron and hole field operators, we obtain: \[\psi_{X}(\mathbf{r}_{e},\mathbf{r}_{h})=\langle GS|\psi_{e}(\mathbf{r}_{e})\psi_{h}(\mathbf{r}_{h})|X\rangle=\sum_{v,c,\mathbf{k}}A_{vc}^{\mathbf{Q}}(\mathbf{k})\varphi_{c\mathbf{k}+\mathbf{Q}}(\mathbf{r}_{e})\varphi_{v\mathbf{k}}^{*}(\mathbf{r}_{h}) \tag{37}\] To obtain the first equality, note that there are four cross terms containing electron and hole field operators. Two of them are zero, since they merely move the electron or the hole around [e.g. \(\psi_{e}(\mathbf{r}_{e})\psi_{e}^{\dagger}(\mathbf{r}_{h})\)], meaning that the final state is still orthogonal to the ground state. There is a third term consisting of the creation of an electron and a hole, \(\psi_{h}^{\dagger}(\mathbf{r}_{e})\psi_{e}^{\dagger}(\mathbf{r}_{h})\). This term is also zero because we assume that our ground state is the Fermi sea, meaning that it does not contain excited electrons. If it did, then the exciton could also consist of de-excitations or antiresonant transitions. This is known as the Tamm-Dancoff approximation, and it is also usually present in GW-BSE. To obtain the final expression for the wavefunction, it remains to substitute the expression of the field operators. One recovers the electron-hole pair states of the exciton basis (up to a sign from operator permutation), and from orthonormality it results in expression (37). At this point, to be able to plot the exciton real-space wavefunction, we still need to evaluate (37) in terms of the single-particle states \(\varphi_{n\mathbf{k}}(\mathbf{r})\).
Since the exciton wavefunction depends on both the position of the electron and the hole, first we need to fix the position of one of them to be able to plot the wavefunction. Since we assume the orbitals are point-like, both the electron and the hole can only be localized at the atomic positions, so we will evaluate the wavefunction and the probability density at these points only. We set the electron to be located at cell \(\mathbf{R}_{e}\) and atom \(\mathbf{t}_{m}\) of the motif, \(\mathbf{r}_{e}=\mathbf{R}_{e}+\mathbf{t}_{m}\), while the hole is at position \(\mathbf{r}_{h}=\mathbf{R}_{h}+\mathbf{t}_{n}\). Using the point-like approximation (18), the probability density of finding the electron at a given position with the hole fixed reads: \[|\psi_{X}(\mathbf{R}_{e}+\mathbf{t}_{m},\mathbf{R}_{h}+\mathbf{t}_{n})|^{2}=\sum_{\alpha\beta}|\psi_{X}^{\alpha\beta}(\mathbf{R}_{e}+\mathbf{t}_{m},\mathbf{R}_{h}+\mathbf{t}_{n})|^{2} \tag{38}\] where \[|\psi^{\alpha\beta}_{X}(\mathbf{R}_{e}+\mathbf{t}_{m},\mathbf{R}_{h}+\mathbf{t}_{n})|^{2}=\frac{1}{N^{2}}\sum_{v,c,\mathbf{k}}\sum_{v^{\prime},c^{\prime},\mathbf{k}^{\prime}}A^{\mathbf{Q}}_{vc}(\mathbf{k})(A^{\mathbf{Q}}_{v^{\prime}c^{\prime}}(\mathbf{k}^{\prime}))^{*}e^{i(\mathbf{k}-\mathbf{k}^{\prime})\cdot(\mathbf{R}_{e}-\mathbf{R}_{h})}C^{c,\mathbf{k}+\mathbf{Q}}_{m\alpha}(C^{c^{\prime},\mathbf{k}^{\prime}+\mathbf{Q}}_{m\alpha})^{*}(C^{v,\mathbf{k}}_{n\beta})^{*}C^{v^{\prime},\mathbf{k}^{\prime}}_{n\beta} \tag{39}\] For both the reciprocal- and real-space probability densities, one could expect them to have the symmetries of the crystal, since \([H,C]=0\), where \(C\) is any symmetry operator from the space group. However, if the states are degenerate, then they are not necessarily eigenstates of the symmetry operators and consequently the associated densities will not be invariant under symmetry transformations.
Still, in this case it is possible to define a probability density that preserves the symmetry of the crystal for each degenerate subspace: \[|\psi_{X}(\mathbf{r},\mathbf{r}_{h})|^{2}=\sum_{n}|\psi^{(n)}_{X}(\mathbf{r},\mathbf{r}_{h})|^{2} \tag{40}\] where the index \(n\) runs over exciton states degenerate in energy. An analogous expression holds for the \(\mathbf{k}\)-space wavefunction. It is always good practice to check that the resulting probability densities preserve the symmetry of the crystal, to ensure that the calculation was done correctly. The proof of the invariance of (40) under symmetry operations is given in A.

### Optical conductivity and light absorption

As an example of a post-processing calculation, we investigate here the interaction of the material with an incident linearly-polarized electric field. We elaborate below how to compute the optical response by means of the exciton eigenfunctions. For a sufficiently low-intensity, linearly-polarized, homogeneous electric pulse \(\varepsilon(t)\), the induced current per unit frequency in the bulk of the material can be written as \(J_{a}=\sum_{b}\sigma_{ab}(\omega)\tilde{\varepsilon}_{b}(\omega)\), where the linear optical conductivity reads [39] \[\sigma_{ab}(\omega)=\frac{\pi e^{2}\hbar}{V}\sum_{k}^{N_{X}}\frac{1}{E_{k}}\bigg{[}V^{a}_{k}(V^{b}_{k})^{*}\bigg{]}\delta(\hbar\omega-E_{k}) \tag{41}\] Here, \(N_{X}\) is the number of exciton states, \(E_{k}\) is the energy of the \(k\)-th excited state, \(V^{a}_{k}=\langle GS|\hat{v}^{a}|X_{k}\rangle\) is the velocity matrix element (VME) of the transition to the ground state, and \(V\) is the volume of the solid under periodic boundary conditions. In the equation above, only excitons with \(\mathbf{Q}=0\) are considered, as finite-momentum excitons cannot be accessed by light incidence. Thus, we drop \(\mathbf{Q}\) from the notation, and instead specify the excitation index \(k\) in the exciton coefficients, \(A^{k}_{vc}(\mathbf{k})\).
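In a numerical evaluation of Eq. (41) the delta function must be broadened; a minimal sketch with a Gaussian broadening \(\eta\) (the broadening scheme, energies, matrix elements and prefactor here are illustrative assumptions, not the code's internals) is:

```python
import numpy as np

# Gaussian-broadened stand-in for the Dirac delta in eq. (41). The energies,
# velocity matrix elements and prefactor below are hypothetical inputs.
def sigma_aa(omega, E_X, V_X, eta=0.05, prefactor=1.0):
    delta = np.exp(-((omega[:, None] - E_X[None, :]) / eta) ** 2) \
            / (eta * np.sqrt(np.pi))
    return prefactor * np.sum((np.abs(V_X) ** 2 / E_X)[None, :] * delta, axis=1)

w = np.linspace(0.0, 4.0, 4001)
s = sigma_aa(w, E_X=np.array([2.0]), V_X=np.array([1.0]))
# a single peak integrates to prefactor * |V|^2 / E = 0.5
assert np.isclose(s.sum() * (w[1] - w[0]), 0.5, atol=1e-3)
```

Each exciton thus contributes a peak at its energy with weight \(|V^{a}_{k}|^{2}/E_{k}\), independent of the broadening chosen.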
The exciton VME is found to be \[V^{a}_{k}=\sum_{v,c,\mathbf{k}}A^{k}_{vc}(\mathbf{k})v^{a}_{vc}(\mathbf{k}), \tag{42}\] where \(v^{a}_{vc}(\mathbf{k})\equiv\langle v\mathbf{k}|\hat{v}^{a}|c\mathbf{k}\rangle=i\hbar^{-1}\,\langle v\mathbf{k}|[H_{0},\hat{r}^{a}]|c\mathbf{k}\rangle\) (\(H_{0}\) is the non-interacting or mean-field Hamiltonian). With light polarized along the \(a\) direction, an exciton is dark (or bright) if \(V^{a}_{k}=0\) (\(V^{a}_{k}\neq 0\)). In general, the brightness of an exciton and its contribution to Eq. (41) are dictated by the selection rules of \(A^{k}_{vc}(\mathbf{k})\) and \(v^{a}_{vc}(\mathbf{k})\) over the Brillouin zone. The calculation of VMEs has to be worked out taking into account the underlying local basis of our approach. It is found [40, 41] (we simplify the notation here by doing \(i\alpha\to\alpha\) for the rest of the section) \[\begin{split}&\langle n\mathbf{k}|\hat{v}|n^{\prime}\mathbf{k}\rangle=\sum_{\alpha\alpha^{\prime}}(C^{n\mathbf{k}}_{\alpha})^{*}C^{n^{\prime}\mathbf{k}}_{\alpha^{\prime}}\nabla_{\mathbf{k}}H_{\alpha\alpha^{\prime}}(\mathbf{k})\\ &+i\sum_{\alpha\alpha^{\prime}}(C^{n\mathbf{k}}_{\alpha})^{*}C^{n^{\prime}\mathbf{k}}_{\alpha^{\prime}}\Big{[}\varepsilon_{n}(\mathbf{k})\xi_{\alpha\alpha^{\prime}}(\mathbf{k})-\varepsilon_{n^{\prime}}(\mathbf{k})\xi^{*}_{\alpha^{\prime}\alpha}(\mathbf{k})\Big{]}.\end{split} \tag{43}\] with \(\xi_{\alpha\alpha^{\prime}}(\mathbf{k})=i\,\langle u_{\alpha\mathbf{k}}|\nabla_{\mathbf{k}}u_{\alpha^{\prime}\mathbf{k}}\rangle\) the Berry connection between Bloch basis states. After some algebra, it reads \[\xi_{\alpha\alpha^{\prime}}(\mathbf{k})=\sum_{\mathbf{R}}e^{i\mathbf{k}\cdot\mathbf{R}}\,\langle\alpha\mathbf{0}|\hat{\mathbf{r}}|\alpha^{\prime}\mathbf{R}\rangle+i\nabla_{\mathbf{k}}S_{\alpha\alpha^{\prime}}(\mathbf{k}).
\tag{44}\] In the case of an underlying non-orthonormal local orbital basis, the overlap matrix \(S_{\alpha\alpha^{\prime}}(\mathbf{k})\equiv\langle\alpha\mathbf{k}|\alpha^{\prime}\mathbf{k}\rangle\) is accounted for and makes the Berry connection above non-Hermitian. Instead, one has \(\xi_{\alpha\alpha^{\prime}}(\mathbf{k})=\xi^{*}_{\alpha^{\prime}\alpha}(\mathbf{k})+i\nabla_{\mathbf{k}}S_{\alpha\alpha^{\prime}}(\mathbf{k})\). Eq. (43) allows evaluating the optical matrix elements by means of the non-interacting Hamiltonian plus position matrix elements between the local orbitals. In the case of an orthonormal basis set, as in tight-binding models, the overlap matrix is the identity at all points of the Brillouin zone. In this case the VMEs read \[v^{a}_{vc}(\mathbf{k})=\sum_{\alpha\alpha^{\prime}}(C^{v\mathbf{k}}_{\alpha})^{*}C^{c\mathbf{k}}_{\alpha^{\prime}}\bigg{[}\frac{\partial H_{\alpha\alpha^{\prime}}(\mathbf{k})}{\partial k_{a}}+iH_{\alpha\alpha^{\prime}}(\mathbf{k})(t^{a}_{\alpha^{\prime}}-t^{a}_{\alpha})\bigg{]} \tag{45}\] This expression is sometimes known as the "diagonal tight-binding approximation (TBA)" in ab-initio calculations involving maximally-localized Wannier functions [42, 43], where the inter-orbital position matrix elements are discarded. Eq. (41) can thus be evaluated and is implemented in our code. Additionally, Eq. (41) can be compared with the frequency-dependent expression for its non-interacting counterpart. In the limit of no correlations, it reduces to \[\sigma_{ab}(\omega)=\frac{\pi e^{2}\hbar}{V}\sum_{cv\mathbf{k}}\frac{1}{\varepsilon_{c\mathbf{k}}-\varepsilon_{v\mathbf{k}}}\bigg{[}v^{a}_{cv}(\mathbf{k})v^{b}_{vc}(\mathbf{k})\bigg{]}\delta(\hbar\omega-[\varepsilon_{c\mathbf{k}}-\varepsilon_{v\mathbf{k}}]) \tag{46}\] From the frequency-dependent optical conductivity one can obtain related quantities of interest.
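A useful consistency check of Eq. (45) is that its diagonal matrix elements must reduce to the band velocities \(\partial\varepsilon_{n}/\partial k_{a}\); the sketch below verifies this on an SSH-like two-orbital chain (model and parameter values are illustrative):

```python
import numpy as np

# Diagonal of eq. (45) vs. numerical band velocity on an SSH-like chain with
# two orbitals per cell at positions t = (0, delta). Parameters are illustrative.
v_h, w_h, delta = 1.0, 0.6, 0.4
t = np.array([0.0, delta])

def H(k):   # Bloch Hamiltonian in the cell gauge (phases on lattice vectors only)
    f = v_h + w_h * np.exp(-1j * k)
    return np.array([[0.0, f], [np.conj(f), 0.0]])

def dH(k):  # analytic dH/dk
    df = -1j * w_h * np.exp(-1j * k)
    return np.array([[0.0, df], [np.conj(df), 0.0]])

k = 0.7
_, C = np.linalg.eigh(H(k))
T = t[None, :] - t[:, None]                    # t_{alpha'} - t_alpha
M = dH(k) + 1j * H(k) * T                      # bracket of eq. (45)
v_diag = np.real(np.einsum('an,ab,bn->n', C.conj(), M, C))

h_ = 1e-5                                      # central finite difference
v_num = (np.linalg.eigvalsh(H(k + h_)) - np.linalg.eigvalsh(H(k - h_))) / (2 * h_)
assert np.allclose(v_diag, v_num, atol=1e-6)
```

The intracell-position term \(iH(t^{a}_{\alpha^{\prime}}-t^{a}_{\alpha})\) does not affect the diagonal (it cancels by Hermiticity), but it is essential for the interband elements entering the conductivity.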
For instance, the ratio of absorbed incident flux per unit frequency and unit length (considering vacuum surroundings) is [44] \(S(\omega)=\sigma(\omega)/c\epsilon_{0}\), also called the absorbance. Note that this quantity is ill-defined for 2D lattice systems; in such a case, all the absorption is assumed to occur at the \(z=0\) reference plane of the material.

## 3 Implementation

The programming languages of choice for the implementation of the exciton theory were C++ and Fortran, which are the usual options for heavy numerical computations. In the case of C++, to facilitate the manipulation of matrices we use the library Armadillo [45], on top of the usual libraries for linear algebra (BLAS and LAPACK). The core of the code is written in C++, except for the post-diagonalization calculation of the optical conductivity, which is written in Fortran. This routine is wrapped inside the C++ library. The software was designed with a hybrid approach in mind: previous packages such as DFT codes require the preparation of input files, which are then fed to the program and result in some output files, which may be post-processed to extract information. We propose to use the same scheme, i.e. to prepare an input file which describes the system where we want to compute the excitons (namely the Hamiltonian \(H_{0}\)), and another one with the description of the excitons (participating bands, \(\mathbf{k}\) mesh, etc.). However, there is an alternative usage, which is employing directly the exciton API used to build the program. This is a common approach, where one builds a library to expose some functionality to the user (e.g. Python libraries). Therefore, one can define some system and script the computation of excitons using the API. This is advised whenever we are interested in performing some manipulation of the excitons, and not only in obtaining the spectrum or the absorption.
There is a third approach, consisting of using the system files to delegate the definition of the system to other programs (e.g. DFT), and then using the API instead of the exciton configuration file. The different ways to use the code are reviewed in B. The CLI option parsing has been done using the header-only library TCLAP, which is distributed with this package.

### Complexity analysis

Next we discuss the numerical implementation of the exciton computation and related quantities. Solving the Bethe-Salpeter equation (14) amounts to diagonalizing the corresponding matrix \(PHP\). Diagonalization is done using the standard linear algebra libraries, meaning that the main problem is constructing \(PHP\) as fast as possible. Consider a system formed by \(N\) unit cells in total (meaning \(\sqrt{N}\) along each Bravais vector for a two-dimensional system). To treat the interaction rigorously, one has to compute the excitons on a BZ mesh with the same number of \(\mathbf{k}\) points as unit cells, due to the periodic boundary conditions. Therefore, one has to compute \(N^{2}\) matrix elements, and each of them requires computing the lattice Fourier transform, which involves summations over the \(N\) unit cells. This has to be done for all possible band pairs \(B\), so a naive implementation of (14) would have \(\mathcal{O}(N^{3}B^{2})\) time complexity, on par with matrix diagonalization algorithms. Note that each interaction matrix element also requires knowing the tight-binding coefficients \(\{C^{n\mathbf{k}}_{i\alpha}\}\). If the dimension of the Bloch Hamiltonian is \(M\), then diagonalizing the system on the fly for each element of the BSE would result in time \(\mathcal{O}(N^{2}B^{2}(N+M^{3}))\). The easiest way to reduce the time complexity of the BSE construction is to increase the space complexity, i.e. to precompute and store quantities that appear multiple times, instead of computing them on the fly. This can be done for the Bloch Hamiltonian eigenvectors.
Before constructing \(PHP\), we diagonalize \(H(\mathbf{k})\) \(\forall\mathbf{k}\in\) BZ, and store the eigenvectors. At this point, if we were to store all eigenvectors, the spatial complexity would go from \(\mathcal{O}(1)\) to \(\mathcal{O}(NM^{2})\). Since we only need the eigenvectors corresponding to the bands that participate in the exciton formation, it suffices to store only those, meaning that the spatial complexity would be \(\mathcal{O}(NMB)\), i.e. we have to store \(N\) matrices of size \(M\times B\). Accessing the eigenvectors directly results in a time complexity of \(\mathcal{O}(N^{3}B^{2}+NM^{3})\). The same could be done for the lattice Fourier transform \(V_{ij}(\mathbf{k}-\mathbf{k}^{\prime})\). Since it depends on the difference between two \(\mathbf{k}\) points, we could simply store \(V_{ij}\) for each pair of \(\mathbf{k}\) points. This implies a high spatial complexity, \(\mathcal{O}(N^{2})\), but overall it does not give any speed advantage, since precomputing this would be of order \(\mathcal{O}(N^{3})\). However, it is possible to reduce the time cost of the algorithm: as long as the \(\mathbf{k}\) point mesh covers the whole BZ uniformly (as given by Monkhorst-Pack), we can map the \(\mathbf{k}\) point difference back to a single \(\mathbf{k}\) point using the periodicity of \(V_{ij}(\mathbf{k}-\mathbf{k}^{\prime})\): \[\forall\mathbf{k},\mathbf{k}^{\prime}\in\mathrm{BZ},\,\exists\mathbf{G}\in\text{Reciprocal lattice},\,\mathbf{k}^{\prime\prime}\in\mathrm{BZ}\] \[\text{s.t.}\,\,\mathbf{k}-\mathbf{k}^{\prime}=\mathbf{G}+\mathbf{k}^{\prime\prime} \tag{47}\] Therefore, it suffices to compute and store \(V_{ij}(\mathbf{k})\) \(\forall\mathbf{k}\in\mathrm{BZ}\). Then, when initializing the matrix elements of \(PHP\), one has to find the vector \(\mathbf{k}^{\prime\prime}\) that verifies (47). The time complexity is now \(\mathcal{O}(N^{2})\), a reduction of an order of magnitude.
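In reduced coordinates the mapping (47) is a simple modulo operation; below is a short sketch for a uniform 1D mesh (the same applies component-wise in higher dimensions):

```python
import numpy as np

# Folding k - k' back into the BZ, eq. (47), for a uniform mesh k_n = n/N in
# reduced coordinates (one reciprocal direction; apply component-wise in 2D/3D).
N = 8
mesh = np.arange(N) / N

def fold(q):                     # bring a reduced coordinate back into [0, 1)
    return q - np.floor(q)

for i in range(N):
    for j in range(N):
        q = fold(mesh[i] - mesh[j])
        idx = int(round(q * N)) % N          # index of k'' on the mesh
        assert np.isclose(mesh[idx], q)      # the difference is again a mesh point
```

Every folded difference lands exactly on a stored mesh point, which is why precomputing \(V_{ij}(\mathbf{k})\) over the BZ suffices.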
The space complexity is also reduced, being now \(\mathcal{O}(N)\). With this, the algorithm for determining \(PHP\) has time order \(\mathcal{O}(N^{2}B^{2}+NM^{3})\), and the memory requirements are \(\mathcal{O}(N+NMB)=\mathcal{O}(NMB)\). As we will see, this allows for very fast computation of the BSE matrix, meaning that the main bottleneck lies in the diagonalization, as often happens. In some cases we might be interested in the whole spectrum, but usually it suffices to determine the lowest-energy eigenstates. To address this, the code includes a custom implementation of the Davidson algorithm, which is suited to obtaining the ground state of quantum chemistry Hamiltonians [46]. So far the discussion has focused on how to reduce the complexity of the algorithm, but it is equally important to comment on how to perform the actual computation of the matrix elements. The big-O notation neglects all constant factors, which is fine for theoretical considerations, but these factors can have a considerable impact on the real behaviour of the code. The general strategy followed was to vectorize all calculations to make use of the highly optimized and parallel existing linear algebra routines. The remaining parts that do not allow vectorization, such as the matrix element initialization in \(PHP\), were all parallelized with OpenMP. Currently, all the parallelism is shared-memory; distributed parallelism might be implemented in the future. For instance, consider the direct interaction term, which requires computing expression (19). Assuming that the lattice Fourier transform of the interaction is already computed for all motif combinations \(i,j\) and for all \(\mathbf{k}\) points, we basically have to sum over tight-binding coefficients multiplied by the interaction. Given that the Bloch eigenstates are already stored as columns in matrices, we want to write this as matrix-vector products.
Specifically, we can use \(V_{ij}\) as a bilinear form, so that with a well-defined matrix \(\tilde{V}\) the direct term can be written as: \[D_{vc,v^{\prime}c^{\prime}}(\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q})=C_{cc^{\prime}}^{T}\tilde{V}(\mathbf{k}^{\prime}-\mathbf{k})C_{v^{\prime}v} \tag{48}\] where \[\tilde{V}=V(\mathbf{k}-\mathbf{k}^{\prime})\otimes\mathbb{I}_{n},\,\,\,C_{mn}=C_{n}^{*}\odot C_{m} \tag{49}\] \(\odot\) denotes the element-wise array product, \(C_{n}\) is the vector of coefficients corresponding to state \(|n\rangle\) and \(\mathbb{I}_{n}\) denotes a square matrix of ones of dimension \(n\), \(n\) being the number of orbitals per atom. Note that this expression is only valid if all atoms have the same number of orbitals. Otherwise, one must take into account the different number of orbitals per chemical species when performing the Kronecker products. The exchange term \(X\) can be computed in an analogous way. Note that this assumes that the order of the single-particle basis is \(\{|i\rangle\otimes|\alpha\rangle\otimes|\sigma\rangle\}\), i.e. for each atomic position we run over orbitals, and for each orbital we run over spin. This is also relevant for the computation of the spin of the excitons, since it follows this convention. As mentioned at the beginning, to compensate for the lack of screening in the theory, one typically uses the Rytova-Keldysh potential [47, 48] instead of the bare Coulomb potential in the context of two-dimensional materials. However, both interactions diverge at \(r=0\). We regularize this divergence by setting \(V(0)=V(a)\) [36], where \(a\) denotes the lattice parameter.
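The equivalence of the vectorized contraction (48)-(49) with the explicit sums of (19) can be checked directly; below is a minimal sketch with illustrative sizes and random coefficients (the conjugation pattern follows (19), and \(N=1\) for brevity):

```python
import numpy as np

# Check the vectorized contraction (48)-(49) against the explicit sums of
# eq. (19). Sizes, coefficients and V_ij values are illustrative (N = 1).
rng = np.random.default_rng(1)
natoms, norb = 2, 3
dim = natoms * norb
C_c, C_cp, C_v, C_vp = (rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
                        for _ in range(4))
V_ij = rng.standard_normal((natoms, natoms))        # lattice FT V_ij(k'-k)

# explicit loops, eq. (19)
D_loop = 0.0
for i in range(natoms):
    for j in range(natoms):
        for a in range(norb):
            for b in range(norb):
                D_loop += (C_c[i * norb + a].conj() * C_cp[i * norb + a]
                           * C_vp[j * norb + b].conj() * C_v[j * norb + b]
                           * V_ij[i, j])

# vectorized form: element-wise products contracted with V_ij (x) ones(norb)
V_t = np.kron(V_ij, np.ones((norb, norb)))
D_vec = (C_c.conj() * C_cp) @ V_t @ (C_vp.conj() * C_v)
assert np.allclose(D_loop, D_vec)
```

The Kronecker product with the matrix of ones reproduces the block structure in which \(V_{ij}\) is constant over the orbitals of each atom pair, matching the basis ordering described above.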
Currently, the code only implements the Keldysh potential, given by: \[V(r)=\frac{e^{2}}{8\varepsilon_{0}\bar{\varepsilon}r_{0}}\left[H_{0}\left(\frac{ r}{r_{0}}\right)-Y_{0}\left(\frac{r}{r_{0}}\right)\right] \tag{50}\] where \(\bar{\varepsilon}=(\varepsilon_{m}+\varepsilon_{s})/2\), with \(\varepsilon_{s}\), \(\varepsilon_{m}\) being the dielectric constants of the substrate and the embedding medium (usually vacuum) respectively, and \(r_{0}\) the effective screening length. These three parameters have to be specified for all calculations. \(H_{0}\) and \(Y_{0}\) are the Struve function and the Bessel function of the second kind, respectively. Also, since the interaction decays quickly, we employ a radial cutoff, such that for distances \(r>R_{c}\) we take the interaction to be zero. The effective interaction is then: \[\tilde{V}(r)=\left\{\begin{array}{cc}V(a)&\text{if }r=0\\ V(r)&\text{if }r<R_{c}\\ 0&\text{else}\end{array}\right. \tag{51}\] where \(R_{c}\) is the cutoff radius. The cutoff serves two purposes: first, it enforces the crystal symmetries in the transformed potential (as a function of \(\mathbf{k}\)); second, it allows the summation over lattice positions to be computed faster. Instead of evaluating the potential over all lattice positions, we restrict the sum to the lattice positions where we know the potential is different from zero. As for the interactions computed using the Fourier series of the potential, we set \(V(\mathbf{q}=0)=0\) to remove the long-wavelength divergence. Lastly, it is also worth mentioning how to compute the probability of finding the electron at a given spatial position, Eq. (37). Since this requires two summations over \(\mathbf{k},\mathbf{k}^{\prime}\), its cost would be \(\mathcal{O}(N^{2})\). To obtain the whole wavefunction, a priori we have to evaluate this over each position in the crystal, meaning that the cost would be \(\mathcal{O}(N^{3})\).
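To make the functional form concrete, the cutoff Keldysh potential of Eqs. (50)-(51) can be sketched with SciPy's special functions as follows (the parameter values are illustrative only, not taken from the text):

```python
import numpy as np
from scipy.special import struve, y0

KE = 14.3996  # e^2 / (4 pi eps0) in eV * angstrom

def keldysh(r, r0=10.0, eps_bar=1.0, Rc=50.0, a=2.5):
    """Cutoff Keldysh potential: V(0) = V(a), V(r) for r < Rc, 0 otherwise.
    Lengths in angstrom, energies in eV; r0, Rc and a are illustrative."""
    def bare(x):
        # prefactor e^2/(8 eps0 eps_bar r0) = (pi/2) * KE / (eps_bar * r0)
        return 0.5 * np.pi * KE / (eps_bar * r0) * (struve(0, x / r0) - y0(x / r0))
    r = np.asarray(r, dtype=float)
    # inner where: V(0) = V(a) regularization; outer where: radial cutoff
    return np.where(r < Rc, bare(np.where(r == 0.0, a, r)), 0.0)
```

The inner `np.where` implements the \(V(0)=V(a)\) regularization before the divergent \(Y_{0}\) is evaluated, and the outer one the radial cutoff of Eq. (51).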
However, this \(\mathcal{O}(N^{3})\) cost corresponds to the worst-case scenario, in which the exciton is strongly delocalized in real space. Usually, it suffices to compute the real-space wavefunction on a contour of the hole position, for a few unit cells only. To actually compute the probability, we want to exploit the fact that we are storing the exciton coefficients as vectors. First, note that (37) can be written as: \[\begin{split}|\psi_{X}^{\alpha\beta}(\mathbf{t}_{n}+\mathbf{R}_{c },&\mathbf{t}_{m}+\mathbf{R}_{h})|^{2}\\ &=\left|\frac{1}{N}\sum_{v,c,\mathbf{k}}A_{vc}^{\mathbf{Q}}( \mathbf{k})e^{i\mathbf{k}\cdot(\mathbf{R}_{n}-\mathbf{R}_{h})}C_{m\alpha}^{c,\mathbf{k}+\mathbf{Q}}(C_{n\beta}^{v,\mathbf{k}})^{*}\right|^{2}\end{split} \tag{52}\] which already reduces the complexity down to \(\mathcal{O}(N)\). Then, the probability is computed as \(\|A\odot C\|^{2}\), where \(A\) is the vector of exciton coefficients that incorporates the exponential terms and \(C\) are the tight-binding coefficients arranged such that they match the electron-hole pair ordering of the exciton. ## 4 Examples So far we have discussed the theory underlying the code and its numerical implementation. Therefore, it remains to show actual examples of the capabilities of the code. One context where excitons are relevant is valleytronics: materials with a honeycomb structure which exhibit the band gap at the \(\mathbf{K},\mathbf{K}^{\prime}\) points of the Brillouin zone (the "valleys"), and whose optical excitations can be tuned according to the valley [49; 50]. The materials most commonly used for this purpose are transition metal dichalcogenides (TMDs), with formula MX\({}_{2}\), where M is the transition metal and X a chalcogen. Another similar material that has become highly relevant is hexagonal boron nitride (hBN), although in this case due to its good properties as an insulating substrate [51].
These materials have become the prototypical examples to test the capabilities of an exciton code, and have been studied extensively. We will characterize the excitons in both hBN and MoS\({}_{2}\), i.e. obtain the exciton spectrum for \(\mathbf{Q}=0\), show the associated wavefunctions and compute the optical conductivity. We will also show how a simple strain model of hBN can be used to break some crystal symmetries and modify the excitonic ground state. All the calculations shown are done with the real-space approach to the interaction matrix elements and neglecting the exchange term, unless specified otherwise. ### hBN Monolayer hexagonal boron nitride has a large quasiparticle band gap, with ab-initio calculations predicting a value of \(6-8\) eV depending on the method [52]. As we will see, the band structure of hBN is relatively flat along the \(\mathbf{M}-\mathbf{K}\) path in the Brillouin zone. This, in conjunction with the small screening, results in excitons that are strongly delocalized in reciprocal space, but are tightly bound in real space. This material can be described easily with a minimal 2-band tight-binding model [53], equivalent to graphene but with opposite onsite energies for each atom of the motif. The tight-binding model for hBN reads: \[H=\sum_{i}\frac{\Delta}{2}(c_{i}^{\dagger}c_{i}-d_{i}^{\dagger}d_{i})+\sum_{\langle i,j\rangle}\left[tc_{i}^{\dagger}d_{j}+\text{h.c.}\right] \tag{53}\] where \(c^{\dagger}(d^{\dagger})\) denote creation operators for B (N) atoms. The indices \(i,j\) run over unit cells, and the summation over \(\langle i,j\rangle\) spans only the first neighbours. The parameters are \(t=-2.3\) eV, \(\Delta/2=3.625\) eV, and the corresponding system file can be found in the code repository under the folder /models. From a Slater-Koster perspective, hBN is described by \(\text{p}_{z}\) orbitals.
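The Bloch Hamiltonian of this two-band model can be written down in a few lines; the following is a minimal sketch (assuming a standard nearest-neighbour honeycomb geometry and an hBN lattice parameter of 2.504 angstrom, neither taken from the repository's system file):

```python
import numpy as np

t, half_delta = -2.3, 3.625   # eV, parameters of Eq. (53)
a = 2.504                     # hBN lattice parameter in angstrom (assumed)

# nearest-neighbour vectors connecting the B and N sublattices
deltas = a * np.array([[0.0, 1.0 / np.sqrt(3)],
                       [0.5, -0.5 / np.sqrt(3)],
                       [-0.5, -0.5 / np.sqrt(3)]])

def bloch_hamiltonian(k):
    f = np.sum(np.exp(1j * deltas @ k))   # structure factor f(k)
    return np.array([[half_delta, t * f],
                     [np.conj(t * f), -half_delta]])

# At the K point f(K) = 0, so the eigenvalues are +/- Delta/2 and the
# direct gap equals Delta = 7.25 eV.
K = np.array([4 * np.pi / (3 * a), 0.0])
bands = np.linalg.eigvalsh(bloch_hamiltonian(K))
```

Diagonalizing this matrix along a \(\mathbf{k}\) path reproduces the band structure used as input for the BSE.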
Taking the model to be spin-polarized for simplicity, there are only two bands and there must be only one electron per unit cell (half-filling) for it to be an insulator. Once the model is defined and the system file appropriately constructed, we can begin setting the parameters of the calculation. First, we need to specify the constants that appear in the Keldysh potential in Eq. (50). These parameters determine the strength of the electrostatic interaction and consequently affect the exciton binding energies. Here we follow previous works to set these quantities [53], but sometimes we will be interested in exploring the effect of tuning the dielectric constants, or instead we will want to set them to reproduce known experimental results. Nevertheless, values for typical substrates can be found in the literature, and \(r_{0}\) can also be estimated from ab-initio calculations [6; 7]. The other parameters of the exciton file are related to the convergence of the excitons themselves. Varying the number of \(\mathbf{k}\) points in the mesh, \(N_{k}\), one obtains the convergence curves shown in Fig. (2). The convergence has been checked with both the default interactions (in real space) and with reciprocal interactions (Fig. (2a)). With reciprocal interactions the energies converge much more slowly than with their real-space counterpart, on top of requiring a sum over several \(\mathbf{G}\) reciprocal cells. In materials with excitons highly localized in \(\mathbf{k}\) space, it usually suffices to take only \(\mathbf{G}=0\) (e.g. MoS\({}_{2}\)). However, we will see later that hBN excitons are highly delocalized in reciprocal space, which is why the interaction can see neighbouring reciprocal unit cells. After checking convergence, we can start studying the excitons themselves. The energies of the first 8 states and their degeneracies are given in table (1).
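Tabulating degeneracies as in table (1) amounts to clustering the computed eigenvalues within a numerical tolerance; a minimal sketch of such a grouping (the function name and tolerance are ours, not part of the code):

```python
def group_degeneracies(energies, tol=1e-6):
    """Cluster a list of eigenvalues into (energy, degeneracy) pairs."""
    levels = []
    for E in sorted(energies):
        if levels and abs(E - levels[-1][0]) < tol:
            levels[-1] = (levels[-1][0], levels[-1][1] + 1)  # same level
        else:
            levels.append((E, 1))  # new level
    return levels
```

The tolerance has to be chosen below the spacing between distinct levels but above the numerical noise of the diagonalization.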
To make sense of the degeneracies, one has to check the character table of the point group of the material: hBN has the crystallographic point group \(D_{3h}\), with both one- and two-dimensional irreducible representations. Since the symmetry operations and their action on single-particle states are specific to each problem, the code does not address the problem of identifying the irreducible representation of each exciton, nor labeling them in terms of symmetry eigenvalues. Instead, we only check that the \(\mathbf{Q}\)-excitonic wavefunctions have the allowed degeneracies and that (40) is invariant under the little group at \(\mathbf{Q}\). The \(\mathbf{k}\) probability densities of the first eight excitonic states, grouped by degenerate levels, are shown in Fig. (3). Each energy level has the symmetry of the lattice, as expected since we are plotting (40). The additional symmetry in this case is due to time-reversal symmetry and the fact that \(\mathbf{Q}=0\) is a time-reversal invariant momentum. We see that the wavefunctions peak at the valleys, although they also spread over the \(\mathbf{K}-\mathbf{M}-\mathbf{K}^{\prime}\) paths.

Figure 1: (a) hBN lattice and (b) band structure of the tight-binding model.

Figure 2: (a) Convergence of the ground state and first and second excited states as a function of the number of \(\mathbf{k}\) points for hBN, computed both with interactions in real and reciprocal space. The reciprocal space calculations have to be converged also with respect to the number of reciprocal lattice vectors included, \(N_{G}\), with \(N_{G}=25\) in this case. (b) Measured calculation time as a function of \(N\). For both the real and reciprocal space calculations, the asymptotic behaviour is \(\mathcal{O}(N^{3})\). However, at small values of \(N\) the required time is partially dominated by the BSE matrix initialization, which in both cases scales as \(\mathcal{O}(N^{2})\).
This means that the excitons are formed by strongly interacting electron-hole pairs in \(\mathbf{k}\) space, which explains why we need to sum over several reciprocal cells when using the reciprocal interactions. As for the shape of the excitons, we find the common pattern: the first state is \(s\)-like in the sense that it does not have nodes, the next state would be \(p\)-like, and so on. Note that the hydrogen analogy only concerns the shape of the wavefunctions, and not the energy spectrum, which in general differs from the hydrogen series. Since the excitons are delocalized in reciprocal space, we expect them to be strongly localized in real space. The real-space densities of each degenerate level are shown in Fig. (4). The hydrogenic picture makes more sense when looking at the real-space wavefunction, since it can then be understood as the problem of two interacting charges of opposite sign. The spectrum and the degeneracies do not match those of hydrogen, but the wavefunctions behave radially as we would expect. In hBN the spin-orbit coupling is small and it suffices to compute the excitons as a spinless system, in particular given that we are also neglecting the exchange interaction. If we consider a spinful system, again without exchange, we obtain exactly the same energy levels but now four-fold degenerate (on top of the previous spatial degeneracy). The same holds for both types of wavefunctions. Our study of the exciton spectrum in hBN concludes with the calculation of the optical conductivity [16; 39; 52], which reflects the light absorbance from a source up to a constant factor. So far we have not discussed which excitons of the spectrum are bright or dark.
This can be seen through the calculation of the optical oscillator strengths within Eq. (41), which determine the transition rate for photon emission. The frequency-dependent conductivity of monolayer hBN is given in Fig. (5). Electron-hole interactions move the spectral power from the continuum to pronounced sub-band-gap peaks. From Table (1) and Fig. (3), we see that the non-degenerate excitons with mainly \(s\) character are bright. The relative height of the peaks can be understood by looking at the magnitude of the wavefunctions near the \(K\) and \(K^{\prime}\) points. All bright excitons can be excited with linearly polarized light along two orthonormal polarization directions, giving rise to an isotropic conductivity consistent with the \(D_{3h}\) point group of the material. It is of interest to check the validity of the results against a more refined description of the band structure of the material. This can be done with the code by using a local orbital-based DFT calculation as the starting Hamiltonian, instead of a parametrized tight-binding model.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(n\) & Energy (eV) & Binding energy (eV) & Degeneracy \\ \hline \hline 1 & 5.3357 & -1.9143 & 2 \\ 2 & 6.0738 & -1.1762 & 1 \\ 3 & 6.1641 & -1.0859 & 2 \\ 4 & 6.1723 & -1.0777 & 1 \\ 5 & 6.3511 & -0.8989 & 2 \\ \hline \end{tabular} \end{table} Table 1: Exciton spectrum from the tight-binding model for hBN computed with \(N_{k}=60^{2}\). The binding energy \(E_{b}\) is defined as \(E_{b}=E_{X}-\Delta\), where \(\Delta\) is the gap of the system.

Figure 3: Plot of the \(\mathbf{k}\) exciton probability densities in TB hBN, for the first 5 energy levels (from left to right, top to bottom). For each level, we actually show \(|\Psi(\mathbf{k})|^{2}=\sum_{n}|\psi_{n}(\mathbf{k})|^{2}\) for \(\mathbf{Q}=0\), where the index \(n\) runs over degenerate states.
The exciton energies will depend on the gap as estimated from the functional used, but we expect to get similar wavefunctions and conductivity. Since we now consider several orbitals for each chemical species, we have multiple valence and conduction bands, so we should converge the exciton with respect to the number of bands as well. This is a worthwhile check, but in this case the different bands are well separated, so their effect should be negligible. The DFT band structure and the wavefunctions of the ground state exciton are shown in Fig. (6). One could use standard LDA functionals, but here we opt for a hybrid functional (HSE06 [54] in this case), which is efficiently implemented in CRYSTAL [37]. This type of functional yields a better estimation of the single-particle gap due to a different treatment of the exchange-correlation term. For both LDA (not shown) and hybrid functionals such as HSE06, the wavefunctions closely resemble those obtained with TB models. For instance, we observe the same sublattice polarization present in the TB real-space densities with the HSE06 calculation (Fig. 6c). The energy spectrum shows the same degeneracies, although the positions of some of the levels are exchanged. To illustrate the applicability of the code beyond standard cases, we now study the effect of strain on the exciton spectrum.

Figure 4: Plot of the real-space exciton probability densities in TB hBN with \(\mathbf{Q}=0\), for the first 5 energy levels (from left to right, top to bottom). The red dot shows the position of the hole. For each level we plot the sum of the probability densities of each state of the degenerate subspace, \(|\Psi(r_{e},r_{h})|^{2}=\sum_{n}|\psi_{n}(r_{e},r_{h})|^{2}\).

Figure 5: Optical conductivity of monolayer hBN as a function of the incident energy. We compare the conductivity obtained with the Kubo formula in the independent particle approximation (IPA), and with the BSE, which shows a dramatic change from the inclusion of excitons.
If we apply some uniaxial in-plane strain along the \(x\) axis, the point group of the material changes to \(C_{2v}\) (with the rotation axis along \(x\)). The degeneracy of the ground state came from the spatial symmetries, meaning that it should be broken for any strain value, given that all irreducible representations of \(C_{2v}\) are one-dimensional. Therefore, we can study the energy splitting of the ground state as a function of the applied strain. The strain model used is fairly straightforward. Based on the original tight-binding model, we now consider the hopping parameters to have an exponential dependence on the distance: \[t(r)=t_{0}e^{-a(r-r_{0})}, \tag{54}\] where \(a\) is a decay constant, \(t_{0}\) the original value of the hopping and \(r_{0}\) the reference bond length. Additionally, the distortion of the lattice due to strain is taken to affect only bonds parallel to the strain. A rigorous approach would have to implement the appropriate distortion of all atomic positions according to the stress tensor [55], but for our purposes this simple model suffices. This is illustrated in Fig. (7). The procedure to study the exciton spectrum as a function of strain is as follows: we generate different system files (i.e. different Hamiltonians) for different values of the strain, which translates into different atomic positions. Then, we run the exciton simulation for each system file, storing the energies. As expected, all states are now non-degenerate because of the symmetry group \(C_{2v}\). We can plot the ground state splitting as a function of strain, which is shown in Fig. 8(a). In Fig. 8(b) we show the conductivity for a finite value of the strain. The response is no longer isotropic due to the lattice symmetry breaking caused by strain, and the exciton peaks shift for both light polarizations. ### MoS\({}_{2}\) To conclude the examples section, we also analyze the exciton spectrum of MoS\({}_{2}\).
Like hBN, in monolayer form this material presents a honeycomb lattice, although it is not planar: it is formed by three layers of composition S-Mo-S. The description of the band structure of MoS\({}_{2}\) requires a more complex model, which is why we use it to showcase the code. We use a Slater-Koster tight-binding model [56], where each chemical species has a different set of orbitals (Mo has \(d\) orbitals, and S only \(p\) orbitals). This, together with the non-negligible spin-orbit coupling, results in a more complex band structure than that of hBN. Both the lattice and the band structure can be found in Fig. (9). After checking convergence with the number of \(\mathbf{k}\) points and the number of bands, we obtain the spectrum shown in table (2). In this case, the point group of the material is again \(D_{3h}\) and the irreducible representations realized by the wavefunctions at \(\mathbf{Q}=0\) are compatible with the character table of the group [57]. As before, to ensure that the excitons were computed correctly we can plot the total densities and check that they have the expected symmetries. In Fig. (10a) we show the reciprocal probability density of the first energy level. As opposed to hBN, we observe that the states are strongly localized at the valleys. Resolving the degeneracy by labeling each exciton with the \(C_{3}\) eigenvalues would result in each exciton being localized in a different valley [36]. This shows that at least the low-energy spectrum of MoS\({}_{2}\) can be studied at one valley instead of over the whole BZ [58]. This allows for a more precise description of the exciton, since one can use a more refined mesh. The wavefunction for the exciton obtained at one valley can be seen in Fig. (10b), using the code feature to reduce the BZ mesh by some integer factor. For higher excited states this does not hold, since the states become more extended across the BZ, reaching both valleys.
Since the excitons are very localized in reciprocal space, they should be delocalized in real space, meaning that the radius of the exciton should be large (e.g. compared to that of hBN). To complete the characterization of the excitons, we calculate the optical conductivity, as shown in Fig. (11). While the exciton energies converge quickly with \(N_{k}\), it is usually necessary to include more \(\mathbf{k}\) points in the calculation of the optical conductivity in order to smooth unphysical oscillations derived from the discrete mesh. As can be seen, the shape of the spectrum matches previous tight-binding studies [59; 36] and agrees well with ab-initio results [60]. At low energies, the optical conductivity of MoS\({}_{2}\) presents the characteristic A and B exciton peaks, which are understood considering the main spin-allowed electron-hole excitations at the \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\) points. The splitting of \(\sim\)100 meV between these peaks reflects the effect of SOC in TMD materials [61]. At higher energies, the main feature of the spectrum is a pronounced peak similar to the non-interacting case but red-shifted in energy. The excitons giving rise to this peak are often called "C" excitons and were fully characterized in Ref. [59], already showing the potential of tight-binding methods for studying new exciton physics.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(n\) & Energy (eV) & Binding energy (eV) & Degeneracy \\ \hline \hline 1 & 1.7673 & -0.3527 & 2 \\ 2 & 1.7797 & -0.3403 & 2 \\ 3 & 1.9105 & -0.2095 & 2 \\ 4 & 1.9232 & -0.1968 & 2 \\ \hline \end{tabular} \end{table} Table 2: Exciton spectrum from the tight-binding model for MoS\({}_{2}\) computed with \(N_{k}=40^{2}\), \(N_{v}=N_{c}=2\). This model has a direct gap at \(\mathbf{K}\) of 2.12 eV, used to compute the shown exciton binding energies.
Figure 6: (a) DFT band structure of monolayer hBN as obtained with the HSE06 functional. (b) Reciprocal and (c) real-space probability densities of the \(\mathbf{Q}=0\) ground state exciton with \(N_{k}=60^{2}\), \(N_{c}=N_{v}=1\). The DFT calculation involved a basis size of 36. We have successfully run exciton calculations on different systems with varying basis sizes, from 8 to 92.

Figure 7: Schematic of the distortion of the hBN lattice due to the application of uniaxial strain along the \(x\) axis. Note that in reality all bonds should be distorted, due to the phenomenon of Poisson contraction.

Figure 8: (a) Energy splitting of the ground state exciton as a function of strain in hBN. We observe that the splitting is linear in strain for small values. (b) Frequency-dependent conductivity of strained hBN, \(\varepsilon=0.1\). The dashed line shows the position of the ground state for \(\varepsilon=0\).

## 5 Conclusions

We have developed a software package that solves the Bethe-Salpeter equation constructed from either tight-binding models or DFT calculations based on localized orbitals. By considering the orbitals as point-like, the computation of the interactions becomes drastically simplified. Together with an effective screening, this results in a fast determination of the BSE matrix. More specifically, our real-space implementation of the interaction matrix elements is shown to be faster and more precise than its reciprocal-space counterpart, which is the more commonly used formulation. As in GW-BSE approximations, the starting band structure plays a crucial role in determining the resulting exciton spectrum. It is therefore key to select the best possible functional (typically hybrids) or the most accurate tight-binding models that capture the most prominent features of the band structure.
Then, by choosing the screening parameters appropriately, it is possible to reproduce the results of GW-BSE or similar first-principles codes at a fraction of the computational cost. The Xatu code currently provides all the tools needed to extract and characterize the exciton spectrum, either using the binary or via its API. Nevertheless, the package is still under development, as new functionalities and optimizations are added. Our future plans include support for distributed parallelism to enable bigger system sizes, and the calculation of different excitation types such as trions or biexcitons. The code is currently aimed at the description of 2D materials, but it can support 0D and 3D systems. Since the Keldysh potential is only adequate for 2D systems, we will implement additional potentials suitable for different dimensionalities. We also plan to add the possibility of performing exact calculations of the interaction matrix elements when using Gaussian-based DFT codes to compute the band structure. Currently we provide an interface with the CRYSTAL code [37], and ideally more interfaces to community codes will be added over time, such as SIESTA [62] or Wannier90 [63]. The project has been released under an open-source license and as such community contributions are welcome and encouraged. Note added: upon completion of this work we became aware of a very recent submission which also addresses the problem of determining the exciton spectrum from Wannier-based tight-binding models [64]. Nevertheless, we believe that the thorough characterization of excitons we provide here can be advantageous and complementary to other tools.

Figure 11: Optical conductivity of MoS\({}_{2}\) with and without excitons. The BSE calculation was done with \(N_{k}=34^{2}\), \(N_{v}=2\) and \(N_{c}=6\). The first two peaks correspond to the A and B excitons at the valleys, while the rest of the conductivity can be regarded as a shift of the non-interacting one.
Figure 10: (a) Probability density of the ground state exciton in MoS\({}_{2}\) obtained over the full BZ with \(N_{k}=60^{2}\) for \(\mathbf{Q}=0\). (b) Ground state exciton computed on a contour of the \(\mathbf{K}\) valley with \(N_{k}=30^{2}\) with a reduction factor of 2. Both calculations were done with \(N_{c}=N_{v}=2\).

Figure 9: (a) Crystal structure and (b) tight-binding band structure of MoS\({}_{2}\).

## 6 Acknowledgments

The authors acknowledge financial support from Spanish MICINN (Grant Nos. PID2019-109539GB-C43 & TED2021-131323B-I00), Maria de Maeztu Program for Units of Excellence in R&D (Grant No. CEX2018-000805-M), Comunidad Autonoma de Madrid through the Nanomag COST-CM Program (Grant No. S2018/NMT-4321), Generalitat Valenciana through Programa Prometeo (2021/017), Centro de Computacion Cientifica of the Universidad Autonoma de Madrid, and Red Espanola de Supercomputacion.

## Appendix A Symmetry

Throughout the document, we have mentioned and used several times the fact that each individual exciton state does not necessarily have the symmetries of the lattice; rather, it is the sum of their squared amplitudes that does. In this appendix we give a proof of this statement: given \(|\Psi|^{2}=\sum_{n}|\psi_{n}|^{2}\), where \(\psi_{n}\) denotes the wavefunction of the exciton states in some degenerate subspace of \(PHP\), and given some symmetry operation \(C\) such that \([H,C]=0\), then \[C|\Psi|^{2}=|\Psi|^{2}. \tag{10}\] First we have to consider the action of the symmetry operator \(C\) on an exciton state. Given that the eigenstates of a degenerate subspace of \(H\) are not in general eigenstates of \(C\), the most general action is to mix the degenerate states, i.e.: \[C\psi_{n}=\sum_{i}\alpha_{in}\psi_{i} \tag{11}\] The coefficients \(\alpha_{in}\) are the matrix elements of \(C\). To prove (10), we need to know the action of \(C\) on the squared amplitude, \(C|\psi_{n}|^{2}\).
So first we want to prove the following property: \[C|\psi_{n}|^{2}=|C\psi_{n}|^{2} \tag{12}\] This can be proven using the action of the symmetry operation on the coordinate of the wavefunction, i.e. \(C\psi_{n}(x)=\psi_{n}(C^{-1}x)\): \[C|\psi_{n}|^{2}(x)=|\psi_{n}|^{2}(C^{-1}x)=\psi_{n}(C^{-1}x)\psi_{n}^{*}(C^{-1}x)=C\psi_{n}(x)C\psi_{n}^{*}(x)=|C\psi_{n}|^{2}(x) \tag{13}\] where we have also used that \(C\psi_{n}^{*}(x)=(C\psi_{n})^{*}(x)\). This last identity can be proved by conjugating the action of the symmetry on the coordinates: \[(C\psi_{n})^{*}(x)=\psi_{n}^{*}(C^{-1}x)=C\psi_{n}^{*}(x) \tag{14}\] This enables us to compute \(C|\psi_{n}|^{2}\) in terms of an expansion on the states of the degenerate subspace: \[C|\psi_{n}|^{2}=|C\psi_{n}|^{2}=\left|\sum_{i}\alpha_{in}\psi_{i}\right|^{2}= \sum_{i,j}\alpha_{in}\alpha_{jn}^{*}\psi_{i}\psi_{j}^{*} \tag{15}\] Finally, with this expression we can prove the symmetry invariance of \(|\Psi|^{2}=\sum_{n}|\psi_{n}|^{2}\). To do so, we act with the symmetry operation \(C\) on \(|\Psi|^{2}\): \[C|\Psi|^{2}=\sum_{n}C|\psi_{n}|^{2}=\sum_{n}\left[\sum_{i,j}\alpha_{in}\alpha_{jn}^{*}\psi_{i}\psi_{j}^{*}\right]=\sum_{ij}\left[\sum_{n}\alpha_{in}\alpha_{jn}^{*}\right]\psi_{i}\psi_{j}^{*}=\sum_{i}|\psi_{i}|^{2}=|\Psi|^{2} \tag{16}\] where we have used that \(C\) is unitary, i.e. \(\sum_{n}\alpha_{in}\alpha_{jn}^{*}=\delta_{ij}\). This proves that the sum of the squared amplitudes of the degenerate states is invariant under the symmetry operations. On a different note, back in the examples we used the character table of the point group of the solid to justify the observed state degeneracies, and in the previous proof we also considered some general symmetry \(C\) such that \([H,C]=0\). For the abstract, unrepresented Hamiltonian, given any operation \(C\) of the point group of the solid, it is true that \([H,C]=0\). However, we are not working with the total Hamiltonian, but with a sector of it.
So one must actually look for symmetry operations that commute with \(PHP\): \[[PHP,C]=0 \tag{17}\] Since the sectors of electron-hole pairs of different momentum are disconnected, we can define \(\tilde{H}(\mathbf{Q})=P_{\mathbf{Q}}HP_{\mathbf{Q}}\), where \(P_{\mathbf{Q}}\) is the projector over electron-hole pairs of \(\mathbf{Q}\) total momentum. This Hamiltonian is analogous to the Bloch Hamiltonian \(H(\mathbf{k})\), and it can be shown that it transforms in the same way: \[C^{-1}\tilde{H}(\mathbf{Q})C=\tilde{H}(C^{-1}\mathbf{Q}) \tag{18}\] meaning that for \(\mathbf{Q}=0\) the symmetry group is the crystallographic point group, but for \(\mathbf{Q}\neq 0\) the Hamiltonian is invariant only under symmetry operations of the little group of \(\mathbf{Q}\), whose irreducible representations thus dictate the (unitary) transformation properties of the \(\mathbf{Q}\)-excitonic wavefunctions. Proof that \(H(\mathbf{Q})\) transforms as the Bloch Hamiltonian \(H(\mathbf{k})\) under symmetry operations: \[C^{-1}H(\mathbf{Q})C =C^{-1}P_{\mathbf{Q}}CC^{-1}HCC^{-1}P_{\mathbf{Q}}C\] \[=C^{-1}P_{\mathbf{Q}}CHC^{-1}P_{\mathbf{Q}}C \tag{10}\] where we have used that \([H,C]=0\). So we only need to see how the projectors transform under the symmetry operation to determine how \(H(\mathbf{Q})\) transforms. \[C^{-1}P_{\mathbf{Q}}C=\sum_{\mathbf{k},v,c}C^{-1}c^{\dagger}_{c, \mathbf{k}+\mathbf{Q}}c_{v\mathbf{k}}\ket{GS}\bra{GS}c^{\dagger}_{v\mathbf{k }}c_{c,\mathbf{k}+\mathbf{Q}}C \tag{11}\] Inserting identities, we can transform each creation/annihilation operator according to \(C^{-1}c^{\dagger}_{n\mathbf{k}}C=c^{\dagger}_{n,C^{-1}\mathbf{k}}\), up to an arbitrary phase that is cancelled in (11). From (3) it follows that the Fermi sea is invariant under point group operations, i.e. 
\(C\ket{GS}=\ket{GS}\) (again, up to an arbitrary phase that is cancelled), and we arrive at the following expression: \[C^{-1}P_{\mathbf{Q}}C=\sum_{\mathbf{k},v,c}c^{\dagger}_{c,C^{-1}\mathbf{k}+C^{-1}\mathbf{Q}}c_{v,C^{-1}\mathbf{k}}\ket{GS}\bra{GS}c^{\dagger}_{v,C^{-1}\mathbf{k}}c_{c,C^{-1}\mathbf{k}+C^{-1}\mathbf{Q}} \tag{12}\] From the \(C\)-invariance of the BZ in \(\mathbf{k}\)-space, we arrive at the final expression for the transformed projector: \[C^{-1}P_{\mathbf{Q}}C=\sum_{\mathbf{k},v,c}\ket{v,c,\mathbf{k},C^{-1}\mathbf{ Q}}\bra{v,c,\mathbf{k},C^{-1}\mathbf{Q}}=P_{C^{-1}\mathbf{Q}} \tag{13}\] Therefore, the projected exciton Hamiltonian \(H(\mathbf{Q})\) also transforms in the same way: \[C^{-1}H(\mathbf{Q})C=H(C^{-1}\mathbf{Q}) \tag{14}\] Likewise, the application of time-reversal yields \(T^{-1}H(\mathbf{Q})T=H(-\mathbf{Q})\) whenever it is a symmetry of the system. ## Appendix B Usage The installation instructions can be found at the repository [https://github.com/alejandrojuria/xatu](https://github.com/alejandrojuria/xatu), so they will not be discussed here. The code has been developed with a hybrid approach in mind: one can resort to configuration files to run the program, in analogy with DFT codes, or instead define both the non-interacting system and the exciton simulation programmatically using the provided API. First we discuss its usage with configuration files. The basic usage as a CLI program is described by: ``` xatu [OPTIONS] systemfile [excitonfile] ``` The executable always expects one file describing the system where we want to compute the excitons, and then another file specifying the parameters of the simulation. Their content is addressed in the next sections. The executable can also take optional flags, generally to tune the output of the simulation. By default, running the program without additional flags prints the exciton energies, without writing the results to any file.
**-h (--help)** Used to print a help message with the usage of the executable and a list of all possible flags that may be passed. The simulation is not performed (even in the presence of configuration files).

**-s (--states) nstates** The number of states specified with this flag is also used for any of the output flags. By default, the number of states is 8 (i.e. if the flag is not present).

**-p (--precision) decimals** One can specify the number of decimals used when printing the exciton energies. This is relevant to detect state degeneracy without inspecting the states manually. Defaults to 6 decimals if not present.

**-d (--dft) [ncells]** This flag is used to indicate that the systemfile provided corresponds to a CRYSTAL output file, instead of following the standardized format. DFT calculations usually involve several unit cells to determine the Bloch Hamiltonian, so the optional value ncells can be passed to specify how many we want to take into account. Otherwise all of them are read and used.

**-e, -c, -k (--energy, --eigenstates, --kwf)** The optional flags -e, -c, -k, -r are used to specify which exciton output is written to file: -e writes the energies, -c writes the eigenvectors, -k writes the reciprocal density. Note that they can be combined instead of being written separately (e.g. -ek instead of -e -k).

**-r (--rswf) [holeIndex] [-r ncells]** Used to write the real-space probability densities to a file. One can give the index of the atom where the hole is located (defaults to the first atom of the motif). It can be used a second time to specify the number of unit cells where we want to compute the amplitude (e.g. -r 1 -r 10 fixes the hole at the second atom of the motif, and uses 10 unit cells along each axis).

**-S (--spin)** Computes the total spin of the excitons, and writes it to a file. This assumes that the single-particle basis includes spin without performing any check, so incorrect usage could result in wrong results or runtime errors.
**-a (--absorption)** Computes the optical conductivity (which reflects the absorption of light up to a constant factor) as a function of frequency using the exciton spectrum, and saves the result to a file. A file named "kubo_w.in" with the adequate format (shown below) must be present in the working directory.

**-m (--method) diag | davidson | sparse** Choose the method to obtain the eigenstates of the BSE. By default, full diagonalization is used. If the Davidson or sparse (Lanczos) method is selected, then it is used to compute the number of states specified before.

**-b (--bands) kpointsfile** To check that the system file was written correctly, one can use this option to diagonalize the Bloch Hamiltonian at the \(\mathbf{k}\) points specified in a file, and write the energy bands to a file. No exciton calculation is performed.

### Structure of a system file

The system configuration files contain all the information needed to characterize the material of study completely: they provide the lattice vectors and motif positions, which are required for the real-space evaluation of the excitons. Then, we have the number of orbitals of each unique chemical species, which is needed to compute the matrix elements correctly, and the filling, which determines which bands participate in the formation of the exciton. Finally, the file contains the matrices needed to build the Bloch Hamiltonian, that is, the Fock matrices \(H(\mathbf{R})\) and their corresponding Bravais vectors \(\mathbf{R}\). The Bloch Hamiltonian is then reconstructed as: \[H(\mathbf{k})=\sum_{\mathbf{R}}H(\mathbf{R})e^{i\mathbf{k}\cdot\mathbf{R}} \tag{12}\] Note that even though one has to provide the orbitals of each species, the specific type of orbital is not needed since the interaction is computed using the point-like approximation. A system file is specified using labels for each block. Blocks always begin with the block delimiter #, followed by a label.
A block is then defined as all the content between two consecutive block delimiters. The expected content for each label will be discussed next. Any line containing ! is regarded as a comment, and empty lines are skipped.

**# BravaisLattice:** Basis vectors of the Bravais lattice. The number of vectors present is also used to determine the dimensionality of the system. The expected format is one vector per line, x y z.

**# Motif:** List with the positions and chemical species of all atoms of the motif (unit cell). The chemical species are specified with an integer index, used later to retrieve the number of orbitals of that species. The expected format is one atom per line, x y z index.

**# Orbitals:** Number of orbitals of each chemical species present. The position of the number of orbitals for each species follows the indexing used in the motif block. This block expects one or more numbers of orbitals, the same as the number of different species present, n1 [n2...].

**# Filling:** Total number of electrons in the unit cell. Required to identify the Fermi level, which is the reference point in the construction of the excitons. Must be an integer number.

**# BravaisVectors:** List of Bravais vectors \(\mathbf{R}\) that participate in the construction of the Bloch Hamiltonian (12). Expected one per line, in format x y z.

* # FockMatrices: Matrices \(H(\mathbf{R})\) that construct the Bloch Hamiltonian \(H(\mathbf{k})\). The matrices must be fully defined, i.e., they cannot be triangular, since the code does not use hermiticity to generate the Bloch Hamiltonian. The Fock matrices given must follow the ordering given in the block BravaisVectors. The matrices can be real or complex, and each one must be separated from the next using the delimiter &. In case the matrices are complex, the real and imaginary parts must be separated by a space, and the complex part must carry the imaginary number symbol (e.g. 1.5 -2.1j). Both \(i\) and \(j\) can be used.
* # [OverlapMatrices]: In case that the orbitals used are not orthonormal, one can optionally provide the overlap matrices \(S(\mathbf{R})\). The overlap in \(\mathbf{k}\) space is given by: \[S(\mathbf{k})=\sum_{\mathbf{R}}S(\mathbf{R})e^{i\mathbf{k}\cdot\mathbf{R}}\] This is necessary to be able to reproduce the bands, which come from solving the generalized eigenvalue problem \(H(\mathbf{k})\Psi=ES(\mathbf{k})\Psi\). This will be especially necessary if the system was determined using DFT, since in tight-binding we usually assume orthonormality. This block follows the same rules as FockMatrices: each matrix \(S(\mathbf{R})\) must be separated with the delimiter &, and they must follow the order given in BravaisVectors.

Several examples of valid system files are provided in the code repository, under the folder /models.

### Structure of an exciton file

The purpose of the system file was to specify completely the system where we want to compute the excitons. Then, the exciton file is used to describe the excitons themselves: for example, the number of points in the mesh or submesh, the bands that participate and the center-of-mass momentum, as well as some additional flags. The idea is to keep the functionality as orthogonal as possible between files. With one system file, we can test for the convergence of the excitons with the number of \(\mathbf{k}\) points, or with the number of bands, by modifying the exciton file only. Finally, we have the runtime options of the program, which in general do not affect the energy and modify the output exclusively. The philosophy is to maximize the reproducibility and facilitate tracking of the experiments. The exciton files are built following the rules of the system files. They are composed of blocks, starting with #. Each block has a label, which determines the expected content of the block.
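Both system and exciton files therefore share the same block syntax (# labels, ! comments, empty lines skipped), which is straightforward to tokenize. The following Python sketch is purely illustrative — it is not Xatu's own parser (the package is a compiled code), and the function name is made up:

```python
def parse_blocks(text):
    """Split a Xatu-style configuration file into a {label: lines} dict.

    A block starts at a line beginning with the delimiter '#', followed
    by its label; any line containing '!' is treated as a comment and
    empty lines are skipped, mirroring the rules described above."""
    blocks, label = {}, None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or "!" in line:
            continue  # skip comments and empty lines
        if line.startswith("#"):
            label = line.lstrip("#").strip()
            blocks[label] = []
        elif label is not None:
            blocks[label].append(line)
    return blocks
```

For instance, `parse_blocks("#Ncells\n20\n! comment\n#Dielectric\n1 1 10\n")` returns `{"Ncells": ["20"], "Dielectric": ["1 1 10"]}`.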
Next we provide a list of the possible parameters used in the construction of an exciton file: * # Label: Used to specify the name of the files containing the output of the program. The files will be named [Label].eigval, [Label].kwf, etc. * # Bands: Number of bands above and below the Fermi level. The minimum value is 1, to describe one conduction band and one valence band (i.e. only one combination of bands). * # [BandList]: As an alternative to Bands, one can specify a list with the indices of the bands that compose the exciton. 0 is taken as the last valence band, meaning that 1 would be the first conduction band, -1 is the second valence band and so on. This option can be used to generate asymmetric combinations of bands. It overrides the Bands block. * # Ncells: Number of points in one direction of the Brillouin zone, or equivalently number of unit cells along one axis. The same number of points is taken along all directions. * # [Submesh]: Used to specify a submesh of the Brillouin zone. Takes a positive integer \(m\), which divides the BZ along each axis by that factor. The resulting area is meshed with the number of points specified in the Ncells block. This option can become memory intensive (it scales as \(\mathcal{O}(m^{d})\), \(d\) the dimension). * # [ShiftMesh]: In case that we are using a submesh, then probably we also want to shift the meshed area to center it at the gap, where the exciton peaks. Takes a vector with its components, kx ky kz. * # Dielectric: The Keldysh interaction requires setting the dielectric constants of substrate \(\epsilon_{s}\), the medium \(\epsilon_{m}\) and the screening length \(r_{0}\), which involves the dielectric constant of the material. This block expects three values, es em r0. * # [TotalMomentum]: One can optionally specify the total or center-of-mass momentum \(\mathbf{Q}\) of the exciton. By default, it is taken to be zero, unless this block is specified. It expects a vector in form qx qy qz. 
* # [Reciprocal]: If present, the interaction matrix elements are computed in reciprocal space instead of direct space, which is the default. It takes an integer argument to specify the number of reciprocal cells to sum over, nG.

* # [Exchange]: Flag to turn on the exchange interaction. By default computations neglect the exchange, and use only the direct term. It has to be set to true or false.

* # [Scissor]: Used to specify a scissor shift of the bands to correct the gap. This optional field takes a single value, shift.

As can be seen, a minimum exciton simulation only requires specifying the number of bands, the number of \(\mathbf{k}\) points and the dielectric constants. The modification of any of these parameters is expected to result in a variation of the exciton results (energies, wavefunctions, conductivity), which is why all of them have been delegated to the same file.

### Absorption file

For the calculation of the Kubo conductivity, one needs to provide a separate input file named kubo_w.in in the working folder. This file is used to specify all parameters relative to the conductivity calculation, namely the desired energy interval, the point sampling and the broadening to be used, as well as the output files. Its format is as follows:

```
#initial frequency (eV)
5
#frequency range (eV)
8
#number of frequency points (integer)
300
#broadening parameter (eV)
0.05
#type of broadening
lorentzian
#output kubo name files
kubo_hBN_sp.dat kubo_hBN_ex.dat
```

Do note that as opposed to the previous configuration files, the name of each section starting with # is not actually relevant for the parsing; the program always expects the same fields to be present in the file, and in the same order as presented here. For the broadening, three different options are allowed ('lorentzian', 'exponential', 'gaussian').

### As a library

So far we have discussed a more streamlined usage of the package.
In some cases, however, the user could benefit from accessing directly the results of an exciton calculation, instead of having to dump it to a file to postprocess it later. To enable this possibility, the package has been also designed as a library, meaning that one can import the classes and functions defined in the API, and use them to build some extra functionality. Some use cases could be scenarios with exciton interactions, such as exciton-exciton interactions or exciton-polaritons. To do so, the package provides a header file which defines a namespace. Within the namespace we have access to all the exciton functionality, which is completely documented. For instructions on how to build the documentation, we refer to the project repository where the most up-to-date information will be present. Additionally, some usage examples can be found under the root directory in the folder /main. The outline for a general exciton simulation is the following: one first has to create a System object, which can be done with a system file. Alternatively, one can define a subclass that inherits from System, and use it to implement the desired behaviour (namely the Bloch Hamiltonian). Then this System is passed on to the Exciton class, which we configure with the desired parameters. The interacting Hamiltonian is initialized and solved, returning a Result object which contains the eigenvalues and eigenvectors. With this, now we can compute some observables, or instead use these states to perform some other calculations out of the scope of the code. ### Output To conclude the usage section, we will describe briefly the structure of the output files. Here we describe how each file is written so the user can write their own custom routines; we also provide some example Python scripts under the folder /plot. * Energy: The energy file has in the first line the total number of energies written in the file. The second line contains all the energies, separated by a tabulation, e1 e2... 
en. All energies are written, including degenerate levels. All the exciton energies are given with respect to the Fermi sea energy. To obtain the binding energy, one must subtract the gap from the exciton energy. Units are \([E]=\) eV.

* States: The first line contains the dimension of the BSE matrix \(n\), i.e. the number of different electron-hole pairs. The next \(n\) lines specify the valence, conduction bands of each electron-hole pair and their \(\mathbf{k}\) point, kx ky kz v c. Afterwards, each line specifies completely the coefficients of each exciton state. The format per line is: Re(A1) Im(A1) Re(A2) Im(A2)....

* Reciprocal probability density: For the reciprocal density, on each line we specify the coordinates of the k point and the associated probability, kx ky kz P. Each state is separated from the next by a delimiter #. Units are \([k]=\) Å\({}^{-1}\).

* Real-space probability density: The first line has the coordinates of the hole, hx hy hz. The following ones each have the coordinates of one atomic position, and the probability of finding the electron: x y z P. Densities for different states are separated by #. Units are \([x]=\) Å.

* Spin: On each line we write the index of the current exciton, and next the total spin projection, the hole and the electron spin, n St Sh Se. Spin units are \([S_{z}]=\hbar\).

* Absorption: Both conductivities with and without exciton effects are computed and written to two different files. Each row contains the following columns: \(\omega\), \(\sigma_{xx}\), \(\sigma_{xy}\), \(\sigma_{xz}\), \(\sigma_{yx}\), \(\sigma_{yy}\), \(\sigma_{yz}\), \(\sigma_{zx}\), \(\sigma_{zy}\), \(\sigma_{zz}\). Units are \([\omega]=\) eV, \([\sigma_{ij}]=e^{2}/\hbar\).
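As a closing illustration, the energy-file layout described above (a count on the first line, tab-separated energies on the second) is simple to read back. This Python sketch is not part of Xatu; the function names are hypothetical, and the binding-energy helper just applies the gap subtraction mentioned above:

```python
def read_energy_file(text):
    """Parse an energy file: the first line gives the number of
    energies, the second line the tab-separated energies in eV."""
    lines = text.strip().splitlines()
    count = int(lines[0])
    energies = [float(x) for x in lines[1].split()]
    if len(energies) != count:
        raise ValueError("header count does not match number of energies")
    return energies

def binding_energies(exciton_energies, gap):
    """Binding energies, obtained by subtracting the gap (all in eV)."""
    return [e - gap for e in exciton_energies]
```

For example, `read_energy_file("3\n1.0\t1.5\t2.0\n")` returns `[1.0, 1.5, 2.0]`.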
2301.06193
RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
In recent years, Convolutional Neural Networks (CNNs) have become the standard class of deep neural network for image processing, classification and segmentation tasks. However, the large strides in accuracy obtained by CNNs have been derived from increasing the complexity of network topologies, which incurs sizeable performance and energy penalties in the training and inference of CNNs. Many recent works have validated the effectiveness of parameter quantization, which consists in reducing the bit width of the network's parameters, to enable the attainment of considerable performance and energy efficiency gains without significantly compromising accuracy. However, it is difficult to compare the relative effectiveness of different quantization methods. To address this problem, we introduce RedBit, an open-source framework that provides a transparent, extensible and easy-to-use interface to evaluate the effectiveness of different algorithms and parameter configurations on network accuracy. We use RedBit to perform a comprehensive survey of five state-of-the-art quantization methods applied to the MNIST, CIFAR-10 and ImageNet datasets. We evaluate a total of 2300 individual bit width combinations, independently tuning the width of the network's weight and input activation parameters, from 32 bits down to 1 bit (e.g., 8/8, 2/2, 1/32, 1/1, for weights/activations). Upwards of 20000 hours of computing time in a pool of state-of-the-art GPUs were used to generate all the results in this paper. For 1-bit quantization, the accuracy losses for the MNIST, CIFAR-10 and ImageNet datasets range between [0.26%, 0.79%], [9.74%, 32.96%] and [10.86%, 47.36%] top-1, respectively. We actively encourage the reader to download the source code and experiment with RedBit, and to submit their own observed results to our public repository, available at https://github.com/IT-Coimbra/RedBit.
André Santos, João Dinis Ferreira, Onur Mutlu, Gabriel Falcao
2023-01-15T21:27:35Z
http://arxiv.org/abs/2301.06193v1
# RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs ###### Abstract In recent years, Convolutional Neural Networks (CNNs) have become the standard class of deep neural network for image processing, classification and segmentation tasks. However, the large strides in accuracy obtained by CNNs have been derived from increasing the complexity of network topologies, which incurs sizeable performance and energy penalties in the training and inference of CNNs. Many recent works have validated the effectiveness of parameter quantization, which consists in reducing the bit width of the network's parameters, to enable the attainment of considerable performance and energy efficiency gains without significantly compromising accuracy. However, it is difficult to compare the relative effectiveness of different quantization methods. To address this problem, we introduce RedBit, an open-source framework that provides a transparent, extensible and easy-to-use interface to evaluate the effectiveness of different algorithms and parameter configurations on network accuracy. We use RedBit to perform a comprehensive survey of five state-of-the-art quantization methods applied to the MNIST, CIFAR-10 and ImageNet datasets. We evaluate a total of \(2300\) individual bit width combinations, independently tuning the width of the network's weight and input activation parameters, from \(32\) bits down to \(1\) bit (e.g., 8/8, 2/2, 1/32, 1/1, for weights/activations). Upwards of 20000 hours of compute time in a pool of state-of-the-art GPUs were used to generate all the results in this paper. For 1-bit quantization, the accuracy losses for the MNIST, CIFAR-10 and ImageNet datasets range between \([0.26\%,0.79\%]\), \([9.74\%,32.96\%]\) and \([10.86\%,47.36\%]\) top-1, respectively.
We actively encourage the reader to download the source code and experiment with RedBit, and to submit their own observed results to our public repository, available at [https://github.com/IT-Coimbra/RedBit](https://github.com/IT-Coimbra/RedBit).

Quantized Neural Networks; Deep Learning Accuracy; Binary Neural Networks; Convolutional Neural Network;

## 1 Introduction

In recent years, CNNs have become increasingly adept at executing numerous complex image processing, classification and segmentation tasks. These improvements have been attained in large part at the expense of a continuous increase in size and complexity for new CNN network topologies. ResNet-50 [1] (introduced in 2015, with 26 million parameters) achieves a top-1 accuracy of 77.15% and a top-5 accuracy of 93.29%. (Top-N accuracy corresponds to the proportion of scenarios for which the correct answer is contained in the network's \(N\) best guesses, for a given classification problem.) In contrast, EfficientNet-B7 [2] (introduced in 2019, with 66 million parameters, \(2.5\times\) larger than ResNet-50) achieves top-1 and top-5 accuracies for the ImageNet dataset of 84.40% and 97.10%, respectively. The upshot of this increase in computational complexity is a substantial increase in the time and energy required to train and use these networks. The widespread adoption of CNNs is potentially most impactful in edge devices (e.g., autonomous vehicles, smartphones), which often come equipped with high-quality imaging sensors. However, these devices also carry very strict autonomy constraints, and are therefore unsuited to execute the complex and memory-intensive operations associated with conventional CNNs. The severity of this issue will continue to escalate in the near future, as newly proposed networks make use of increasingly large numbers of parameters, exacerbating their memory intensity and performance and energy overheads [3].
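The top-N metric defined in the parenthetical above can be computed directly from per-class scores. The following Python sketch is illustrative only — it is not part of RedBit, and the function name is made up:

```python
def top_n_accuracy(scores, labels, n):
    """Fraction of samples whose true label is among the n classes with
    the highest scores. `scores` holds one list of per-class scores per
    sample; `labels` holds the ground-truth class indices."""
    hits = 0
    for row, label in zip(scores, labels):
        # rank class indices from highest to lowest score
        ranked = sorted(range(len(row)), key=lambda i: row[i], reverse=True)
        if label in ranked[:n]:
            hits += 1
    return hits / len(labels)
```

For two samples scored `[[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]]` with true labels `[2, 0]`, the top-1 accuracy is 0.5 (only the second sample's best guess is correct) and the top-2 accuracy is 1.0.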
To curb the scaling challenges [4] presented by these larger networks, while retaining as many of their benefits as possible, it is possible to _quantize_ CNN parameters, yielding Quantized Convolutional Neural Networks (QCNNs). Quantization consists in reducing the bit width of a network's parameters to alleviate their computational and data movement requirements, and has been demonstrated by many prior works [5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44] to provide substantial performance and energy gains, with minimal losses in accuracy. Quantization allows 1) the use of fewer compute resources - fewer bits involved in logic operations - 2) the use of less memory to store the actual data and 3) the reduction of data movement. These lead to reductions in the area footprint of hardware implementations and in the required memory. Other techniques can also alleviate computational requirements, such as pruning [45, 46, 47, 48, 49], fine-tuning [50], compression [51, 52, 53], decomposition [54],
2305.04273
Braid groups and mapping class groups for 2-orbifolds
The main result of this article is that pure orbifold braid groups fit into an exact sequence $1\rightarrow K\rightarrow\pi_1^{orb}(\Sigma_\Gamma(n-1+L))\xrightarrow{\iota_{\textrm{PZ}_n}}\textrm{PZ}_n(\Sigma_\Gamma(L))\xrightarrow{\pi_{\textrm{PZ}_n}}\textrm{PZ}_{n-1}(\Sigma_\Gamma(L))\rightarrow1.$ In particular, we observe that the kernel $K$ of $\iota_{\textrm{PZ}_n}$ is non-trivial. This corrects Theorem 2.14 in [12](arXiv:2006.07106). Moreover, we use the presentation of the pure orbifold mapping class group $\textrm{PMap}^{\textrm{id},orb}_n(\Sigma_\Gamma(L))$ from [8] to determine $K$. Comparing these orbifold mapping class groups with the orbifold braid groups, reveals a surprising behavior: in contrast to the classical case, the orbifold braid group is a proper quotient of the orbifold mapping class group. This yields a presentation of the pure orbifold braid group which allows us to read off the kernel $K$.
Jonas Flechsig
2023-05-07T13:24:25Z
http://arxiv.org/abs/2305.04273v1
# Braid groups and mapping class groups ###### Abstract The main result of this article is that pure orbifold braid groups fit into an exact sequence \[1\to K\to\pi_{1}^{orb}(\Sigma_{\Gamma}(n-1+L))\xrightarrow{{}^{\iota_{\mathrm{PZ }_{n}}}}\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\xrightarrow{\pi_{\mathrm{PZ}_{n}}} \mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\to 1.\] In particular, we observe that the kernel \(K\) of \(\iota_{\mathrm{PZ}_{n}}\) is non-trivial. This corrects Theorem 2.14 in [12]. Moreover, we use the presentation of the pure orbifold mapping class group \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) from [8] to determine \(K\). Comparing these orbifold mapping class groups with the orbifold braid groups, reveals a surprising behavior: in contrast to the classical case, the orbifold braid group is a proper quotient of the orbifold mapping class group. This yields a presentation of the pure orbifold braid group which allows us to read off the kernel \(K\). ## 1 Introduction Orbifold braid groups are analogs of Artin braid groups or, more generally, surface braid groups. Instead of considering braids moving inside a disk or a surface, orbifold braids move inside a 2-dimensional orbifold. Orbifold braid groups have attracted interest since some of them contain spherical and affine Artin groups of type \(D_{n},\tilde{B}_{n}\) and \(\tilde{D}_{n}\) as finite index subgroups by work of Allcock [1]. For these Artin groups, the orbifold braid groups provide us with braid pictures. Roushon published several articles on the structure of orbifold braid groups [12, 13, 14, 15] and the contained Artin groups [11]. Further, Crisp-Paris [5] studied the outer automorphism group of the orbifold braid group. The present article also contributes to the study of the structure of orbifold braid groups. As in the work of Roushon [12], we consider braid groups on orbifolds with finitely many punctures and cone points (of possibly different orders). 
The underlying orbifolds are defined using the following data: Let \(\Gamma\) be a free product of finitely many finite cyclic groups. As such, \(\Gamma\) acts on a planar, contractible surface \(\Sigma\) (with boundary), obtained by thickening the Bass-Serre tree (see Example 2.3 for details). If we add \(L\) punctures, we obtain a similar orbifold as studied by Roushon, which we denote by \(\Sigma_{\Gamma}(L)\). In contrast to his article, we consider orbifolds with non-empty boundary but this does not affect the structure of the orbifold braid groups. The only singular points in the orbifold \(\Sigma_{\Gamma}(L)\) are cone points that correspond to the finite cyclic factors of the free product \(\Gamma\). The elements of orbifold braid groups \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) are represented by braid diagrams (see Figure 1 for an example) with \(n\) strands (drawn in black), \(N\) cone point bars (drawn in red with a cone at the top) and \(L\) bars that represent the punctures (drawn in blue with a diamond at the top). The composition of these diagrams works as in Artin braid groups.
The orbifold braid group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) is deeply connected to the orbifold mapping class group of the punctured orbifold \(\Sigma_{\Gamma}(L)\) with \(n\) marked points. This group, denoted by \(\mathrm{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right)\), is studied in [8]. A mapping class of \(\Sigma_{\Gamma}(L)\) is represented by a \(\Gamma\)-equivariant homeomorphism of \(\Sigma(L)\) that fixes cone points and the boundary \(\partial\Sigma(L)\) pointwise. Such a homeomorphism respects the \(n\) marked points if it preserves the \(\Gamma\)-orbit of the \(n\) marked points as a set.
The equivalence relation is induced by \(\Gamma\)-equivariant ambient isotopies fixing cone points, marked points and the boundary. These orbifold mapping class groups admit a homomorphism \[\mathrm{Forget}_{n}^{orb}:\mathrm{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L) \right)\rightarrow\mathrm{Map}^{orb}\left(\Sigma_{\Gamma}(L)\right)\] by forgetting the marked points. Let \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) be the kernel of \(\mathrm{Forget}_{n}^{orb}\). ### Main results Concerning the relation between \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\), we observe the following: Evaluating a certain ambient isotopy at the marked points \(p_{1},...,p_{n}\) yields a homomorphism \[\mathrm{ev}:\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L) \right)\rightarrow\mathrm{Z}_{n}(\Sigma_{\Gamma}(L)),\] see Section 5.1 for details. In contrast to the classical situation, this evaluation map is not an isomorphism. However, the kernel of \(\mathrm{ev}\) can be described in terms of \(\frac{2\pi}{m_{\nu}}\)-twists \(U_{\nu}\) and \(C_{k\nu}\) of a marked point around the cone point \(c_{\nu}\) for \(1\leq\nu\leq N\). Locally, \(U_{\nu}\) and \(C_{k\nu}\) twist around the cone point as described in Figure 1.4. For further information about the embedding of the twisted disks, we refer to Section 4. **Theorem A**.: _The kernel of \(\mathrm{ev}\) is the normal closure of \(\{U_{\nu}^{m_{\nu}}\mid 1\leq\nu\leq N\}\) in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). 
The kernel of the restricted map \(\mathrm{ev}\mid_{\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)}\) is the normal closure of \(\{C_{k\nu}^{m_{\nu}}\mid 1\leq\nu\leq N,1\leq k\leq n\}\) in \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)._ By [8, Proposition 4.22], we have a presentation of \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Together with Theorem A, this allows us to deduce the following presentation of \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) in terms of the braids from Figure 1.2: **Theorem B**.: _The orbifold braid group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) is presented by generators_ \[h_{1},...,h_{n-1},t_{1},...,t_{L},u_{1},...,u_{N}\] _and the following defining relations for \(2\leq j<n\), \(1\leq\theta,\lambda\leq L\) with \(\theta<\lambda\) and \(1\leq\mu,\nu\leq N\) with \(\mu<\nu\):_ 1. \(u_{\nu}^{m_{\nu}}=1\)_,_ 2. _braid and commutator relations for the generators_ \(h_{1},...,h_{n-1}\)_,_ 3. \([t_{\lambda},h_{j}]=1\) _and_ \([u_{\nu},h_{j}]=1\)_,_ 4. \([h_{1}t_{\lambda}h_{1},t_{\lambda}]=1\) _and_ \([h_{1}u_{\nu}h_{1},u_{\nu}]=1\)_,_ 5. \([t_{\theta},b_{2\lambda}]=1\)_,_ \([u_{\mu},c_{2\nu}]=1\) _and_ \([t_{\lambda},c_{2\nu}]=1\) _with_ \(b_{2\lambda}=h_{1}^{-1}t_{\lambda}h_{1}\) _and_ \(c_{2\nu}=h_{1}^{-1}u_{\nu}h_{1}\) All the relations from Theorem B follow from geometric observations: The finite order relation B(1) follows as described in Remark 3.12, the relations B(2) are well-known and the commutator relations B(3)-B(5) follow from the braid pictures in Figure 5.1. Moreover, a similar presentation can also be deduced for the pure orbifold braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) (see Corollary 5.6). 
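For orientation, it is worth recalling the classical counterpart of the sequence studied next (a standard fact about Artin braid groups, stated here for comparison and not taken from the text above): for the pure braid group \(P_{n}\) of the disk \(\mathbb{D}\), forgetting the \(n\)-th strand yields the Fadell-Neuwirth sequence, which is exact with trivial kernel, \[1\to\pi_{1}\left(\mathbb{D}\setminus\{p_{1},...,p_{n-1}\}\right)\to P_{n}\to P_{n-1}\to 1,\] where \(\pi_{1}\left(\mathbb{D}\setminus\{p_{1},...,p_{n-1}\}\right)\) is a free group of rank \(n-1\). The orbifold setting below differs precisely in that the map from the orbifold fundamental group acquires a non-trivial kernel.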
The main contribution of this article is a result on the structure of pure orbifold braid groups: Let \(\iota_{\mathrm{PZ}_{n}}\) be the map that considers elements of the orbifold fundamental group of the punctured orbifold \(\Sigma_{\Gamma}(n-1+L)\) as braids in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) that only move the \(n\)-th strand. Moreover, let \(\pi_{\mathrm{PZ}_{n}}:\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\to\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\) be the map that forgets the \(n\)-th strand. Using Theorems A and B, we obtain: **Theorem C**.: _The pure orbifold braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) fits into the exact sequence_ \[1\to K\to\pi_{1}^{orb}\left(\Sigma_{\Gamma}(n-1+L)\right)\xrightarrow{\iota_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\xrightarrow{\pi_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\to 1\] _where \(K\) is the normal closure of_ \[\mathrm{PC}\left(\{(x_{j}z_{\nu})^{m_{\nu}}(x_{j}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1\leqslant j<n,1\leqslant\nu\leqslant N\}\right)\] _in \(\pi_{1}^{orb}\left(\Sigma_{\Gamma}(n-1+L)\right)\) where \(\mathrm{PC}(S)\) is the set of partial conjugates of elements in \(S\) (see Section 5.2, Steps 1 and 2 for further details)._ This corrects Theorem 2.14 in [12], which claims that the kernel \(K\) is trivial. Moreover, Theorem C implies that, contrary to [11, Proposition 4.1], the natural homomorphism \[\omega:\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\to\mathrm{Z}_{n+L}(\Sigma_{\Gamma})\] which maps punctures to fixed strands is not injective for \(L\geqslant 1\) (see Proposition 5.17 for details). ### Overview We introduce the group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) as the orbifold fundamental group of an orbifold configuration space. The relevant concepts are summarized in Section 2. Since the \(\Gamma\)-action on \(\Sigma\) has a fundamental domain, we may reinterpret the elements in this group in terms of braid diagrams as in Figures 1.1, 1.2 and 1.3.
This description is the subject of Section 3. In Section 4, we give a brief overview of orbifold mapping class groups which summarizes the results from [8]. Section 5 is the main achievement of this article: There we deduce Theorem A, which yields a presentation for \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). The latter presentation allows us to compute the non-trivial kernel \(K\) (Theorem C). ### Acknowledgments I would like to thank my adviser Kai-Uwe Bux for his support and many helpful discussions. Many thanks are also due to Jose Pedro Quintanilha and Xiaolei Wu for their helpful advice at different points of this project. Moreover, I am grateful to Jose Pedro Quintanilha for his comments on a draft of this text. The author was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) - 426561549. Further, the author was partially supported by Bielefelder Nachwuchsfonds. ## 2 Orbifolds and their fundamental groups In this article we only consider orbifolds that are given as the quotient of a manifold (typically a surface) by a proper group action. Recall that an action \[\phi:G\to\operatorname{Homeo}(M),g\mapsto\phi_{g}\] on a manifold \(M\) is _proper_ if for each compact set \(K\subseteq M\) the set \[\{g\in G\mid\phi_{g}(K)\cap K\neq\emptyset\}\] is compact. Since we endow \(G\) with the discrete topology, this means that the above set is finite. Orbifolds that appear as proper quotients of manifolds are called _developable_ in the terminology of Bridson-Haefliger [2] and _good_ in the terminology of Thurston [16]. Above and in the following, all manifolds are orientable and all homeomorphisms are orientation preserving. **Definition 2.1** (Orbifolds, [2, Chapter III.G, 1.3]).: Let \(M\) be a manifold, possibly with boundary, and \(G\) a group with a monomorphism \[\phi:G\to\operatorname{Homeo}(M)\] such that \(G\) acts properly on \(M\).
Under these conditions the 3-tuple \((M,G,\phi)\) is called an _orbifold_, which we denote by \(M_{G}\). If \(\operatorname{Stab}_{G}(c)\neq\{1\}\) for a point \(c\in M\), the point \(c\) is called a _singular point_ of \(M_{G}\). If \(\operatorname{Stab}_{G}(c)\) for a point \(c\in M\) is cyclic of finite order \(m\), the point \(c\) is called a _cone point_ of \(M_{G}\) of order \(m\). A first example of an orbifold is the following: **Example 2.2**.: Let \(\mathbb{Z}_{m}\) be a cyclic group of order \(m\). The group \(\mathbb{Z}_{m}\) acts on a disk \(D\) by rotations around its center. The action is via isometries and the acting group is finite, so the action is proper. Consequently, \(D_{\mathbb{Z}_{m}}\) is an orbifold with exactly one singular point in the center of \(D\), which is a cone point. Example 2.2 motivates a more general construction for a free product of finitely many finite cyclic groups which we describe briefly in the following. For further details, we refer to the author's PhD thesis [7, Section 2.1]. We will consider this generalization of the orbifold \(D_{\mathbb{Z}_{m}}\) throughout the article. **Example 2.3**.: Let \(\Gamma\) be a free product of finite cyclic groups \(\mathbb{Z}_{m_{1}},...,\mathbb{Z}_{m_{N}}\). The group \(\Gamma\) is the fundamental group of a graph of groups with trivial edge groups, whose underlying graph is a path with \(N\) vertices carrying the vertex groups \(\mathbb{Z}_{m_{1}},...,\mathbb{Z}_{m_{N}}\). As such, \(\Gamma\) acts on its Bass-Serre tree \(T\). The fundamental domain of this action is a path with \(N-1\) edges. The action is free on edges and the vertex stabilizers are conjugates \(\gamma\mathbb{Z}_{m_{\nu}}\gamma^{-1}\) with \(\gamma\in\Gamma\) and \(1\leqslant\nu\leqslant N\). By the choice of a generator \(\gamma_{\nu}\) for each \(\mathbb{Z}_{m_{\nu}}\) with \(1\leqslant\nu\leqslant N\), the link of each vertex carries a cyclic ordering. Let us consider a proper embedding of the Bass-Serre tree \(T\) into \(\mathbb{C}\) that respects the local cyclic order on each link.
If we choose a regular neighborhood of \(T\) inside \(\mathbb{C}\), we obtain a planar, contractible surface \(\Sigma\) (with boundary), see Figure 2.1 for an example. This surface \(\Sigma\) inherits a proper \(\Gamma\)-action from the Bass-Serre tree such that vertex stabilizers act with respect to the cyclic order on the link of the stabilized vertex. Moreover, the action admits a fundamental domain corresponding to the fundamental domain in \(T\). In particular, we obtain an orbifold structure \(\Sigma_{\Gamma}\). A point in \(\Sigma_{\Gamma}\) is a singular point if and only if it corresponds to a vertex of \(T\). Hence, the singular points in \(\Sigma_{\Gamma}\) are all cone points and decompose into \(N\) orbits. The quotient \(\Sigma/\Gamma\) is a disk with \(N\) distinguished points that correspond to the orbits of the cone points. In general, we may choose a fundamental domain \(F\) that is a disk as pictured in Figure 2.2 and contains exactly \(N\) cone points \(c_{1},...,c_{N}\) that lie on the boundary such that each has exactly two adjacent boundary arcs that lie in the same \(\Gamma\)-orbit. If we remove the boundary of \(\Sigma\), the quotient \(\Sigma^{\circ}/\Gamma\) is homeomorphic to the complex plane with \(N\) distinguished points and associated cyclic groups \(\mathbb{Z}_{m_{\nu}}\) for \(1\leqslant\nu\leqslant N\). Adding \(\Gamma\)-orbits of punctures \(\Gamma(r_{\lambda})\) for \(1\leqslant\lambda\leqslant L\) to \(\Sigma\) such that \(\Gamma(r_{\theta})\neq\Gamma(r_{\lambda})\) for \(1\leqslant\theta,\lambda\leqslant L,\theta\neq\lambda\), we obtain the orbifold called \[\mathbb{C}(L,N,\mathbf{m})\text{ with }\mathbf{m}=(m_{1},...,m_{N})\] in [12]. In [1], Allcock studied braids on these orbifolds for \[(L,N,\mathbf{m})=(0,2,(2,2)),(0,1,(2))\text{ and }(1,1,(2)).\] Figure 2.2. The fundamental domain \(F\). 
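For later reference, note that as a free product of finite cyclic groups, the group \(\Gamma\) from Example 2.3 admits the standard presentation

\[\Gamma=\mathbb{Z}_{m_{1}}*\cdots*\mathbb{Z}_{m_{N}}=\left\langle\gamma_{1},...,\gamma_{N}\;\middle|\;\gamma_{1}^{m_{1}}=\cdots=\gamma_{N}^{m_{N}}=1\right\rangle\]

with the generators \(\gamma_{\nu}\) chosen in Example 2.3.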
Since we also want to study mapping class groups, which requires fixing the boundary, we will consider the orbifold \(\Sigma_{\Gamma}(L)\) with boundary. Moreover, we use the notation \(\Sigma_{\Gamma}(L)\) for the orbifold with underlying surface (with boundary) \[\Sigma(L):=\Sigma\backslash\Gamma(\{r_{1},...,r_{L}\}). \tag{1}\] We will consider orbifold fundamental groups using a concept of paths introduced in [2, Chapter III.G, 3]. There an orbifold is considered more generally as an _etale groupoid_\((\mathcal{G},X)\), see [2, Chapter III.G, 2]. If \(M_{G}\) is an orbifold in the sense of Definition 2.1, the associated etale groupoid is given by \[(\mathcal{G},X)=(G\times M,M).\] In the following, we will simplify the notation using \(G\) instead of \(\mathcal{G}=G\times M\). In particular, we introduce \(G\)-paths. These are the \(\mathcal{G}\)-paths in [2]. **Definition 2.4** (\(G\)-path, [2, Chapter III.G, 3.1]).: A \(G\)_-path_\(\xi=(g_{0},c_{1},g_{1},...,c_{p},g_{p})\) in \(M_{G}\) with initial point \(x\in M\) and terminal point \(y\in M\) over a subdivision \(a=t_{0}\leqslant...\leqslant t_{p}=b\) of the interval \([a,b]\) consists of 1. continuous maps \(c_{i}:[t_{i-1},t_{i}]\to M\) for \(1\leqslant i\leqslant p\) and 2. group elements \(g_{i}\in G\) such that \(g_{0}(c_{1}(t_{0}))=x\), \(g_{i}(c_{i+1}(t_{i}))=c_{i}(t_{i})\) for \(1\leqslant i<p\) and \(g_{p}(y)=c_{p}(t_{p})\) (see Figure 2.3). We call \(\xi\) a \(G\)_-loop based at \(x\)_, if the initial point \(x\in M\) is also the terminal point. If an element \(g_{i}\) is non-trivial, we say that \(\xi\) contains a \(G\)_-leap_ at time \(t_{i}\). For brevity, we write \((g_{0},c_{1},g_{1},...,c_{p})\) for \((g_{0},c_{1},g_{1},...,c_{p},g_{p})\) if \(g_{p}=1\). We say a \(G\)-path is _continuous_ if it is of the form \((g,c)\). The following equivalence relation identifies certain \(G\)-paths whose continuous pieces have the same \(G\)-orbits.
**Definition 2.5** (Equivalence of \(G\)-paths, [2, Chapter III.G, 3.2]).: Let \[\xi=(g_{0},c_{1},g_{1},...,c_{p},g_{p})\] be a \(G\)-path over \(a=t_{0}\leqslant...\leqslant t_{p}=b\). 1. A _subdivision_ of \(\xi\) is a \(G\)-path obtained from \(\xi\) by choosing \(t^{\prime}\in[t_{i-1},t_{i}]\) for some \(1\leqslant i\leqslant p\) and replacing the entry \(c_{i}\) with the sequence \[(c_{i}|_{[t_{i-1},t^{\prime}]},1,c_{i}|_{[t^{\prime},t_{i}]}).\] 2. A _shift_ of \(\xi\) is a \(G\)-path obtained from \(\xi\) by choosing \(h\in G\) and replacing a subsequence \((g_{i-1},c_{i},g_{i})\) for some \(1\leqslant i\leqslant p\) with \[(g_{i-1}h^{-1},h\cdot c_{i},hg_{i}).\] We say that two \(G\)-paths are _equivalent_ if one can be obtained from the other by a sequence of subdivisions, inverses of subdivisions and shifts. Figure 2.4. Two \(G\)-paths equivalent by a shift. Figure 2.3. A \(G\)-path. Using this equivalence relation, we mimic the homotopy relation for paths in topological spaces for \(G\)-paths. **Definition 2.6** (Homotopy of \(G\)-paths, [2, Chapter III.G, 3.5]).: An _elementary homotopy_ between two \(G\)-paths \(\xi\) and \(\tilde{\xi}\) is a family of \(G\)-paths \(\xi^{s}=(g_{0},c_{1}^{s},...,g_{p})\) over the subdivision \(0=t_{0}\leqslant t_{1}\leqslant...\leqslant t_{p}=1\), parametrized by \(s\in[s_{0},s_{1}]\), such that \(c_{i}^{s}\) depends continuously on the parameter and \(\xi^{s_{0}}=\xi\), \(\xi^{s_{1}}=\tilde{\xi}\). Two \(G\)-paths are _homotopic (relative to their endpoints)_ if one can pass from the first to the second by a sequence of the following operations: 1. equivalence of \(G\)-paths, 2. elementary homotopies. **Definition 2.7** (Orbifold fundamental group, [2, Chapter III.G, 3.6]).: Let \(x_{0}\) be a non-singular point in \(M_{G}\). On the set of homotopy classes of \(G\)-loops based at \(x_{0}\) one easily defines a composition, see [2, Chapter III.G, 3.4] for details.
With this composition the set of homotopy classes of \(G\)-loops has a group structure. This group is called the _orbifold fundamental group_\(\pi_{1}^{\mathrm{orb}}(M_{G},x_{0})\) of \(M_{G}\). The neutral element is represented by the constant \(G\)-loop based at \(x_{0}\). Throughout the article we restrict to orbifolds \(M_{G}\) such that every two points are connected by a \(G\)-path, i.e. \(M_{G}\) is _\(G\)-path-connected_. As in the case of the fundamental group of a path-connected topological space, the choice of base point does not affect the fundamental group of a \(G\)-path-connected orbifold (up to isomorphism), see [2, Chapter III.G, Proposition 3.7]. Hence, we shorten our notation \(\pi_{1}^{\mathrm{orb}}(M_{G},x_{0})\) to \(\pi_{1}^{\mathrm{orb}}(M_{G})\) whenever the base point does not matter. We finish the section with two observations that relate orbifold fundamental groups to fundamental groups of topological spaces. In particular, we determine a presentation of \(\pi_{1}^{\mathrm{orb}}(\Sigma_{\Gamma})\). This sets the foundation to identify a semidirect product structure of pure orbifold braid groups on \(\Sigma_{\Gamma}\). For a fixed index \(1\leqslant j\leqslant p\), the shift defined in Definition 2.5 allows for the following choice: The element \(h\) shifts \(c_{j}\) to \(c_{j}^{\prime}:=h\cdot c_{j}\), \(g_{j-1}^{\prime}=g_{j-1}h^{-1}\) and \(g_{j}^{\prime}=hg_{j}\). Thus, the path \(\xi\) is equivalent to \[\xi^{\prime}=(g_{0},c_{1},g_{1},...,c_{j-1},g_{j-1}^{\prime},c_{j}^{\prime},g_{j}^{\prime},c_{j+1},...,c_{p},g_{p}). \tag{2}\] If we choose \(h=g_{j}^{-1}\), the element \(g_{j}^{\prime}\) is trivial. Replacing \(c_{j}^{\prime},1,c_{j+1}\) by \(c_{j}^{\prime}\cup c_{j+1}\), we obtain the path \[\tilde{\xi}^{\prime}=(g_{0},c_{1},g_{1},...,c_{j-1},g_{j-1}^{\prime},c_{j}^{\prime}\cup c_{j+1},g_{j+1},c_{j+2},g_{j+2},...,c_{p},g_{p}) \tag{3}\] which is equivalent to \(\xi\) and has shorter subdivision length.
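Iterating this reduction for \(j=p,...,1\) eliminates all intermediate group elements: the path \((g_{0},c_{1},g_{1},...,c_{p},g_{p})\) is equivalent to the continuous path \((g_{0}g_{1}\cdots g_{p},\,\tilde{c}_{1}\cup\cdots\cup\tilde{c}_{p})\) with \(\tilde{c}_{i}=(g_{i}g_{i+1}\cdots g_{p})^{-1}\cdot c_{i}\). The following computational sketch is illustrative only (the function names are ours, and we fix the rotation action of Example 2.2); it implements this reduction for \(\mathbb{Z}_{m}\)-paths in a disk in \(\mathbb{C}\):

```python
import cmath

M = 6  # order of the cyclic group Z_M acting on a disk by rotation (Example 2.2)

def act(k, z):
    """Action of k in Z_M on z, i.e. rotation by 2*pi*k/M around the center 0."""
    return cmath.exp(2j * cmath.pi * (k % M) / M) * z

def reduce_to_continuous(g, c):
    """Reduce a Z_M-path (g[0], c[0], g[1], ..., c[p-1], g[p]) to an equivalent
    continuous Z_M-path: the i-th piece is translated by the inverse of
    g_i * ... * g_p (written additively in Z_M) and the group elements collect
    into the single element g_0 * ... * g_p."""
    p = len(c)
    pieces = []
    for i in range(1, p + 1):                 # 1-based index as in the text
        shift = -sum(g[i:]) % M               # inverse of g_i + ... + g_p
        pieces.append(lambda t, f=c[i - 1], k=shift: act(k, f(t)))
    return sum(g) % M, pieces
```

In the reduced path the translated pieces agree at the subdivision times, and the resulting continuous path runs from \(g^{-1}(x)\) to \(y\) for \(g=g_{0}\cdots g_{p}\), in accordance with Lemma 2.8(1) below.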
For proofs of the following Lemma 2.8 and Corollary 2.9, we refer to Lemma 2.12 and Corollary 2.13 in [7]. **Lemma 2.8** ([2, Chapter III.G, 3.9(1)]).: 1. _Every_ \(G\)_-path connecting_ \(x\) _to_ \(y\) _in_ \(M_{G}\) _is equivalent to a unique continuous_ \(G\)_-path_ \((g,c)\) _with_ \(c:I\to M\) _connecting_ \(g^{-1}(x)\) _to_ \(y\)_._ 2. _Let_ \((g,c)\) _and_ \((g^{\prime},c^{\prime})\) _be two_ \(G\)_-loops based at a non-singular point_ \(x_{0}\) _in_ \(M_{G}\)_. Then these_ \(G\)_-loops represent the same element of_ \(\pi_{1}^{\mathrm{orb}}(M_{G},x_{0})\) _if and only if_ \(g=g^{\prime}\) _and_ \(c\) _is homotopic to_ \(c^{\prime}\)_._ Figure 2.5. An elementary homotopy of \(G\)-paths. **Corollary 2.9** ([2, Chapter III.G, 3.9(1)]).: _Let \(M_{0}\) be the path-component of a point \(x_{0}\in M\). Then \(G_{0}=\{g\in G\mid g^{-1}(x_{0})\in M_{0}\}\) is a subgroup of \(G\) and every \(G\)-loop at \(x_{0}\) is equivalent to a unique \(G\)-loop of the form \((g,c)\) where \(c\) is a path connecting \(g^{-1}(x_{0})\) and \(x_{0}\); therefore \(g\in G_{0}\). Hence, we have a short exact sequence_ \[1\to\pi_{1}(M_{0})\stackrel{i}{\to}\pi_{1}^{\mathrm{orb}}(M_{G})\stackrel{p}{\to}G_{0}\to 1.\] _In particular, the orbifold fundamental group of \(\Sigma_{\Gamma}\) from Example 2.3 is isomorphic to \(\Gamma\)._ If the \(G\)-action on \(M\) is free, the space \(M/G\) also admits the structure of a manifold. The following is well known: **Lemma 2.10**.: _Let \(M\) be a manifold with a proper, free \(G\)-action. If the quotient space \(M/G\) is path-connected, then \(\pi_{1}^{\mathrm{orb}}(M_{G})\cong\pi_{1}(M/G)\)._ For instance, a proof is presented in [7, Lemma 2.14]. ## 3. Orbifold braid groups In this section we introduce orbifold braid groups. For the orbifolds \(\Sigma_{\Gamma}(L)\), we explain how elements in these groups are encoded as orbifold braid diagrams.
Similar braid diagrams were considered by Allcock and Roushon for the orbifolds \(\mathbb{C}(L,N,\mathbf{m})=\Sigma_{\Gamma}^{\circ}(L)\) with \(\mathbf{m}=(m_{1},...,m_{N})\in\mathbb{N}_{\geqslant 2}^{N}\), see [1] and [12]. ### Artin braid groups Before we get into the details of the definition of orbifold braid groups, we recall the geometry of Artin braids. In particular, we discuss how these three-dimensional braids are encoded as two-dimensional Artin braid diagrams. For additional information on Artin braid groups, we refer to [9, Section 1]. **Definition 3.1** (Geometric braids and Artin braid group, [9, Section 1.2.1]).: Fix once and for all \(n\) distinct points \(p_{1},...,p_{n}\) in the interior of a compact disk \(D\). A _geometric braid_ is a set \(b\subseteq D^{\circ}\times I\) formed by \(n\) disjoint topological intervals \(b_{j},1\leqslant j\leqslant n\), called the _strands_ of \(b\), such that the natural projection \(D\times I\to I\) maps each strand homeomorphically onto \(I\) and \[b\cap(D\times\{0\})=\{(p_{1},0),...,(p_{n},0)\}\text{ and }b\cap(D\times\{1\})=\{(p_{1},1),...,(p_{n},1)\}.\] The above conditions imply that for each \(j\) the strand \(b_{j}\) meets each disk \(D\times\{t\}\) at exactly one point and connects \((p_{j},0)\) to \((p_{\sigma(j)},1)\) for some \(\sigma\in\mathrm{S}_{n}\). Two geometric braids \(b\) and \(b^{\prime}\) are _isotopic_ if \(b\) can be continuously deformed into \(b^{\prime}\) inside the class of geometric braids. The operation of stacking braids along the \(I\)-factor of \(D\times I\) descends to isotopy classes, giving a group structure on the set \(\mathrm{B}_{n}\) of isotopy classes of braids with \(n\) strands. \(\mathrm{B}_{n}\) is called the _Artin braid group_ on \(n\) strands. A geometric braid is pictured in a cylinder with the disk \(D\times\{0\}\) at the top and \(D\times\{1\}\) at its bottom. By definition, the interval factor of \(D\times I\) parametrizes each strand of the braid.
So we typically think of the strands as oriented arcs \(b_{j}:I\to D\times I\) traversing the cylinder from top to bottom (see Figure 3.1, left). A braid inside its ambient cylinder is a three-dimensional object. An Artin braid diagram is designed to capture the information of this object in a two dimensional picture. **Definition 3.2** (Artin braid diagrams, [9, Section 1.2.2]).: Assume that \(D\subseteq\mathbb{C}\) is the compact disk centered at \(\frac{n+1}{2}\) with radius \(\frac{n+1}{2}\) and \(p_{j}:=j\) for each \(1\leq j\leq n\) (see Figure 3.2). Moreover, let \(u_{j}:I\to D\) be the continuous map such that the \(j\)-th strand \(b_{j}\) meets \(D\times\{t\}\) at the point \((u_{j}(t),t)\) for each \(t\in I\). If the map \(u_{j}\) is piecewise linear for each \(j\), the corresponding geometric braid is called _piecewise linear_. Define \[\pi:D\times I\to[0,n+1]\times I,(z,t)\mapsto(\operatorname{Re}(z),t).\] The image \(\pi(b)\) of a geometric braid \(b\) is called the _projection_ of \(b\). Let \(x,y\in b,x\neq y\) be such that \(\pi(x)=\pi(y)=:(p,t)\). Then \(x\) and \(y\) are in distinct strands \(b_{i}\) and \(b_{j}\), respectively. In this case, the point \((p,t)\) is called a _crossing_ at height \(t\) in \(\pi(b)\). The crossing is called _transverse_ if there is a neighborhood \(U\) of \(p\) in \([0,n+1]\times I\) such that the pair \((U,\pi(b)\cap U)\) is locally homeomorphic to \((\mathbb{R}^{2},\mathbb{R}\times\{0\}\cup\{0\}\times\mathbb{R})\) via a homeomorphism identifying \(\pi(b_{i})\) with \(\mathbb{R}\times\{0\}\) and \(\pi(b_{j})\) with \(\{0\}\times\mathbb{R}\). In the braid \(b\), the strand \(b_{i}\)_crosses over_\(b_{j}\) if \(\operatorname{Im}(u_{i}(t))<\operatorname{Im}(u_{j}(t))\). Otherwise \(b_{i}\)_crosses under_\(b_{j}\). We will consider the projection for those geometric braids \(b\) which satisfy the following conditions: 1. \(b\) is piecewise linear, 2. at most one pair of strands crosses at a height and 3. 
the strands cross transversely in each crossing. In this case, the projection \(\pi(b)\) together with the data of which strand crosses over (resp. under) is called an _Artin braid diagram_ for \(b\). If we draw an Artin braid diagram, an under-crossing strand is indicated by a line that is broken near the crossing; an over-crossing strand is represented by a continued line (see Figure 3.1, right). Figure 3.1. A geometric braid and its Artin braid diagram. Figure 3.2. The embedding of the disk \(D\). **Observation 3.3** (Generating the Artin braid group \(\mathrm{B}_{n}\)).: _Given an arbitrary geometric braid \(b\), there exists an isotopic braid \(\tilde{b}\) such that \(\pi(\tilde{b})\) with the data of which strand crosses over (resp. under) at every crossing is an Artin braid diagram._ _Further, the conditions 3.2(1)-3.2(3) allow us to decompose \(b\) into pieces \(b\cap(D\times[t_{i-1},t_{i}])\) such that each piece contains exactly one crossing. While the first piece starts at \(p_{1},...,p_{n}\) and the last piece ends at these points, the other pieces a priori neither start nor end at the points \(p_{1},...,p_{n}\). However, an isotopy that pulls back the endpoints of every piece (see Figure 3.3) allows us to assume that each piece connects \(p_{1},...,p_{n}\) to \(p_{\sigma(1)},...,p_{\sigma(n)}\) for a permutation \(\sigma\in\mathrm{S}_{n}\) depending on the piece. Since each piece contains only one crossing, the crossing strands are adjacent. Consequently, each of these pieces is isotopic to a braid from Figure 3.4 or an inverse.
Hence the braids \(h_{j}\) for \(1\leqslant j<n\) generate the Artin braid group \(\mathrm{B}_{n}\)._ **Observation 3.4** (Geometric braids and configuration spaces).: _A geometric braid \(b\) corresponds to a closed path with base point \(\{p_{1},...,p_{n}\}\) in the configuration space_ \[\mathrm{Conf}_{n}(D^{\circ}):=\{(x_{1},...,x_{n})\in(D^{\circ})^{n}\mid x_{i} \neq x_{j},1\leqslant i,j\leqslant n,i\neq j\}/\,\mathrm{S}_{n}\] _mapping \(b\) to \(\{u_{1}(t),...,u_{n}(t)\}\) and vice versa. Moreover, two geometric braids are isotopic if and only if the corresponding paths in the configuration space are homotopic. Hence \(\mathrm{B}_{n}\) is isomorphic to \(\pi_{1}(\mathrm{Conf}_{n}(D^{\circ}))\), see [9, Section 1.4] for further details._ ### The definition of orbifold braid groups The next goal is to establish a similar projection that induces braid diagrams for _orbifold braids_. We begin with the definition of _orbifold braid groups_ as the orbifold fundamental group of an orbifold configuration space. In particular, the following definition is equivalent to the definition given in [1]. Figure 3.4. Generators of \(\mathrm{B}_{n}\). Figure 3.3. A decomposition of a braid into generators. **Definition 3.5** (Orbifold braid group).: Let \(M_{G}\) be an orbifold. The orbifold \[\operatorname{PConf}_{n}^{G}(M_{G}):=(M^{n}\backslash\Delta_{n}^{G}(M))_{G^{n}}\] with \(\Delta_{n}^{G}(M)=\{(x_{1},...,x_{n})\in M^{n}\mid x_{i}=g(x_{j})\text{ for some }g\in G,i\neq j\}\) is called the \(n\)_-th pure configuration space_ over \(M_{G}\). Since \(G\) acts properly on \(M\), the coordinatewise action of \(G^{n}\) on \((M^{n}\backslash\Delta_{n}^{G}(M))\) is also proper. Hence \(\operatorname{PConf}_{n}^{G}(M_{G})\) is an orbifold and its orbifold fundamental group \(\pi_{1}^{\operatorname{orb}}(\operatorname{PConf}_{n}^{G}(M_{G}))\) is called the \(n\)_-th pure orbifold braid group_, denoted by \(\operatorname{PZ}_{n}(M_{G})\). 
The orbifold \[\operatorname{Conf}_{n}^{G}(M_{G}):=(M^{n}\backslash\Delta_{n}^{G}(M))_{G^{n}\rtimes\operatorname{S}_{n}}\] is called the \(n\)_-th configuration space_ over \(M_{G}\). As above, the normal subgroup \(G^{n}\) acts coordinatewise and \(\operatorname{S}_{n}\) acts (on \(G^{n}\) and \(M^{n}\backslash\Delta_{n}^{G}(M)\)) via permutation of coordinates. The \(G^{n}\rtimes\operatorname{S}_{n}\)-action is also proper, so \(\operatorname{Conf}_{n}^{G}(M_{G})\) is an orbifold. Its orbifold fundamental group \(\pi_{1}^{\operatorname{orb}}(\operatorname{Conf}_{n}^{G}(M_{G}))\) is called the \(n\)_-th orbifold braid group_, denoted by \(\operatorname{Z}_{n}(M_{G})\). ### A decomposition of orbifold braids into strands First, we observe that elements in orbifold braid groups decompose into _strands_. **Observation 3.6**.: _A closed \(G^{n}\rtimes\operatorname{S}_{n}\)-path \(\xi\) in \(\operatorname{Conf}_{n}^{G}(M_{G})\) is equivalent to a \(G^{n}\rtimes\operatorname{S}_{n}\)-path that corresponds to an \(n\)-tuple \((\xi_{1},...,\xi_{n})\) of \(G\)-paths_ \[\xi_{j}=\left(g_{0}^{j},c_{1}^{j},g_{1}^{j},...,c_{q}^{j},g_{q}^{j}\right)\] _in \(M_{G}\). The \(G\)-paths \(\xi_{j}\) have the initial points \(p_{j}\) and terminal points \(p_{\sigma(j)}\) for \(\sigma\in\operatorname{S}_{n}\). Moreover, these \(G\)-paths share a subdivision \(0=t_{0}\leqslant...\leqslant\ t_{q}=1\) and satisfy the condition_ \[c_{i}^{k}(t)\notin G\left(c_{i}^{l}(t)\right)\] _for all \(t\in I\), \(1\leqslant k,l\leqslant n,k\neq l\) and a suitable \(1\leqslant i\leqslant q\) depending on \(t\)._ _For \(\sigma=id_{n}\), the \(G^{n}\rtimes\operatorname{S}_{n}\)-path \(\xi\) induces a \(G^{n}\)-path that represents an element in \(\operatorname{PZ}_{n}(M_{G})\).
In particular, \(\operatorname{PZ}_{n}(M_{G})\) is a subgroup of \(\operatorname{Z}_{n}(M_{G})\)._ For a closed \(G^{n}\rtimes\operatorname{S}_{n}\)-path \(\xi\), the \(G\)-paths \(\xi_{j}\) as described above are called the _strands_. We fix the notation \(\xi\) for a \(G^{n}\rtimes\operatorname{S}_{n}\)-path with strands \(\xi_{j}\) of the same form as in Observation 3.6 for the rest of the section. While the Artin braid diagrams keep track of the crossings of strands, we want orbifold braid diagrams to keep track of crossings and the \(\Gamma\)-leaps in the strands of a \(G^{n}\rtimes\operatorname{S}_{n}\)-path. If \(\xi_{j}\) does not contain any \(\Gamma\)-leaps in \([a,b]\), we may consider \(\xi_{j}|_{[a,b]}:[a,b]\to M\) as a continuous function. If \(\xi_{j}\) contains a \(\Gamma\)-leap at time \(t_{i}\), the group element \(g_{i}^{j}\) translates \(c_{i+1}^{j}(t_{i})\) to \(c_{i}^{j}(t_{i})\). In this case, we consider \(\xi_{j}\) as the following continuous function defined on the disjoint union \(\bigsqcup_{i=1}^{q}[t_{i-1},t_{i}]\): \[\bigsqcup_{i=1}^{q}[t_{i-1},t_{i}]\to M,\;s\mapsto c_{i}^{j}(s)\text{ for }s\in[t_{i-1},t_{i}].\] Further, let \(\xi\) denote the union \(\bigcup_{j=1}^{n}\xi_{j}\left(\bigsqcup_{i=1}^{q}[t_{i-1},t_{i}]\right)\) inside \(\operatorname{Conf}_{n}^{G}(M_{G})\). ### Orbifold braids in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L))\) and their braid diagrams To specify elements of orbifold braid groups through braid diagrams, we want to establish a similar projection as in Figure 3.1. Therefore, we restrict to the orbifolds \(\Sigma_{\Gamma}\) from Example 2.3. Even though in this case the underlying surface \(\Sigma\) embeds into the complex plane, it is not suitable for our purpose to consider a projection from \(\Sigma\subseteq\mathbb{C}\) to \(\mathbb{R}\) directly. Since a direct projection would not allow us to recover the projected orbifold braid group element, we will instead restrict to the fundamental domain \(F\) before projecting.
Before we explain the projection, we endow the fundamental domain \(F\) with a set of marked points \(r_{1},...,r_{L}\) in the interior of \(F\) such that \(\Gamma(r_{\theta})\neq\Gamma(r_{\lambda})\) for all \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta\neq\lambda\). Removing all the \(\Gamma\)-translates of these points, we obtain the surface \[\Sigma(L):=\Sigma\backslash\Gamma(\{r_{1},...,r_{L}\}) \tag{4}\] with a proper \(\Gamma\)-action. The set \(F(L):=F\cap\Sigma(L)\) is a fundamental domain of the \(\Gamma\)-action on \(\Sigma(L)\). Let \(\Sigma_{\Gamma}(L)\) denote the induced orbifold structure on \(\Sigma(L)\). If we further remove the cone points \(\Gamma(\{c_{1},...,c_{N}\})\) from \(\Sigma(L)\), we denote the resulting surface by \(\Sigma(L,N)\). The \(\Gamma\)-action on \(\Sigma(L,N)\) has a fundamental domain \(F(L,N):=F\cap\Sigma(L,N)\). Let \(\Sigma_{\Gamma}(L,N)\) denote the induced orbifold structure. Recalling the shape of the fundamental domain \(F\) from Figure 2.2, we can embed \(F(L)\) (likewise \(F(L,N)\)) in \(\mathbb{C}\) as the disk of radius \(\frac{n+L+N+2}{2}\) centered at \(\frac{n-L-N}{2}\). For each \(1\leqslant\nu\leqslant N\), let \(c_{\nu}\) be the upper boundary point of \(\partial F(L)\) with \(\mathrm{Re}(c_{\nu})=-L-\nu\), for each \(1\leqslant\lambda\leqslant L\) let \(r_{\lambda}\) be the point \(-\lambda\in\mathbb{R}\) and for each \(1\leqslant j\leqslant n\), let \(p_{j}\) be the point \(j\) in \(\mathbb{R}\) (see Figure 3.5). Moreover, recall that each cone point in \(\partial F(L)\) has two adjacent arcs that lie in \(\partial F(L)\). For technical reasons, let us assume that the arcs adjacent to \(c_{\nu}\) embed into \(\partial F(L)\) as the boundary arcs with positive imaginary part and real part between \(-L-\nu-\frac{1}{2}\) and \(-L-\nu\) or \(-L-\nu\) and \(-L-\nu+\frac{1}{2}\), respectively. This is not needed in this section but will be helpful in Section 5. 
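The coordinates above can be sanity-checked computationally (an illustrative sketch; the function name and the check are ours): the marked points \(p_{j}\) and punctures \(r_{\lambda}\) should lie in the interior of the disk of radius \(\frac{n+L+N+2}{2}\) centered at \(\frac{n-L-N}{2}\), while the cone points \(c_{\nu}\) lie on its boundary circle.

```python
import math

def embedding_data(n, L, N):
    """Coordinates of the embedding of F(L) into C described above: the disk of
    radius (n + L + N + 2)/2 centered at (n - L - N)/2 on the real axis, marked
    points p_j = j, punctures r_lambda = -lambda, and cone points c_nu on the
    upper boundary with Re(c_nu) = -L - nu."""
    center = (n - L - N) / 2
    radius = (n + L + N + 2) / 2
    p = [complex(j) for j in range(1, n + 1)]
    r = [complex(-lam) for lam in range(1, L + 1)]
    c = [complex(-L - nu, math.sqrt(radius**2 - (-L - nu - center)**2))
         for nu in range(1, N + 1)]
    return center, radius, p, r, c
```

On the real axis the disk covers exactly the interval \([-L-N-1,\,n+1]\), which contains all the marked points, punctures and cone points listed above.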
**Proposition 3.7** (Reduction of \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-paths).: _Every element in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) can be represented by a \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-path \(\xi\) whose strands \(\xi_{j}=(\gamma_{0}^{j},c_{1}^{j},\gamma_{1}^{j},...,c_{p}^{j},\gamma_{p}^{j})\) satisfy the following conditions. For each \(1\leqslant i\leqslant p\) and \(1\leqslant j\leqslant n\),_ 1. \(c_{i}^{j}\) _is piecewise linear with image in the interior of_ \(\Sigma(L)\)_._ 2. \(c_{i}^{j}\) _does not intersect any cone points._ 3. \(c_{i}^{j}\big{(}[t_{i-1},t_{i}]\big{)}\subseteq F(L)\)_._ _The same holds for every element in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\)._ To adjust \(\xi\) such that it satisfies the above properties, we use so-called _\(\Delta\)-moves_. **Definition 3.8** (\(\Delta\)-move, [9, p. 11]).: Let \(y_{0}:=(x_{0},t_{0}),y_{1}:=(x_{1},t_{1})\) and \(y_{2}:=(x_{2},t_{2})\) be three points in \(\Sigma(L)\times I\) with \(t_{0}<t_{1}<t_{2}\) such that the linear \(2\)-simplex \(\Delta\) spanned by the points is contained in \(\Sigma(L)\times I\). In particular, \(\Delta\) does not contain any punctures. Further, let \(\xi\) be a \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-path that represents an element in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L))\). If the \(\Gamma\)-orbits of the strands of \(\xi\) intersect \(\Delta\) precisely along the linear segment \(\overline{y_{0}y_{2}}\), we may replace \(\overline{y_{0}y_{2}}\) in the orbifold braid \(\xi\) by the concatenation \(\overline{y_{0}y_{1}}\cup\overline{y_{1}y_{2}}\). Since \(\Delta\) does not intersect any punctures or \(\Gamma\)-translates of further strands, the resulting \(\Gamma^{n}\rtimes\operatorname{S}_{n}\)-path is homotopic to \(\xi\). Since the operation is governed by the bounding \(2\)-simplex \(\Delta\), we call the above operation and its inverse _\(\Delta\)-moves_. Figure 3.5. The embedding of the fundamental domain \(F(L)\) into \(\mathbb{C}\).
Proof of Proposition 3.7.: By Lemma 2.8(1), each strand \(\xi_{j}\) is equivalent to a unique continuous \(\Gamma\)-path \((\gamma,c)\). Since the endpoints of \(c\) lie in the interior of \(\Sigma(L)\), it can be homotoped (relative endpoints) such that \(c\) lies entirely in the interior of \(\Sigma(L)\). Using piecewise linear approximation in \(\mathbb{C}\), we can find an approximation of \(c\) that lies in the open subspace \(\Sigma^{\circ}(L)\) of \(\mathbb{C}\), i.e. in every strand the continuous parts \(c_{i}^{j}\) are paths inside \(\Sigma(L)\) which embeds into \(\mathbb{C}\), whence 3.7(1). If \(\xi\) represents an element in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L,N))\), property 3.7(2) is automatically satisfied. If \(\xi\) represents an element in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L))\), we begin by reducing the number of situations where a strand stays at a cone point for a period of time. By subdivision of these intervals, we can assume that \(\Gamma\)-leaps do not occur in the interior of these intervals. Now performing a \(\Delta\)-move on the constant pieces (see Figure 3.6, top) allows us to assume that paths may pass through cone points but do not stay at them. Another \(\Delta\)-move (as indicated in the bottom half of Figure 3.6, possibly combined with a \(\Gamma\)-leap) allows us to remove the remaining cone point intersections. Since each strand of \(\xi\) contains only piecewise linear paths, we can adjust the contained paths such that there are only finitely many intersections with \(\Gamma(\partial F(L))\). Subdividing \(\xi\) at all times with a boundary intersection, the application of suitable shifts reduces \(\xi\) to the fundamental domain \(F(L)\) as claimed in 3.7(3). For 3.7(1) and 3.7(3), the same arguments apply if \(\xi\) is a \(\Gamma^{n}\rtimes\operatorname{S}_{n}\)-path that represents an element in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L,N))\).
Figure 3.6. Removing cone point intersections.

**Corollary 3.9**.: _The homomorphisms_ \[\operatorname{Z}_{n}(\Sigma_{\Gamma}(L,N))\to\operatorname{Z}_{n}(\Sigma_{ \Gamma}(L))\ \text{ and }\ \operatorname{PZ}_{n}(\Sigma_{\Gamma}(L,N))\to \operatorname{PZ}_{n}(\Sigma_{\Gamma}(L))\] _induced by the inclusion \(\Sigma_{\Gamma}(L,N)\hookrightarrow\Sigma_{\Gamma}(L)\) are surjective._

Recall from Definition 3.1 that geometric braids are strands inside a cylinder. Due to the reduction from Proposition 3.7, we have a similar picture for orbifold braids. In this case, the strands are contained in a cylinder with base \(F(L)\) (see Figure 3.7, left). In contrast to Artin braids, the strands of orbifold braids may have finitely many discontinuity points. At these points, a \(\Gamma\)-leap compensates the gap between the adjacent pieces of the strand. As indicated in Figure 3.7, this picture allows us to describe a similar projection as in Figure 3.1.

**Definition 3.10** (Orbifold braids and orbifold braid diagrams).: A \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-path \(\xi\) that satisfies the properties 3.7(1)-3.7(3) is called an _orbifold braid_ with strands \(\xi_{j}\) for \(1\leq j\leq n\). Orbifold braids will be specified by a projection \[\pi:F(L)\times\bigcup_{i=1}^{q}[t_{i-1},t_{i}]\to\mathbb{R}\times\bigcup_{i=1 }^{q}[t_{i-1},t_{i}]\] where the \(F(L)\)-coordinate projects to the real part with respect to the chosen embedding of \(F(L)\) into \(\mathbb{C}\) and the interval coordinate \(t\) maps identically to \(\bigcup_{i=1}^{q}[t_{i-1},t_{i}]\). The image \(\pi(\xi)\) of an orbifold braid is called the _projection_ of \(\xi\). Analogously to Definition 3.2, we fix the following notations. Let \(x,y\in\xi,x\neq y\) be such that \(\pi(x)=\pi(y)=:(p,t)\); then \(x\) and \(y\) are in distinct strands \(\xi_{i}\) and \(\xi_{j}\), respectively. In this case, the point \((p,t)\) is called a _crossing_ at height \(t\) in \(\pi(\xi)\).
The crossing is called _transverse_ if there is a neighborhood \(U\) of \((p,t)\) in \([0,n+1]\times[t_{i-1},t_{i}]\) such that the pair \((U,\pi(\xi)\cap U)\) is locally homeomorphic to \((\mathbb{R}^{2},\mathbb{R}\times\{0\}\cup\{0\}\times\mathbb{R})\) via a homeomorphism identifying \(\pi(\xi_{i})\) with \(\mathbb{R}\times\{0\}\) and \(\pi(\xi_{j})\) with \(\{0\}\times\mathbb{R}\). In the orbifold braid \(\xi\) the strand \(\xi_{i}\)_crosses over_\(\xi_{j}\) if \(\mathrm{Im}(\xi_{i}(t))<\mathrm{Im}(\xi_{j}(t))\). Otherwise \(\xi_{i}\)_crosses under_\(\xi_{j}\). Similarly, we consider crossings with the punctures \(r_{\lambda}\). For this purpose, let \(x\in\xi_{i}\) be such that \(\pi(x)=(-\lambda,t)\) for some \(1\leq\lambda\leq L\). In this case, the point \((-\lambda,t)\) is called a _crossing_ of \(\pi(\xi)\). The crossing is called _transverse_ if there is a neighborhood \(U\) of \((-\lambda,t)\) in \([0,n+1]\times[t_{i-1},t_{i}]\) such that the pair \[(U,(\pi(\xi)\cup\{-\lambda\}\times I)\cap U)\] is locally homeomorphic to \((\mathbb{R}^{2},\mathbb{R}\times\{0\}\cup\{0\}\times\mathbb{R})\) via a homeomorphism identifying \(\pi(\xi_{i})\) with \(\mathbb{R}\times\{0\}\) and \(\{-\lambda\}\times I\) with \(\{0\}\times\mathbb{R}\). In the orbifold braid \(\xi\) the strand \(\xi_{i}\)_crosses over_\(r_{\lambda}\) if \(\mathrm{Im}(\xi_{i}(t))<0\). Otherwise \(\xi_{i}\)_crosses under_\(r_{\lambda}\). We will consider the projection mainly for those orbifold braids \(\xi\) which satisfy the following conditions:

1. at most one crossing, either of two strands or of a strand and a puncture, appears at a height,
2. the strands and punctures cross transversely in each crossing,
3. no crossing occurs at the same time as a \(\Gamma\)-leap and
4. no two \(\Gamma\)-leaps occur at the same time.

Figure 3.7. An orbifold braid and its braid diagram.

In this case, the projection \(\pi(\xi)\) of an orbifold braid \(\xi\) together with the data of which strand crosses over (resp.
under) and the data of which strand contains a \(\Gamma\)-leap at which time is called an _orbifold braid diagram_ for \(\xi\). As for the Artin braids, we can encode the orbifold braid diagrams in pictures (see Figure 3.7, right, for an example). The crossings of two strands and the crossings of a strand and a puncture are illustrated as for the geometric braids. Further, recall that we have chosen cyclic generators \(\gamma_{\nu}\) of the cyclic factors \(\mathbb{Z}_{m_{\nu}}\) in \(\Gamma\) in Example 2.3. If a strand \(\xi_{j}\) contains a \(\Gamma\)-leap \(\gamma_{i}^{j}=\gamma_{\nu}^{\varepsilon}\) at time \(t_{i}\), we draw the \(j\)-th strand encircling the bar that corresponds to the \(\nu\)-th cone point. For \(\varepsilon=1\), we draw the \(j\)-th strand encircling the \(\nu\)-th cone point bar counterclockwise. For \(\varepsilon=-1\), we draw the \(j\)-th strand encircling the \(\nu\)-th cone point bar clockwise. Due to condition 3.7(3), a \(\Gamma\)-leap by \(\gamma_{\nu}^{l}\) with \(l\notin\{\pm 1\}\) cannot occur. Proposition 3.7 together with the standard techniques from the classical case and the shifts introduced in Definition 2.5(2) yields:

**Lemma 3.11**.: _Every element in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\) is represented by an orbifold braid that projects to an orbifold braid diagram._

In the following, we will use the orbifold braid diagrams introduced above to encode orbifold braids. We will no longer distinguish between an orbifold braid and its homotopy class in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\), respectively.
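Forgetting over- and under-crossings as well as the \(\Gamma\)-leaps, an orbifold braid diagram still records the permutation of the strand endpoints, i.e. the \(\mathrm{S}_{n}\)-component of the underlying \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-path. The following sketch (illustrative Python, not part of the text; the function names are ours) checks that the braid relation satisfied by the Artin-type generators introduced below is compatible with this assignment, as it must be for a homomorphism to \(\mathrm{S}_{n}\):

```python
# Each Artin-type generator maps to the transposition swapping two
# neighboring strand endpoints; a braid word maps to the composition
# of these transpositions.  (Hypothetical helper names.)

def transposition(j, n):
    """Permutation of {0, ..., n-1} swapping j-1 and j (generator index j)."""
    p = list(range(n))
    p[j - 1], p[j] = p[j], p[j - 1]
    return p

def compose(p, q):
    """The permutation 'p after q', both given as lists."""
    return [p[q[i]] for i in range(len(p))]

def underlying_permutation(word, n):
    """Image of a braid word (a list of generator indices) in S_n."""
    result = list(range(n))
    for j in word:
        result = compose(transposition(j, n), result)
    return result

# The braid relation is preserved under this assignment:
n = 4
for j in range(1, n - 1):
    assert underlying_permutation([j, j + 1, j], n) == \
           underlying_permutation([j + 1, j, j + 1], n)
```

Both sides of the relation map to the transposition exchanging the outer two of the three affected endpoints. The extra relations \(u_{\nu}^{m_{\nu}}=1\) discussed in Remark 3.12 are trivially compatible, since the pure braids \(u_{\nu}\) fix all endpoints.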
Motivated by Artin braid groups, we will begin with orbifold braids that do not braid with cone points or punctures: We define the orbifold braid \(h_{j}\) for \(1\leq j<n\) as the one represented by the following braid diagram:

Figure 3.8. The generator \(h_{j}\).

These orbifold braids generate a subgroup of \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\), respectively, that is isomorphic to the Artin braid group \(\mathrm{B}_{n}\). We can further define \[a_{ji}:=h_{j-1}^{-1}...h_{i+1}^{-1}h_{i}^{2}h_{i+1}...h_{j-1}\text{ for }1 \leq i<j\leq n, \tag{5}\] which is a braid in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) with the following projection:

Figure 3.9. Braid diagram of \(a_{ji}\).

Further, we introduce orbifold braids \(t_{\lambda}\) for \(1\leq\lambda\leq L\) and \(u_{\nu}\) for \(1\leq\nu\leq N\) that involve either cone points or punctures. In both cases, the last \(n-1\) strands are fixed. The first strands are pictured in Figure 3.10. For \(1\leq k\leq n\), \(1\leq\nu\leq N\) and \(1\leq\lambda\leq L\), we further define \[c_{k\nu}:=h_{k-1}^{-1}...h_{1}^{-1}u_{\nu}h_{1}...h_{k-1}\ \ \text{and}\ \ b_{k \lambda}:=h_{k-1}^{-1}...h_{1}^{-1}t_{\lambda}h_{1}...h_{k-1}. \tag{6}\] These elements project to the diagrams depicted in Figure 3.11 below.

_Remark 3.12_.: Even though orbifold braid diagrams look similar to Artin braid diagrams, there is an essential difference: For each \(1\leq k\leq n\) and \(1\leq\nu\leq N\) the \(m_{\nu}\)-th power of \(c_{k\nu}\) is the trivial braid in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\). That is because \(c_{k\nu}^{m_{\nu}}\) is homotopic to a braid with all strands except the \(k\)-th one fixed. Further, we may apply shifts to the \(k\)-th strand, so that this strand is a continuous \(\Gamma\)-path and encircles the \(\nu\)-th cone point (see Figure 3.12). Since the loop is contractible in \(\Sigma(L)\), this implies that \(c_{k\nu}^{m_{\nu}}\) is trivial.
Hence, \(c_{k\nu}\) is an element of finite order in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\). This behavior was already emphasized by Allcock [1].

Figure 3.10. The first strand of \(u_{\nu}\) (left) and \(t_{\lambda}\) (right).

In contrast, the loop in Figure 3.12 is not contractible if we remove the cone point, i.e. \(c_{k\nu}^{m_{\nu}}\) is not trivial in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\). This prevents the epimorphisms in Corollary 3.9 from being injective. Comparing orbifold braid diagrams for braids in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) to Artin braid diagrams, the relations \(u_{\nu}^{m_{\nu}}=1\) hold in addition. By Corollary 3.9, this in particular implies:

**Corollary 3.16**.: _The pure orbifold braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is generated by_ \[a_{ji},b_{k\lambda}\;\;\text{and}\;\;c_{k\nu}\;\;\text{for}\;\;1\leqslant i,j,k \leqslant n\;\;\text{with}\;\;i<j,\;1\leqslant\lambda\leqslant L\;\;\text{and} \;\;1\leqslant\nu\leqslant N.\]

## 4 Orbifold mapping class groups

An important approach to the Artin braid groups is the identification with mapping class groups of punctured disks, see, for instance, [6, Section 9.1.3]. In this section, we recall the definition of orbifold mapping class groups (with marked points) and some results about them from [8]. This sets the basis to compare orbifold braid groups and orbifold mapping class groups. Given an orbifold \(M_{G}\), we want to define its mapping class group as a group of certain homeomorphisms of \(M\) modulo an equivalence relation. This generalizes the concept of mapping class groups of manifolds. If the acting group \(G\) is trivial, then the orbifold mapping class group of \(M_{\{1\}}\) defined below coincides with the mapping class group of \(M\). As is usual in the case of manifolds, we consider homeomorphisms of \(M\) that fix the boundary pointwise. Moreover, the orbifold mapping class group should reflect the structure of the \(G\)-action on \(M\).
For this reason, we restrict to the subgroup \(\mathrm{Homeo}^{orb}(M_{G},\partial M)\leqslant\mathrm{Homeo}(M,\partial M)\) of _\(G\)-equivariant_ homeomorphisms, i.e. for each \(H\in\mathrm{Homeo}^{orb}(M_{G},\partial M)\), we have \(H(g(x))=g(H(x))\) for all \(g\in G\) and \(x\in M\). An _ambient isotopy_ is a continuous map \[I\to\mathrm{Homeo}^{orb}(M_{G},\partial M).\] Two \(G\)-equivariant homeomorphisms \(H,H^{\prime}\) are _ambient isotopic_, denoted by \(H\sim\;H^{\prime}\), if there exists an ambient isotopy \(H_{t}\) with \(H_{0}=H\) and \(H_{1}=H^{\prime}\). Subject to the equivalence relation induced by ambient isotopies, we define the orbifold mapping class group. **Definition 4.1** (Orbifold mapping class group).: The group of \(G\)-equivariant homeomorphisms that fix the boundary pointwise modulo ambient isotopy \[\mathrm{Map}^{orb}\left(M_{G}\right):=\mathrm{Homeo}^{orb}(M_{G},\partial M)/\sim\] is called the _mapping class group_ of \(M_{G}\). Based on the fact that \(\mathrm{Homeo}^{orb}(M_{G},\partial M)\) is a topological group, the mapping class group also carries the structure of a topological group. For an example of an orbifold mapping class group, we refer to [8, Example 3.4]. **Definition 4.2** (Orbifold mapping class group with marked points).: Let \(M_{G}\) be an orbifold and let us fix a set of non-singular marked points \(P=\{p_{1},...,p_{n}\}\) in \(M\) such that \(G(p_{i})\neq G(p_{j})\) for \(1\leqslant i,j\leqslant n,i\neq j\). By \(\mathrm{Homeo}^{orb}_{n}(M_{G},\partial M)\) we denote the subgroup of homeomorphisms that preserve the orbit of the marked points \(G(P)\) as a set: \[\{H\in\mathrm{Homeo}^{orb}(M_{G},\partial M)\mid H(G(P))=G(P)\}.\] We consider these homeomorphisms up to ambient isotopies \(I\to\mathrm{Homeo}^{orb}_{n}(M_{G},\partial M)\). The corresponding equivalence relation is denoted by \(\sim_{n}\). 
By \[\mathrm{Map}^{orb}_{n}\left(M_{G}\right):=\mathrm{Homeo}^{orb}_{n}(M_{G}, \partial M)/\sim_{n}\] we denote the _orbifold mapping class group of \(M_{G}\) with respect to the \(n\) marked points_. We stress that the orbit of marked points \(G(P)\) is a discrete set. Hence, an ambient isotopy \(H_{t}\) through \(\mathrm{Homeo}^{orb}_{n}(M_{G},\partial M)\) is constant on marked points, i.e. \(H_{t}(p_{j})=H_{0}(p_{j})=H_{1}(p_{j})\) for each \(1\leqslant j\leqslant n\) and \(t\in I\). For an example of an orbifold mapping class group with marked points, we refer to [8, Example 3.6]. Homeomorphisms that map each marked point inside its \(G\)-orbit yield the so-called _pure orbifold mapping class group_:

**Definition 4.3** (Pure orbifold mapping class group).: Let \(\mathrm{PHomeo}_{n}^{orb}(M_{G},\partial M)\) be the group of _pure homeomorphisms_ \[\{H\in\mathrm{Homeo}_{n}^{orb}(M_{G},\partial M)\mid H(p_{j})=g_{j}(p_{j})\text{ with }g_{j}\in G\text{ for all }1\leqslant j\leqslant n\}.\] The subgroup of \(\mathrm{Map}_{n}^{orb}\left(M_{G}\right)\) induced by pure homeomorphisms is called the _pure orbifold mapping class group_ \[\mathrm{PMap}_{n}^{orb}\left(M_{G}\right):=\mathrm{PHomeo}_{n}^{orb}(M_{G}, \partial M)/\sim_{n}.\] At this point, we recall that a homeomorphism in the pure mapping class group of a manifold fixes each of the marked points. In contrast, we only require the homeomorphisms in \(\mathrm{PMap}_{n}^{orb}\left(M_{G}\right)\) to preserve the orbit of each marked point but not to fix the points themselves. Further, we emphasize that we allow different group actions on different orbits of marked points, i.e. \(H(p_{i})=g_{i}(p_{i})\) and \(H(p_{j})=g_{j}(p_{j})\) with \(g_{i}\neq g_{j}\) for \(i\neq j\).
In [8], we studied the following subgroup of the orbifold mapping class group for the orbifold \(\Sigma_{\Gamma}(L)\) with underlying surface \(\Sigma\) punctured in \(\Gamma(\{r_{1},...,r_{L}\})\): the kernel of the homomorphism \[\mathrm{Forget}_{n}^{orb}:\mathrm{Map}_{n}^{orb}\left(\Sigma_{\Gamma}(L) \right)\to\mathrm{Map}^{orb}\left(\Sigma_{\Gamma}(L)\right)\] that forgets the marked points.

**Definition 4.4**.: Let \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) denote the kernel of \(\mathrm{Forget}_{n}^{orb}\). This subgroup is induced by the subgroup \[\mathrm{Homeo}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L)):= \{H\in\mathrm{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\mid H \sim\mathrm{id}_{\Sigma(L)}\}\] of \(\mathrm{Homeo}_{n}^{orb}(\Sigma_{\Gamma}(L))\). Moreover, let \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right):=\ker\bigl(\mathrm{Forget}_{n}^{orb}|_{\mathrm{PMap}_{n}^{orb}\left(\Sigma_{\Gamma}(L)\right)}\bigr)\). This subgroup is induced by the subgroup \(\mathrm{PHomeo}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\) that contains the pure homeomorphisms of \(\mathrm{Homeo}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L),\partial\Sigma(L))\). This subgroup can be identified with an analogous subgroup in \(\mathrm{Map}_{n}\left(D(L,N)\right)\), see [8, Proposition 4.3] for details. In particular, this yields:

**Theorem 4.5** (Birman exact sequence for orbifold mapping class groups, [8, Theorem A]).: _The following diagram is a short exact sequence:_

**Corollary 4.6** ([8, Theorem B]).: _The following diagram is a short exact sequence that splits:_

**Definition 4.7**.: A group \(G\) is a _semidirect product_ with _normal subgroup_ \(N\) and _quotient_ \(H\) if there exists a short exact sequence \[1\to N\xrightarrow{\iota}G\xrightarrow{\pi}H\to 1\] that has a section \(s:H\to G\). In this case, we denote \(G=N\rtimes H\).
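For later reference, we record the standard description that the section provides (a well-known fact, stated in our own notation): identifying \(N\) with its image in \(G\), every element factors uniquely as \(g=x\,s(h)\) with \(x\in N\) and \(h\in H\), and multiplication is twisted by conjugation,

```latex
\bigl(x_{1}\,s(h_{1})\bigr)\cdot\bigl(x_{2}\,s(h_{2})\bigr)
=\underbrace{x_{1}\,\phi_{h_{1}}(x_{2})}_{\in\,N}\;s(h_{1}h_{2}),
\qquad
\phi_{h}(x):=s(h)\,x\,s(h)^{-1}\in N.
```

This is exactly the structure that Lemma 4.10 below encodes in terms of presentations.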
In particular, Corollary 4.6 shows that \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has a semidirect product structure \[F_{n-1+L+N}\rtimes\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma} (L)\right).\] In the following, presentations of groups will be an important tool for us. In particular, presentations allow us to define group homomorphisms by assignments defined on generating sets.

**Definition 4.8**.: Let \(G\) be a group with presentation \[\langle X\mid R\rangle=\langle x_{1},...,x_{k}\mid r_{1}=s_{1},...,r_{l}=s_{l}\rangle\] and \(H\) a group generated by a set of elements \(\{y_{1},...,y_{p}\}\) with \(p\geqslant k\). Moreover, let us assume that the words \(r_{j}\) and \(s_{j}\) are given by \(x_{j_{1}}^{\varepsilon_{1}}...x_{j_{q}}^{\varepsilon_{q}}\) and \(x_{j_{1}}^{\delta_{1}}...x_{j_{r}}^{\delta_{r}}\), respectively. Given _assignments_ \(\phi:x_{i}\mapsto y_{i}\) for \(1\leqslant i\leqslant k\), we apply them letterwise to words, mapping \(x_{i}^{-1}\) to \(y_{i}^{-1}\). We say that the assignments \(\phi\) _preserve the relations in_ \(R\) if the relation \(y_{j_{1}}^{\varepsilon_{1}}...y_{j_{q}}^{\varepsilon_{q}}=y_{j_{1}}^{\delta_{1 }}...y_{j_{r}}^{\delta_{r}}\) is valid in \(H\) for each \(1\leqslant j\leqslant l\).

**Theorem 4.9** (von Dyck, [10, p. 346]).: _Let \(G\) be a group with presentation \(\langle X\mid R\rangle\) as above and \(H\) a group generated by a set \(\{y_{1},...,y_{p}\}\) with \(p\geqslant k\). If the assignments_ \[\phi:x_{i}\mapsto y_{i}\] _preserve the relations in \(R\), then these assignments induce a homomorphism_ \[\phi:G\to H.\]

**Lemma 4.10** ([7, Lemma 5.17]).: _Let \(N\) and \(H\) be groups given by presentations \(N=\langle X\mid R\rangle\) and \(H=\langle Y\mid S\rangle\). Then the following are equivalent:_

1. \(G\) _is a semidirect product with normal subgroup_ \(N\) _and quotient_ \(H\)_._
2.
\(G\) _has a presentation_ \[G=\langle X,Y\mid R,S,y^{\pm 1}xy^{\mp 1}=\phi_{y^{\pm 1}}(x)\text{ for all }x\in X,y\in Y\rangle\] _such that_ \(\phi_{y^{\pm 1}}(x)\) _is a word in the alphabet_ \(X\) _for all_ \(x\in X\) _and_ \(y\in Y\)_. Moreover, for each_ \(y\in Y\)_, the assignments_ \[x\mapsto\phi_{y}(x)\tag{7}\] _induce an automorphism_ \(\phi_{y}\in\operatorname{Aut}(N)\) _and the assignments_ \[y\mapsto\phi_{y}\tag{8}\] _induce a homomorphism_ \(H\to\operatorname{Aut}(N)\)_._

_Remark 4.11_.: Lemma 4.10 will be essential in the proof of Theorem C. There we will apply it to a group \(G\) with a given presentation to deduce that \(G\) is a semidirect product. In this case, we want to show that the given presentation satisfies the conditions from Lemma 4.10(2). The basis for proving this is to divide the generating set into two disjoint subsets \(X\) and \(Y\) such that \(X\) generates the normal subgroup and \(Y\) generates the quotient. Further, we divide the relations into three disjoint subsets \(R,S\) and \(C\) such that \(R\) contains all relations in letters from \(X\), \(S\) contains all relations in letters from \(Y\) and \(C\) contains all the remaining relations. In particular, the relations in \(C\) should be given in the form \(y^{\pm 1}xy^{\mp 1}=\phi_{y^{\pm 1}}(x)\) for all \(x\in X\) and \(y\in Y\). To deduce a semidirect product structure, it remains to check that the relations from \(C\) satisfy the conditions on the assignments (7) and (8). It is reasonable to check these conditions in the following order:

_Step 1_.: Using Theorem 4.9, the first step will be to check that the assignments \(\phi:y\mapsto\phi_{y}\) from (8) preserve the relations from \(S\).
If \(S\) contains a relation \(y_{1}^{\varepsilon_{1}}...y_{q}^{\varepsilon_{q}}=\tilde{y}_{1}^{\delta_{1}}...\tilde{y}_{r}^{\delta_{r}}\), this requires that the assignments \[x\mapsto\phi_{y_{1}^{\varepsilon_{1}}}\circ...\circ\phi_{y_{q}^{ \varepsilon_{q}}}(x)\text{ and }\] \[x\mapsto\phi_{\tilde{y}_{1}^{\delta_{1}}}\circ...\circ\phi_{ \tilde{y}_{r}^{\delta_{r}}}(x)\] coincide on each letter \(x\in X\) (up to relations in \(R\)). In particular, we may check if \(\phi\) induces a homomorphism independently of whether \(x\mapsto\phi_{y}(x)\) induces an automorphism of the group presented by \(\langle X\mid R\rangle\).

_Step 2_. In the second step, we check that the assignments \(\phi_{y}:x\mapsto\phi_{y}(x)\) induce an automorphism of the group presented by \(\langle X\mid R\rangle\) for all \(y\in Y\). To apply Theorem 4.9, we check if for all \(y\in Y\) the assignments \(\phi_{y}\) preserve the relations from \(R\). If this is the case, the assignments \(\phi_{y}\) induce an endomorphism of the group presented by \(\langle X\mid R\rangle\). By the first step, we further have \[\phi_{y^{-1}}\circ\phi_{y}=\operatorname{id}_{\langle X\mid R\rangle}=\phi_{y} \circ\phi_{y^{-1}},\] i.e. the endomorphism induced by \(x\mapsto\phi_{y}(x)\) is bijective and therefore an automorphism of the group presented by \(\langle X\mid R\rangle\).

Corollary 4.6 and Lemma 4.10 together yield a presentation of \(\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). For a description of this presentation, we need to establish a set of homeomorphisms that induces a generating set.
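Before constructing these homeomorphisms, it may help to see the two steps of Remark 4.11 in a minimal example (ours, not taken from [8]): take

```latex
G=\langle x,y\mid y^{2}=1,\ yxy^{-1}=x^{-1}\rangle,\qquad
X=\{x\},\ R=\varnothing,\qquad
Y=\{y\},\ S=\{y^{2}=1\},\qquad
C=\{yxy^{-1}=x^{-1}\}.
```

Step 1: \(\phi_{y}(x)=x^{-1}\), so \(\phi_{y}\circ\phi_{y}(x)=x\), and the assignment \(y\mapsto\phi_{y}\) preserves the relation \(y^{2}=1\). Step 2: \(\phi_{y}\) trivially preserves the empty relation set \(R\) and is bijective by Step 1, hence \(\phi_{y}\in\operatorname{Aut}(\mathbb{Z})\). By Lemma 4.10, \(G\cong\mathbb{Z}\rtimes\mathbb{Z}_{2}\), the infinite dihedral group.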
With respect to the embedding of \(F(L)\) described in Figure 3.5, let \(D_{i,j}\subseteq F(L)\) for every \(1\leqslant i<j\leqslant n\) be the disk \[\left(B_{\frac{1}{4}}(p_{i})\cup B_{\frac{1}{4}}(p_{j})\right) \cap\{x\in\mathbb{C}\mid\operatorname{Im}(x)\geqslant 0\}\] \[\cup A_{\frac{j-i}{2}-\frac{1}{4},\frac{j-i}{2}+\frac{1}{4}} \left(\frac{p_{i}+p_{j}}{2}\right)\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x )\leqslant 0\}\] where \(A_{r,R}(x)\) denotes the annulus with inner radius \(r\) and outer radius \(R\) centered around \(x\). The disk \(D_{i,j}\) precisely contains the marked points \(p_{i}\) and \(p_{j}\). See Figure 4.1 (left) for a picture of \(D_{i,j}\). Moreover, for every \(1\leqslant k\leqslant n\) and \(1\leqslant\lambda\leqslant L\), let \(D_{r_{\lambda},k}\subseteq F(L)\) be the disk \[\left(B_{\frac{1}{4}}(r_{\lambda})\cup B_{\frac{1}{4}}(p_{k}) \right)\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x)\geqslant 0\}\] \[\cup A_{\frac{k+\lambda}{2}-\frac{1}{4},\frac{k+\lambda}{2}+\frac{ 1}{4}}\left(\frac{r_{\lambda}+p_{k}}{2}\right)\cap\{x\in\mathbb{C}\mid \operatorname{Im}(x)\leqslant 0\}.\] The disk \(D_{r_{\lambda},k}\) precisely contains the puncture \(r_{\lambda}\) and the marked point \(p_{k}\). See Figure 4.1 (right) for a picture of \(D_{r_{\lambda},k}\). The homeomorphisms \(A_{ji}\) and \(B_{k\lambda}\) perform the twists pictured in Figure 4.2 on each \(\Gamma\)-translate of \(D_{i,j}\) and \(D_{r_{\lambda},k}\).

Figure 4.1. The disks \(D_{i,j}\) (left) and \(D_{r_{\lambda},k}\) (right).

Figure 4.2. The twists induced by \(A_{ji}\) (left) and \(B_{k\lambda}\) (right).
Moreover, for every \(1\leqslant k\leqslant n\) and \(1\leqslant\nu\leqslant N\), let \(\tilde{D}_{c_{\nu},k}\) be the disk \[B_{\frac{1}{2}}(p_{k})\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x)\geqslant 0\}\] \[\cup A_{\frac{k+L+\nu}{2}-\frac{1}{2},\frac{k+L+\nu}{2}+\frac{1}{2}} \left(\frac{-L-\nu+p_{k}}{2}\right)\cap\{x\in\mathbb{C}\mid\operatorname{Im}( x)\leqslant 0\}\] \[\cup\left\{x\in F(L)\mid\operatorname{Im}(x)\geqslant 0, \operatorname{Re}(x)\in\left[-L-\nu-\frac{1}{4},-L-\nu+\frac{1}{4}\right]\right\}.\] Then \(D_{c_{\nu},k}:=\mathbb{Z}_{m_{\nu}}\cdot\tilde{D}_{c_{\nu},k}\) is a \(\mathbb{Z}_{m_{\nu}}\)-invariant disk that contains the cone point \(c_{\nu}\) and the adjacent marked points \(\mathbb{Z}_{m_{\nu}}(p_{k})\). See Figure 4.3 for a picture of \(\tilde{D}_{c_{\nu},k}\) (left) and an example of the disk \(D_{c_{\nu},k}\subseteq\Sigma(L)\) (right). Let \(C_{k\nu}\) be the homeomorphism that performs a \(\frac{2\pi}{m_{\nu}}\)-twist as in Figure 1.4 on each \(\Gamma\)-translate of \(D_{c_{\nu},k}\). For the homeomorphisms \(A_{ji},B_{k\lambda}\) and \(C_{k\nu}\), we will use their names as acronyms of the corresponding mapping classes. These elements satisfy the following relations:

**Lemma 4.12** ([8, Lemma 4.18]).: _Let \(1\leqslant h,i,j,k,l<n\) with \(h<i<j<k<l\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\). Then the following relations hold:_

1. a)
\(A_{lj}A_{nj}A_{lj}^{-1}=A_{nj}^{-1}A_{nl}^{-1}A_{nj}A_{nl}A_{nj}\) and
a') \(A_{lj}^{-1}A_{nj}A_{lj}=A_{nl}A_{nj}A_{nl}^{-1}\),
b) \(A_{ji}A_{nj}A_{ji}^{-1}=A_{ni}^{-1}A_{nj}A_{ni}\) and
b') \(A_{ji}^{-1}A_{nj}A_{ji}=A_{nj}A_{ni}A_{nj}A_{ni}^{-1}A_{nj}^{-1}\),
c) \(B_{j\lambda}A_{nj}B_{j\lambda}^{-1}=B_{n\lambda}^{-1}A_{nj}B_{n\lambda}\) and
c') \(B_{j\lambda}^{-1}A_{nj}B_{j\lambda}=A_{nj}B_{n\lambda}A_{nj}B_{n\lambda}^{-1}A_{nj}^{-1}\),
d) \(C_{j\nu}A_{nj}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}C_{n\nu}\) and
d') \(C_{j\nu}^{-1}A_{nj}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\),
e) \(B_{j\lambda}B_{n\lambda}B_{j\lambda}^{-1}=B_{n\lambda}^{-1}A_{nj}^{-1}B_{n\lambda}A_{nj}B_{n\lambda}\) and
e') \(B_{j\lambda}^{-1}B_{n\lambda}B_{j\lambda}=A_{nj}B_{n\lambda}A_{nj}^{-1}\),
f) \(C_{j\nu}C_{n\nu}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}C_{n\nu}\) and
f') \(C_{j\nu}^{-1}C_{n\nu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}\),
2. a) \([A_{ih},A_{nj}]=1\), \([B_{i\lambda},A_{nj}]=1\) and \([C_{i\nu},A_{nj}]=1\),
b) \([A_{lk},A_{nj}]=1\), \([A_{ji},B_{n\lambda}]=1\), \([B_{j\theta},B_{n\lambda}]=1\), \([A_{ji},C_{n\nu}]=1\), \([B_{j\lambda},C_{n\nu}]=1\) and \([C_{j\mu},C_{n\nu}]=1\),
c) \([A_{nl}A_{nj}A_{nl}^{-1},A_{li}]=1\), \([A_{nl}A_{nj}A_{nl}^{-1},B_{l\lambda}]=1\) and \([A_{nl}A_{nj}A_{nl}^{-1},C_{l\nu}]=1\),
d) \([A_{nj}B_{n\theta}A_{nj}^{-1},B_{j\lambda}]=1\) and \([A_{nj}B_{n\theta}A_{nj}^{-1},C_{j\nu}]=1\),
e) \([A_{nj}C_{n\mu}A_{nj}^{-1},C_{j\nu}]=1\).

_In particular, these relations imply:_

* \(A_{li}A_{nj}A_{li}^{-1}=A_{ni}^{-1}A_{nl}^{-1}A_{ni}A_{nl}A_{nj}A_{nl}^{-1}A_{ni}^{-1}A_{nl}A_{ni}\),
* \(A_{li}^{-1}A_{nj}A_{li}=A_{nl}A_{ni}A_{nl}^{-1}A_{ni}^{-1}A_{nj}A_{ni}A_{nl}A_{ni}^{-1}A_{nl}^{-1}\),
* \(B_{l\lambda}A_{nj}B_{l\lambda}^{-1}=B_{n\lambda}^{-1}A_{nl}^{-1}B_{n\lambda}A_{nl}A_{nj}A_{nl}^{-1}B_{n\lambda}^{-1}A_{nl}B_{n\lambda}\),
*
\(B_{l\lambda}^{-1}A_{nj}B_{l\lambda}=A_{nl}B_{n\lambda}A_{nl}^{-1}B_{n\lambda}^{-1}A_{nj}B_{n\lambda}A_{nl}B_{n\lambda}^{-1}A_{nl}^{-1}\),
* \(C_{l\nu}A_{nj}C_{l\nu}^{-1}=C_{n\nu}^{-1}A_{nl}^{-1}C_{n\nu}A_{nl}A_{nj}A_{nl}^{-1}C_{n\nu}^{-1}A_{nl}C_{n\nu}\),
* \(C_{l\nu}^{-1}A_{nj}C_{l\nu}=A_{nl}C_{n\nu}A_{nl}^{-1}C_{n\nu}^{-1}A_{nj}C_{n\nu}A_{nl}C_{n\nu}^{-1}A_{nl}^{-1}\),
* \(B_{j\lambda}B_{n\theta}B_{j\lambda}^{-1}=B_{n\lambda}^{-1}A_{nj}^{-1}B_{n\lambda}A_{nj}B_{n\theta}A_{nj}^{-1}B_{n\lambda}^{-1}A_{nj}B_{n\lambda}\),
* \(B_{j\lambda}^{-1}B_{n\theta}B_{j\lambda}=A_{nj}B_{n\lambda}A_{nj}^{-1}B_{n\lambda}^{-1}B_{n\theta}B_{n\lambda}A_{nj}B_{n\lambda}^{-1}A_{nj}^{-1}\),
* \(C_{j\nu}B_{n\lambda}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}B_{n\lambda}A_{nj}^{-1}C_{n\nu}^{-1}A_{nj}C_{n\nu}\),
* \(C_{j\nu}^{-1}B_{n\lambda}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}C_{n\nu}^{-1}B_{n\lambda}C_{n\nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\),
* \(C_{j\nu}C_{n\mu}C_{j\nu}^{-1}=C_{n\nu}^{-1}A_{nj}^{-1}C_{n\nu}A_{nj}C_{n\mu}A_{nj}^{-1}C_{n\nu}^{-1}A_{nj}C_{n\nu}\),
* \(C_{j\nu}^{-1}C_{n\mu}C_{j\nu}=A_{nj}C_{n\nu}A_{nj}^{-1}C_{n\nu}^{-1}C_{n\mu}C_{n\nu}A_{nj}C_{n\nu}^{-1}A_{nj}^{-1}\).

Now Corollary 4.6 together with Lemmas 4.10 and 4.12 induces the following presentation of \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\):

**Corollary 4.13** ([8, Corollary 4.19]).: _The pure mapping class group \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has a presentation with generators_ \[A_{ji},B_{k\lambda}\ \ \text{and}\ \ C_{k\nu},\] _for \(1\leq i,j,k\leq n\) with \(i<j\), \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\) and the following defining relations for \(1\leq i,j,k,l\leq n\) with \(i<j<k<l\), \(1\leq\theta,\lambda\leq L\) with \(\theta<\lambda\) and \(1\leq\mu,\nu\leq N\) with \(\mu<\nu\):_

* \(\left[A_{ji},A_{lk}\right]=1\), \(\left[B_{j\lambda},A_{lk}\right]=1\) and \(\left[C_{j\nu},A_{lk}\right]=1\),
*
\(\left[A_{li},A_{kj}\right]=1\), \(\left[B_{l\lambda},A_{kj}\right]=1\), \(\left[B_{l\lambda},B_{k\theta}\right]=1\), \(\left[C_{l\nu},A_{kj}\right]=1\), \(\left[C_{l\nu},B_{k\lambda}\right]=1\) and \(\left[C_{l\nu},C_{k\mu}\right]=1\),
* \(\left[A_{lk}A_{lj}A_{lk}^{-1},A_{ki}\right]=1\), \(\left[A_{kj}A_{ki}A_{kj}^{-1},B_{j\lambda}\right]=1\), \(\left[A_{kj}B_{k\theta}A_{kj}^{-1},B_{j\lambda}\right]=1\), \(\left[A_{kj}A_{ki}A_{kj}^{-1},C_{j\nu}\right]=1\), \(\left[A_{kj}C_{k\mu}A_{kj}^{-1},C_{j\nu}\right]=1\) and \(\left[A_{kj}B_{k\lambda}A_{kj}^{-1},C_{j\nu}\right]=1\),
* \(A_{ji}A_{kj}A_{ki}=A_{ki}A_{ji}A_{kj}=A_{kj}A_{ki}A_{ji}\), \(A_{ji}B_{j\lambda}B_{i\lambda}=B_{i\lambda}A_{ji}B_{j\lambda}=B_{j\lambda}B_{i \lambda}A_{ji}\) and \(A_{ji}C_{j\nu}C_{i\nu}=C_{i\nu}A_{ji}C_{j\nu}=C_{j\nu}C_{i\nu}A_{ji}\).

Furthermore, we consider the group \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Besides the elements mentioned above, this group contains elements represented by the following homeomorphisms. Let \(H_{j}\) for \(1\leq j<n\) be the homeomorphism that performs the following half-twist on each \(\Gamma\)-translate of the disk \(D_{j,j+1}\), i.e. the disk \(D_{i,j+1}\) from Figure 4.1 with \(i=j\):

Figure 4.4. The half-twist \(H_{j}\).

For every \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\), let \(T_{\lambda}:=B_{1\lambda}\) and \(U_{\nu}:=C_{1\nu}\). As for the pure generators, we will use the names \(H_{j},T_{\lambda}\) and \(U_{\nu}\) as acronyms for the represented mapping classes. The groups \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) are related by the short exact sequence \[1\to\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to \mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to\mathrm{S}_{n }\to 1.\]
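The right-hand map of this sequence records the permutation that a mapping class induces on the orbits \(\Gamma(p_{1}),...,\Gamma(p_{n})\) of marked points; for instance (in our notation),

```latex
H_{j}\ \longmapsto\ (j\ \ j{+}1)\in\mathrm{S}_{n},
\qquad
A_{ji},\,B_{k\lambda},\,C_{k\nu}\ \longmapsto\ \mathrm{id},
```

since the half-twist \(H_{j}\) exchanges \(\Gamma(p_{j})\) and \(\Gamma(p_{j+1})\), while the pure classes preserve each orbit by Definition 4.3.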
This yields the following presentation of \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\):

**Proposition 4.14** ([8, Proposition 4.22]).: _For \(n\geq 1\), the group \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is presented by generators_ \[H_{1},...,H_{n-1},T_{1},...,T_{L},U_{1},...,U_{N}\] _and defining relations for \(2\leq j<n\), \(1\leq\theta,\lambda\leq L\) with \(\theta<\lambda\) and \(1\leq\mu,\nu\leq N\) with \(\mu<\nu\):_

1. _braid and commutator relations for the generators_ \(H_{1},...,H_{n-1}\)_,_
2. a) \(\left[T_{\lambda},H_{j}\right]=1\) _and_ b) \(\left[U_{\nu},H_{j}\right]=1\)_,_
3. a) \(\left[H_{1}T_{\lambda}H_{1},T_{\lambda}\right]=1\) _and_ b) \(\left[H_{1}U_{\nu}H_{1},U_{\nu}\right]=1\)_,_
4. a) \(\left[T_{\theta},B_{2\lambda}\right]=1\) _for_ \(B_{2\lambda}=H_{1}^{-1}T_{\lambda}H_{1}\)_,_ b) \(\left[U_{\mu},C_{2\nu}\right]=1\) _for_ \(C_{2\nu}=H_{1}^{-1}U_{\nu}H_{1}\) _and_ c) \(\left[T_{\lambda},C_{2\nu}\right]=1\) _for_ \(C_{2\nu}=H_{1}^{-1}U_{\nu}H_{1}\)_._

Here above and in the following, we mean the relations \(H_{i}H_{i+1}H_{i}=H_{i+1}H_{i}H_{i+1}\) for \(1\leq i\leq n-2\) and \(\left[H_{j},H_{k}\right]=1\) for \(1\leq j,k<n\) with \(\left|j-k\right|\geq 2\) by _braid and commutator relations_ for \(H_{1},...,H_{n-1}\).

## 5 Relating orbifold braid groups and orbifold mapping class groups

This section highlights two fundamental differences between orbifold braid groups and Artin braid groups. In the classical situation, the generalized Birman exact sequence implies that the Artin braid group \(\mathrm{B}_{n}\) is isomorphic to \(\mathrm{Map}_{n}\left(D\right)\), see [6, Theorem 9.1] for details. The inverse of the point-pushing map is the evaluation map \(\mathrm{ev}:\mathrm{Map}_{n}\left(D\right)\rightarrow\mathrm{B}_{n}\), which evaluates a certain ambient isotopy at the marked points.
More precisely, given a self-homeomorphism \(H\) of the disk \(D\) that preserves the set of marked points \(\left\{p_{1},...,p_{n}\right\}\) and fixes the boundary, the Alexander trick yields an ambient isotopy from \(H\) to \(\mathrm{id}_{D}\). Evaluated at the marked points, this ambient isotopy describes the strands of a braid \(\mathrm{ev}([H])\). For orbifolds, we will establish an analogous map \[\mathrm{ev}:\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L) \right)\rightarrow\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\] which, in contrast to the classical case, is not an isomorphism (see Theorem 5.4). This difference between the orbifold mapping class group and the orbifold braid group has fundamental consequences: recall that the pure subgroup \(\mathrm{PB}_{n}\) fits into a short exact sequence \[1\rightarrow\underbrace{\pi_{1}(D(n-1))}_{=F_{n-1}}\rightarrow\mathrm{PB}_{n }\rightarrow\mathrm{PB}_{n-1}\to 1\] that stems from the characterization of \(\mathrm{PB}_{n}\) as the pure mapping class group \(\mathrm{PMap}_{n}\left(D\right)\) and a restriction of the generalized Birman exact sequence from [6, Theorem 9.1] to pure subgroups. For pure orbifold mapping class groups, we have a similar short exact sequence \[1\to F_{n-1+L+N}\rightarrow\mathrm{PMap}_{n}^{\mathrm{id},orb}\left( \Sigma_{\Gamma}(L)\right)\rightarrow\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left( \Sigma_{\Gamma}(L)\right)\to 1\] discussed in Corollary 4.6. In this section, we will consider similar maps for pure orbifold braid groups. We will show that we still have an exact sequence \[\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\to\mathrm{PZ}_{n}( \Sigma_{\Gamma}(L))\to\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\to 1\] but the map \(\pi_{1}^{\mathrm{orb}}(\Sigma_{\Gamma}(n-1+L))\to\mathrm{PZ}_{n}(\Sigma_{ \Gamma}(L))\) surprisingly has a non-trivial kernel \(K_{n}\) (see Corollary 5.16). This corrects Theorem 2.14 in [12].
Moreover, this implies that the canonical homomorphism \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\to\mathrm{Z}_{n+L}(\Sigma_{\Gamma})\) that sends punctures to fixed strands has non-trivial kernel (see Proposition 5.17). This corrects Proposition 4.1 in [11]. ### The orbifold braid group is a quotient of the orbifold mapping class group Our first goal is to define the evaluation map \[\mathrm{ev}:\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right) \to\mathrm{Z}_{n}(\Sigma_{\Gamma}(L)).\] Moreover, we want to consider the orbifold braid group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\) on the orbifold \(\Sigma_{\Gamma}(L,N)\) that is also punctured at the cone points. We also want to establish a similar evaluation map \[\mathrm{ev}^{*}:\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L) \right)\to\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N)).\] Together with the epimorphism \(f:\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\to\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) from Corollary 3.9, the evaluation maps will fit into a commutative diagram (9) The idea of the evaluation maps is the following: Recall that each homeomorphism \(H\) that represents a mapping class in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) by Definition 4.4 is ambient isotopic to \(\mathrm{id}_{\Sigma}\) if we forget the marked points \(p_{1},...,p_{n}\). Let \(H_{t}\) be such an ambient isotopy that fixes \(r_{1},...,r_{L}\) pointwise. Then evaluating \(H_{t}\) at \(p_{1},...,p_{n}\) represents a braid in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\). However, it is not clear that the braid \([H_{t}(p_{1},...,p_{n})]\) does not depend on the choice of the representative \(H\) and the ambient isotopy. 
To address this problem, we recall that due to Proposition 4.14, \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) has a finite presentation in terms of generators \[H_{j},T_{\lambda}\ \ \mathrm{and}\ \ U_{\nu}\ \ \mathrm{for}\ \ 1\leq j<n,\ 1\leq\lambda\leq L\ \ \mathrm{and}\ \ 1\leq\nu\leq N.\] For each generator \(H_{j},T_{\lambda}\) and \(U_{\nu}\), there is an ambient isotopy to \(\mathrm{id}_{\Sigma}\) that forgets the marked points and performs the Alexander trick on each supporting disk, which is centered at a marked point or a cone point for \(T_{\lambda}\) and \(U_{\nu}\), respectively. Evaluating these ambient isotopies at the marked points defines \(\mathrm{ev}\) on the generators. To check that the evaluation maps induce homomorphisms, we recall from Theorem 3.13 and Corollary 3.14 that the groups \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\) and \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) are generated by braids \[h_{j},t_{\lambda}\ \ \mathrm{and}\ \ u_{\nu}\ \ \mathrm{for}\ \ 1\leq j<n,\ 1\leq \lambda\leq L\ \ \mathrm{and}\ \ 1\leq\nu\leq N.\] The induced map sends \(H_{j}\) to \(h_{j}\), \(T_{\lambda}\) to \(t_{\lambda}\) and \(U_{\nu}\) to \(u_{\nu}\). In particular, this observation uses that all twists and half-twists pictured in Figures 1.4, 4.2 and 4.4 were defined moving clockwise. Consequently, the ambient isotopy to the identity moves the marked points counterclockwise. This matches the definition of \(h_{j},t_{\lambda}\) and \(u_{\nu}\) on page 16. In analogy to Proposition 4.14, we observe the following relations for the generators of the orbifold braid groups:

**Lemma 5.1**.: _The generators \(h_{1},...,h_{n-1},t_{1},...,t_{L},u_{1},...,u_{N}\) of \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) satisfy the following relations for \(2\leqslant j<n\), \(1\leqslant\theta,\lambda\leqslant L,\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N,\mu<\nu\):_

1. \(u_{\nu}^{m_{\nu}}=1\)_,_
2.
_braid and commutator relations for the generators_ \(h_{1},...,h_{n-1}\)_,_
3. a) \([t_{\lambda},h_{j}]=1\) _and_ b) \([u_{\nu},h_{j}]=1\)_,_
4. a) \([h_{1}t_{\lambda}h_{1},t_{\lambda}]=1\) _and_ b) \([h_{1}u_{\nu}h_{1},u_{\nu}]=1\)_,_
5. a) \([t_{\theta},b_{2\lambda}]=1\)_,_ b) \([u_{\mu},c_{2\nu}]=1\) _and_ c) \([t_{\lambda},c_{2\nu}]=1\) _with_ \(b_{2\lambda}=h_{1}^{-1}t_{\lambda}h_{1}\) _and_ \(c_{2\nu}=h_{1}^{-1}u_{\nu}h_{1}\)_._

_With the exception of 5.1(1), the same relations hold in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\)._

Proof.: The braid and commutator relations for \(h_{1},...,h_{n-1}\) follow as in the surface case directly from the braid diagrams. The remaining commutator relations also follow from the braid diagrams (see Figure 5.1 for the relations that involve twists around cone points). The relation 5.1(1) was observed in Remark 3.12. This observation was based on the contractibility of a loop that contains a cone point (see Figure 3.12). This argument does not hold once cone points are removed. In fact, we will later see that the relation \(u_{\nu}^{m_{\nu}}=1\) does not hold in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\).

Lemma 5.1 in particular implies that the assignments \(\mathrm{ev}\) and \(\mathrm{ev}^{*}\) preserve the relations from Proposition 4.14. Hence, Theorem 4.9 yields:

**Corollary 5.2**.: _The maps \(\mathrm{ev}^{*}\) and \(\mathrm{ev}\) are homomorphisms that satisfy the condition from (9)._

Furthermore, using Lemma 2.10, we may deduce:

**Corollary 5.3**.: _The map \(\mathrm{ev}^{*}:\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to \mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\) is an isomorphism._

Figure 5.1. Observation of the relations 5.1(3)-5.1(5) by consideration of orbifold braid diagrams.
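Independently of the diagrams, relations of this kind can be sanity-checked in any linear representation of the braid group. The following sketch (our own illustration, not part of the argument) verifies the braid relation \(h_{1}h_{2}h_{1}=h_{2}h_{1}h_{2}\) in the unreduced Burau representation of the 3-strand braid group, specialized at the arbitrary parameter value \(t=2\):

```python
# Toy check of the braid relation h1 h2 h1 = h2 h1 h2 in a linear
# representation: the unreduced Burau matrices for 3 strands, at t = 2.
# (The relation holds for every value of t.)

def matmul(A, B):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = 2
h1 = [[1 - t, t, 0], [1, 0, 0], [0, 0, 1]]
h2 = [[1, 0, 0], [0, 1 - t, t], [0, 1, 0]]

lhs = matmul(matmul(h1, h2), h1)  # h1 * h2 * h1
rhs = matmul(matmul(h2, h1), h2)  # h2 * h1 * h2
assert lhs == rhs                 # the braid relation holds
```

Such a check only confirms relations up to the kernel of the chosen representation; it does not replace the diagrammatic argument.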
Proof.: The group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\) is defined as the orbifold fundamental group of the orbifold configuration space \[\mathrm{Conf}_{n}^{\Gamma}(\Sigma_{\Gamma}(L,N))=\widetilde{\mathrm{PConf}}_{n}^ {\Gamma}(\Sigma(L,N))_{\Gamma^{n}\rtimes\mathrm{S}_{n}},\] where \(\widetilde{\mathrm{PConf}}_{n}^{\Gamma}(\Sigma(L,N))\) denotes the underlying space \[\left\{(x_{1},...,x_{n})\in(\Sigma(L,N))^{n}\mid x_{i}\neq\gamma(x_{j})\text{ for all }1\leq i,j\leq n,i\neq j,\gamma\in\Gamma\right\}.\] To apply Lemma 2.10, we want to show that the \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-action on \(\widetilde{\mathrm{PConf}}_{n}^{\Gamma}(\Sigma(L,N))\) is free. Therefore, we recall that the action of \(((\gamma_{1},...,\gamma_{n}),\sigma)\in\Gamma^{n}\rtimes\mathrm{S}_{n}\) on \((x_{1},...,x_{n})\in\widetilde{\mathrm{PConf}}_{n}^{\Gamma}(\Sigma(L,N))\) is given by \[((\gamma_{1},...,\gamma_{n}),\sigma)(x_{1},...,x_{n})=(\gamma_{\sigma(1)}(x_{ \sigma(1)}),...,\gamma_{\sigma(n)}(x_{\sigma(n)})).\] Let us assume that \((x_{1},...,x_{n})\in\widetilde{\mathrm{PConf}}_{n}^{\Gamma}(\Sigma(L,N))\) is fixed by \(((\gamma_{1},...,\gamma_{n}),\sigma)\) in \(\Gamma^{n}\rtimes\mathrm{S}_{n}\). Since \(x_{i}\neq\gamma(x_{j})\) for all \(i\neq j\) and \(\gamma\in\Gamma\), we obtain \(\sigma=\mathrm{id}_{n}\). Moreover, \(\Gamma\) acts freely on \(\Sigma(L,N)\), hence \(\gamma_{i}=1\) for all \(1\leq i\leq n\), i.e. the action of \(\Gamma^{n}\rtimes S_{n}\) is free.
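This freeness argument can be illustrated in a small concrete instance (a toy sketch under assumptions of our own: \(\Gamma=\mathbb{Z}/3\) acting on \(\mathbb{C}\) by rotation, \(n=2\), and two points chosen in distinct, disjoint \(\Gamma\)-orbits):

```python
# Toy instance of the freeness argument: Gamma = Z/3 acts on the plane by
# rotation, n = 2.  We enumerate Gamma^2 x S_2 and collect every element
# fixing the pair (x1, x2); only the identity should remain.
import cmath
from itertools import product

m = 3
zeta = cmath.exp(2j * cmath.pi / m)   # generator of the rotation group
x = (1 + 0j, 2 + 0j)                  # x_i != gamma(x_j) for i != j

def act(g, pts):
    """((k1,...,kn), sigma) acts by pts -> (zeta^{k_sigma(i)} * x_sigma(i))_i."""
    ks, sigma = g
    return tuple(zeta ** ks[sigma[i]] * pts[sigma[i]] for i in range(len(pts)))

stabilizer = []
for k1, k2 in product(range(m), repeat=2):
    for sigma in [(0, 1), (1, 0)]:            # the two elements of S_2
        g = ((k1, k2), sigma)
        if all(abs(a - b) < 1e-9 for a, b in zip(act(g, x), x)):
            stabilizer.append(g)

assert stabilizer == [((0, 0), (0, 1))]       # only the identity fixes x
```

The moduli \(|x_{1}|=1\neq 2=|x_{2}|\) rule out the permuting elements, and freeness of the rotation action rules out non-trivial \(\gamma_{i}\), exactly as in the proof above.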
By Lemma 2.10, this implies \[\pi_{1}^{\mathrm{orb}}(\mathrm{Conf}_{n}^{\Gamma}(\Sigma_{\Gamma}(L,N)))\cong \pi_{1}(\widetilde{\mathrm{PConf}}_{n}^{\Gamma}(\Sigma(L,N))/\Gamma^{n} \rtimes\mathrm{S}_{n}).\] Mapping \((\Gamma^{n}\rtimes\mathrm{S}_{n})\left(x_{1},...,x_{n}\right)\) to \(\mathrm{S}_{n}(\Gamma(x_{1}),...,\Gamma(x_{n}))\) defines a homeomorphism \[\widetilde{\mathrm{PConf}}_{n}^{\Gamma}(\Sigma(L,N))/\Gamma^{n}\rtimes \mathrm{S}_{n}\to\mathrm{PConf}_{n}(\Sigma(L,N)/\Gamma)/\mathrm{S}_{n}\,.\] Since \(\Sigma(L,N)/\Gamma\) is homeomorphic to \(D(L,N)\) and \[\mathrm{Conf}_{n}(D(L,N))=\mathrm{PConf}_{n}(D(L,N))/\mathrm{S}_{n},\] this implies that \[\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\cong\pi_{1}(\mathrm{Conf}_{n}(D(L,N))).\] From Theorem 4.5 we further obtain an isomorphism between \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and \(\pi_{1}(\mathrm{Conf}_{n}(D(L,N)))\) such that the following diagram commutes: Thus, \(\mathrm{ev}^{*}\) is an isomorphism.

The other homomorphism \(\mathrm{ev}\) is not an isomorphism, but we can determine its kernel. We will prove:

**Proposition 5.4**.: _The kernel of \(\mathrm{ev}\) is the normal closure of \(\{U_{\nu}^{m_{\nu}}\mid 1\leq\nu\leq N\}\) in \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). The kernel of the restricted map \(\mathrm{ev}\left|{}_{\mathrm{PMap}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L))}\right.\) is the normal closure of \(\{C_{k\nu}^{m_{\nu}}\mid 1\leq\nu\leq N,1\leq k\leq n\}\) in \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\)._

For the proof, we want to keep track of cone point intersections of homotopies. As in Definition 3.10, it is helpful to make assumptions on crossings, in this case on crossings with the cone points \(c_{\nu}\). Therefore, we recall the embedding of \(F(L)\) into \(\mathbb{C}\) as described in Figure 3.5. With respect to the projection \[\pi:F(L)\times I\to\mathbb{R}\times I,\qquad(x,t)\mapsto(\operatorname{Re}(x),t),\] we make the following definition.
**Definition 5.5** (Cone point crossings and generic representatives in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\)).: Let \(\xi\) be a \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-path that represents an element in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\). Assume that \(\xi\) satisfies all the conditions from the definition of a braid (see Definition 3.10) except that it possibly intersects a cone point. Let \(x\in\xi_{j}\) be such that \(\pi(x)=(-L-\nu,t)\) for some \(1\leq\nu\leq N\) and \(t\in[t_{i-1},t_{i}]\). In this case, we say that \(\pi(\xi)\) _crosses the cone point_ \(c_{\nu}\) at height \(t\). The cone point crossing is called _transverse_ if there is a neighborhood \(U\) of \(\pi(x)\) in \([0,n+1]\times[t_{i-1},t_{i}]\) such that the pair \[(U,(\pi(\xi)\cup\{-L-\nu\}\times I)\cap U)\] is locally homeomorphic to \((\mathbb{R}^{2},\mathbb{R}\times\{0\}\cup\{0\}\times\mathbb{R})\) via a homeomorphism identifying \(\pi(\xi_{j})\) with \(\mathbb{R}\times\{0\}\) and \(\{-L-\nu\}\times I\) with \(\{0\}\times\mathbb{R}\). If all cone point crossings in \(\xi\) are transverse, the representative \(\xi\) is called _generic_. In particular, a generic \(\Gamma^{n}\rtimes\mathrm{S}_{n}\)-path does not stay in a cone point for a period of time.

Proof of Proposition 5.4.: If we recall that \(\mathrm{ev}(U_{\nu})=u_{\nu}\) for each \(1\leq\nu\leq N\), the relation 5.1(1) implies that \(\left\langle\!\left\langle U_{\nu}^{m_{\nu}}\right\rangle\!\right\rangle_{ \mathrm{Map}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L))}\subseteq\ker( \mathrm{ev})\). Moreover, \(\mathrm{ev}(C_{k\nu})=c_{k\nu}\) and by (6) \(c_{k\nu}\) is a \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\)-conjugate of \(u_{\nu}\).
Hence, \(c_{k\nu}^{m_{\nu}}=1\) and \[\left\langle\!\left\langle C_{k\nu}^{m_{\nu}}\right\rangle\!\right\rangle_{ \mathrm{PMap}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L))}\subseteq\ker( \mathrm{ev}\left|{}_{\mathrm{PMap}_{n}^{\mathrm{id},orb}(\Sigma_{\Gamma}(L))}).\] The opposite inclusions require more work. The idea is the following: Given an element \(H\in\ker(\mathrm{ev})\) in terms of the generators of the mapping class group \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\), it projects to a braid that is homotopic to the trivial braid. If this homotopy intersects a cone point, a suitable adjustment of the homotopy allows us to read off the used orbifold Reidemeister moves discussed in Remark 3.12 and Figure 3.13. In terms of the word that represents \(H\), this transformation induces a non-trivial insertion or deletion of subwords conjugate to \(C_{k\nu}^{m_{\nu}}\). This allows us to deduce the claim. For example, the homotopy pictured in Figure 5.2 (left) reflects the orbifold Reidemeister move depicted on the right of the same figure. This Reidemeister move induces a deletion of \(U_{\nu}^{3}\) in the word that represents the element \(H\in\ker(\mathrm{ev})\). Let \(H=\rho_{1}^{\varepsilon_{1}}...\rho_{q}^{\varepsilon_{q}}\) with \[\rho_{i}\in\{H_{1},...,H_{n-1},T_{1},...,T_{L},U_{1},...,U_{N}\}\quad\text{ and }\quad\varepsilon_{i}\in\{\pm 1\}\] be an element in \(\ker(\mathrm{ev})\). Then the braid \(\mathrm{ev}(H)=b=\sigma_{1}^{\varepsilon_{1}}...\sigma_{q}^{\varepsilon_{q}}\) with \(\sigma_{i}=h_{j}\) if \(\rho_{i}=H_{j}\), \(\sigma_{i}=t_{\lambda}\) if \(\rho_{i}=T_{\lambda}\) and \(\sigma_{i}=u_{\nu}\) if \(\rho_{i}=U_{\nu}\) is trivial. Applying suitable shifts, we may assume that the strands \(b_{1},...,b_{n}\) of \(b\) are continuous. 
Since \(b\) is trivial, we have a system \(h_{s}\) of homotopies \(h_{s}^{(j)}\) connecting the strands \(b_{j}\) to the constant maps \(I\to\Sigma,t\mapsto p_{j}\) for \(1\leqslant j\leqslant n\) such that the map \(h_{s}:t\mapsto(h_{s}^{(1)}(t),...,h_{s}^{(n)}(t))\) represents an element in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L))\) for every \(s\in I\). Since \(b\) is piecewise linear, the homotopy \(h_{s}\) can be realized as a sequence of \(\Delta\)-moves, analogous to the classical case described in [9, Claim 1.7]. For each \(s\in I\), we may assume that each strand of \(h_{s}\) intersects the set \(\Gamma(\partial F(L))\) in finitely many points. Applying the same ideas as in the proof of Lemma 3.7(3), we obtain an equivalent representative \(\bar{h}_{s}\) such that the image of each strand \(\bar{h}_{s}^{(j)}\) is contained in \(F(L)\). Additionally, as in the classical case [9, Claim 1.8], we may also adjust the \(\Delta\)-moves such that the representatives \(\bar{h}_{s}\) are generic for each \(s\in I\). In particular, in such a sequence of \(\Delta\)-moves each strand \(h_{s}^{(j)}\) intersects the cone points only finitely many times. Hence, we may induct on the number of intersections.

_The base case._ If the homotopy \(h_{s}\) does not intersect any cone points, the diagram (9) implies that \(H\) is also contained in \(\ker(\operatorname{ev}^{*})\). By Corollary 5.3, we obtain \(H=\operatorname{id}_{\Sigma(L)}\).

_The case where the homotopy intersects cone points._ If the homotopy \(h_{s}\) intersects the cone points at least once, there exists a triple \((s,t,j)\) with \(s,t\in I\) and \(1\leqslant j\leqslant n\) for each cone point intersection. For such a triple, we have \(h_{s}^{(j)}(t)=\gamma(c_{\nu})\) for some \(1\leqslant\nu\leqslant N\) and \(\gamma\in\Gamma\), where \(h_{s}^{(j)}\) denotes the strand of \(h_{s}\) that begins in \(p_{j}\). Let us fix such a triple \((s_{0},t_{0},j_{0})\).
During the \(\Delta\)-move that contains the intersection \(h_{s_{0}}^{(j_{0})}(t_{0})=\gamma(c_{\nu})\), the only moving strand is the \(j_{0}\)-th one. Since \(\bar{h}_{s}\) is a generic braid for each \(s\in I\), we may assume that \(\pi(\bar{h}_{s})\) does not contain an additional cone point crossing at height \(t_{0}\) by shifting any such cone point crossing to a different height (see Figure 5.3 for an example). At this point, a few words of explanation about the following Figures 5.3, 5.4 and 5.5 are in order: Although these pictures are drawn diagrammatically, they should not be misinterpreted as orbifold braid diagrams in the sense of Definition 3.10, where strands move inside the fundamental domain \(F(L)\). Instead, as in the more detailed picture in Figure 5.2 (left), the arcs in these figures should be considered as strands of a \(\Gamma^{n}\rtimes\operatorname{S}_{n}\)-path in \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L))\) moving inside the neighborhood of a cone point. In these figures, the strands drawn in black indicate the braid \(\bar{h}_{s}\) for some \(s<s_{0}\) (resp. \(s>s_{0}\)). These braids are pictured as the strands of a braid diagram. Furthermore, the figures depict \(\Delta\)-moves that intersect a cone point. In contrast to orbifold braid diagrams, after (resp. before) the application of such a \(\Delta\)-move, the moved strand (depicted in light blue) does not lie entirely in the fundamental domain.

Figure 5.3. Shift of a cone point crossing.

Let us say that the relevant \(\Delta\)-move has underlying triangle \(T\). Since \(\pi(\bar{h}_{s_{0}})\) crosses \(c_{\nu}\) at time \(t_{0}\), a subdivision of the underlying triangle \(T\) allows us to find a triangle \(T^{\prime}\subseteq T\) and \(\delta_{1},\delta_{2},\delta^{\prime}_{1},\delta^{\prime}_{2}>0\) such that

* \(T^{\prime}\) contains the cone point \(c_{\nu}\) in its interior.
* \(T^{\prime}\) contains no point \(p_{j}\) for \(1\leqslant j\leqslant n\) that is an end point of a strand. * \(h_{s}:[s_{0}-\delta_{1},s_{0}+\delta_{2}]\to\Sigma(L)\) describes the \(\Delta\)-move supported on \(T^{\prime}\). * for every \(s\in[s_{0}-\delta_{1},s_{0}+\delta_{2}]\), no crossing in the sense of Definition 3.10 occurs in \(\bar{h}_{s}|_{[t_{0}-\delta^{\prime}_{1},t_{0}+\delta^{\prime}_{2}]}\). See Figure 5.4 for an illustration. Next, the idea is to adjust the \(\Delta\)-move supported on \(T^{\prime}\) such that it keeps all strands except the \(j_{0}\)-th one fixed at positions \(p_{1},...,p_{n}\). We divide this adjustment into three steps (see Figure 5.5 for an illustrating example): 1. First of all, for every \(s\in[s_{0}-\delta_{1},s_{0}+\delta_{2}]\), we reparametrize the \(t\)-component of the \(\Delta\)-move \(h_{s}|_{[t_{0}-\delta^{\prime}_{1},t_{0}+\delta^{\prime}_{2}]}\) shrinking the interval \([t_{0}-\delta^{\prime}_{1},t_{0}+\delta^{\prime}_{2}]\) to an interval \([t_{0}-\varepsilon_{1},t_{0}+\varepsilon_{2}]\) (see Figure 5.5, step (1)). 2. Moreover, let \(\delta^{\prime\prime}_{1},\delta^{\prime\prime}_{2}>0\) such that \[t_{0}-\delta^{\prime\prime}_{1}\in(t_{0}-\delta^{\prime}_{1},t_{0}- \varepsilon_{1})\quad\text{ and }\quad t_{0}+\delta^{\prime\prime}_{2}\in(t_{0}+ \varepsilon_{2},t_{0}+\delta^{\prime}_{2}).\] We want to adjust the endpoints of the homotopy \(h_{s}|_{[t_{0}-\delta^{\prime\prime}_{1},t_{0}+\delta^{\prime\prime}_{2}]}\) such that they are kept fixed. Therefore, we endow the strands with an order according to their real part in \(\bar{h}_{s}(t_{0}-\delta^{\prime\prime}_{1})\). Let us assume that the \(j_{0}\)-th strand appears in the \(k\)-th position in this order. 
Now we adjust \(h_{s}|_{[t_{0}-\delta^{\prime}_{1},t_{0}-\varepsilon_{1}]}\) and \(h_{s}|_{[t_{0}+\varepsilon_{2},t_{0}+\delta^{\prime}_{2}]}\) by suitable homotopies such that \[h_{s_{0}-\delta_{1}}(t_{0}-\delta^{\prime\prime}_{1})=(p_{1},...,p_{n})=h_{s_{0}-\delta_{1}}(t_{0}+\delta^{\prime\prime}_{2})\text{ and }\] \[h_{s_{0}+\delta_{2}}(t_{0}-\delta^{\prime\prime}_{1})=(p_{1},...,p_{n})=h_{s_{0}+\delta_{2}}(t_{0}+\delta^{\prime\prime}_{2}).\] If we realize this adjustment with respect to the order at height \(t_{0}-\delta^{\prime\prime}_{1}\), which means we homotope the \(i\)-th strand in this order into the position \(p_{i}\), we may assume that the adjustment does not create any crossings in \([t_{0}-\delta^{\prime}_{1},t_{0}-\varepsilon_{1}]\) and \([t_{0}+\varepsilon_{2},t_{0}+\delta^{\prime}_{2}]\) (see Figure 5.5, step (2)). 3. Further, we may adjust the homotopy so that all strands except the \(j_{0}\)-th are kept fixed during the homotopy \(h_{s}|_{[t_{0}-\delta^{\prime\prime}_{1},t_{0}+\delta^{\prime\prime}_{2}]}\), (see Figure 5.5, step (3)). If we realize this adjustment moving all the strands in front of the \(j_{0}\)-th one, we obtain a \(\Delta\)-move with constant \(i\)-th strand for \(i\neq j_{0}\) as in the final diagram in Figure 5.5. Figure 5.4. Adjustment of the \(\Delta\)-move to a smaller triangle. Since we moved in front of the \(j_{0}\)-th strand, this implies that the braids \(h_{s_{0}-\delta_{1}}\) and \(h_{s_{0}+\delta_{2}}\) differ by the orbifold Reidemeister move in Figure 5.6. If we assume that the braid \(h_{s_{0}-\delta_{1}}\) corresponds to a word \(w_{s_{0}-\delta_{1}}\) in the generators from Corollary 3.14, this implies that the word \(w_{s_{0}+\delta_{2}}\) which corresponds to the braid \(\bar{h}_{s_{0}+\delta_{2}}\) differs from \(w_{s_{0}-\delta_{1}}\) by insertion or deletion of \(h_{k-1}...h_{1}u_{\nu}^{m_{\nu}}h_{1}^{-1}...h_{k-1}^{-1}\), i.e. a conjugate of \(c_{k\nu}^{m_{\nu}}\). 
Analogously, the corresponding words \(W_{s_{0}-\delta_{1}}\) and \(W_{s_{0}+\delta_{2}}\) differ by insertion or deletion of \(H_{k-1}...H_{1}U_{\nu}^{m_{\nu}}H_{1}^{-1}...H_{k-1}^{-1}\). Applying the above adjustments to each cone point intersection of \(h_{s}\), we obtain that the word \(H=\rho_{1}^{\varepsilon_{1}}...\rho_{q}^{\varepsilon_{q}}\) differs from the empty word by a finite sequence of insertions or deletions * allowed by the relations in \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) and * of conjugates of \(C_{k\nu}^{\pm m_{\nu}}\) for suitable \(1\leq k\leq n\) and \(1\leq\nu\leq N\). This implies that \(H=\rho_{1}^{\varepsilon_{1}}...\rho_{q}^{\varepsilon_{q}}\) is contained in the normal closure of the set \(\{C_{k\nu}^{m_{\nu}}\mid 1\leq k\leq n,1\leq\nu\leq N\}\), i.e.: \[\ker(\operatorname{ev})\subseteq\langle\langle C_{k\nu}^{m_{\nu}}\mid 1\leq k \leq n,1\leq\nu\leq N\rangle\rangle_{\operatorname{Map}_{n}^{\operatorname{ id},orb}\left(\Sigma_{\Gamma}(L)\right)}.\] In \(\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) each element \(C_{k\nu}\) is a conjugate of \(U_{\nu}\). Hence, \[\ker(\operatorname{ev})= \langle\langle C_{k\nu}^{m_{\nu}}\mid 1\leq k\leq n,1\leq\nu\leq N \rangle\rangle_{\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{ \Gamma}(L)\right)}\] \[= \langle\langle U_{\nu}^{m_{\nu}}\mid 1\leq\nu\leq N\rangle \rangle_{\operatorname{Map}_{n}^{\operatorname{id},orb}\left(\Sigma_{\Gamma}(L) \right)}.\] The above arguments also yield \[\ker(\operatorname{ev}\mid_{\operatorname{PMap}_{n}^{\operatorname{id},orb} \left(\Sigma_{\Gamma}(L)\right)})=\langle\langle C_{k\nu}^{m_{\nu}}\mid 1\leq k\leq n,1\leq\nu\leq N \rangle\rangle_{\operatorname{PMap}_{n}^{\operatorname{id},orb}\left(\Sigma_{ \Gamma}(L)\right)}.\] Hence, the proposition follows. Figure 5.5. Adjustment of the homotopy to read off the induced orbifold Reidemeister move. Figure 5.6. 
Orbifold Reidemeister move that describes the difference between \(h_{s_{0}-\delta_{1}}\) and \(h_{s_{0}+\delta_{2}}\).

This proves Theorem A, and together with Corollary 4.13 and Proposition 4.14 we obtain presentations of \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and its pure subgroup.

**Corollary 5.6**.: _The pure braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) has a presentation with generators_ \[a_{ji},b_{k\lambda}\;\;\text{and}\;\;c_{k\nu}\] _for \(1\leqslant i,j,k\leqslant n\) with \(i<j\), \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\), and the following defining relations for \(1\leqslant i,j,k,l\leqslant n\) with \(i<j<k<l\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\):_

1. \(c_{k\nu}^{m_{\nu}}=1\)_,_
2. \([a_{ji},a_{lk}]=1\)_,_ \([b_{j\lambda},a_{lk}]=1\)_,_ \([c_{j\nu},a_{lk}]=1\)_,_
3. \([a_{li},a_{kj}]=1\)_,_ \([b_{l\lambda},a_{kj}]=1\)_,_ \([b_{l\lambda},b_{k\theta}]=1\)_,_ \([c_{l\nu},a_{kj}]=1\)_,_ \([c_{l\nu},b_{k\lambda}]=1\)_,_ \([c_{l\nu},c_{k\mu}]=1\)_,_
4. \([a_{lk}a_{lj}a_{lk}^{-1},a_{ki}]=1\)_,_ \([a_{kj}a_{ki}a_{kj}^{-1},b_{j\lambda}]=1\)_,_ \([a_{kj}b_{k\theta}a_{kj}^{-1},b_{j\lambda}]=1\)_,_ \([a_{kj}a_{ki}a_{kj}^{-1},c_{j\nu}]=1\)_,_ \([a_{kj}c_{k\mu}a_{kj}^{-1},c_{j\nu}]=1\)_,_
5. \(a_{ji}a_{kj}a_{ki}=a_{ki}a_{ji}a_{kj}=a_{kj}a_{ki}a_{ji}\)_,_ \(a_{ji}b_{j\lambda}b_{i\lambda}=b_{i\lambda}a_{ji}b_{j\lambda}=b_{j\lambda}b_{i \lambda}a_{ji}\)_,_ \(a_{ji}c_{j\nu}c_{i\nu}=c_{i\nu}a_{ji}c_{j\nu}=c_{j\nu}c_{i\nu}a_{ji}\)_._

**Corollary 5.7**.: _The orbifold braid group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) is presented by generators_ \[h_{1},...,h_{n-1},t_{1},...,t_{L},u_{1},...,u_{N}\] _and the following defining relations for \(2\leqslant j<n\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\):_

1. \(u_{\nu}^{m_{\nu}}=1\)_,_
2.
_braid and commutator relations for the generators_ \(h_{1},...,h_{n-1}\)_,_
3. \([t_{\lambda},h_{j}]=1\) _and_ \([u_{\nu},h_{j}]=1\)_,_
4. \([h_{1}t_{\lambda}h_{1},t_{\lambda}]=1\) _and_ \([h_{1}u_{\nu}h_{1},u_{\nu}]=1\)_,_
5. \([t_{\theta},b_{2\lambda}]=1\)_,_ \([u_{\mu},c_{2\nu}]=1\) _and_ \([t_{\lambda},c_{2\nu}]=1\) _with_ \(b_{2\lambda}=h_{1}^{-1}t_{\lambda}h_{1}\) _and_ \(c_{2\nu}=h_{1}^{-1}u_{\nu}h_{1}\)_._

This proves Theorem B.

### An exact sequence of pure orbifold braid groups

It remains to deduce the main goal: an exact sequence of pure orbifold braid groups. For preparation, we briefly recall the classical situation with the short exact sequence \[1\rightarrow\pi_{1}(D(n-1),p_{n})\rightarrow\mathrm{PB}_{n}\rightarrow\mathrm{ PB}_{n-1}\to 1.\] The first map embeds homotopy classes of loops in the punctured disk \(D(n-1)\) into \(\mathrm{PB}_{n}\), interpreting them as pure braids with \(n-1\) constant strands in the positions of the punctures and the \(n\)-th strand moving along the loops. The second map forgets the \(n\)-th strand. Moreover, \(\pi_{1}(D(n-1),p_{n})\) is a free group with \(n-1\) generators. We want to establish a similar exact sequence \[\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L),p_{n}\right)\xrightarrow{ \iota_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\xrightarrow{ \pi_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\to 1 \tag{10}\] for pure orbifold braid groups. The definition of the maps is analogous to the classical case: The map \(\pi_{\mathrm{PZ}_{n}}\) forgets the \(n\)-th strand and the map \(\iota_{\mathrm{PZ}_{n}}\) interprets homotopy classes of \(\Gamma\)-loops with base point \(p_{n}\) as pure braids in \(\Sigma_{\Gamma}(L)\) with \(n-1\) constant strands in positions \(p_{1},...,p_{n-1}\) and the \(n\)-th strand moving along the \(\Gamma\)-loops. It will turn out that the map \(\iota_{\mathrm{PZ}_{n}}\) has non-trivial kernel \(K_{n}\).
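The interaction of \(\iota_{\mathrm{PZ}_{n}}\), \(\pi_{\mathrm{PZ}_{n}}\) and the section can be mimicked on the level of words. The following toy sketch (the string encoding of generators is our own) takes \(n=3\) and \(L=N=1\), sends the free generators to the pure braid generators \(a_{3j},b_{3\lambda},c_{3\nu}\), and lets the forgetful map kill every generator that moves the third strand:

```python
# Word-level toy model of the maps in the sequence (10) for n = 3, L = N = 1.
# Words are lists of (generator name, exponent).  This only illustrates the
# bookkeeping on generators, not the group structure itself.

IOTA = {'x1': 'a31', 'x2': 'a32', 'y1': 'b31', 'z1': 'c31'}

def iota(word):
    """iota_PZ3: x_j -> a_{3j}, y_lam -> b_{3lam}, z_nu -> c_{3nu}."""
    return [(IOTA[g], e) for g, e in word]

def pi(word):
    """pi_PZ3 forgets the 3rd strand: every a_{3i}, b_{3l}, c_{3v} dies."""
    return [(g, e) for g, e in word if not g.startswith(('a3', 'b3', 'c3'))]

def section(word):
    """s_PZ3 sends generators of PZ_2 to their homonyms in PZ_3."""
    return list(word)

w = [('x1', 1), ('z1', -1), ('y1', 2)]    # a word in pi_1^orb
assert pi(iota(w)) == []                  # pi o iota is the trivial map
v = [('a21', 1), ('b11', -1), ('c21', 3)] # a word in PZ_2
assert pi(section(v)) == v                # pi o s = id
```

The two assertions mirror the facts \(\pi_{\mathrm{PZ}_{n}}\circ\iota_{\mathrm{PZ}_{n}}=1\) and \(\pi_{\mathrm{PZ}_{n}}\circ\mathrm{s}_{\mathrm{PZ}_{n}}=\mathrm{id}\) established below; the subtle part, the kernel \(K_{n}\) of \(\iota_{\mathrm{PZ}_{n}}\), is invisible at this naive word level.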
We begin by determining the orbifold fundamental group \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\). **Lemma 5.8**.: _The group \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\) is isomorphic to \(F_{n-1+L}*\Gamma\)._ Proof.: We apply an orbifold version of the Seifert-van Kampen theorem [2, Chapter III.G, 3.10(4)]. Let \(X_{0}\) be a punctured disk that satisfies the following conditions: * \(X_{0}\) is open as a subset of \(\Sigma(n-1+L)\). * \(\Gamma(X_{0})\) is a disjoint union of the \(\Gamma\)-translates of \(X_{0}\). * \(X_{0}\) contains the punctures in positions \(p_{1},...,p_{n-1}\) and \(r_{1},...,r_{L}\). Let \(X:=\Gamma(X_{0})\) and \(Y\) a \(\Gamma\)-invariant open neighborhood of the cone points such that \(Y\hookrightarrow\Sigma\) is a \(\Gamma\)-equivariant homotopy equivalence. Additionally, we assume that \(X\cup Y=\Sigma(n-1+L)\). The intersection \(Z:=X\cap Y\) is a disjoint union of disks \(\gamma(Z_{0})\) for \(\gamma\in\Gamma\) and each of these disks is open as a subset of \(\Sigma(n-1+L)\) and does not contain any punctures (see Figure 5.7 for an example). In general, we can obtain \(X\) and \(Y\) as follows: Let \(F(n-1+L)\) embed into \(\mathbb{C}\) as described in Figure 3.5 and define \[X_{0}:=F(n-1+L)\cap\{x\in\mathbb{C}\mid\operatorname{Im}(x)<\varepsilon_{+}\} \quad\text{ and }\quad\tilde{Y}=F(n-1+L)\cap\{x\in\mathbb{C}\mid \operatorname{Im}(x)>\varepsilon_{-}\}\] for \(\varepsilon_{+}>\varepsilon_{-}>0\) such that \[\varepsilon_{+}<\min\{\operatorname{Im}(x)\mid x\in F(n-1+L)\cap\gamma(F(n-1+ L))\text{ for some }\gamma\neq 1\}.\] See Figure 5.8. Then \(Y:=\Gamma(\tilde{Y})\) and \(X=\Gamma(X_{0})\) satisfy the above properties. Figure 5.7. Decomposition of \(\Sigma(n-1+L)\) into open subsets \(X\) and \(Y\). 
If we consider the orbifold fundamental group with respect to a base point in \(Z\), the Seifert-van Kampen theorem implies that the inclusion maps induce an isomorphism \[\pi_{1}^{\mathrm{orb}}(X_{\Gamma})\ast\pi_{1}^{\mathrm{orb}}(Y_{\Gamma})/ \langle\!\langle\iota_{X}(\gamma)\iota_{Y}(\gamma)^{-1}\mid\gamma\in\pi_{1}^{ \mathrm{orb}}(Z_{\Gamma})\rangle\!\rangle\to\pi_{1}^{\mathrm{orb}}\left( \Sigma_{\Gamma}(n-1+L)\right). \tag{11}\] Here above, \(\iota_{X}:\pi_{1}^{\mathrm{orb}}(Z_{\Gamma})\hookrightarrow\pi_{1}^{\mathrm{orb }}(X_{\Gamma})\) and \(\iota_{Y}:\pi_{1}^{\mathrm{orb}}(Z_{\Gamma})\hookrightarrow\pi_{1}^{\mathrm{orb }}(Y_{\Gamma})\) denote the homomorphisms induced by the inclusion of \(Z_{\Gamma}\) into \(X_{\Gamma}\) and \(Y_{\Gamma}\), respectively. Since the inclusion \(Y\hookrightarrow\Sigma\) is a \(\Gamma\)-equivariant homotopy equivalence, the fundamental group of \(Y_{\Gamma}\) equals \(\pi_{1}^{\mathrm{orb}}(\Sigma_{\Gamma})=\Gamma\), which was determined in Corollary 2.9. Let us use \(X_{0}\) and \(Z_{0}\) to denote the path components of \(X\) and \(Z\) that contain the base point. In both cases, the subgroup of \(\Gamma\) that leaves the path components invariant is trivial. By Corollary 2.9, the canonical maps \(\pi_{1}(X_{0})\to\pi_{1}^{\mathrm{orb}}(X_{\Gamma})\) and \(\pi_{1}(Z_{0})\to\pi_{1}^{\mathrm{orb}}(Z_{\Gamma})\) are isomorphisms. The subsurface \(X_{0}\) is a disk with punctures in \(p_{1},...,p_{n-1}\) and \(r_{1},...,r_{L}\). Hence, \(\pi_{1}(X_{0})\cong F_{n-1+L}\). The subsurface \(Z_{0}\) is a disk without punctures, i.e. \(\pi_{1}(Z_{0})\) is trivial. Consequently, \(\pi_{1}^{\mathrm{orb}}(X_{\Gamma})\cong F_{n-1+L}\) and \(\pi_{1}^{\mathrm{orb}}(Z_{\Gamma})\) is trivial. By (11), this implies that the group \(\pi_{1}^{\mathrm{orb}}(\Sigma_{\Gamma}(n-1+L))\) is isomorphic to the free product \(F_{n-1+L}\ast\Gamma\).
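The free product structure can be made computational: an element of \(F_{n-1+L}*\Gamma\) has a normal form obtained by free cancellation together with reduction of torsion exponents. A minimal sketch for the case \(N=1\) with \(\Gamma=\mathbb{Z}/3\) generated by a single letter \(z\) (the list encoding is our own):

```python
# Toy normal form in the free product F_k * Z/m (one torsion letter 'z'):
# free cancellation of adjacent equal letters plus reduction of the
# exponent of 'z' modulo m.
m = 3  # order of z, i.e. Gamma = Z/3

def reduce_word(word):
    """word: list of (letter, exponent) pairs; returns the normal form."""
    out = []
    for g, e in word:
        if out and out[-1][0] == g:   # merge with the previous syllable
            e += out.pop()[1]
        if g == 'z':
            e %= m                    # torsion relation z^m = 1
        if e != 0:
            out.append((g, e))
    return out

w = [('x1', 1), ('z', 2)]
conj = w + [('z', m)] + [(g, -e) for g, e in reversed(w)]  # w z^m w^{-1}
assert reduce_word([('z', m)]) == []   # z^m = 1
assert reduce_word(conj) == []         # conjugates of z^m reduce to 1 as well
```

That conjugates of \(z^{m}\) reduce to the identity here is the word-level shadow of the normal closures of torsion powers appearing in Proposition 5.4; in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L,N))\), by contrast, no such torsion reduction is available.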
In particular, the proof of Lemma 5.8 shows that \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\) is presented by generators \[x_{j},y_{\lambda}\;\;\mathrm{and}\;\;z_{\nu}\] with \(1\leq j<n\), \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\), represented by the \(\Gamma\)-loops in Figure 5.9, and defining relations \(z_{\nu}^{m_{\nu}}=1\) for \(1\leq\nu\leq N\). In the following, we will not distinguish between the representatives \(x_{j},y_{\lambda}\) and \(z_{\nu}\) and the represented homotopy classes in \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\).

_The homomorphism \(\iota_{\mathrm{PZ}_{n}}\)._ We define assignments \(\iota_{\mathrm{PZ}_{n}}\) by \[x_{j}\mapsto a_{nj}\;\;\mathrm{for}\;\;1\leq j<n,\] \[y_{\lambda}\mapsto b_{n\lambda}\;\;\mathrm{for}\;\;1\leq\lambda\leq L\;\;\mathrm{and}\] \[z_{\nu}\mapsto c_{n\nu}\;\;\mathrm{for}\;\;1\leq\nu\leq N.\] Corollary 5.6 yields that the assignments \(\iota_{\mathrm{PZ}_{n}}\) preserve the defining relations of the fundamental group \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\cong F_{n-1+L}\ast\Gamma\). By Theorem 4.9, the assignments \(\iota_{\mathrm{PZ}_{n}}\) induce a homomorphism. In the following, we will also use the notations \(x_{j},y_{\lambda}\) and \(z_{\nu}\) for their images under \(\iota_{\mathrm{PZ}_{n}}\) if we want to emphasize that an element can be represented by a braid where the only moving strand is the \(n\)-th one.

_The homomorphism \(\pi_{\mathrm{PZ}_{n}}\)._ We define assignments \(\pi_{\mathrm{PZ}_{n}}\) by \[a_{ji}\mapsto a_{ji}\;\;\mathrm{for}\;\;1\leq i<j<n,\] \[b_{k\lambda}\mapsto b_{k\lambda}\;\;\mathrm{for}\;\;1\leq k<n,1\leq\lambda\leq L\;\;\mathrm{and}\] \[c_{k\nu}\mapsto c_{k\nu}\;\;\mathrm{for}\;\;1\leq k<n,1\leq\nu\leq N,\] and the remaining generators \(x_{k},y_{\lambda}\) and \(z_{\nu}\), which only move the \(n\)-th strand, map to the identity under \(\pi_{\mathrm{PZ}_{n}}\).
Once again, by Corollary 5.6, it is easy to verify that the assignments \(\pi_{\mathrm{PZ}_{n}}\) preserve the relations of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). By Theorem 4.9, the assignments \(\pi_{\mathrm{PZ}_{n}}\) induce a homomorphism. In particular, the definitions of \(\iota_{\mathrm{PZ}_{n}}\) and \(\pi_{\mathrm{PZ}_{n}}\) imply that \(\pi_{\mathrm{PZ}_{n}}\circ\iota_{\mathrm{PZ}_{n}}\) is the trivial map.

_A right-inverse homomorphism \(\mathrm{s}_{\mathrm{PZ}_{n}}\) of \(\pi_{\mathrm{PZ}_{n}}\)._ A right-inverse map of \(\pi_{\mathrm{PZ}_{n}}\) is induced by mapping the generators \(a_{ji},b_{k\lambda}\) and \(c_{k\nu}\) of \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\) to their homonyms in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). Once more, using the presentation from Corollary 5.6, we can read off that these assignments preserve the defining relations of \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\). By Theorem 4.9, this implies that these assignments induce a homomorphism from \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\) to \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). On the level of generators, it is easy to see that this homomorphism yields a right-inverse of \(\pi_{\mathrm{PZ}_{n}}\).

_Exactness._ Since \(\pi_{\mathrm{PZ}_{n}}\) is right-invertible, it is surjective. As emphasized above, \(\mathrm{im}(\iota_{\mathrm{PZ}_{n}})\subseteq\ker(\pi_{\mathrm{PZ}_{n}})\). For the opposite inclusion, let \(z\in\ker(\pi_{\mathrm{PZ}_{n}})\). By Corollary 3.16, \(z\) decomposes into generators \[a_{ji},b_{k\lambda}\ \ \text{and}\ \ c_{k\nu}\] for \(1\leqslant i,j,k\leqslant n\) with \(i<j\), \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\). On the one hand, for \(j=n\) and \(k=n\), we denote \(a_{ni}=x_{i}\), \(b_{n\lambda}=y_{\lambda}\) and \(c_{n\nu}=z_{\nu}\).
On the other hand, for \(1\leqslant j,k<n\), (5) and (6) yield \[a_{ji} =h_{j-1}^{-1}...h_{i+1}^{-1}h_{i}^{2}h_{i+1}...h_{j-1},\] \[b_{k\lambda} =h_{k-1}^{-1}...h_{1}^{-1}t_{\lambda}h_{1}...h_{k-1}\ \ \text{and}\] \[c_{k\nu} =h_{k-1}^{-1}...h_{1}^{-1}u_{\nu}h_{1}...h_{k-1}.\] Since none of these products contains the generator \(h_{n-1}\), every \(z\in\ker(\pi_{\mathrm{PZ}_{n}})\) decomposes into generators \[x_{k},y_{\lambda},z_{\nu}\ \ \text{and}\ \ h_{j},t_{\lambda},u_{\nu}\] with \(1\leqslant k<n\), \(1\leqslant j\leqslant n-2\), \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\). Using that \(\mathrm{Map}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) projects onto \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\), the relations from [8, Lemma 4.21] also hold for the corresponding elements in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\). This allows us to split \(z\) into subwords \[W_{1}(x_{1},...,x_{n-1},y_{1},...,y_{L},z_{1},...,z_{N})\ \ \text{and}\] \[W_{2}(h_{1},...,h_{n-2},t_{1},...,t_{L},u_{1},...,u_{N})\] with \(z=W_{1}\cdot W_{2}\). Here, the motion of the \(n\)-th strand is separated into \(W_{1}\), while the braid \(W_{2}\) only moves the first \(n-1\) strands. Consequently, \(W_{1}\) maps trivially under the map \(\pi_{\mathrm{PZ}_{n}}\) that forgets the \(n\)-th strand. Since \(z\) is from \(\ker(\pi_{\mathrm{PZ}_{n}})\), this implies that \(W_{2}\) is also a pure braid and \(\pi_{\mathrm{PZ}_{n}}(W_{2})=1\) in \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\). Further, \(W_{2}\) is contained in the subgroup \[\left\langle a_{ji},b_{k\lambda},c_{k\nu}\ \ \text{for}\ \ \begin{array}{c}1 \leqslant i,j,k<n,\ i<j,\\ 1\leqslant\lambda\leqslant L\ \text{and}\ 1\leqslant\nu\leqslant N \end{array}\right\rangle\leqslant\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L)).\] If we restrict \(\pi_{\mathrm{PZ}_{n}}\) to this subgroup, we obtain an isomorphism; the section \(\mathrm{s}_{\mathrm{PZ}_{n}}\) constructed above yields an inverse. 
Hence, \(W_{2}=1\) in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\), i.e. \(z=W_{1}\) is contained in \(\mathrm{im}(\iota_{\mathrm{PZ}_{n}})\). This implies that the sequence from (10) is exact and the homomorphism \(\pi_{\mathrm{PZ}_{n}}\) has a section. Let us denote \(K=K_{n}:=\ker(\iota_{\mathrm{PZ}_{n}})\). Then Definition 4.7 yields: **Corollary 5.9**.: _The group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is a semidirect product_ \[\left((F_{n-1+L}*\Gamma)/K\right)\rtimes\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L)).\] It remains to calculate the kernel \(K\). For this purpose, we determine a presentation of \((F_{n-1+L}*\Gamma)/K\). The proof is divided into the following two steps. 1. Find a relation that holds in \((F_{n-1+L}*\Gamma)/K\) but not in \(F_{n-1+L}*\Gamma\), i.e. show that the kernel \(K\) is non-trivial. 2. Construct relations that suffice to reformulate the presentation of the pure orbifold braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) from Corollary 5.6 such that it satisfies the conditions from Lemma 4.10(2). Then the latter presentation describes the semidirect product structure of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) and we can read off a presentation of \((F_{n-1+L}*\Gamma)/K\). In particular, this determines \(K\). **Step 1. The kernel \(K\) is non-trivial** Let us consider the alphabet \(X=\{x_{1},...,x_{n-1},y_{1},...,y_{L},z_{1},...,z_{N}\}.\) Recall that our goal in Step 2 will be to establish a set of relations \(R\) such that the assignments \[X \to(F_{n-1+L}*\Gamma)/K,\] \[x_{j} \mapsto x_{j}K,\] \[y_{\lambda} \mapsto y_{\lambda}K\;\text{ and }\] \[z_{\nu} \mapsto z_{\nu}K\] induce an isomorphism \(\langle X\mid R\rangle\to(F_{n-1+L}*\Gamma)/K\). Obviously, the set \(R\) contains the relations \(z_{\nu}^{m_{\nu}}=1\) for \(1\leq\nu\leq N\) coming from the relations in \(F_{n-1+L}*\Gamma\). The goal in Step 1 is to observe that the set \(R\) contains further relations. 
Therefore, we recall that \((F_{n-1+L}*\Gamma)/K\) embeds into \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) via the embedding induced by \(\iota_{\mathrm{PZ}_{n}}\). We start by observing additional relations in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). Later on, it will be important that these additional relations follow from the conjugation relations summarized in 5.13(C). In particular, 5.13(C) contains the following relations: \[c_{k\nu}a_{nk}c_{k\nu}^{-1}=c_{n\nu}^{-1}a_{nk}c_{n\nu}=c_{n\nu}^{-1}(a_{nk}^{-1}a_{nk})a_{nk}c_{n\nu},\tag{12}\] \[c_{k\nu}c_{n\nu}c_{k\nu}^{-1}=c_{n\nu}^{-1}a_{nk}^{-1}c_{n\nu}a_{nk}c_{n\nu},\tag{13}\] \[c_{k\nu}a_{nj}c_{k\nu}^{-1}=c_{n\nu}^{-1}a_{nk}^{-1}c_{n\nu}a_{nk}a_{nj}a_{nk}^{-1}c_{n\nu}^{-1}a_{nk}c_{n\nu},\tag{14}\] \[c_{k\nu}b_{n\lambda}c_{k\nu}^{-1}=c_{n\nu}^{-1}a_{nk}^{-1}c_{n\nu}a_{nk}b_{n\lambda}a_{nk}^{-1}c_{n\nu}^{-1}a_{nk}c_{n\nu},\tag{15}\] \[c_{k\nu}c_{n\mu}c_{k\nu}^{-1}=c_{n\nu}^{-1}a_{nk}^{-1}c_{n\nu}a_{nk}c_{n\mu}a_{nk}^{-1}c_{n\nu}^{-1}a_{nk}c_{n\nu}.\tag{16}\] These relations are the ev-images of relations of the form d) and f) in Lemma 4.12(1) and c), e) and f) in Lemma 4.12(3), respectively. Since ev induces a homomorphism from \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) to \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\), the relations (12) to (16) are satisfied in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). 
**Lemma 5.10**.: _Given the group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\), the generators satisfy the following relations for each \(z\in\mathbb{N}\):_ \[c_{k\nu}^{z}a_{nk}c_{k\nu}^{-z} =(c_{n\nu}^{-1}a_{nk}^{-1})^{z}a_{nk}(a_{nk}c_{n\nu})^{z},\] \[c_{k\nu}^{z}c_{n\nu}c_{k\nu}^{-z} =(c_{n\nu}^{-1}a_{nk}^{-1})^{z}c_{n\nu}(a_{nk}c_{n\nu})^{z},\] \[c_{k\nu}^{z}a_{nj}c_{k\nu}^{-z} =(c_{n\nu}^{-1}a_{nk}^{-1})^{z}(c_{n\nu}a_{nk})^{z}a_{nj}(a_{nk}^{ -1}c_{n\nu}^{-1})^{z}(a_{nk}c_{n\nu})^{z},\] \[c_{k\nu}^{z}b_{n\lambda}c_{k\nu}^{-z} =(c_{n\nu}^{-1}a_{nk}^{-1})^{z}(c_{n\nu}a_{nk})^{z}b_{n\lambda}(a _{nk}^{-1}c_{n\nu}^{-1})^{z}(a_{nk}c_{n\nu})^{z},\] \[c_{k\nu}^{z}c_{n\mu}c_{k\nu}^{-z} =(c_{n\nu}^{-1}a_{nk}^{-1})^{z}(c_{n\nu}a_{nk})^{z}c_{n\mu}(a_{nk }^{-1}c_{n\nu}^{-1})^{z}(a_{nk}c_{n\nu})^{z}.\] Proof.: Relation (12) serves as the base case for an induction to show \[c_{k\nu}^{z}a_{nk}c_{k\nu}^{-z}=(c_{n\nu}^{-1}a_{nk}^{-1})^{z}a_{nk}(a_{nk}c_{ n\nu})^{z} \tag{17}\] for all \(z\in\mathbb{N}\). Assuming the claim for \(z-1\), we obtain: \[c_{k\nu}^{z}a_{nk}c_{k\nu}^{-z}=c_{k\nu}c_{k\nu}^{z-1}a_{nk}c_{k\nu}^{-(z-1)}c_{k\nu}^{-1}\stackrel{\text{i.h.}}{=}c_{k\nu}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}a_{nk}(a_{nk}c_{n\nu})^{z-1}c_{k\nu}^{-1}\] \[=c_{k\nu}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}(c_{k\nu}^{-1}c_{k\nu})a_{nk}(c_{k\nu}^{-1}c_{k\nu})(a_{nk}c_{n\nu})^{z-1}c_{k\nu}^{-1}\] \[\stackrel{(12),(13)}{=}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}c_{k\nu}a_{nk}c_{k\nu}^{-1}(a_{nk}c_{n\nu})^{z-1}\stackrel{(12)}{=}(c_{n\nu}^{-1}a_{nk}^{-1})^{z}a_{nk}(a_{nk}c_{n\nu})^{z}.\] By induction on \(z\), this implies that the first equation holds for every \(z\in\mathbb{N}\). Similarly, we obtain \(c_{k\nu}^{z}c_{n\nu}c_{k\nu}^{-z}=(c_{n\nu}^{-1}a_{nk}^{-1})^{z}c_{n\nu}(a_{nk }c_{n\nu})^{z}\) for \(z=1\) from equation (13) and the induction step follows as for the first equation. For \(z=1\), the third relation follows from (14). 
Assuming it for \(z-1\), we may deduce \[c_{k\nu}^{z}a_{nj}c_{k\nu}^{-z}=c_{k\nu}c_{k\nu}^{z-1}a_{nj}c_{k\nu}^{-(z-1)}c_{k\nu}^{-1}\] \[\stackrel{\text{i.h.}}{=}c_{k\nu}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}(c_{n\nu}a_{nk})^{z-1}a_{nj}(a_{nk}^{-1}c_{n\nu}^{-1})^{z-1}(a_{nk}c_{n\nu})^{z-1}c_{k\nu}^{-1}\] \[=c_{k\nu}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}(c_{k\nu}^{-1}c_{k\nu})(c_{n\nu}a_{nk})^{z-1}a_{nj}(a_{nk}^{-1}c_{n\nu}^{-1})^{z-1}(c_{k\nu}^{-1}c_{k\nu})(a_{nk}c_{n\nu})^{z-1}c_{k\nu}^{-1}\] \[\stackrel{(12),(13)}{=}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}c_{k\nu}(c_{n\nu}a_{nk})^{z-1}a_{nj}(a_{nk}^{-1}c_{n\nu}^{-1})^{z-1}c_{k\nu}^{-1}(a_{nk}c_{n\nu})^{z-1}\] \[=(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}c_{k\nu}(c_{n\nu}a_{nk})^{z-1}(c_{k\nu}^{-1}c_{k\nu})a_{nj}(c_{k\nu}^{-1}c_{k\nu})(a_{nk}^{-1}c_{n\nu}^{-1})^{z-1}c_{k\nu}^{-1}(a_{nk}c_{n\nu})^{z-1}\] \[\stackrel{(12),(13)}{=}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}c_{n\nu}^{-1}a_{nk}^{-1}(c_{n\nu}a_{nk})^{z-1}a_{nk}c_{n\nu}\,c_{k\nu}a_{nj}c_{k\nu}^{-1}\,c_{n\nu}^{-1}a_{nk}^{-1}(a_{nk}^{-1}c_{n\nu}^{-1})^{z-1}a_{nk}c_{n\nu}(a_{nk}c_{n\nu})^{z-1}\] \[\stackrel{(14)}{=}(c_{n\nu}^{-1}a_{nk}^{-1})^{z-1}c_{n\nu}^{-1}a_{nk}^{-1}(c_{n\nu}a_{nk})^{z-1}a_{nk}c_{n\nu}(c_{n\nu}^{-1}a_{nk}^{-1}c_{n\nu}a_{nk}a_{nj}a_{nk}^{-1}c_{n\nu}^{-1}a_{nk}c_{n\nu})c_{n\nu}^{-1}a_{nk}^{-1}(a_{nk}^{-1}c_{n\nu}^{-1})^{z-1}a_{nk}c_{n\nu}(a_{nk}c_{n\nu})^{z-1}\] \[=(c_{n\nu}^{-1}a_{nk}^{-1})^{z}(c_{n\nu}a_{nk})^{z}a_{nj}(a_{nk}^{-1}c_{n\nu}^{-1})^{z}(a_{nk}c_{n\nu})^{z}.\] Hence, the third relation from the claim follows for every \(z\in\mathbb{N}\). 
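The identities in Lemma 5.10 arise from (12)-(14) by pure substitution and free reduction, so they can be machine-checked in a free group. The following sketch is our addition, not part of the source: it abbreviates \(a_{nk},c_{n\nu},a_{nj}\) by the letters `a`, `c`, `d` and iterates the conjugation substitution:

```python
# Words in a free group, encoded as tuples of (letter, +1/-1).

def reduce_word(w):
    out = []
    for s, e in w:
        if out and out[-1] == (s, -e):
            out.pop()           # cancel adjacent inverse letters
        else:
            out.append((s, e))
    return tuple(out)

def inv(w):
    return tuple((s, -e) for s, e in reversed(w))

def concat(*ws):
    r = []
    for w in ws:
        r.extend(w)
    return reduce_word(r)

def word(s):
    # "c' a" means c^{-1} a
    return reduce_word(tuple((t.rstrip("'"), -1 if t.endswith("'") else 1)
                             for t in s.split()))

def power(w, z):
    return concat(*([w] * z))

# a = a_{nk}, c = c_{nv}, d = a_{nj}; conjugation by c_{kv} per (12)-(14)
phi = {"a": word("c' a c"),
       "c": word("c' a' c a c"),
       "d": word("c' a' c a d a' c' a c")}

def apply_phi(w):
    out = []
    for s, e in w:
        out.extend(phi[s] if e == 1 else inv(phi[s]))
    return reduce_word(out)

def iterate(w, z):
    for _ in range(z):
        w = apply_phi(w)
    return w

for z in range(1, 6):
    L, R = power(word("c' a'"), z), power(word("a c"), z)
    assert iterate(word("a"), z) == concat(L, word("a"), R)
    assert iterate(word("c"), z) == concat(L, word("c"), R)
    assert iterate(word("d"), z) == concat(L, power(word("c a"), z), word("d"),
                                           power(word("a' c'"), z), R)
print("Lemma 5.10 closed forms confirmed for z = 1,...,5")
```

The letter `d` stands for any generator of the third kind; by (15) and (16), \(b_{n\lambda}\) and \(c_{n\mu}\) transform by exactly the same substitution.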
The remaining relations \(c_{k\nu}^{z}b_{n\lambda}c_{k\nu}^{-z}=(c_{n\nu}^{-1}a_{nk}^{-1})^{z}(c_{n\nu} a_{nk})^{z}b_{n\lambda}(a_{nk}^{-1}c_{n\nu}^{-1})^{z}(a_{nk}c_{n\nu})^{z}\) and \(c_{k\nu}^{z}c_{n\mu}c_{k\nu}^{-z}=(c_{n\nu}^{-1}a_{nk}^{-1})^{z}(c_{n\nu}a_{nk })^{z}c_{n\mu}(a_{nk}^{-1}c_{n\nu}^{-1})^{z}(a_{nk}c_{n\nu})^{z}\) are given for \(z=1\) in (15) and (16). The induction step follows as for the third relation. If we further recall from relation 5.6(1) that the \(m_{\nu}\)-th power of \(c_{k\nu}\) is trivial, the first relation from the previous lemma yields \[a_{nk}=c_{k\nu}^{m_{\nu}}a_{nk}c_{k\nu}^{-m_{\nu}}=(c_{n\nu}^{-1}a_{nk}^{-1})^{m_{\nu}}a_{nk}(a_{nk}c_{n \nu})^{m_{\nu}}=(c_{n\nu}^{-1}a_{nk}^{-1})^{m_{\nu}-1}c_{n\nu}^{-1}(a_{nk}c_{n \nu})^{m_{\nu}} \tag{18}\] for each \(1\leqslant k<n\) and \(1\leqslant\nu\leqslant N\). By left multiplication with \(c_{n\nu}(a_{nk}c_{n\nu})^{m_{\nu}-1}\), this is equivalent to \[(c_{n\nu}a_{nk})^{m_{\nu}}=(a_{nk}c_{n\nu})^{m_{\nu}}. \tag{19}\] Using that \(\iota_{\mathrm{PZ}_{n}}\) induces a monomorphism from \((F_{n-1+L}*\Gamma)/K\) into \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\), the corresponding relations \((z_{\nu}Kx_{k}K)^{m_{\nu}}=(x_{k}Kz_{\nu}K)^{m_{\nu}}\) for \(1\leqslant k<n\) and \(1\leqslant\nu\leqslant N\) hold in \((F_{n-1+L}*\Gamma)/K\). Hence, the relations \((z_{\nu}x_{k})^{m_{\nu}}=(x_{k}z_{\nu})^{m_{\nu}}\) are contained in \(R\) for each \(k\) and \(\nu\). Clearly, the corresponding reduced words do not coincide in the free product \(F_{n-1+L}*\Gamma\), i.e. the above relations lead to non-trivial elements in the kernel \(K\). **Step 2. Determining the kernel \(K\)** To find a presentation of the group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) that satisfies the conditions from Lemma 4.10(2), we need to observe further relations that hold in \((F_{n-1+L}*\Gamma)/K\). 
For this purpose, we introduce _partial conjugations_: **Definition 5.11** (Partial conjugation).: Let us consider the alphabet \(X\) from above and the free group \(F(X)\). For \(1\leqslant i,k\leqslant n,i<k,1\leqslant\iota\leqslant L\) and \(1\leqslant\nu\leqslant N\), let \[pc_{a_{ki}},pc_{b_{k\iota}}\ \ \text{and}\ \ pc_{c_{k\nu}}:F(X)\to F(X)\] be the endomorphisms that replace each letter according to Table 1 by the word in the second, third or fourth column, respectively. These endomorphisms are called _partial conjugations_. Given a word \(W\) in the above alphabet, we can apply a sequence of various of the partial conjugations one after another. We call the resulting word \(W^{\prime}\) a _partial conjugate_ of \(W\). For any set \(S\) of words in the alphabet \(X\), let \(\mathrm{PC}(S)\) denote the _set of their partial conjugates_. For each \(a\in\{a_{ki},b_{k\iota},c_{k\nu}\}\), the assignments in Table 1 are chosen such that the partial conjugation \(pc_{a}\) describes the conjugation with \(a\) on the subgroup \[(F_{n-1+L}*\Gamma)/K\leqslant\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\] on the level of words in the alphabet \(X\). More precisely, for each letter \(x\in X\), the word \(pc_{a}(x)\) satisfies \(axa^{-1}=pc_{a}(x)\) in \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). **Observation 5.12**.: _Since each partial conjugation maps \(z_{\nu}\) to a conjugate, these maps preserve the defining relations \(z_{\nu}^{m_{\nu}}=1\) of \(F_{n-1+L}*\Gamma\), i.e. these maps induce endomorphisms of \(F_{n-1+L}*\Gamma\) that we also denote by \(pc_{a}\) with \(a=a_{ki},b_{k\iota}\) or \(c_{k\nu}\). 
By the definition of the partial conjugates, the following diagram commutes:_ \[\begin{array}{ccc} F(X) & \xrightarrow{\;pc_{a}\;} & F(X)\\ {\scriptstyle\pi}\big\downarrow & & \big\downarrow{\scriptstyle\pi}\\ F_{n-1+L}*\Gamma & \xrightarrow{\;pc_{a}\;} & F_{n-1+L}*\Gamma\\ {\scriptstyle\iota_{\mathrm{PZ}_{n}}}\big\downarrow & & \big\downarrow{\scriptstyle\iota_{\mathrm{PZ}_{n}}}\\ \mathrm{PZ}_{n}(\Sigma_{\Gamma}(L)) & \xrightarrow{\;conj_{a}\;} & \mathrm{PZ}_{n}(\Sigma_{\Gamma}(L)) \end{array}\] _Here above, \(\pi\) denotes the projection induced by the identity on letters and \(conj_{a}\) is the automorphism of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) induced by conjugation with \(a\). By definition, \(\ker(\iota_{\mathrm{PZ}_{n}}\circ\pi)=\langle\langle R\rangle\rangle\). The commutativity of the diagram implies_ \[r\in\ker(\iota_{\mathrm{PZ}_{n}}\circ\pi)=\langle\langle R\rangle\rangle \Rightarrow pc_{a}(r)\in\ker(\iota_{\mathrm{PZ}_{n}}\circ\pi)=\langle\langle R \rangle\rangle,\] _i.e. if \(r=1\) is a relation in \((F_{n-1+L}*\Gamma)/K\), then the relation \(pc_{a}(r)=1\) also holds in \((F_{n-1+L}*\Gamma)/K\). 
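This invariance can be tested concretely on the relators from Step 1. The sketch below is our addition, not part of the source: it encodes the partial conjugation \(pc_{c_{k\nu}}\) on the two letters \(x_{k},z_{\nu}\) (written `a` and `c`) via the substitutions read off from (12) and (13), and checks that it sends the relator \(W=(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\) to the free-group conjugate \((x_{k}z_{\nu})^{-1}W(x_{k}z_{\nu})\), so \(pc_{c_{k\nu}}\) indeed maps the relator set into \(\langle\langle R\rangle\rangle\):

```python
# Free reduction on words encoded as lists of (letter, +1/-1).

def red(w):
    out = []
    for t in w:
        if out and out[-1] == (t[0], -t[1]):
            out.pop()           # cancel adjacent inverse letters
        else:
            out.append(t)
    return out

def inv(w):
    return [(s, -e) for s, e in reversed(w)]

# a = x_k, c = z_v; substitution rules of pc_{c_{kv}}, cf. (12) and (13)
phi = {"a": [("c", -1), ("a", 1), ("c", 1)],
       "c": [("c", -1), ("a", -1), ("c", 1), ("a", 1), ("c", 1)]}

def apply_phi(w):
    out = []
    for s, e in w:
        out += phi[s] if e == 1 else inv(phi[s])
    return red(out)

ac = [("a", 1), ("c", 1)]        # the word x_k z_v
ia_ic = [("a", -1), ("c", -1)]   # the word x_k^{-1} z_v^{-1}

for m in range(1, 8):
    W = red(ac * m + ia_ic * m)  # (x_k z_v)^m (x_k^{-1} z_v^{-1})^m
    assert apply_phi(W) == red(inv(ac) + W + ac)
print("pc maps each relator to a conjugate relator, m = 1,...,7")
```

The exponent `m` plays the role of \(m_{\nu}\); the check is a free-group identity, so it holds uniformly in \(m_{\nu}\).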
Consequently, the relation \(W^{\prime}=1\) is contained in \(\langle\langle R\rangle\rangle\) for each \(W^{\prime}\) in_ \[\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1 \leqslant k<n\text{ and }1\leqslant\nu\leqslant N\}).\] Adding these relations to \(R\), we obtain the following presentation for \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\): **Proposition 5.13**.: _The pure orbifold braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is generated by_ \[a_{ji},b_{k\lambda},c_{k\nu}\text{ \ and \ }x_{k},y_{\lambda},z_{\nu}\] _for \(1\leqslant i,j,k<n\) with \(i<j\), \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\), and the following defining relations:_ * _(R)_ \(z_{\nu}^{m_{\nu}}=1\) _for_ \(1\leqslant\nu\leqslant N\) _and_ \(W^{\prime}=1\) _for_ \(W^{\prime}\in\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{ m_{\nu}}\mid 1\leqslant k<n\text{ and }1\leqslant\nu\leqslant N\})\)_._ _For \(1\leqslant i,j,k,l<n\) with \(i<j<k<l\), \(1\leqslant\theta,\lambda\leqslant L\) with \(\theta<\lambda\) and \(1\leqslant\mu,\nu\leqslant N\) with \(\mu<\nu\), the following relations hold:_ * _(S1)_ \(c_{k\nu}^{m_{\nu}}=1\)_,_ * _(S2)_ \([a_{ji},a_{lk}]=1\)_,_ \([b_{j\lambda},a_{lk}]=1\)_,_ \([c_{j\nu},a_{lk}]=1\)_,_ * _(S3)_ \([a_{li},a_{kj}]=1\)_,_ \([b_{l\lambda},a_{kj}]=1\)_,_ \([b_{l\lambda},b_{k\theta}]=1\)_,_ \([c_{l\nu},a_{kj}]=1\)_,_ \([c_{l\nu},b_{k\lambda}]=1\)_,_ \([c_{l\nu},c_{k\mu}]=1\)_,_ * _(S4)_ \([a_{lk}a_{lj}a_{lk}^{-1},a_{ki}]=1\)_,_ \([a_{kj}a_{ki}a_{kj}^{-1},b_{j\lambda}]=1\)_,_ \([a_{kj}b_{k\theta}a_{kj}^{-1},b_{j\lambda}]=1\)_,_ \([a_{kj}a_{ki}a_{kj}^{-1},c_{j\nu}]=1\)_,_ \([a_{kj}b_{k\lambda}a_{kj}^{-1},c_{j\nu}]=1\)_,_ \([a_{kj}c_{k\mu}a_{kj}^{-1},c_{j\nu}]=1\)_,_ * _(S5)_ 
\(a_{ji}a_{kj}a_{ki}=a_{ki}a_{ji}a_{kj}=a_{kj}a_{ki}a_{ji}\)_,_ \(a_{ji}b_{j\lambda}b_{i\lambda}=b_{i\lambda}a_{ji}b_{j\lambda}=b_{j\lambda}b_{i \lambda}a_{ji}\)_,_ \(a_{ji}c_{j\nu}c_{i\nu}=c_{i\nu}a_{ji}c_{j\nu}=c_{j\nu}c_{i\nu}a_{ji}\)_._ _For \(1\leqslant h,i,j,k,l\leqslant n\) with \(h<i<j<k<l\), \(1\leqslant\theta,\iota,\lambda\leqslant L\) with \(\theta<\iota<\lambda\) and \(1\leqslant\mu,\nu,o\leqslant N\) with \(\mu<\nu<o\), the following relations (C) hold:_ 1. \(a_{lk}x_{j}a_{lk}^{-1}=a_{lk}^{-1}x_{j}a_{lk}=x_{j}\)_,_ 2. \(a_{lj}x_{j}a_{lj}^{-1}=x_{j}^{-1}x_{l}^{-1}x_{j}x_{l}x_{j}\)_,_ 3. \(a_{lj}^{-1}x_{j}a_{lj}=x_{l}x_{j}x_{l}^{-1}\)_,_ 4. \(a_{li}x_{j}a_{li}^{-1}=x_{i}^{-1}x_{l}^{-1}x_{i}x_{l}x_{j}x_{l}^{-1}x_{i}^{-1}x _{l}x_{i}\)_,_ 5. \(a_{li}^{-1}x_{j}a_{li}=x_{l}x_{i}x_{l}^{-1}x_{i}^{-1}x_{j}x_{i}x_{l}x_{i}^{-1}x_{l}^{-1}\)_,_ 6. \(b_{l\lambda}x_{j}b_{l\lambda}^{-1}=y_{\lambda}^{-1}x_{l}^{-1}y_{\lambda}x_{l}x_{ j}x_{l}^{-1}y_{\lambda}^{-1}x_{l}y_{\lambda}\)_,_ 7. \(b_{l\lambda}^{-1}x_{j}b_{l\lambda}=x_{l}y_{\lambda}x_{l}^{-1}y_{\lambda}^{-1}x _{j}y_{\lambda}x_{l}y_{\lambda}^{-1}x_{l}^{-1}\)_,_ 8. \(c_{l\nu}x_{j}c_{l\nu}^{-1}=z_{\nu}^{-1}x_{l}^{-1}z_{\nu}x_{l}x_{j}x_{l}^{-1}z_{ \nu}^{-1}x_{l}z_{\nu}\)_,_ 9. \(c_{l\nu}^{-1}x_{j}c_{l\nu}=x_{l}z_{\nu}x_{l}^{-1}z_{\nu}^{-1}x_{j}z_{\nu}x_{l}z _{\nu}^{-1}x_{l}^{-1}\)_,_ 10. \(a_{ji}x_{j}a_{ji}^{-1}=x_{i}^{-1}x_{j}x_{i}\)_,_ 11. \(a_{ji}^{-1}x_{j}a_{ji}=x_{j}x_{i}x_{j}x_{i}^{-1}x_{j}^{-1}\)_,_ 12. \(b_{j\lambda}x_{j}b_{j\lambda}^{-1}=y_{\lambda}^{-1}x_{j}y_{\lambda}\)_,_ 13. \(b_{j\lambda}^{-1}x_{j}b_{j\lambda}=x_{j}y_{\lambda}x_{j}y_{\lambda}^{-1}x_{j}^{-1}\)_,_ 14. \(c_{j\nu}x_{j}c_{j\nu}^{-1}=z_{\nu}^{-1}x_{j}z_{\nu}\)_,_ 15. \(c_{j\nu}^{-1}x_{j}c_{j\nu}=x_{j}z_{\nu}x_{j}z_{\nu}^{-1}x_{j}^{-1}\)_,_ 16. \(a_{ih}x_{j}a_{ih}^{-1}=a_{ih}^{-1}x_{j}a_{ih}=x_{j}\)_,_ 17. 
\(b_{i\lambda}x_{j}b_{i\lambda}^{-1}=b_{i\lambda}^{-1}x_{j}b_{i\lambda}=x_{j}\)_,_ * \(c_{i\nu}x_{j}c_{i\nu}^{-1}=c_{i\nu}^{-1}x_{j}c_{i\nu}=x_{j}\), * \(a_{ji}y_{\lambda}a_{ji}^{-1}=a_{ji}^{-1}y_{\lambda}a_{ji}=y_{\lambda}\), * \(b_{j\theta}y_{\iota}b_{j\theta}^{-1}=b_{j\theta}^{-1}y_{\iota}b_{j\theta}=y_{\iota}\), * \(b_{j\lambda}y_{\lambda}b_{j\lambda}^{-1}=y_{\lambda}^{-1}x_{j}^{-1}y_{\lambda} x_{j}y_{\lambda}\), * \(b_{j\lambda}^{-1}y_{\lambda}b_{j\lambda}=x_{j}y_{\lambda}x_{j}^{-1}\), * \(b_{j\lambda}y_{\theta}b_{j\lambda}^{-1}=y_{\lambda}^{-1}x_{j}^{-1}y_{\lambda}x_{j}y_{\theta}x_{j}^{-1}y_{\lambda}^{-1}x_{j}y_{\lambda}\), * \(b_{j\lambda}^{-1}y_{\theta}b_{j\lambda}=x_{j}y_{\lambda}x_{j}^{-1}y_{\lambda}^{-1}y_{\theta}y_{\lambda}x_{j}y_{\lambda}^{-1}x_{j}^{-1}\), * \(c_{j\nu}y_{\lambda}c_{j\nu}^{-1}=z_{\nu}^{-1}x_{j}^{-1}z_{\nu}x_{j}y_{\lambda} x_{j}^{-1}z_{\nu}^{-1}x_{j}z_{\nu}\), * \(c_{j\nu}^{-1}y_{\lambda}c_{j\nu}=x_{j}z_{\nu}x_{j}^{-1}z_{\nu}^{-1}y_{\lambda} z_{\nu}x_{j}z_{\nu}^{-1}x_{j}^{-1}\), * \(a_{ji}z_{\nu}a_{ji}^{-1}=a_{ji}^{-1}z_{\nu}a_{ji}=z_{\nu}\), * \(b_{j\lambda}z_{\nu}b_{j\lambda}^{-1}=b_{j\lambda}^{-1}z_{\nu}b_{j\lambda}=z_{\nu}\), * \(c_{j\mu}z_{\nu}c_{j\mu}^{-1}=c_{j\mu}^{-1}z_{\nu}c_{j\mu}=z_{\nu}\), * \(c_{j\nu}z_{\nu}c_{j\nu}^{-1}=z_{\nu}^{-1}x_{j}^{-1}z_{\nu}x_{j}z_{\nu}\), * \(c_{j\nu}^{-1}z_{\nu}c_{j\nu}=x_{j}z_{\nu}x_{j}^{-1}\), * \(c_{j\nu}z_{\mu}c_{j\nu}^{-1}=z_{\nu}^{-1}x_{j}^{-1}z_{\nu}x_{j}z_{\mu}x_{j}^{-1}z_{\nu}^{-1}x_{j}z_{\nu}\), * \(c_{j\nu}^{-1}z_{\mu}c_{j\nu}=x_{j}z_{\nu}x_{j}^{-1}z_{\nu}^{-1}z_{\mu}z_{\nu}x_{j}z_{\nu}^{-1}x_{j}^{-1}\). Proof.: Recall that in Corollary 5.6 we have found a presentation for \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). It remains to check that the group given by the above presentation is isomorphic to it. 
Therefore, we consider assignments \(\rho\) from the above generating set to the generating set from Corollary 5.6. For \(1\leq i,j,k<n\) with \(i<j\), \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\), let \(\rho\) be defined by the following assignments: \[a_{ji}\mapsto a_{ji},\qquad x_{j}\mapsto a_{nj},\] \[b_{k\lambda}\mapsto b_{k\lambda},\qquad y_{\lambda}\mapsto b_{n\lambda},\ \text{and}\] \[c_{k\nu}\mapsto c_{k\nu},\qquad z_{\nu}\mapsto c_{n\nu}.\] We prove that \(\rho\) and its inverse induce homomorphisms. _The map \(\rho\) induces a homomorphism_. The relations 5.13(S1)-5.13(S5) are the relations of the subgroup \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\). Hence, they are covered by relations 5.6(1)-5.6(5). Moreover, it follows from the proof of Corollary 4.13 that \(\rho\) preserves the relations from 5.13(C): based on Lemma 4.12, this corollary shows that the analogs of the relations from 5.13(C) follow from the relations in the presentation of the pure subgroup \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\,(\Sigma_{\Gamma}(L))\) from Corollary 4.13. Since the evaluation map ev restricts to a homomorphism from \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\,(\Sigma_{\Gamma}(L))\) to \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\), this implies that the analogous relations follow from the relations in the presentation of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) from Corollary 5.6. For each \(1\leq\nu\leq N\), the relation \(z_{\nu}^{m_{\nu}}=1\) from 5.13(R) maps to \(c_{n\nu}^{m_{\nu}}=1\), which is covered by relation 5.6(1). By Lemma 5.10 and equation (19), the relations from Corollary 5.6 also imply the relation \((a_{nk}c_{n\nu})^{m_{\nu}}=(c_{n\nu}a_{nk})^{m_{\nu}}\), i.e. the above assignments \(\rho\) respect the relations \((x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}=1\) for each \(k\) and \(\nu\). 
Furthermore, the partial conjugates of \((x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\) map to \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\)-conjugates of \((a_{nk}c_{n\nu})^{m_{\nu}}(a_{nk}^{-1}c_{n\nu}^{-1})^{m_{\nu}}\). Hence, \(\rho\) also respects the relation \(W^{\prime}=1\) for each partial conjugate \(W^{\prime}\) of \((a_{nk}c_{n\nu})^{m_{\nu}}(a_{nk}^{-1}c_{n\nu}^{-1})^{m_{\nu}}\). By Theorem 4.9, this proves that \(\rho\) induces a homomorphism. _The inverse map \(\rho^{-1}\) induces a homomorphism_. The relations 5.6(1)-5.6(5) with every index \(<n\) map exactly to the relations 5.13(S1)-5.13(S5). Moreover, \(c_{n\nu}^{m_{\nu}}=1\) maps to \(z_{\nu}^{m_{\nu}}=1\) from relation 5.13(R). That \(\rho^{-1}\) respects the remaining relations follows from relation 5.13(C). For this observation, we give the \(\rho^{-1}\)-image of the relations involving an index \(n\) and show how they fit into the conjugation relations from 5.13(C): \[[a_{ji},x_{k}]=1 \Leftrightarrow a_{ji}x_{k}a_{ji}^{-1}\stackrel{{ p}}{{=}}x_{k},\] \[[b_{j\lambda},x_{k}]=1 \Leftrightarrow b_{j\lambda}x_{k}b_{j\lambda}^{-1}\stackrel{{ q}}{{=}}x_{k},\] \[[c_{j\nu},x_{k}]=1 \Leftrightarrow c_{j\nu}x_{k}c_{j\nu}^{-1}\stackrel{{ r}}{{=}}x_{k},\] \[[x_{i},a_{kj}]=1 \Leftrightarrow a_{kj}x_{i}a_{kj}^{-1}\stackrel{{ a}}{{=}}x_{i},\] \[[y_{\lambda},a_{kj}]=1 \Leftrightarrow a_{kj}y_{\lambda}a_{kj}^{-1}\stackrel{{ s}}{{=}}y_{\lambda},\] \[[y_{\lambda},b_{k\theta}]=1 \Leftrightarrow b_{k\theta}y_{\lambda}b_{k\theta}^{-1}\stackrel{{ t}}{{=}}y_{\lambda},\] \[[z_{\nu},a_{kj}]=1 \Leftrightarrow a_{kj}z_{\nu}a_{kj}^{-1}\stackrel{{ a^{\prime}}}{{=}}z_{\nu},\] \[[z_{\nu},b_{k\lambda}]=1 \Leftrightarrow b_{k\lambda}z_{\nu}b_{k\lambda}^{-1}\stackrel{{ b^{\prime}}}{{=}}z_{\nu},\] \[[c_{j\mu},z_{\nu}]=1 \Leftrightarrow c_{j\mu}z_{\nu}c_{j\mu}^{-1}\stackrel{{ c^{\prime}}}{{=}}z_{\nu},\] \[a_{ji}x_{j}x_{i}=x_{i}a_{ji}x_{j}\Leftrightarrow 
a_{ji}^{-1}x_{i}a_{ji}\stackrel{{ c}}{{=}}x_{j}x_{i}x_{j}^{-1},\] \[a_{ji}x_{j}x_{i}=x_{j}x_{i}a_{ji}\Leftrightarrow x_{j}x_{i}=(x_{i}^{-1}x_{j}x_{i}x_{i}^{-1}x_{j}^{-1}x_{i})x_{j}x_{i}\stackrel{{\text{j,b}}}{{=}}a_{ji}x_{j}x_{i}a_{ji}^{-1},\] \[y_{\lambda}b_{i\lambda}x_{i}=b_{i\lambda}x_{i}y_{\lambda}\Leftrightarrow b_{i\lambda}^{-1}y_{\lambda}b_{i\lambda}\stackrel{{ r}}{{=}}x_{i}y_{\lambda}x_{i}^{-1},\] \[x_{i}y_{\lambda}b_{i\lambda}=y_{\lambda}b_{i\lambda}x_{i}\Leftrightarrow y_{\lambda}^{-1}x_{i}y_{\lambda}\stackrel{{ r}}{{=}}b_{i\lambda}x_{i}b_{i\lambda}^{-1},\] \[x_{i}z_{\nu}c_{i\nu}=c_{i\nu}x_{i}z_{\nu}\Leftrightarrow x_{i}z_{\nu}=(z_{\nu}^{-1}x_{i}z_{\nu}z_{\nu}^{-1}x_{i}^{-1}z_{\nu})x_{i}z_{\nu}\stackrel{{ n}}{{=}}c_{i\nu}x_{i}z_{\nu}c_{i\nu}^{-1},\] \[x_{i}z_{\nu}c_{i\nu}=z_{\nu}c_{i\nu}x_{i}\Leftrightarrow c_{i\nu}x_{i}c_{i\nu}^{-1}\stackrel{{ n}}{{=}}z_{\nu}^{-1}x_{i}z_{\nu},\] \[[x_{k}x_{j}x_{k}^{-1},a_{ki}]=1\Leftrightarrow x_{k}x_{j}x_{k}^{-1}=(x_{i}^{-1}x_{k}x_{i}x_{i}^{-1}x_{k}^{-1}x_{i})x_{k}x_{j}x_{k}^{-1}(x_{i}^{-1}x_{k}x_{i}x_{i}^{-1}x_{k}^{-1}x_{i})\stackrel{{\text{j,d}}}{{=}}a_{ki}x_{k}x_{j}x_{k}^{-1}a_{ki}^{-1},\] \[[x_{j}x_{i}x_{j}^{-1},b_{j\lambda}]=1\Leftrightarrow x_{j}x_{i}x_{j}^{-1}=(y_{\lambda}^{-1}x_{j}y_{\lambda}y_{\lambda}^{-1}x_{j}^{-1}y_{\lambda})x_{j}x_{i}x_{j}^{-1}(y_{\lambda}^{-1}x_{j}y_{\lambda}y_{\lambda}^{-1}x_{j}^{-1}y_{\lambda})\stackrel{{\text{l,f}}}{{=}}b_{j\lambda}x_{j}x_{i}x_{j}^{-1}b_{j\lambda}^{-1},\] \[[x_{j}y_{\theta}x_{j}^{-1},b_{j\lambda}]=1\Leftrightarrow x_{j}y_{\theta}x_{j}^{-1}=(y_{\lambda}^{-1}x_{j}y_{\lambda}y_{\lambda}^{-1}x_{j}^{-1}y_{\lambda})x_{j}y_{\theta}x_{j}^{-1}(y_{\lambda}^{-1}x_{j}y_{\lambda}y_{\lambda}^{-1}x_{j}^{-1}y_{\lambda})\stackrel{{\text{j,w}}}{{=}} 
b_{j\lambda}x_{j}y_{\theta}x_{j}^{-1}b_{j\lambda}^{-1},\] \[[x_{j}x_{i}x_{j}^{-1},c_{j\nu}]=1\Leftrightarrow x_{j}x_{i}x_{j}^{-1}=(z_{\nu}^{-1}x_{j}z_{\nu}z_{\nu}^{-1}x_{j}^{-1}z_{\nu})x_{j}x_{i}x_{j}^{-1}(z_{\nu}^{-1}x_{j}z_{\nu}z_{\nu}^{-1}x_{j}^{-1}z_{\nu})\stackrel{{\text{n,h}}}{{=}}c_{j\nu}x_{j}x_{i}x_{j}^{-1}c_{j\nu}^{-1},\] \[[x_{j}z_{\mu}x_{j}^{-1},c_{j\nu}]=1\Leftrightarrow x_{j}z_{\mu}x_{j}^{-1}=(z_{\nu}^{-1}x_{j}z_{\nu}z_{\nu}^{-1}x_{j}^{-1}z_{\nu})x_{j}z_{\mu}x_{j}^{-1}(z_{\nu}^{-1}x_{j}z_{\nu}z_{\nu}^{-1}x_{j}^{-1}z_{\nu})\stackrel{{\text{n,f}}}{{=}}c_{j\nu}x_{j}z_{\mu}x_{j}^{-1}c_{j\nu}^{-1}.\] This shows that \(\rho^{-1}\) induces an inverse homomorphism and \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) has the above presentation. The proof of Proposition 5.13 in particular shows that the relations \(W^{\prime}=1\) for \[W^{\prime}\in\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{ \nu}}\mid 1\leqslant k<n\text{ and }1\leqslant\nu\leqslant N\})\] and some relations from 5.13(C) are not required to obtain a presentation of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\). But these relations are the key to the proof of the following: **Lemma 5.14**.: _The presentation of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) from Proposition 5.13 satisfies the conditions from Lemma 4.10\((2)\), i.e. \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))=\langle X\mid R\rangle\rtimes\mathrm{PZ}_{n -1}(\Sigma_{\Gamma}(L))\) with_ \[X=\{x_{1},...,x_{n-1},y_{1},...,y_{L},z_{1},...,z_{N}\}\] _and \(R\) contains the relations from 5.13\((\mathrm{R})\)._ Proof.: As described above, the normal subgroup is presented by \(\langle X\mid R\rangle\). The quotient is generated by \[a_{ji},b_{k\lambda}\ \ \text{and}\ \ c_{k\nu}\] for \(1\leqslant i,j,k<n\) with \(i<j\), \(1\leqslant\lambda\leqslant L\) and \(1\leqslant\nu\leqslant N\), with defining relations from 5.13\((\mathrm{S1})\)-5.13\((\mathrm{S5})\), i.e. 
the presentation of \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\) from Corollary 5.6. Let \(\langle Y\mid S\rangle\) denote this presentation of \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\). Further, the presentation contains the relations in 5.13\((\mathrm{C})\), which are of the form \(axa^{-1}=\phi_{a}(x)\) with \[x\in \{x_{k},y_{\lambda},z_{\nu}\mid 1\leqslant k<n,1\leqslant\lambda \leqslant L,1\leqslant\nu\leqslant N\}\ \text{and}\] \[a\in \{a_{ji},b_{k\lambda},c_{k\nu}\mid 1\leqslant i,j,k<n,i<j,1\leqslant \lambda\leqslant L,1\leqslant\nu\leqslant N\}\] and \(\phi_{a}(x)=pc_{a}(x)\) is a word in the alphabet \(X\). It remains to check that this presentation satisfies the conditions from Lemma 4.10\((2)\). That is, we must check that the assignments \(x\mapsto\phi_{a}(x)\) induce an automorphism \(\phi_{a}\in\mathrm{Aut}(\langle X\mid R\rangle)\) and that the assignments \(\phi:a\mapsto\phi_{a}\) induce a homomorphism \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\to\mathrm{Aut}(\langle X\mid R\rangle)\). We follow the Steps 1 and 2 described in Remark 4.11. _Step 1_.: The assignments \(\phi:a\mapsto\phi_{a}\) induce a homomorphism. This step requires checking that \(\phi\) preserves the relations from 5.13\((\mathrm{S1})\)-5.13\((\mathrm{S5})\). For the relation \(c_{k\nu}^{m_{\nu}}=1\) from 5.13\((\mathrm{S1})\), we need to verify that \(\phi_{c_{k\nu}}^{m_{\nu}}\) induces the trivial action on every generator. 
On the basis of the relations from 5.13\((\mathrm{C})\), Lemma 5.10 implies \[c_{k\nu}^{m_{\nu}}x_{l}c_{k\nu}^{-m_{\nu}} =x_{l},\] \[c_{k\nu}^{m_{\nu}}x_{k}c_{k\nu}^{-m_{\nu}} =(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}x_{k}(x_{k}z_{\nu})^{m_{\nu}},\] \[c_{k\nu}^{m_{\nu}}x_{j}c_{k\nu}^{-m_{\nu}} =(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}(z_{\nu}x_{k})^{m_{\nu}}x_{j}( x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}(x_{k}z_{\nu})^{m_{\nu}},\] \[c_{k\nu}^{m_{\nu}}y_{\lambda}c_{k\nu}^{-m_{\nu}} =(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}(z_{\nu}x_{k})^{m_{\nu}}y_{ \lambda}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}(x_{k}z_{\nu})^{m_{\nu}},\] \[c_{k\nu}^{m_{\nu}}z_{\mu}c_{k\nu}^{-m_{\nu}} =(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}(z_{\nu}x_{k})^{m_{\nu}}z_{\mu }(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}(x_{k}z_{\nu})^{m_{\nu}},\] \[c_{k\nu}^{m_{\nu}}z_{\nu}c_{k\nu}^{-m_{\nu}} =(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}z_{\nu}(x_{k}z_{\nu})^{m_{\nu}},\] \[c_{k\nu}^{m_{\nu}}z_{o}c_{k\nu}^{-m_{\nu}} =z_{o}.\] The relations \((z_{\nu}x_{k})^{m_{\nu}}(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}=1\) for \(1\leqslant k<n\) and \(1\leqslant\nu\leqslant N\) from 5.13\((\mathrm{R})\) imply that \(\phi_{c_{k\nu}}^{m_{\nu}}(x)=x\) for each \(x\in X\). Further, it remains to check that \(\phi\) preserves the relations 5.13\((\mathrm{S2})\)-5.13\((\mathrm{S5})\) of \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\). Therefore, we recall that the evaluation map \(\mathrm{ev}\) for each \(n\in\mathbb{N}\) induces a homomorphism \[\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to\mathrm{PZ} _{n}(\Sigma_{\Gamma}(L)).\] Hence, the evaluation map \(\mathrm{ev}\) preserves the relations from \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). In particular, we emphasize that the relations 5.13\((\mathrm{S2})\)-5.13\((\mathrm{S5})\) of \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\) have corresponding relations in \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) (see 4.13\((1)\)-4.13\((4)\)). 
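The collapse used here, namely that relation 5.13(R) forces \(\phi_{c_{k\nu}}^{m_{\nu}}(x)=x\), comes down to the following cancellation, which we spell out for the record (our computation, not in the source):

```latex
% 5.13(R) gives (z_\nu x_k)^{m_\nu}(z_\nu^{-1}x_k^{-1})^{m_\nu}=1, i.e.
% (z_\nu x_k)^{m_\nu}=(x_k z_\nu)^{m_\nu}.  Hence
(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}(z_{\nu}x_{k})^{m_{\nu}}
  =\bigl((x_{k}z_{\nu})^{m_{\nu}}\bigr)^{-1}(z_{\nu}x_{k})^{m_{\nu}}
  =\bigl((z_{\nu}x_{k})^{m_{\nu}}\bigr)^{-1}(z_{\nu}x_{k})^{m_{\nu}}=1,
% and symmetrically (x_k^{-1}z_\nu^{-1})^{m_\nu}(x_k z_\nu)^{m_\nu}=1.
% The prefixes and suffixes in the formulas above therefore cancel, e.g.
c_{k\nu}^{m_{\nu}}x_{j}c_{k\nu}^{-m_{\nu}}
  =(z_{\nu}^{-1}x_{k}^{-1})^{m_{\nu}}(z_{\nu}x_{k})^{m_{\nu}}\,x_{j}\,
   (x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}(x_{k}z_{\nu})^{m_{\nu}}=x_{j}.
```

The same cancellation handles \(x_{k}\), \(y_{\lambda}\) and the \(z\)-letters, so \(\phi_{c_{k\nu}}^{m_{\nu}}\) is indeed trivial modulo 5.13(R).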
Now we recall from Corollary 4.6 that \(\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) is a semidirect product \(F_{n-1+L+N}\rtimes\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\). Hence, Lemma 4.10\((2)\) in particular implies that the conjugation relations from Lemma 4.12 induce assignments \(\psi:A\mapsto\psi_{A}\) that yield a homomorphism \(\psi:\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\to \mathrm{Aut}(F_{n-1+L+N})\). That is, the assignments \(\psi:A\mapsto\psi_{A}\) preserve the defining relations of \(\mathrm{PMap}_{n-1}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right)\) from Corollary 4.13. Since \(\mathrm{ev}:\mathrm{PMap}_{n}^{\mathrm{id},orb}\left(\Sigma_{\Gamma}(L)\right) \rightarrow\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is a homomorphism, the assignments of \(\mathrm{ev}\) in particular preserve the relations from Lemma 4.12. The \(\mathrm{ev}\)-images of the relations from Lemma 4.12 are the relations from 5.13(C). Thus, the assignments \(\phi:a\mapsto\phi_{a}\) induced by the relations in 5.13(C) correspond to the assignments \(\psi:A\mapsto\psi_{A}\) induced by Lemma 4.12. Finally, we obtain that the assignments \(\phi\) correspond to the assignments \(\psi\) and \(\psi\) preserves the relations that correspond to 5.13(S2)-5.13(S5). As a consequence, the fact that \((F_{n-1+L}*\Gamma)/K\) is a quotient of \(F_{n-1+L+N}\) allows us to deduce that \(\phi\) preserves the relations 5.13(S2)-5.13(S5). This finishes the proof of Step 1 and shows that the assignments \(\phi:a\mapsto\phi_{a}\) induce a homomorphism. _Step 2_.: The assignments \(\phi_{a}:x\mapsto\phi_{a}(x)\) induce an automorphism in \(\mathrm{Aut}(\langle X\mid R\rangle)\). 
To observe that \(\phi_{a}\) preserves the relations \(z_{\nu}^{m_{\nu}}=1\), we recall that the conjugation relations from 5.13(C) identify each \(\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\)-conjugate of \(z_{\nu}\) with an \(\langle X\mid R\rangle\)-conjugate of \(z_{\nu}\). This in particular implies that \(\phi_{a}(z_{\nu}^{m_{\nu}})=\phi_{a}(z_{\nu})^{m_{\nu}}=1\). Further, recall that the map \(\phi_{a}\) describes the conjugations \(conj_{a}\) on the level of words in the alphabet \(X\), i.e. this map coincides with the partial conjugation \(pc_{a}\). Hence, Observation 5.12 shows that the set of partial conjugates \[\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1\leqslant k<n\text{ and }1\leqslant\nu\leqslant N\})\] is invariant under \(\phi_{a}\). This implies that if \(W^{\prime}\) is a word from this set of partial conjugates, then the relation \(W^{\prime}=1\) maps to \(pc_{a}(W^{\prime})=1\), which is also covered by the relations in \(R\). Hence, \(\phi_{a}\) induces an \(\langle X\mid R\rangle\)-automorphism for every \(a\in\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\), as claimed in Step 2. Finally, the presentation from Proposition 5.13 satisfies the conditions from Lemma 4.10(2). Thus, we obtain that \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is a semidirect product \[\langle X\mid R\rangle\rtimes\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L)).\] This proves the claim. Lemma 5.14 also allows us to determine \(K\).
**Proposition 5.15**.: _The kernel \(K=K_{n}\) of \(\iota_{\mathrm{PZ}_{n}}\) is given by the normal closure of_ \[\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1 \leqslant k<n\text{ and }\ 1\leqslant\nu\leqslant N\})\] _inside \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\cong F_{n-1+L}*\Gamma\)._ Proof.: By Corollary 5.9, the group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is a semidirect product \[((F_{n-1+L}*\Gamma)/K)\rtimes\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L)).\] Further, Lemma 5.14 implies that \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) decomposes as a semidirect product with normal subgroup \(\langle X\mid R\rangle\) with \[X=\{x_{1},...,x_{n-1},y_{1},...,y_{L},z_{1},...,z_{N}\}\] and defining relations \(R\) from Proposition 5.13(R). This implies that the subgroup of \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) generated by \(x_{1},...,x_{n-1},y_{1},...,y_{L}\) and \(z_{1},...,z_{N}\) is isomorphic to both \((F_{n-1+L}*\Gamma)/K\) and the group with presentation \(\langle X\mid R\rangle\), i.e. \((F_{n-1+L}*\Gamma)/K\) has the presentation \(\langle X\mid R\rangle\). In particular, we obtain that the kernel \(K\) is the normal closure of the set \[\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1 \leqslant k<n\text{ and }1\leqslant\nu\leqslant N\})\] inside \(F_{n-1+L}*\Gamma\). 
Finally, we have proven Theorem C: **Corollary 5.16**.: _The pure braid group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) fits into the following exact sequence of pure orbifold braid groups_ \[1\to K_{n}\to\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\xrightarrow{\iota_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\xrightarrow{\pi_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\to 1\] _with \(\pi_{1}^{\mathrm{orb}}\left(\Sigma_{\Gamma}(n-1+L)\right)\cong F_{n-1+L}*\Gamma\) and_ \[K_{n}=\langle\langle\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1\leqslant k<n,1\leqslant\nu\leqslant N\})\rangle\rangle_{F_{n-1+L}*\Gamma}. \tag{20}\] _Moreover, the homomorphism \(\pi_{\mathrm{PZ}_{n}}\) has a right inverse homomorphism \(\mathrm{sp}_{\mathrm{Z}_{n}}\), i.e. the group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is a semidirect product \(((F_{n-1+L}*\Gamma)/K_{n})\rtimes\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L))\)._ Corollary 5.16 corrects Theorem 2.14 in [12]. This raises the natural question whether the consequences described in [12] still hold. In particular, Roushon deduced that * \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) and consequently the affine Artin groups of types \(\tilde{A}_{n},\tilde{B}_{n},\tilde{C}_{n}\) and \(\tilde{D}_{n}\), and the braid groups of finite complex type \(G(de,e,r)\) for \(d,r\geqslant 2\) are virtually poly-free, see [12, Theorem 2.19]. * \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) and consequently the affine Artin groups of type \(\tilde{D}_{n}\) satisfy the Farrell-Jones isomorphism conjecture, see [12, Theorem 2.20]. By [1], the Artin group of type \(\tilde{A}_{n}\) embeds into the Artin group of type \(B_{n+1}\). The same article shows that the Artin group of type \(\tilde{C}_{n}\) embeds into the Artin group of type \(A_{n}\). For these spherical Artin groups it is known that they are virtually poly-free, see [3].
Since the property of being virtually poly-free passes to subgroups, this settles the questions for Artin groups of type \(\tilde{A}_{n}\) and \(\tilde{C}_{n}\). Since the complex braid groups of type \(G(de,e,r)\) by [4, Proposition 4.1] are isomorphic to a subgroup of \(A(B_{r})\), they are also virtually poly-free. Moreover, to the best of the author's knowledge, it remains open for the Artin groups of type \(\tilde{B}_{n}\) and \(\tilde{D}_{n}\) whether they are virtually poly-free. For the Artin group of type \(\tilde{D}_{n}\), it remains open whether they satisfy the Farrell-Jones isomorphism conjecture. The question whether the Artin group of type \(\tilde{B}_{n}\) satisfies the Farrell-Jones isomorphism conjecture is treated in [11]. But, as we will point out below, Proposition 4.1 in that article is not correct. Proposition 4.1 seems to be essential for proving [11, Theorems 3.2 and 3.3], which yield that the Artin group of type \(\tilde{B}_{n}\) satisfies the Farrell-Jones isomorphism conjecture. To the best of the author's knowledge, this question is also open. ### Fixed strands and punctures At this point, a remark about fixed strands and punctures is in order. Instead of considering the group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) of braids in an orbifold with punctures in \(\Gamma(\{r_{1},...,r_{L}\})\), we could also have considered a subgroup \(\mathrm{Z}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\) of \(\mathrm{Z}_{n+L}(\Sigma_{\Gamma})\) where \(L\) strands do not move. More precisely, each element in the subgroup \(\mathrm{Z}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\) is represented by a braid that ends in the points \(r_{1},...,r_{L},p_{1},...,p_{n}\) such that the strands that end in positions \(r_{1},...,r_{L}\) are constant. However, using observations similar to the ones above, it turns out that the group \(\mathrm{Z}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\) differs from \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\).
On the level of braid diagrams this is reflected by the following observation: In the group \(\mathrm{Z}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\) the relation described in Figure 5.10 holds. In contrast, the group \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\) does not allow the transformation described in Figure 5.10. Since we require that the strand that ends in position \(r_{\lambda}\) is fixed by every representative, the braid in the middle of Figure 5.10 is not contained in \(\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\). To make this precise, let us consider the pure subgroup \(\mathrm{PZ}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\) of \(\mathrm{Z}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\). By the same arguments as in Corollary 3.16, we obtain a generating set for this group. To underline the similarities with \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) let us denote the generating set of this group by braids \[a_{ji},b_{k\lambda}\ \text{ and }\ c_{k\nu}\] for \(1\leq i,j,k\leq n\) with \(i<j,\ \ 1\leq\lambda\leq L\) and \(1\leq\nu\leq N\) with braid diagrams as in Figures 3.9 and 3.11. Now let \(\omega:\mathrm{Z}_{n}(\Sigma_{\Gamma}(L))\to\mathrm{Z}_{n+L}^{\mathrm{fix}(L) }(\Sigma_{\Gamma})\leq\mathrm{Z}_{n+L}(\Sigma_{\Gamma})\) be the homomorphism that is induced by sending the punctures in positions \(r_{1},...,r_{L}\) to fixed strands. Concerning this homomorphism, the relation described in Figure 5.10 implies: **Proposition 5.17**.: _For \(L\geq 1\), the homomorphism \(\omega\) is not injective._ Proof.: For the proof, we consider the restriction \[\omega|_{\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))}:\mathrm{PZ}_{n}(\Sigma_{\Gamma} (L))\to\mathrm{PZ}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\leq\mathrm{PZ}_{ n+L}(\Sigma_{\Gamma}).\] We will prove that this restriction is not injective. 
For this purpose, we use the following exact sequence obtained from Corollary 5.16: \[1\to K_{n+L}\to F_{n-1+L}*\Gamma\xrightarrow{\iota_{\mathrm{PZ}_{n}}} \mathrm{PZ}_{n+L}(\Sigma_{\Gamma})\xrightarrow{\pi_{\mathrm{PZ}_{n}}} \mathrm{PZ}_{n-1+L}(\Sigma_{\Gamma})\to 1\] with \[K_{n+L}=\langle\langle\mathrm{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}},(y_{\lambda}z_{\nu})^{m_{\nu}}(y_{\lambda}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1\leqslant k<n,1\leqslant\lambda\leqslant L,1\leqslant\nu\leqslant N\})\rangle\rangle_{F_{n-1+L}*\Gamma}. \tag{21}\] Restricting this exact sequence to the subgroup \(\mathrm{PZ}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\), we further obtain: \[1\to K_{n+L}\to F_{n-1+L}*\Gamma\xrightarrow{\iota_{\mathrm{PZ}_{n}}} \mathrm{PZ}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\xrightarrow{\pi_{\mathrm{PZ}_{n}}}\mathrm{PZ}_{n-1+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\to 1. \tag{22}\] Restricting the homomorphism \(\mathrm{s}_{\mathrm{PZ}_{n+L}}:\mathrm{PZ}_{n-1+L}(\Sigma_{\Gamma})\to\mathrm{PZ}_{n+L}(\Sigma_{\Gamma})\), as defined on page 36, yields a section of the homomorphism \(\mathrm{PZ}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\to\mathrm{PZ}_{n-1+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\). Thus, the group \(\mathrm{PZ}_{n+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma})\) is the semidirect product \[(F_{n-1+L}*\Gamma)/K_{n+L}\rtimes\mathrm{PZ}_{n-1+L}^{\mathrm{fix}(L)}(\Sigma_{\Gamma}).\] In comparison, the group \(\mathrm{PZ}_{n}(\Sigma_{\Gamma}(L))\) is the semidirect product \[(F_{n-1+L}*\Gamma)/K_{n}\rtimes\mathrm{PZ}_{n-1}(\Sigma_{\Gamma}(L)).\] To deduce that the two groups do not coincide, it remains to check the following: _Claim. \(K_{n}\subsetneq K_{n+L}\)._ On the one hand, we have \(K_{n}\) from (20) and \(K_{n+L}\) as in (21). This directly implies that \(K_{n}\) is contained in \(K_{n+L}\). On the other hand, let us fix \(1\leq\theta\leq L\) and \(1\leq o\leq N\) and let \(\mathbb{Z}*\mathbb{Z}_{m_{o}}\) be the free product with \(\mathbb{Z}=\langle y_{\theta}\rangle\) and \(\mathbb{Z}_{m_{o}}=\langle z_{o}\rangle\).
The following assignments induce a homomorphism: \[q_{\theta,o}:F_{n-1+L}*\Gamma \to\mathbb{Z}*\mathbb{Z}_{m_{o}},\] \[y_{\theta} \mapsto y_{\theta},\] \[z_{o} \mapsto z_{o},\] \[x \mapsto 1\quad\text{ for the remaining generators.}\] Under \(q_{\theta,o}\) the element \((y_{\theta}z_{o})^{m_{o}}(y_{\theta}^{-1}z_{o}^{-1})^{m_{o}}\) maps to a non-trivial normal form in \(\mathbb{Z}*\mathbb{Z}_{m_{o}}\), i.e \((y_{\theta}z_{o})^{m_{o}}(y_{\theta}^{-1}z_{o}^{-1})^{m_{o}}\notin\ker(q_{\theta,o})\). Moreover, for each \(1\leq k<n\) and \(1\leq\nu\leq N\), the element \((x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\) maps trivially under \(q_{\theta,o}\). Consequently, \[\operatorname{PC}(\{(x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\mid 1\leq k<n,1\leq\nu\leq N\})\subseteq\ker(q_{\theta,o})\] and thus \(K_{n}\subseteq\ker(q_{\theta,o})\). Since \((y_{\theta}z_{o})^{m_{o}}(y_{\theta}^{-1}z_{o}^{-1})^{m_{o}}\) lies in \(K_{n+L}\) by (21), this implies that \(K_{n+L}\neq K_{n}\) and finally \(K_{n+L}\supsetneq K_{n}\). Hence, the group \(\operatorname{PZ}_{n+L}^{\operatorname{fix}(L)}(\Sigma_{\Gamma})\) in comparison to \(\operatorname{PZ}_{n}(\Sigma_{\Gamma}(L))\) satisfies the additional relations \[(b_{n\lambda}c_{n\nu})^{m_{\nu}}=(c_{n\nu}b_{n\lambda})^{m_{\nu}} \tag{23}\] for \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\). This implies that \(\omega|_{\operatorname{PZ}_{n}(\Sigma_{\Gamma}(L))}\) is not injective for \(L\geq 1\). In particular, the homomorphism \(\omega\) is not injective in this case. This proves that Proposition 4.1 from [11] is not correct. Comparing \(\operatorname{Z}_{n+L}^{\operatorname{fix}(L)}(\Sigma_{\Gamma})\) to \(\operatorname{Z}_{n}(\Sigma_{\Gamma}(L))\), (23) boils down to additional relations \((t_{\lambda}u_{\nu})^{m_{\nu}}=(u_{\nu}t_{\lambda})^{m_{\nu}}\) for \(1\leq\lambda\leq L\) and \(1\leq\nu\leq N\). **The kernel \(K_{n}\) for \(n\leq 2\)** Under additional assumptions, we can give a more compact description for the normal generating set of the kernel \(K_{n}\) if \(n\leq 2\).
For \(n=1\), no non-trivial partial conjugations in \(\operatorname{PZ}_{1}(\Sigma_{\Gamma}(L))\) exist. Hence, the map \[\pi_{1}^{\operatorname{orb}}\left(\Sigma_{\Gamma}(L)\right)\to\operatorname{PZ}_{1}(\Sigma_{\Gamma}(L))\] is an isomorphism. In particular, the kernel \(K_{n}\) is trivial for \(n=1\). In this case, both groups are isomorphic to \(F_{L}*\Gamma\). For \(n=2\), \(\Gamma=\mathbb{Z}_{m}\) and \(L=0\), the map \[\pi_{1}^{\operatorname{orb}}\left(D_{\mathbb{Z}_{m}}(1)\right)\to\operatorname{PZ}_{2}(D_{\mathbb{Z}_{m}})\] by Corollary 5.16 has kernel \(K_{2}=\langle\langle\operatorname{PC}(\{(x_{1}z)^{m}(x_{1}^{-1}z^{-1})^{m}\})\rangle\rangle\). In this case, the only partial conjugation is induced by \(c_{1z}\) that maps \[\begin{array}{ccc}z&\mapsto&z^{-1}x_{1}^{-1}zx_{1}z\quad\text{ and }\\ x_{1}&\mapsto&z^{-1}x_{1}z,\end{array}\] i.e. the conjugation by \(z^{-1}x_{1}^{-1}\). Hence, the kernel \(K_{2}\) is normally generated by \((x_{1}z)^{m}(x_{1}^{-1}z^{-1})^{m}\). In particular, the exact sequence for \(\operatorname{PZ}_{2}^{\operatorname{fix}(1)}(D_{\mathbb{Z}_{m}})\) from (22) yields \[1\to\langle\langle(x_{1}z)^{m}(x_{1}^{-1}z^{-1})^{m}\rangle\rangle\to\mathbb{Z}*\mathbb{Z}_{m}\xrightarrow{\iota_{\operatorname{PZ}_{n}}}\operatorname{PZ}_{2}^{\operatorname{fix}(1)}(D_{\mathbb{Z}_{m}})\xrightarrow{\pi_{\operatorname{PZ}_{n}}}\underbrace{\operatorname{PZ}_{1}^{\operatorname{fix}(1)}(D_{\mathbb{Z}_{m}})}_{=1}\to 1,\] i.e. \(\operatorname{Z}_{2}^{\operatorname{fix}(1)}(D_{\mathbb{Z}_{m}})=\operatorname{PZ}_{2}^{\operatorname{fix}(1)}(D_{\mathbb{Z}_{m}})=\langle x_{1},z\mid z^{m}=1,(x_{1}z)^{m}=(zx_{1})^{m}\rangle\neq\mathbb{Z}*\mathbb{Z}_{m}\).
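The normal-form computation behind the proof of Proposition 5.17 — the element \((y_{\theta}z_{o})^{m_{o}}(y_{\theta}^{-1}z_{o}^{-1})^{m_{o}}\) is non-trivial in \(\mathbb{Z}*\mathbb{Z}_{m_{o}}\), while the \(q_{\theta,o}\)-images of the elements \((x_{k}z_{\nu})^{m_{\nu}}(x_{k}^{-1}z_{\nu}^{-1})^{m_{\nu}}\) collapse — can also be replayed mechanically. The Python sketch below (illustrative only, not part of the proof) computes normal forms in the free product \(\mathbb{Z}*\mathbb{Z}_{m}\):

```python
def normal_form(word, m):
    """Normal form in Z * Z_m: 'y' has infinite order, 'z' has order m.
    Words are lists of (generator, exponent) syllables."""
    out = []
    for g, e in word:
        if out and out[-1][0] == g:   # merge adjacent syllables of the same generator
            e += out.pop()[1]
        if g == "z":
            e %= m                    # torsion relation z^m = 1
        if e != 0:
            out.append((g, e))
    return out

m = 3
# (y z)^m (y^{-1} z^{-1})^m has a non-empty normal form, hence is non-trivial:
w = [("y", 1), ("z", 1)] * m + [("y", -1), ("z", -1)] * m
assert normal_form(w, m) != []
# ...while the q-image of (x_k z)^m (x_k^{-1} z^{-1})^m (with x_k -> 1) is trivial:
img = [("z", 1)] * m + [("z", -1)] * m
assert normal_form(img, m) == []
```

By the normal form theorem for free products, an element is trivial exactly when its reduced syllable sequence is empty, so these two checks reproduce the kernel comparison on the level of words.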
2307.05714
Strong Lensing and $H_0$
Time delays from strong gravitational lensing provide a one-step absolute distance measurement. Thus, they measure $H_0$ independently of all other probes. We first review the foundations and history of time-delay cosmography. Then, we illustrate the current state of the art by means of two recent case studies that have been real breakthroughs: i) the quadruply imaged quasar lensed by a galaxy-scale deflector RXJ1131$-$1231, for which spatially resolved stellar kinematics is available; ii) the multiply imaged supernova "Refsdal", the first with measured time delays, lensed by cluster MACS1149.5$+$2223. We conclude by discussing the exciting future prospects of time-delay cosmography in the coming decade.
Tommaso Treu, Anowar J. Shajib
2023-07-11T18:35:43Z
http://arxiv.org/abs/2307.05714v1
# Strong Lensing and \(H_{0}\)

###### Abstract

Time delays from strong gravitational lensing provide a one-step absolute distance measurement. Thus, they measure \(H_{0}\) independently of all other probes. We first review the foundations and history of time-delay cosmography. Then, we illustrate the current state of the art by means of two recent case studies that have been real breakthroughs: i) the quadruply imaged quasar lensed by a galaxy-scale deflector RXJ1131\(-\)1231, for which spatially resolved stellar kinematics is available; ii) the multiply imaged supernova "Refsdal", the first with measured time delays, lensed by cluster MACS1149.5\(+\)2223. We conclude by discussing the exciting future prospects of time-delay cosmography in the coming decade.

## 1 Introduction

Following Fermat's Principle, multiple images of the lensed source form at stationary points of the arrival time surface. The difference in arrival time between multiple images arises from the combination of the geometric delay, which makes some light paths longer than others, and the Shapiro delay [93], arising from the difference in gravitational potential along different paths. Thus, if one can measure the difference in arrival time and reconstruct the gravitational potential, one obtains an absolute measurement of the difference in the length of the light paths. It is then a simple matter to convert this measurement into a combination of angular diameter distances, known as the "time-delay distance" [97], and thus obtain a direct measurement of the Hubble constant, \(H_{0}\) [75]. The principle is elegant and straightforward, and Refsdal [75] recognized its potential well before strong gravitational lenses were discovered in the late 70s [115]. Since then, breakthroughs in observations, methodology, and theory have finally made Refsdal's dream a reality.
Time-delay cosmography, as it has come to be called, has been demonstrated to yield measurements of \(H_{0}\) at the level of precision and accuracy of a few percent for a single strong lens system [97; 87]. As strong lens systems are being discovered and followed up at an increasing pace, the overall precision achievable by combining multiple systems is improving rapidly. Progress has also been achieved in understanding systematic errors [63; 38; 7; 112; 39; 28] and how to control them so as to achieve unbiased sample averages at the level of 1-2% in accuracy and precision required to help settle the Hubble tension [9]. Time-delay cosmography has several advantages as a probe of \(H_{0}\). This method provides a one-step measurement of \(H_{0}\) that is completely independent of other \(H_{0}\) probes. The measured cosmological distances are angular diameter distances. Thus, the method is not susceptible to uncertain dust extinction laws that luminosity distance indicators could be susceptible to [31]. In this Chapter, we first provide some background in the theory and history of the method in Section 2. We then review the current state of the art in Section 3, by covering two recent case studies in some detail. First, in Sections 3.1 to 3.4 we describe the case of RXJ1131\(-\)1231, a galaxy deflector lensing a quasar, which has been modeled based on excellent data, including ground-based spatially resolved kinematics. Second, in Section 3.5 we describe the case of supernova Refsdal, multiply imaged by a foreground cluster, the first lensed supernova that has been used to measure H\({}_{0}\). We conclude with our future outlook in Section 4. Due to space limitations, we only focus on the main points currently relevant to the measurement of \(H_{0}\) and refer to previous reviews for more extensive treatments of the theory and history of the method [106; 101; 108; 8].

### 2 Background

#### Theory

This section briefly reviews the theory of time-delay cosmography.
See [84] for a detailed treatment of the strong lensing formalism.

#### Strong lensing formalism

The lensing phenomenon is described by the lens equation \[\mathbf{\beta}=\mathbf{\theta}-\mathbf{\alpha}(\mathbf{\theta}) \tag{1}\] that maps a source plane coordinate vector \(\mathbf{\beta}\) to an image plane coordinate vector \(\mathbf{\theta}\), where \(\mathbf{\alpha}(\mathbf{\theta})\) is the deflection angle vector. The lensing deflection is produced by the mass distribution between the source and the observer. In the thin lens approximation, the lensing mass distribution is described by the surface mass density \(\Sigma(\mathbf{\theta})\) projected on the lens plane. The dimensionless lensing convergence is defined as \(\kappa\equiv\Sigma/\Sigma_{\rm cr}\), where the critical density \(\Sigma_{\rm cr}\) is given by \[\Sigma_{\rm cr}\equiv\frac{c^{2}}{4\pi G}\frac{D_{\rm s}}{D_{\rm d}D_{\rm ds}}. \tag{2}\] Here, \(D_{\rm d}\) is the angular diameter distance between the observer and the deflector, \(D_{\rm s}\) is the angular diameter distance between the observer and the source, and \(D_{\rm ds}\) is the angular diameter distance between the deflector and the source. The angular diameter distance between two redshifts \(z_{1}\) and \(z_{2}\) is given by \[D_{\rm A}(z_{1},z_{2})=\frac{c}{H_{0}(1+z_{2})}f_{k}(z_{1},z_{2},\mathbf{\Theta}), \tag{3}\] where \(\mathbf{\Theta}\) is the set of cosmological parameters excluding \(H_{0}\) in a given cosmology and \(f_{k}(z_{1},z_{2},\mathbf{\Theta})\) is a function whose form depends on the sign of the curvature density \(\Omega_{k}\) [116; 72]. The deflection angle \(\mathbf{\alpha}\) is related to the convergence \(\kappa\) as \[\kappa=\frac{1}{2}\nabla\cdot\mathbf{\alpha}, \tag{4}\] and to the lensing potential \(\psi\) as \[\mathbf{\alpha}(\mathbf{\theta})=\nabla\psi(\mathbf{\theta}).
\tag{5}\] Thus, the lensing potential is connected to the surface mass distribution by the two-dimensional Poisson equation: \[\nabla^{2}\psi(\mathbf{\theta})=2\kappa(\mathbf{\theta}). \tag{6}\] We can define the Fermat potential [83; 10] as \[\tau(\mathbf{\theta},\mathbf{\beta})=\frac{(\mathbf{\theta}-\mathbf{\beta})^{2}}{2}-\psi(\mathbf{\theta}). \tag{7}\] The relative delay between photon arrival times at images A and B is given by \[\Delta t=\frac{D_{\Delta t}}{c}\left[\tau(\mathbf{\theta}_{\rm A})-\tau(\mathbf{\theta}_{\rm B})\right], \tag{8}\] where \(D_{\Delta t}\) is the "time-delay distance" [75; 97] defined as \[D_{\Delta t}\equiv(1+z_{\rm d})\frac{D_{\rm d}D_{\rm s}}{D_{\rm ds}}. \tag{9}\] The time delay \(\Delta t\) can be measured for transient sources such as supernovae (SNe) or quasars. The time-delay distance is inversely proportional to the Hubble constant \(H_{0}\). Therefore, if we can measure the time delay \(\Delta t\) and the lensing potential \(\psi\), we can measure \(H_{0}\). The two unknowns \(\Delta\psi\) and \(\beta\) in equation 8 are obtained by modeling the data. The practical aspects of lens modeling with imaging data are described in Section 3.2. The next section discusses the well-known "mass-sheet degeneracy" [MSD 32; 84]. Limiting the MSD is crucial to achieving precise and accurate H\({}_{0}\) measurements.

#### Mass-sheet degeneracy

The mass-sheet transformation (MST) of the convergence \(\kappa\) given by \[\kappa\rightarrow\kappa^{\prime}=\lambda\kappa+(1-\lambda) \tag{10}\] leaves all the imaging observables invariant with a simultaneous rescaling of the unknown source position \[\beta\rightarrow\beta^{\prime}=\lambda\beta, \tag{11}\] where \(\lambda\) is a constant. This degeneracy in the convergence \(\kappa\) and thus the potential \(\psi\) is called the mass-sheet degeneracy (MSD). The time delay \(\Delta t\) transforms under the mass-sheet transformation as \[\Delta t\rightarrow\Delta t^{\prime}=\lambda\Delta t.
\tag{12}\] Thus, the time-delay distance \(D_{\Delta t}\) and Hubble constant \(H_{0}\) inferred from the observed time-delay \(\Delta t\) will change as \[D^{\prime}_{\Delta t}=D_{\Delta t}/\lambda, \tag{13}\] \[H^{\prime}_{0}=\lambda H_{0}. \tag{14}\] In order to gain an understanding of the MST, it is useful to describe the "true" physical convergence \(\kappa_{\rm true}\) in terms of two components as \[\kappa_{\rm true}(\theta)=\kappa_{\rm int}(\theta)+\kappa_{\rm ext}, \tag{15}\] where \(\kappa_{\rm int}(\theta)\) is the convergence produced by the mass distribution of the galaxy or group or cluster acting as the main deflector, while \(\kappa_{\rm ext}\) is the convergence produced by the mass distribution not physically associated with the main deflector, e.g., along the line of sight (LOS). Since for a physical deflector \(\lim_{\theta\rightarrow\infty}\kappa_{\rm int}(\theta)=0\), we can see that \(\lim_{\theta\rightarrow\infty}\kappa_{\rm true}(\theta)=\kappa_{\rm ext}\). Due to the mass sheet degeneracy, this external convergence term cannot be constrained from the imaging of the lensing system. However, it can be estimated by comparing the statistics of LOS mass distribution between cosmological simulations and that observed using photometric and spectroscopic surveys (Section 3.3). If the external convergence \(\kappa_{\rm ext}\) is ignored during lens modeling, then the modeled convergence \(\kappa^{\prime}_{\rm model}\) will be an MST of the true convergence \(\kappa_{\rm true}\) with \(\lambda=1/(1-\kappa_{\rm ext})\) as \[\kappa^{\prime}_{\rm model}(\mathbf{\theta})=\frac{\kappa_{\rm true}(\mathbf{\theta} )}{1-\kappa_{\rm ext}}+1-\frac{1}{1-\kappa_{\rm ext}}=\frac{\kappa_{\rm int}( \mathbf{\theta})}{1-\kappa_{\rm ext}}. 
\tag{16}\] Here, the condition \(\lim_{\mathbf{\theta}\to\infty}\kappa^{\prime}_{\rm model}=0\) is satisfied, since \(\kappa^{\prime}_{\rm model}\) is only attributed to the central galaxy's or galaxies' mass distribution that ought to vanish at infinity. In practice, lens models are often described with simply parameterized profiles, which implicitly break the MSD. Therefore, the best fit simply-parametrized model \(\kappa_{\rm model}\) could be thought to be an approximate MST of \(\kappa^{\prime}_{\rm model}\) as \[\kappa^{\prime}_{\rm model}(\mathbf{\theta})=\lambda_{\rm int}(\mathbf{\theta})\kappa_{\rm model}(\mathbf{\theta})+1-\lambda_{\rm int}(\mathbf{\theta}). \tag{17}\] Here, the internal MST parameter \(\lambda_{\rm int}(\theta)\) cannot be a globally constant mass sheet if it is to satisfy \(\lim_{\mathbf{\theta}\to\infty}\kappa^{\prime}_{\rm model}=0\). A \(\lambda_{\rm int}(\mathbf{\theta})\) can be designed that acts as a constant sheet of mass near the central region of the lensing system (\(\theta\lesssim\mathcal{O}(10\theta_{\rm E})\)), but vanishes at \(\theta\gg\theta_{\rm E}\) [11; 7], where \(\theta_{\rm E}\) is the Einstein radius. As a result, the true internal mass distribution can be expressed as \[\kappa_{\rm int}(\mathbf{\theta})=(1-\kappa_{\rm ext})[\lambda_{\rm int}(\mathbf{\theta})\kappa_{\rm model}(\mathbf{\theta})+1-\lambda_{\rm int}(\mathbf{\theta})]. \tag{18}\] Thus, the true Fermat potential difference relates to the modeled Fermat potential difference as \[\Delta\tau_{\rm true}=\Delta\tau_{\rm model}\lambda_{\rm int}(1-\kappa_{\rm ext}). \tag{19}\] Here, we have expressed \(\lambda_{\rm int}\) as a constant since it is approximately a constant in the region where lensed images appear.
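Numerically, the degeneracies expressed by equations 13, 14, and 19 are pure rescalings: the two mass-sheet factors multiply directly into the inferred distance and \(H_{0}\). The Python sketch below (all numbers are hypothetical illustrations, not measurements from any lens system) computes a flat-\(\Lambda\)CDM time-delay distance and applies the combined MST rescaling:

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def comoving_distance(z, H0, Om=0.3, n=2000):
    """Line-of-sight comoving distance [Mpc] in flat LCDM, by the trapezoidal rule."""
    E = lambda zz: math.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    h = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z)) + sum(1.0 / E(i * h) for i in range(1, n))
    return (C_KMS / H0) * s * h

def angular_diameter_distance(z1, z2, H0, Om=0.3):
    """D_A(z1, z2) in a flat universe: comoving-distance difference over (1 + z2)."""
    return (comoving_distance(z2, H0, Om) - comoving_distance(z1, H0, Om)) / (1.0 + z2)

def time_delay_distance(zd, zs, H0, Om=0.3):
    """D_dt = (1 + z_d) D_d D_s / D_ds, eq. (9); scales exactly as 1/H0."""
    Dd = angular_diameter_distance(0.0, zd, H0, Om)
    Ds = angular_diameter_distance(0.0, zs, H0, Om)
    Dds = angular_diameter_distance(zd, zs, H0, Om)
    return (1.0 + zd) * Dd * Ds / Dds

# Hypothetical deflector/source redshifts; D_dt * H0 is independent of H0:
zd, zs = 0.3, 1.5
d70 = time_delay_distance(zd, zs, 70.0)
d35 = time_delay_distance(zd, zs, 35.0)
assert abs(d70 * 70.0 - d35 * 35.0) < 1e-6 * d70 * 70.0

# MST rescaling (eqs. 13-14 and 19), with hypothetical mass-sheet factors:
H0_naive = 74.0                          # value inferred ignoring both factors
kappa_ext, lambda_int = 0.05, 0.95
H0_corrected = lambda_int * (1.0 - kappa_ext) * H0_naive  # = 0.9025 * 74.0 ~ 66.8
```

In a full analysis \(f_k\) also covers non-flat cosmologies and \(H_0\) is sampled jointly with the other parameters \(\mathbf{\Theta}\); the sketch only isolates the scalings relevant to the MSD.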
Combining equations 8, 9, and 19, the measured Hubble constant from time-delay cosmography can be expressed as \[H_{0}=\frac{\Delta\tau_{\rm model}(1-\kappa_{\rm ext})\lambda_{\rm int}}{\Delta t}\,\frac{f_{k}(0,z_{\rm d},\Theta)f_{k}(0,z_{\rm s},\Theta)}{f_{k}(z_{\rm d},z_{\rm s},\Theta)}. \tag{20}\] In this formulation, the time delay \(\Delta t\) is directly measured from light curves of the images (Section 3.1), \(\Delta\tau_{\rm model}\) is obtained from lens modeling of high-resolution imaging data of the lens system (Section 3.2), \(\kappa_{\rm ext}\) is estimated from photometric and spectroscopic surveys of the lens environment (Section 3.3), and \(\lambda_{\rm int}\) is constrained from the stellar kinematics of the lens galaxy (Section 3.4). This formulation also illustrates that \(H_{0}\) measured by time-delay cosmography weakly depends on the expansion history of the Universe from redshift \(z_{\rm s}\), through redshift \(z_{\rm d}\), up to redshift \(z=0\). Usually, a cosmological model is assumed to compute the \(f_{k}\) functions, and \(H_{0}\) can be slightly degenerate with other cosmological parameters, such as the dark energy's equation of state parameter \(w\) [12; 118]. However, it is also possible to infer \(H_{0}\) by constraining the Universe's expansion history empirically, using relative distances of type Ia supernovae. This inverse distance ladder method [102] allows one to obtain \(H_{0}\) without assuming specific values for the other cosmological parameters.

#### A brief history

After Refsdal's [75] suggestion, time-delay cosmography stayed dormant until the first lensed quasars were discovered some 15 years later [115]. A period of excitement followed in the 1980s when astronomers tried to measure time delays for the first time [32]. However, determining time delays proved to be a significant challenge.
First, the stochastic nature of quasar light curves makes the measurement more difficult than with the well-behaved light curves of the supernovae originally suggested by Refsdal. Second, light curves at optical wavelengths (corresponding usually to rest frame UV or blue) are severely affected by microlensing, which further confounds the signal. Third, the typical image separation of galaxy-scale lenses is similar to the image quality of seeing-limited ground-based optical telescopes. Overcoming these challenges required monitoring campaigns with significantly higher precision per epoch, higher cadence, and longer duration than initially thought. By the end of the 1990s and early 2000s, such campaigns started to produce reliable time delays with few percent precision, both in the optical [57; 82; 70] and in the radio [33]. Once the ability to obtain time delays was demonstrated, attention turned to constraining the lensing potential. A two-pronged approach proved successful. On the one hand, improvements in data quality and lens modeling techniques allowed astronomers to capture the information content of multiply imaged extended sources, e.g., quasar host galaxies and radio jets and emissions. This step increases the constraints on the mass distribution of the deflector by several orders of magnitude compared with previous work, which relied only on the positions of the quasar images in the image plane. On the other hand, it became possible to measure the stellar kinematics of the deflector, a dynamical mass tracer [110] that is crucially insensitive to the limiting factors of lensing, chiefly the MSD. Stellar kinematics also provides an independent handle on the angular diameter distance to the deflector, further enhancing the cosmological constraints [49; 50]. Conversely, lensing is insensitive to the mass-anisotropy degeneracy affecting dynamical estimates of mass. As shown in Section 3.4, the combination of lensing and stellar dynamics is crucial to the success of this methodology [86].
The final piece of the puzzle for precision time-delay cosmography at the galaxy scale was accounting for the effects of the line of sight and the local environment [97]. Under or overdense lines of sight, nearby perturbers, multiple-plane lensing, and group/cluster scale halos can affect the inferred distances for galaxy-scale lenses by as much as 5-10% if not properly accounted for. Fittingly, 50 years after Refsdal's paper, the first multiply imaged supernova was discovered in 2014 [54]. The supernova, properly named "Refsdal", was multiply imaged by a cluster of galaxies, making the geometry and arrival time sequence significantly more complex than those observed for quasars lensed by galaxy-scale potentials. The discovered "Einstein-cross" configuration is primarily due to a cluster galaxy and has little value for cosmography since the time delays are short [77]. Additional images, however, were predicted upon discovery based on the cluster mass model [54; 68; 94]. One was in the past and thus sadly lost, but the other was predicted to re-appear approximately a year later [111], and it indeed appeared when and where the model predicted [53]. Measurements of \(H_{0}\) based on SN Refsdal have been reported [56; 55]. Additional multiply imaged supernovae have been discovered since SN Refsdal [35], and this is an area where much growth is expected in the coming decade.

### Current State of the Art

We now review the current state of the art by means of two case studies. First, in Sections 3.1 to 3.4, we describe the galaxy-scale lens RXJ1131\(-\)1231. In this system, the variable source is a quasar (Fig. 2), and it is arguably the galaxy-scale system with the highest quality dataset to date. Then, in Section 3.5, we examine the case of the SN Refsdal, lensed by cluster MACS1149.5\(+\)2223.
This is the first example of \(H_{0}\) measured via a multiply imaged supernova and presents a number of interesting differences with respect to the more established method of using quasars lensed by galaxies.

#### Time delay measurements

Time delays (i.e., \(\Delta t\) in Eq. 20) between lensed quasar images are measured using the quasar's intrinsic variability. The light curves of the individual images are measured with 1-m or 2-m class telescopes. The COSMOGRAIL collaboration [22] has monitored several lensed quasar systems, spanning up to more than a decade, providing time-delay measurements precise to a few percent [23; 103; 62]. Fig. 1 shows light curves from COSMOGRAIL for RXJ1131\(-\)1231 based on 16 years (2003-2019) of monitoring. [24] have demonstrated that robust time delays can be measured within a single season with an almost daily cadence and the millimag-level photometric precision needed to detect small-amplitude variations of the order of 10-50 millimag. Given the numerous discoveries of new lensed quasar systems from recent, ongoing, and future surveys, such a fast turnaround for time delay measurements will be essential to rapidly produce cosmological constraints. The main challenge in measuring the time delay is the extrinsic variability caused by microlensing from foreground stars. This extrinsic variability pattern is unique to each lensed image. The microlensing variability can be of two types. The first is a fast rise and fall, giving a sharp peak in the light curve. This fast variability happens when lines of formally infinite magnification [caustics, 84] from a single foreground star cross over the quasar accretion disk. The second is a long-term variability owing to overlapping caustics from crowded stars, which creates a smooth change in the microlensing magnification as the stars move in front of the background quasar due to internal and peculiar transverse motions.
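A minimal way to see how a time delay can be extracted from two light curves is a grid search over trial delays: shift one image's curve, absorb the unknown magnitude offset, and pick the delay that minimizes the residual dispersion. The sketch below is purely illustrative (synthetic, microlensing-free light curves; not the PyCS implementation); real analyses must also model the extrinsic variability discussed above.

```python
import numpy as np

def estimate_delay(t, mag_a, mag_b, trial_delays):
    """Return the trial delay that best aligns image B with image A.

    For each trial delay dt, image B is evaluated at t + dt (linear
    interpolation, restricted to epochs where no extrapolation is needed),
    the constant magnitude offset between the images is removed, and the
    mean squared residual against image A is computed.
    """
    best_delay, best_cost = trial_delays[0], np.inf
    for dt in trial_delays:
        valid = t + dt <= t[-1]                   # avoid extrapolation
        shifted_b = np.interp(t[valid] + dt, t, mag_b)
        resid = mag_a[valid] - shifted_b
        resid -= resid.mean()                     # absorb the flux ratio
        cost = np.mean(resid ** 2)
        if cost < best_cost:
            best_delay, best_cost = dt, cost
    return best_delay

# Synthetic demo: image B trails image A by 90 days and is 0.4 mag fainter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1000.0, 400)
intrinsic = lambda x: 0.3 * np.sin(2.0 * np.pi * x / 250.0)
mag_a = intrinsic(t) + 0.005 * rng.standard_normal(t.size)
mag_b = intrinsic(t - 90.0) + 0.4 + 0.005 * rng.standard_normal(t.size)

dt_est = estimate_delay(t, mag_a, mag_b, np.arange(0.0, 200.0, 1.0))
```

This toy recovers the input delay only because microlensing is absent; the techniques described below add an explicit model (splines or Gaussian processes) for that extrinsic signal.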
Techniques developed to extract the time delays from the light curves while accounting for the microlensing variability include: cross-correlation, which does not require explicit modeling of the microlensing variability [73]; and explicit modeling of the intrinsic and microlensing variabilities with spline fitting or Gaussian processes [103; 46]. The "Time Delay Challenge" (TDC) [29; 59] validated these techniques with simulated data. The PyCS software used for the COSMOGRAIL data analysis was among the techniques achieving the target precision and accuracy in the TDC.

Figure 1: Light curve of RXJ1131\(-\)1231 in the \(R\)-band from COSMOGRAIL. The bottom panels show the difference curves between pairs of images after shifting the curves by the measured delays. These difference curves illustrate the long-term extrinsic variability due to microlensing. Figure from [62].

#### Lens modeling

The main objective in lens modeling is to constrain the lensing potential that gives rise to the observed lensed images and distorted arcs from the background quasar and its host. This lensing potential provides \(\Delta\tau_{\rm model}\) in Eq. 20. Usually, high-resolution imaging from the _HST_ is used in lens modeling due to the superb stability and diffraction-limited nature of its point spread function (PSF), which is needed to resolve lensed quasar images separated by \(\sim 1^{\prime\prime}\) (e.g., Figure 2). However, adaptive-optics-assisted imaging from large ground-based telescopes, such as the NIRC2 imager at the Keck Observatory, has also been successfully modeled for cosmographic analysis after meticulous reconstruction of the PSF [19]. The lens model has four main components: (i) the lensing potential or the deflector mass distribution, (ii) the flux distribution in the deflector galaxy, (iii) the flux distribution in the quasar host galaxy, and (iv) the PSF.
Thus, the model for the imaging data can be reconstructed, from which the likelihood function \(p(\boldsymbol{d}_{\rm imaging}\mid\xi_{\rm mass},\xi_{\rm light},\xi_{\rm source },\mathcal{P})\) can be computed, where \(\boldsymbol{d}_{\rm imaging}\) is the imaging data, \(\xi_{\rm mass}\) are the mass model parameters, \(\xi_{\rm light}\) are the deflector galaxy's light model parameters, \(\xi_{\rm source}\) are the quasar host galaxy's light model parameters, and \(\mathcal{P}\) is the PSF model. The PSF \(\mathcal{P}\) can be initially estimated from a few stars within the imaging data. However, due to color mismatch between the stellar type and the quasar, spatial variations, and undersampling, this initial PSF model needs to be improved during the lens modeling to fit the pixels around the lensed quasar images to the noise level [18; 4]. The PSF usually also needs to be reconstructed at a higher pixel resolution than the original image [91; 89]. The lens model parameters \(\xi_{\rm mass}\), \(\xi_{\rm light}\), and \(\xi_{\rm source}\) are then constrained by sampling the likelihood function. The most common choices for the deflector galaxy's mass or potential model are "simply parametrized" models, where the mass distributions of the main deflector galaxy and other nearby galaxies are described with functions depending on a few free parameters. The nearby galaxies along the LOS are added to the parametric lens model when their higher-order lensing effect (i.e., flexion) is non-negligible [61; 117; 6; 88]. The lensing contribution from all the other LOS structures can be accounted for by the independently estimated external convergence term (Sec. 3.3) and an external shear component added to the lens model's parametric description. The simplest choice that yields good residuals for the simply parametrized model of galaxy-scale lenses is the elliptical power-law mass distribution with the 3D radial density profile \(\rho(r)\propto r^{-\gamma}\) [78].
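In the circular limit of this profile, the convergence has a closed form, \(\kappa(\theta)=\frac{3-\gamma}{2}(\theta_{\rm E}/\theta)^{\gamma-1}\), and the defining property of the Einstein radius (mean convergence of unity inside \(\theta_{\rm E}\)) holds for any slope \(\gamma\). The short sketch below (illustrative, not taken from any of the cited modeling codes) verifies this numerically:

```python
import numpy as np

def kappa_power_law(theta, theta_E, gamma):
    """Convergence of a circular power-law lens (3D density ~ r^-gamma)."""
    return (3.0 - gamma) / 2.0 * (theta_E / theta) ** (gamma - 1.0)

def mean_kappa_within(theta, theta_E, gamma, n=4000):
    """Mean convergence inside radius theta, by trapezoidal integration
    on a logarithmic grid (the integrand ~ theta'^(2-gamma) is integrable
    at the center for gamma < 3)."""
    tp = np.geomspace(1e-8 * theta, theta, n)
    y = kappa_power_law(tp, theta_E, gamma) * tp
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tp))
    return 2.0 * integral / theta ** 2

# By definition of the Einstein radius, the mean convergence inside it is 1
# for any slope gamma.
checks = [mean_kappa_within(1.6, 1.6, g) for g in (1.8, 2.0, 2.2)]
```

This is why \(\theta_{\rm E}\) is so robustly measured while the slope \(\gamma\), which controls how the mass is distributed around \(\theta_{\rm E}\), drives the Fermat potential differences and hence \(H_{0}\).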
The dark matter and baryonic distributions in massive galaxies are individually not power laws. Yet, a simple power-law radial mass profile has been sufficient to describe both lensing [36; 1] and non-lensing observables such as stellar dynamics [17; 27] and X-ray intensities [47]. The total density profile is well approximated by a power law close to the isothermal \(r^{-2}\), a phenomenon known as the "bulge-halo conspiracy" [107; 30]. [97] allowed departures from the power-law model using pixelated perturbations in the potential and found that large deviations (\(>2\%\)) from the power-law form were not required. An alternative choice of simply parametrized mass profile is a composite of the baryonic mass distribution that follows the observed light distribution with a spatially uniform mass-to-light ratio and the dark matter distribution described with a Navarro-Frenk-White (NFW) profile [67; 65]. The \(H_{0}\) inferred using this composite mass model is reassuringly consistent with that using the simpler power-law model [63]. The composite model with the dark matter distribution described by an NFW profile has also been found consistent for a sample of non-time-delay galaxy-galaxy lenses [30; 90]. These simply parametrized mass models are sufficient to fit the lens imaging data to the noise level and are internally consistent within a few percent for each individual system and within 1% for the combined \(H_{0}\) values from 7 systems [118; 63]. However, the true mass distribution can potentially differ from that inferred from lensing data only, owing to the mass-sheet degeneracy [90]. A physical interpretation for such a possible deviation - small and not required by the data - is shown in Fig. 3. To model the deflector light profile, one, two, or three Sersic profiles [85] are used [97; 117; 91]. The quasar host galaxy's light profile can be described on a pixellated grid with a regularization condition [98; 114].
For example, such a pixellated scheme is adopted to reconstruct the source of RXJ1131\(-\)1231 illustrated in Fig. 2. An alternative parametric approach is to adopt a basis set of a Sersic profile and shapelets (i.e., 2D Gauss-Hermite polynomials [74]), whose amplitudes are determined through linear inversion of the observed image [5]. It is often necessary to make choices in the model settings, for example, the pixel resolution of the reconstructed host galaxy. To avoid any systematic bias and account for modeling uncertainty, different models are constructed by taking a combination of the plausible choices, and then this source of error is marginalized over by combining the posteriors from all these models. Fig. 4 illustrates this marginalization over multiple lens models with differing resolutions in the host galaxy reconstruction.

#### External convergence from the LOS

The mass structures between the background source and the observer (i.e., galaxies, groups, and clusters) contribute an additional lensing effect that can modify the estimated Fermat potential difference and thus the inferred \(H_{0}\) by a few percent. This shift is estimated with the external convergence term \(\kappa_{\rm ext}\) in Eq. 20. The contributing LOS structures can be grouped into two categories: (i) for lensing effects falling outside the tidal regime, the non-linear lensing contribution needs to be taken into account by directly including their mass distributions in the lens model (described in Section 3.2), and (ii) for lensing effects falling within the tidal regime, the combined lensing effect can be accounted for with the external convergence term \(\kappa_{\rm ext}\) and an external shear term that is already included in the lens modeling. The LOS structures of the first category are typically within \(\sim\)10\({}^{\prime\prime}\) of the central deflector. The commonly used criterion to select the perturbers in this category is a "flexion shift" threshold [61].
The estimation of the flexion shift requires photometric redshifts and mass measurements of the LOS galaxies. Spectroscopic redshifts are used over photometric redshifts whenever available. If the LOS galaxies form a group (or cluster), then the group-scale (or cluster-scale) halo also needs to be explicitly modeled if it satisfies the flexion-shift criterion [52; 64; 80]. The velocity dispersions of the LOS galaxies are additionally used to infer group or cluster memberships of those galaxies [95; 14]. The external convergence for the second category of perturbers can be estimated statistically from the number counts of galaxies [34] around the lens system (usually within 120\({}^{\prime\prime}\)) or from the weak lensing effect created by these LOS perturbers on the shapes of background galaxies. The galaxy number counts, either directly or weighted by quantities that correlate with lensing strength, such as projected distance from the central deflector, the perturber's mass, and redshift [40; 79], are used as summary statistics for the lens environment. LOS cones with matching summary statistics are then selected from cosmological simulations [45; 44]. The external convergence values computed for these simulated cones then provide a probability distribution of the external convergence of the real target (e.g., as obtained by [97] for RXJ1131\(-\)1231).

Figure 2: Illustration of the lens model of RXJ1131\(-\)1231 from [97]. **Top left:** Observed _HST_ ACS/F814W image. **Top middle:** Reconstructed lensed arcs of the quasar host galaxy using the best-fit lens model. **Top right:** Reconstructed quasar images and the lens galaxy light. **Bottom left:** Total reconstructed image obtained by summing the top-middle and top-right panels. **Bottom middle:** Normalized residuals for the reconstructed image. **Bottom right:** The reconstructed quasar host galaxy morphology on the source plane.
The number counts for the observed lens environment are normalized with a large number of control fields for which photometric data of the same quality are available, and the same is done for the simulated fields. Taking these relative number counts as the summary statistics minimizes the impact of the cosmological parameters chosen in the cosmological simulation. From high-quality and wide-field imaging, the distortion in the shapes of background galaxies can be used to constrain the weak lensing shear created by the LOS structures. In the linear regime, this shear uniquely maps to external convergence [51]. [105; 104] applied this technique to provide alternative estimates of the external convergence \(\kappa_{\rm ext}\) for two systems, finding estimates in agreement with those based on galaxy number counts.

Figure 3: Physical interpretation of the residual uncertainty allowed by the mass-sheet degeneracy on the mass density profile. [88] modeled a set of non-time-delay lens galaxies with exquisite _HST_ images and unresolved stellar velocity dispersions of the deflectors, fully accounting for the mass-sheet degeneracy, and expressed the results as deviations from standard "composite" (top row) and power-law mass profiles (bottom row). The standard composite model comprising an NFW [66] dark matter halo and a stellar component with constant mass-to-light ratio is consistent with the data, although a small amount of contraction/expansion of the halo (top left panel) or a small gradient in the mass-to-light ratio (top right panel) cannot be ruled out. Similarly, a power-law mass density profile is consistent with the data, although the data cannot rule out small deviations (purple bands in the bottom panels). See [88] for more description. When available, additional information – such as spatially resolved stellar kinematics – reduces the residual freedom and thus tightens the bounds on \(H_{0}\) when applied to time-delay lenses.
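The matching procedure described above can be sketched in a few lines. The toy version below uses fabricated numbers (the coefficients and the catalog are invented for illustration, standing in for the ray-traced simulation catalogs used in practice): sightlines whose relative weighted counts match the observed field are kept, and their convergences form \(p(\kappa_{\rm ext})\), which then rescales the lensing-only distance as in Eq. 20.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulated catalog: each sightline has a true convergence kappa and a
# relative (weighted) galaxy number count that correlates with it.
# All coefficients below are invented for illustration.
n = 200_000
kappa = 0.01 + 0.025 * rng.standard_normal(n)
rel_count = 1.0 + 4.0 * kappa + 0.2 * rng.standard_normal(n)

# Keep the sightlines whose summary statistic matches the observed lens field.
observed_rel_count = 1.3          # an overdense field in this toy example
sel = np.abs(rel_count - observed_rel_count) < 0.05
kappa_ext = kappa[sel]            # samples of p(kappa_ext | environment data)

# The external convergence rescales the lensing-only distance (cf. Eq. 20);
# H0, which scales as 1/D_dt, shifts accordingly.
D_dt_model = 2000.0               # Mpc, from the lens model and time delays alone
D_dt = D_dt_model / (1.0 - np.median(kappa_ext))
```

In real analyses the full \(p(\kappa_{\rm ext})\) distribution, not just its median, is marginalized over in the cosmological inference.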
#### Stellar kinematics

Stellar kinematics traces the 3D potential of the deflector, whereas lensing traces the 2D projected potential. Thus, in combination, dynamics and lensing can break the degeneracies inherent to each method: the mass-sheet degeneracy in lensing and the mass-anisotropy degeneracy in dynamics. Therefore, stellar kinematics provides the \(\lambda_{\rm int}\) term in Eq. 20.

Figure 4: Combined posterior of the lens model parameters: power-law exponent \(\gamma\), Einstein radius \(\theta_{\rm E}\), and external shear magnitude \(\gamma_{\rm ext}\). The illustrated model-predicted time-delay distance \(D_{\Delta t}^{\rm model}\) is obtained using a fiducial cosmology from the Fermat potential difference \(\Delta\tau_{\rm model}\) predicted by the lens model parameters. Therefore, the illustrated \(D_{\Delta t}^{\rm model}\) is simply a reformulation of \(\Delta\tau_{\rm model}\), and thus the measured time delays are still necessary to measure \(H_{0}\) using the lens model posteriors. The combined posterior marginalizes over the choice of pixel resolution in the host galaxy reconstruction, where the individual posterior for each choice is illustrated with colored distributions. Figure from [100].

Owing to the angular size and faintness of the deflectors, the most common type of measurement for stellar kinematics is an unresolved (i.e., aperture-integrated) LOS velocity dispersion from long-slit spectra. These spectra are usually taken using large ground-based telescopes, for example, the Keck Observatory and the Very Large Telescope (VLT), in seeing-limited conditions in the optical. The stellar velocity dispersion is usually modeled by solving the Jeans equation [48; 16], which is derived from the collisionless Boltzmann equation [3].
The line-of-sight velocity dispersion can then be expressed, in the spherical case, as \[\sigma_{\rm los}^{2}(R)=\frac{2G}{I(R)}\int_{R}^{\infty}\mathcal{K}_{\beta} \left(\frac{r}{R}\right)\frac{l(r)M(r)}{r}\mathrm{d}r, \tag{21}\] where \(I(R)\) is the surface brightness distribution of the kinematic tracer in the deflector, \(l(r)\) is the 3D luminosity density of the same tracer, \(M(r)\) is the 3D enclosed mass, and \(\mathcal{K}_{\beta}\) is a function that depends on the anisotropy profile \(\beta(r)\) [60]. The anisotropy parameter \(\beta(r)\) is defined as \[\beta(r)\equiv 1-\frac{\sigma_{\rm t}^{2}(r)}{\sigma_{\rm r}^{2}(r)}, \tag{22}\] where \(\sigma_{\rm t}\) is the tangential component of the velocity dispersion and \(\sigma_{\rm r}\) is the radial component. In Eq. 21, the terms related to the light distribution, that is, \(I(R)\) and \(l(r)\), are well constrained from the observed light, albeit with assumptions needed to deproject the 2D \(I(R)\) into the 3D \(l(r)\). However, there is a degeneracy between the mass distribution giving \(M(r)\) and the anisotropy profile giving \(\mathcal{K}_{\beta}\), namely the mass-anisotropy degeneracy [25]. Although combining constraints from lensing and dynamics breaks the mass-sheet and mass-anisotropy degeneracies, the breaking power of an unresolved velocity dispersion is limited. In the past, the mass-sheet degeneracy was broken chiefly through the mass profile assumption (i.e., the power-law or composite form), and the stellar kinematics then provided a tighter constraint on the mass distribution with the mass-anisotropy degeneracy marginalized over [100]. In such cases, the internal mass-sheet transformation parameter \(\lambda_{\rm int}\) (in Eq. 20) was effectively set to \(\lambda_{\rm int}=1\). With these assumptions, \(H_{0}=78.3^{+3.4}_{-3.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\) was obtained for RXJ1131\(-\)1231 based on joint lens modeling constraints from _HST_ + AO imaging [96; 19].
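Eq. 21 can be evaluated numerically once \(M(r)\), \(l(r)\), and \(I(R)\) are specified. The sketch below (illustrative; not the modeling code used in the cited analyses) takes the isotropic case, where the kernel reduces to \(\mathcal{K}(u)=\sqrt{1-1/u^{2}}\) (the form given by Mamon & Lokas), and checks it against a known solution: a singular isothermal sphere traced by an \(l\propto r^{-2}\) population, for which \(\sigma_{\rm los}(R)=\sigma_{v}\) at every radius.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def sigma_los(R, M_of_r, l_of_r, I_of_R, r_max=1e6, n=40_000):
    """LOS velocity dispersion from Eq. (21) in the isotropic case
    (beta = 0), where the kernel is K(u) = sqrt(1 - 1/u^2)."""
    r = np.geomspace(R * (1.0 + 1e-6), r_max, n)  # R in kpc
    u = r / R
    kernel = np.sqrt(1.0 - 1.0 / u ** 2)
    y = kernel * l_of_r(r) * M_of_r(r) / r
    integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r))  # trapezoid rule
    return np.sqrt(2.0 * G / I_of_R(R) * integral)

# Singular isothermal sphere, M(r) = 2 sigma_v^2 r / G, traced by l ~ r^-2,
# whose projection is I(R) = pi / R (same arbitrary normalization as l).
sigma_v = 250.0  # km/s
s = sigma_los(
    200.0,
    M_of_r=lambda r: 2.0 * sigma_v ** 2 * r / G,
    l_of_r=lambda r: r ** -2.0,
    I_of_R=lambda R: np.pi / R,
)
```

The normalization of \(l\) and \(I\) cancels in Eq. 21, which is why an arbitrary normalization suffices here; anisotropic kernels replace `kernel` with the appropriate \(\mathcal{K}_{\beta}\).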
Spatially resolved velocity dispersion constrains the anisotropy more tightly than the unresolved case. Thus, spatially resolved velocity dispersion is much more effective in simultaneously breaking the above-mentioned degeneracies [86]. Furthermore, an axisymmetric mass model can be constrained based on the spatial information in the resolved kinematics, allowing one to go beyond simple spherical symmetry in the mass models [15]. The oblateness or prolateness of the 3D mass shape can potentially be constrained by the misalignment between the kinematic and photometric major axes [58]. The only time-delay lens system with spatially resolved kinematics published so far is RXJ1131\(-\)1231, based on data from the Keck Cosmic Web Imager (KCWI) on the Keck Telescope (Figs. 5, 6). Obtaining integral field spectra for kinematic measurements has been challenging from the ground, since without AO, in the seeing-limited case, the quasar contribution contaminates the central deflector's spectra, as the typical separation between the two is \(\sim 1^{\prime\prime}\). The kinematic measurement made by [92] accounted for the quasar light contamination in extracting the velocity dispersion of the deflector by forward modeling it. Combining this resolved kinematics with the _HST_ imaging, measured time delays, and estimated external convergence yields \(H_{0}=77.3^{+7.1}_{-7.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\) [92]. This value, obtained by combining resolved kinematics and mass modeling with relaxed assumptions, agrees very well with that obtained from simple parametric mass models, thus corroborating the standard assumptions made in time-delay cosmography [92; 118].

#### Supernova time delay cosmography: the "Refsdal" case study

SN Refsdal (Fig. 8; [54]) is the first multiply imaged SN with measured time delays [56; 77].
It is worth studying this case in some detail, as it provides important lessons for SN time delays, which we expect to be a major contributor to cosmography in the next decade. Before going through all the steps leading to the recent measurements of \(H_{0}\) [55], it is useful to summarize the main differences between this case and the one examined in the previous sections, representing quasars lensed by galaxy-scale deflectors:

Figure 5: Keck/KCWI integral field spectra of RXJ1131\(-\)1231. **Left:** 2D representation of the datacube obtained by summing across the wavelength dimension. The yellow contour shows the region within which spectra were extracted to measure the resolved velocity dispersion of the deflector. The grey box marks the pixel for which the observed spectrum and model fit are shown in the right-hand panel. **Right:** The observed spectrum (grey line) and the estimated deflector spectrum (orange line) after removing the modeled quasar contribution (blue line). The model for the observed spectrum, built using the X-shooter Spectral Library (XSL) with the velocity dispersion fitted, is shown with the red line. The spatially resolved velocity dispersion map is thus extracted in 41 bins within the yellow contour (Fig. 6). Figure from [92].

1. The intrinsic light curve of an SN (Fig. 8) is usually well described by a template or a low-order polynomial. This regular nature simplifies the task of obtaining a high-precision time delay with respect to the stochastic light curves of lensed quasars (Fig. 1). Extrinsic effects due to the foreground, e.g., microlensing, can affect both cases.
2. The main deflector of SN Refsdal is a cluster of galaxies. Clusters of galaxies are not dynamically relaxed, in contrast to the inner regions of massive elliptical galaxies - the typical deflectors of galaxy-scale lenses. Thus, cluster lenses are generally significantly more complex to model, both from a lensing and a dynamical point of view, with respect to galaxy-scale ones.
3. The caustics of clusters of galaxies cover a much larger solid angle on the sky than those of galaxies. Therefore, tens or even hundreds [2] of multiply imaged sources can be used to constrain the lens model of a cluster. There is generally one multiply imaged source for galaxy-scale lenses, although systems with up to a handful of them have been found [37; 87]. Note that having several families of multiply imaged sources at different redshifts helps mitigate the mass-sheet degeneracy [13; 41; 42].

SN Refsdal was first discovered [54] as an Einstein cross in images taken as part of the Grism Lens-Amplified Survey from Space (GLASS) program [109].

Figure 6: Spatially resolved kinematic maps in 41 bins for RXJ1131\(-\)1231 from Keck/KCWI IFU spectra. The top row corresponds to the LOS velocity dispersion, and the bottom row corresponds to the LOS mean velocity. The left column shows the mean values and the right column shows the uncertainties. Figure from [92].

Figure 7: Comparison of \(H_{0}\) values with various ways to break the mass-sheet degeneracy. For the seven systems analyzed by the TDCOSMO collaboration, the MSD broken by a simple parametric assumption on the mass profile (with \(\lambda_{\rm int}=1\) fixed) gives \(74.2^{+1.6}_{-1.6}\) km s\({}^{-1}\) Mpc\({}^{-1}\) (black) [63], and the MSD broken by unresolved kinematics gives \(73.3^{+5.8}_{-5.8}\) km s\({}^{-1}\) Mpc\({}^{-1}\) (emerald) [7]. For RXJ1131\(-\)1231, the MSD broken by a simple parametric mass profile assumption gives \(78.3^{+3.4}_{-3.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\) (blue) [19], and the MSD broken by spatially resolved kinematics gives \(77.1^{+7.3}_{-7.1}\) km s\({}^{-1}\) Mpc\({}^{-1}\) (red) [92]. Using spatially resolved kinematics for one system gives an uncertainty on \(H_{0}\) similar to that from seven systems with unresolved kinematics, illustrating the superior power of resolved kinematics in breaking the MSD. Figure from [92].

Figure 8: Figure from [56].
**Left:** _HST_ images of SN Refsdal, summarizing the time evolution of the phenomenon, including the original discovery of the Einstein cross in 2014 and the appearance of SX in 2015/2016. **Right:** Light curve of SN Refsdal compared to SN1987A-like light curves and polynomial fits. Figures from [56].

Although the time delays between the cross images were expected to be of order days to a week (and thus not very useful for cosmography) because of the symmetry and separation of the configuration, it was immediately recognized [54; 68; 94] that one additional and more distant image (SX) would re-appear some time in the near future, with great potential for cosmography. Considerable effort went into predicting SX's timing, position, and brightness using updated lens models based on extensive spectroscopy [111; 43] before it appeared in the sky. Indeed, SX appeared as predicted [54], helping to build confidence in the models. The good agreement was not a foregone conclusion, considering the complexity of the cluster lens models. The first estimate of the magnification ratio and of the long time delay (SX to the cross; the one with the most cosmographic constraining power) was used by one team to produce an estimate of \(H_{0}\) [113]. That study highlighted the spread between the different cluster lens models. The spread between all the possible models is not surprising, considering that many of the lensing codes employed were not designed for precision cosmology and therefore did not have the necessary numerical precision and resolution. Pixellated models and models based on heavily regularized basis sets do not, by design, have the angular resolution needed for time-delay cosmography. The resolution requirement can be understood in terms of astrometric precision: measuring the Hubble constant to a few percent precision usually requires knowing how the Fermat potential (and image positions) vary over tens of milliarcseconds [4].
Furthermore, many of the early models did not make full use of all the available information, e.g., cluster membership, stellar velocity dispersions of cluster members, and spectroscopic redshifts of multiple images. A thorough discussion of the sources of uncertainty is given by [42]. After the re-appearance, a major effort was undertaken to obtain a blind, high-precision (1.5%) measurement of the time delay [56] and fold it in with existing lens models to obtain a blind measurement of \(H_{0}\) [55]. It is crucial to stress that all the analysis choices were made blindly with respect to \(H_{0}\) (the time delay was kept blind throughout the analysis, for example) and that the analysis was not modified after unblinding (with the exception of the correction of a mistake in the sign convention on magnifications). This is a crucial step to prevent "experimenter bias," and we advocate that every cosmological measurement should be carried out blindly. As highlighted by [113], the spread between lens models is clearly significant. It should be noted, however, that they include in their analysis models that were later discovered to be affected by substantial numerical errors [55], and therefore their spread was overestimated. Two options were considered by [55] to account for the spread between models. First, it was decided to consider only the two models that are based on codes designed to do high-precision cosmography, glee [99] and glafic [69], weighted by their agreement with observables independent of \(H_{0}\), yielding \(H_{0}=66.6^{+4.1}_{-3.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\).

Figure 9: **Left:** Unblinded posterior distribution function of \(H_{0}\) based on supernova Refsdal. The lens models have been weighted according to their agreement with the \(H_{0}\)-independent observables, such as image positions and magnification ratios. **Right:** Error budget of the \(H_{0}\) measurement. Figures from [55].
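The weighting of models by their agreement with \(H_{0}\)-independent observables can be illustrated with a toy posterior combination (all numbers below are invented; the actual analysis weighted full model posteriors by their match to image positions and magnification ratios):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy per-model H0 posterior samples (km/s/Mpc) and each model's goodness of
# match to H0-independent observables, expressed here as a chi^2.
models = {
    "A": (rng.normal(66.5, 3.5, 50_000), 4.0),
    "B": (rng.normal(67.5, 4.0, 50_000), 5.0),
    "C": (rng.normal(72.0, 8.0, 50_000), 30.0),  # poor match, downweighted
}

weights = {k: np.exp(-0.5 * chi2) for k, (_, chi2) in models.items()}
total = sum(weights.values())

# Draw a combined posterior by resampling each model in proportion to its weight.
combined = np.concatenate([
    rng.choice(samples, size=int(100_000 * weights[k] / total))
    for k, (samples, _) in models.items()
])
lo, med, hi = np.percentile(combined, [16, 50, 84])
```

Models that match the observables poorly receive exponentially small weight, so they contribute little to the combined posterior, which is why the weighted and model-selected results quoted below are so similar.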
Alternatively, an analysis considering all models was run, again weighted by the same scheme, finding \(H_{0}=64.8^{+4.3}_{-4.3}\) km s\({}^{-1}\) Mpc\({}^{-1}\). The results are very similar between the two schemes because the glee and glafic models provide, by far, the best match to the observables. If one prefers not to weight the models according to observables, combining the glee and glafic models with equal weight yields \(H_{0}=67.2^{+4.1}_{-3.7}\) km s\({}^{-1}\) Mpc\({}^{-1}\), again consistent within the errors. The measurement from SN Refsdal [56] is not precise enough to help solve the "Hubble tension." Given the uncertainties [42; 56], it is consistent with both _Planck_ and the local distance ladder method [76]. The latter is only \(1.5\sigma\) away from the best fit, thus consistent with the SN Refsdal analysis. Two important lessons emerge from SN Refsdal. First, the blind efforts succeeded to a degree that only the most optimistic practitioners of cluster lens modeling would have expected. SX appeared exactly where and when it was predicted to, and the inferred value of \(H_{0}\) is perfectly consistent with those of other methods. It did not have to be this way, considering the complexity of the mass distribution in MACS1149.5\(+\)2223. SX could have appeared elsewhere or at a different time, and \(H_{0}\) could have turned out to be 30 or 100 km s\({}^{-1}\) Mpc\({}^{-1}\). Second, 6% is a very small uncertainty for an absolute distance measurement from a single system, and compelling arguments show that it is not significantly underestimated [42; 55] when the data quality is as high as in this case. A simple \(\sqrt{N}\) scaling suggests that ten systems will suffice to reach \(\sim 2\%\) precision and thus contribute to solving the Hubble tension, if no yet-to-be-discovered systematic floor arises.

### Conclusions and future outlook

The two recent case studies we chose to highlight represent genuine breakthroughs.
We will now discuss them in the context of other measurements to give a sense of the landscape (a selection of measurements is summarized in Figure 10). RXJ1131\(-\)1231 is the first galaxy-scale lens with a time delay and spatially resolved kinematics. The exquisite data for this system enabled [92] to reach 9% precision from a single lens, using what we call "conservative" assumptions about the mass distribution, i.e., allowing the mass-sheet degeneracy to be constrained only by kinematic data, and thus allowing maximum uncertainty on \(H_{0}\). This is remarkable, as the precision is comparable to what was obtained by [7] with seven lenses for which only unresolved kinematics were available. With the higher angular resolution attainable with JWST or with future instruments with adaptive optics, we can expect the precision per system to reach \(\sim 4\%\) [119], better than what was achieved in the past for individual systems using the more "assertive" approach of breaking the mass-sheet degeneracy by imposing a functional form for the mass density profile [88]. With this kind of precision, reaching the 2% precision previously achieved under "assertive" assumptions by [118; 63] seems within reach under "conservative" assumptions with the systems known today [9]. "Free form" models, that is, the ones in which the surface mass density or potential of the main deflector is rendered as pixels (see [26; 71; 20; 81]), should be able to obtain similar precision if they can be constrained by the full, high information content of data with state-of-the-art quality. This has not been achieved yet but should be attainable. Along the way, work will need to continue in order to uncover and mitigate new sources of systematic errors, including those arising from selection effects [21]. The success of MACS1149.5\(+\)2223 paves the way for lensed SNe to become a major contributor to time-delay cosmography.
A single system with excellent data quality, modeled in an "assertive" way, obtains a 6% precision on \(H_{0}\), comparable to the average of the time-delay quasars analyzed by [118]. With major synoptic surveys such as _Euclid_ and the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST) about to begin, we are confident that many such systems will be discovered in the coming decade [69]. Hopefully, samples of lensed SNe will soon reach comparable precision (or better!) to lensed quasars, thus providing a vital sanity check and increased overall precision with respect to quasar-only forecasts such as those presented by [108], getting us closer to solving the "Hubble Tension."

###### Acknowledgements.

TT and AJS thank their many colleagues and friends working in the field of time-delay cosmography, without whom the progress described here would have been impossible. We thank Fred Courbin, Pat Kelly, Veronica Motta, and Paul Schechter for their constructive comments on an early draft of this manuscript. TT gratefully acknowledges support by the National Science Foundation, the National Aeronautics and Space Administration, the Packard Foundation, and the Moore Foundation. Support for this work was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51492 awarded to AJS by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.

Figure 10: Comparison of \(H_{0}\) measurements based on time-delay cosmography, in \(\Lambda\)CDM cosmology. The measurements are grouped by: i) the lensing configuration (galaxy+QSO vs. cluster+SN); ii) assumptions on the mass distribution of the main deflector, "assertive" and "conservative" for simply parametrized models or "free form" for pixellated models;
iii) Amount of information used per lens; in the case of a galaxy-scale lens, "low info" utilizes quasar positions and time delays, while "high info" adds the extended surface brightness distribution of the multiple images of the quasar host galaxy, stellar kinematics of the main deflector, and number counts or weak lensing to estimate the line-of-sight convergence. For the cluster+SN case, we define it as high info due to the large number of spectroscopically confirmed multiple images and cluster members, and as assertive because the GLEE and glafic models used for this measurement are based on simply parametrized forms. We give the reference and the number of time-delay lenses for each measurement. The measurements shown in red have been blinded to prevent experimenter bias. The figure is updated from [108].
2304.12738
Variable stars detection in the field of open cluster NGC 188
This work presents a charge-coupled device (CCD) photometric survey of the old open cluster NGC 188. Time-series V-band photometric observations were conducted for ten nights in January 2017 using the Nanshan One-meter Wide-field Telescope (NOWT) to search for variable stars in the field of the cluster. A total of 25 variable stars, including one new variable star, were detected in the target field. Among the detected variables, 16 are cluster member stars, and the others are identified as field stars. The periods, radial velocities, effective temperatures, and classifications of the detected variables are discussed in this work. Most of the stars' effective temperatures are between 4200 K and 6600 K, indicating their spectral types are G or K. The newly discovered variable is probably a W UMa system. In this study, a known cluster variable star (V21 = V0769 Cep) is classified as an EA-type variable star based on the presence of a 0.5 magnitude eclipse in its light curve.
Fang-Fang Song, Hu-Biao Niu, Ali Esamdin, Yu Zhang, Xiang-Yun Zeng
2023-04-25T11:27:43Z
http://arxiv.org/abs/2304.12738v1
# Variable stars detection in the field of open cluster NGC 188 ###### Abstract This work presents a charge-coupled device (CCD) photometric survey of the old open cluster NGC 188. Time-series V-band photometric observations were conducted for ten nights in January 2017 using the Nanshan One-meter Wide-field Telescope (NOWT) to search for variable stars in the field of the cluster. A total of 25 variable stars, including one new variable star, were detected in the target field. Among the detected variables, 16 are cluster member stars, and the others are identified as field stars. The periods, radial velocities, effective temperatures, and classifications of the detected variables are discussed in this work. Most of the stars' effective temperatures are between 4200 K and 6600 K, indicating their spectral types are G or K. The newly discovered variable is probably a W UMa system. In this study, a known cluster variable star (V21 = V0769 Cep) is classified as an EA-type variable star based on the presence of a 0.5 magnitude eclipse in its light curve. Galaxy -- open cluster: individual: NGC 188 -- stars: variables: general -- technique: photometric -- method: data analysis ## 1 Introduction Open clusters (OCs), composed of coeval stars, are crucial for studying the formation and evolution of stars and the Galactic disk (Piskunov et al. 2006; Ozeren et al. 2014; Gillen et al. 2020). Because the cluster members, born from the same interstellar cloud, are assumed to have a common age, distance, reddening, and chemical abundance, studies of cluster variables provide significant clues for probing the structure and evolution of stars and clusters. The high precision of the astrometric and photometric measurements by Gaia gives more opportunities to separate the cluster member stars from the field stars, which is great progress for the study of variable stars in open clusters (Gaia Collaboration et al. 
2016; Cantat-Gaudin et al. 2020). As one of the most ancient, rich open clusters known in our Milky Way, NGC 188 (l = 122.843 deg, b = +22.384 deg; C 0039+850) is a captivating open cluster that has been intensively studied (Sarajedini et al. 1999; Friel, Jacobson, & Pilachowski 2010; Hills et al. 2015; Cohen, Geller, & von Hippel 2020). It is an excellent laboratory owing to its abundant member stars, ease of observation, and low contamination by field stars thanks to its special location at high latitude, far away from the Galactic disk (Bonatto, Bica, & Santos 2005; Fornal et al. 2007; Wang et al. 2015). A total of 857 cluster member stars with membership probabilities over 70% were identified by Cantat-Gaudin et al. (2020). Lists of the previous observational surveys of the cluster are given in Fornal et al. (2007); Geller et al. (2008); Wang et al. (2015). Various works on the fundamental parameters of NGC 188 have been performed after the release of the Gaia data, which provide parallax measurements of unprecedented precision (Cantat-Gaudin et al. 2020; Monteiro et al. 2020; Bonatto 2019; Gaia Collaboration et al. 2018). Taking the parameters fitted by Cantat-Gaudin et al. (2020), the basic cluster parameters of NGC 188 are as follows: distance modulus 11.15 mag, corresponding to a distance of 1698 pc; \(\log(Age)=9.85\) yr; and extinction \(A_{V}=0.21\) mag. NGC 188 is also notable for its abundant and varied variable stars. Early in the 1960s, four short-period W UMa stars (known as EQ Cep, ER Cep, ES Cep, EP Cep) and a suspected variable NSV 395 were discovered by Hoffmeister (1964). Kaluzny & Shara (1987) and Kaluzny (1990) discovered another 7 short-period variables through CCD photometric surveys using the 0.9 m telescope at Kitt Peak National Observatory (KPNO). Subsequently, Xin, Zhang, & Deng (2002) and Zhang et al. 
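The quoted distance follows from the distance modulus via \(m-M=5\log_{10}(d/10\,\mathrm{pc})\); a quick consistency check (a sketch, not part of the original analysis):

```python
def distance_pc(distance_modulus_mag):
    """Invert m - M = 5*log10(d / 10 pc) to get the distance in parsecs."""
    return 10 ** (distance_modulus_mag / 5.0 + 1.0)

# Distance modulus of 11.15 mag quoted for NGC 188:
print(round(distance_pc(11.15)))  # 1698
```

This reproduces the 1698 pc distance adopted from Cantat-Gaudin et al. (2020).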
(2004) yielded sixteen variables by monitoring the cluster in a 1 \(deg^{2}\) field with the 60/90-cm Schmidt telescope located at the Xinglong Station of the National Astronomical Observatories of the Chinese Academy of Sciences. Meanwhile, Kafka & Honeycutt (2003) reported 51 faint variables in the cluster's central \(17\times 17\ arcmin^{2}\) area with the WIYN 3.5-m telescope, but only two of these variables were detected in subsequent monitoring by other telescopes. Another 18 variables were discovered by Mochejska et al. (2008) while searching for transiting planets in the open cluster NGC 188, as part of the Planets in Stellar Clusters Extensive Search (PISCES) project. In a field of \(1.5\times 1.5\ deg^{2}\) around the cluster, 18 new variable stars were identified by the MASTER series robotic telescope at the Astronomical Observatory of Ural Federal University (Popov et al. 2013). These variable stars were included in the electronic catalog of the American Association of Variable Star Observers (AAVSO/VSX1), in which 68 variable stars are known within a 30 arcmin radius around NGC 188. The Gaia collaboration identified another 61 variable stars based on the Gaia DR3 database (Gaia Collaboration 2022). In total, 129 variables have been collected in this field. Footnote 1: [http://www.aavso.org/vxs/](http://www.aavso.org/vxs/) This paper is structured as follows. In Section 2 the observations and data reductions are presented. We focus on variable identification and memberships in Section 3. In Section 4 we compare our results with previous works and emphasize the new variable star. The conclusions are given in Section 5. ## 2 Observations and Data Reductions In January 2017, ten nights of photometric observations were conducted using the Nanshan One-meter Wide-field Telescope (NOWT; Song et al. (2016); Ma et al. (2018); Bai et al. 
(2020)) located at the Nanshan station of Xinjiang Astronomical Observatory (XAO), Chinese Academy of Sciences (CAS). An E2V CCD203-82 (blue) chip CCD camera with 4096 \(\times\) 4136 pixels was mounted at the prime focus of the telescope, providing a field of view (FOV) of 1.3 \(\times\) 1.3 \(deg^{2}\). The telescope was equipped with a Johnson-Cousins standard UBVRI filter system for broadband photometry and operated at -120\({}^{\circ}\)C with liquid nitrogen cooling. During the observations, a number of bias and twilight flat frames were taken, and the seeing in all of these images was below 2.2 arcsec. In total, over 79 hours of useful data (4198 images) were obtained over 10 nights. The journal of the observations of NGC 188 is listed in Table 1. The exposure time of the time-series observations in the V band is 35 s, and the typical field of view in this period is 54.62 \(\times\) 45.2 \(arcmin^{2}\), corresponding to 2900 \(\times\) 2400 pixels. The observed field of NGC 188 is shown in Figure 1. The data reduction steps followed the standard procedures employed for optical CCD aperture photometry. The observed images were pre-processed with IRAF2 for overscan and bias subtraction and flat-field correction. The dark correction was ignored because the telescope was operated in a low-temperature environment and the thermal noise was less than 1 e \(pix^{-1}h^{-1}\). 
The instrumental magnitudes of the stars in each frame were extracted with the automated photometric software SExtractor (Bertin & Arnouts, 1996), and the equatorial coordinates (RA, DEC) of each detected star were computed by triangular matching with the UCAC4 catalog (Zacharias et al., 2013). \begin{table} \begin{tabular}{c c c c c} \hline \hline UT date & Object & Pixel & Duration & V \\ & & & (Hour) & (N\(\times\)Exp) \\ \hline 05 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 1.2 & 38 \(\times\) 35s \\ 06 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 1.8 & 58 \(\times\) 35s \\ 07 Jan 2017 & NGC 188 & 4096\(\times\)4136 & 5.1 & 171 \(\times\) 35s \\ & & 2900\(\times\)2400 & 2.6 & 110 \(\times\) 35s \\ 08 Jan 2017 & NGC 188 & 2600\(\times\)2400 & 4.5 & 267 \(\times\) 35s \\ & & 2900\(\times\)2400 & 5.4 & 290 \(\times\) 35s \\ 09 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 12.5 & 649 \(\times\) 35s \\ 10 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 12.5 & 657 \(\times\) 35s \\ 11 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 11.0 & 598 \(\times\) 35s \\ 12 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 0.7 & 67 \(\times\) 35s \\ 13 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 9.7 & 502 \(\times\) 35s \\ 14 Jan 2017 & NGC 188 & 2900\(\times\)2400 & 12.5 & 666 \(\times\) 35s \\ \hline \end{tabular} \end{table} Table 1: Journal of CCD photometric observations for NGC 188, where N represents the number of images and Exp represents the exposure time for each filter. Figure 1: The field of NGC 188 observed using the Nanshan One-meter Wide-field Telescope (NOWT), with the detected variable stars marked by red circles. North is up and east is to the left. Figure 2 (a) presents the photometric errors of this work as a function of instrumental magnitude in the V band, which indicates that the photometric errors are less than 0.1 mag for stars brighter than 17 mag. Following the same procedure described in Song et al. (2016); Ma et al. (2018); Li et al. 
(2021), the differential light curves of 3585 stars were obtained using the data processing system of our XAO time-domain survey pipeline. ## 3 Variables Identification We examined the light curves of all detected stars by visual inspection and found 25 stars with obvious light variability. These variables were carefully examined for any blending or contamination by neighboring stars. The main characteristics of the 25 variable stars are given in Table 2. The finding chart of these variable stars in the field of the cluster is shown in Figure 1. These stars are not at the edges of the CCD frames, and their light curves do not have outliers. The light variations of these variables can be seen in Figures 3 and 4. To investigate the spread in data points for variable stars, Figure 2 (b) shows the root-mean-square (RMS, labeled as \(\sigma\)) scatter as a function of instrumental V magnitude. It indicates that, in general, the detected variable stars (red dots) have a larger standard deviation in magnitude compared with non-variable stars. The periods of the periodic variable stars were obtained with the Generalized Lomb-Scargle periodogram (GLS; Zechmeister & Kürster (2009)), taking the effect of noise into account, implemented as a sub-package of PyAstronomy (Czesla et al., 2019). The generalized Lomb-Scargle periodogram computes an error-weighted periodogram and provides more accurate frequencies than the classical Lomb-Scargle periodogram. Figure 2: Panel (a) represents the photometric errors of this work as a function of instrumental magnitude in the V band, while Panel (b) is the standard deviation as a function of magnitude. The red dots represent the variable stars identified in this work. The adopted periods correspond to the frequency of maximum power, and the accuracies of those values are influenced by the mean magnitude, amplitude, and measurement errors. 
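The floating-mean least-squares fit at the heart of a generalized Lomb-Scargle periodogram can be sketched in a few lines. This is a simplified, unweighted stand-in for the PyAstronomy GLS implementation actually used in the paper; the `gls_power` function, frequency grid, and synthetic light curve below are illustrative assumptions:

```python
import numpy as np

def gls_power(t, y, freqs):
    """Floating-mean (generalized) Lomb-Scargle power: at each trial
    frequency, least-squares fit y ~ a*cos(wt) + b*sin(wt) + c and report
    the fractional chi^2 reduction relative to a constant-only fit."""
    chi2_ref = np.sum((y - y.mean()) ** 2)
    power = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        power[i] = 1.0 - np.sum((y - A @ coef) ** 2) / chi2_ref
    return power

# Recover the period of a noisy sinusoid sampled unevenly over ~3 nights.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3.0, 300))
y = 0.2 * np.sin(2 * np.pi * t / 0.3) + 0.02 * rng.normal(size=300)
freqs = np.linspace(0.5, 10.0, 2000)            # cycles per day
best_period = 1.0 / freqs[np.argmax(gls_power(t, y, freqs))]
print(round(best_period, 3))  # ~0.3 d
```

The error-weighted version used by GLS replaces the plain sums with inverse-variance-weighted ones, which is what makes the recovered frequencies more accurate for heteroscedastic photometry.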
To avoid false variability arising solely from the aliasing effect, we rechecked all the calculated periods, and none of them is an alias of one day. After binning in intervals of 0.01 in phase, we calculated the mean magnitude in each bin. The resulting phase-folded light curves of the periodic variables are shown in Figure 3. Our data are insufficient to yield accurate periods for the other 7 long-period variable stars, and the light curves of these variables are shown in Figure 4. For the amplitudes of the periodic variables, the phased data were first sorted by phase and then averaged with a sampling interval of 60 data points using the moving-average method (Shan et al. 2022). The amplitude of the variation of each periodic variable was then calculated by subtracting the minimum of the smoothed curve from its maximum. We applied almost the same operation to the amplitudes of the other detected variables, except that the data were sorted by Julian Date. Bai et al. (2019) derived a stellar effective temperature regression for the second Gaia data release by applying a supervised machine-learning algorithm to stars from four spectroscopic surveys: the Large Sky Area Multi-Object Fiber Spectroscopic Telescope, the Sloan Extension for Galactic Understanding and Exploration, the Apache Point Observatory Galactic Evolution Experiment, and the RAdial Velocity Experiment. We obtained the effective temperatures of most variables by cross-matching coordinates with Bai et al. (2019); V11 and V15 have no effective temperatures listed because they do not meet the source selection criteria of Bai et al. (2019). The effective temperatures of these variable stars range from 4200 to 6700 K, corresponding to spectral types K to F. 
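The amplitude-measurement recipe described above (phase-fold, sort by phase, 60-point moving average, maximum minus minimum) can be sketched as follows; the `folded_amplitude` helper and the synthetic light curve are illustrative, not the authors' code:

```python
import numpy as np

def folded_amplitude(t, mag, period, window=60):
    """Phase-fold, sort by phase, smooth with a moving average of `window`
    points (as in Shan et al. 2022), and return max - min of the smoothed
    curve as the variability amplitude."""
    phase = (t / period) % 1.0
    m = mag[np.argsort(phase)]
    smoothed = np.convolve(m, np.ones(window) / window, mode="valid")
    return smoothed.max() - smoothed.min()

# Synthetic W UMa-like light curve: 0.5 mag peak-to-peak, P = 0.29 d.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3.0, 1500))
mag = 16.5 + 0.25 * np.sin(2 * np.pi * t / 0.29) + 0.03 * rng.normal(size=1500)
print(round(folded_amplitude(t, mag, 0.29), 2))  # ~0.5 mag
```

The moving average suppresses the per-point photometric scatter by roughly \(\sqrt{60}\), so the max-minus-min of the smoothed curve tracks the true peak-to-peak amplitude rather than the noise envelope.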
The identified variables were cross-checked with the International Variable Star Index (VSX3) and the catalogue of Gaia DR3 Part 4 Variability (Gaia Collaboration 2022); most variables match the online catalogs except one star, V18, which implies that only V18 in our sample is a new discovery. The periods, amplitudes, and light-curve shapes of the known stars produced by this work are mostly consistent with those given on the VSX website. We found that an eclipse of about 0.5 mag appears in the light curve of V21, which implies that it could be an Algol-type eclipsing binary rather than the BY Draconis-type variable star reported by Mochejska et al. (2008). Footnote 3: [http://www.aavso.org/vxs/](http://www.aavso.org/vxs/) ### Cluster membership of the detected variables To identify the cluster membership of the detected variable stars, we cross-matched the coordinates of our variable stars with Cantat-Gaudin et al. (2020), which provided 857 cluster members of the open cluster NGC 188 with membership probabilities over 70%, obtained with the membership assignment code Unsupervised Photometric Membership Assignment in Stellar Clusters (UPMASK, Krone-Martins & Moitinho (2014)) based on the Gaia DR2 database (Evans et al. 2018). Fourteen variables are identified as cluster members with 100% membership probabilities, and the others are probable field stars given their larger proper motions in the declination direction compared with cluster members. Tarricq et al. (2022) point out that Cantat-Gaudin et al. (2018) might miss cluster members in the peripheral regions of OCs. In order not to miss cluster member variables, we took advantage of Gaia DR3 (Gaia Collaboration 2022), which provides finer astrometric precision than Gaia DR2, to revisit the memberships of the detected variable stars. 
First, we queried a cone of 30 arcmin radius around the cluster centre, keeping sources with non-zero astrometric and photometric parameters and errors in G mag smaller than 0.005, to create the Basic Sources. Figure 5 (a) presents the spatial positions of the Basic Sources and the 25 variable stars. The Basic Sources and variables are represented by grey and red/blue dots, respectively. The cluster members are clearly concentrated in the spatial center. In proper motion space, as shown in Figure 5 (b), we set a blue circular region centered on \((pmRA,pmDE)=(-2.307,-0.960)~{}mas/yr\) with radius \(1.0~{}mas/yr\) as the selection criterion; fifteen variable stars fall within this circle and are labeled with red dots, including all fourteen variables identified in the preceding paragraph plus V9. We then checked all the stellar parameters of the fifteen stars in the other panels of Figure 5. In Figure 5 (a), most red dots are concentrated in the center of the panel; V22 and V9 are two slightly distant dots. Figure 5 (c) is the histogram of parallax (\(\omega\)), and Figure 5 (d) presents the observed color-magnitude diagram without reddening considered. None of the fifteen stars can be excluded as non-members. To confirm the membership of V9, we checked the catalog of Cantat-Gaudin et al. (2018), which provided membership probabilities for 883 sources in the field of NGC 188 using UPMASK, and found no records. However, Platais et al. (2003) classified V9 as a possible member star (with a probability of 73%) using the astrometry of the Tycho-2 catalog. The absolute magnitude \(M_{G}\) versus intrinsic color \(G_{BP}-G_{RP}\) CMD of the open cluster NGC 188 is shown in Figure 6. The absolute magnitude \(M_{G}\) is computed from the observed magnitude Gmag and the distance of each member star. The distances are taken from Bailer-Jones et al. (2021), estimated by probabilistic methods using a 3D a priori model of the Galaxy based on Gaia EDR3 data. 
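The proper-motion selection criterion can be expressed directly as a circular cut in proper-motion space. The example values below are the catalogued proper motions of V1 (a member) and V7 (a field star) from Table 2; the helper function is a sketch:

```python
import math

PM_CENTER = (-2.307, -0.960)  # mas/yr, adopted cluster mean proper motion
PM_RADIUS = 1.0               # mas/yr, selection radius used in Fig. 5(b)

def is_pm_member(pmra, pmde):
    """Circular cut in proper-motion space used to flag candidate members."""
    return math.hypot(pmra - PM_CENTER[0], pmde - PM_CENTER[1]) <= PM_RADIUS

print(is_pm_member(-2.47, -0.84))  # V1: True
print(is_pm_member(6.05, 4.07))    # V7: False
```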
We calculated the extinction coefficients \(A_{G}\), \(A_{BP}\), and \(A_{RP}\) as follows: \[A_{M}/A_{V}=c_{1M}+c_{2M}(G_{BP}-G_{RP})+c_{3M}(G_{BP}-G_{RP})^{2}+c_{4M}(G_{BP}-G_{RP})^{3}+c_{5M}A_{V}+c_{6M}A_{V}^{2}+c_{7M}(G_{BP}-G_{RP})A_{V} \tag{1}\] This is the transformation relation for the Gaia bands defined by Gaia Collaboration et al. (2018), where M stands for the G, BP, or RP band and \(c_{1...7M}\) represents a set of coefficients. Table 2: Main characteristics of the 25 detected variable stars: ID, coordinates (J2000), G magnitude, BP-RP color, distance, parallax, proper motions (pmRA, pmDE), amplitude, period, cluster membership, effective temperature \(T_{eff}\), and variability type. The Padova theoretical isochrone (Bressan et al. 2012) is represented by a black solid line in Figure 6. 
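Equation (1) is straightforward to evaluate once a coefficient set is chosen. The sketch below uses hypothetical placeholder coefficients, not the published Gaia Collaboration et al. (2018) values, purely to illustrate the functional form:

```python
def a_over_av(bp_rp, av, c):
    """Evaluate Eq. (1): A_M / A_V as a polynomial in (G_BP - G_RP) and A_V.
    `c` = (c1, ..., c7) for one band M."""
    c1, c2, c3, c4, c5, c6, c7 = c
    return (c1 + c2 * bp_rp + c3 * bp_rp**2 + c4 * bp_rp**3
            + c5 * av + c6 * av**2 + c7 * bp_rp * av)

# Hypothetical G-band coefficient set, for illustration only:
c_G = (0.98, -0.17, 0.01, 0.001, -0.04, 0.001, 0.01)

# A_G for a star with BP-RP = 1.0 at the cluster's A_V = 0.21 mag:
print(round(0.21 * a_over_av(1.0, 0.21, c_G), 3))  # 0.171
```

With the real published coefficients, the same call would yield the \(A_{G}\), \(A_{BP}\), and \(A_{RP}\) values used to deredden the CMD.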
For the metallicity, we adopted the value \([Fe/H]=+0.064\pm 0.018\) dex derived from WIYN/Hydra spectra by Sun et al. (2022). The other adopted cluster parameters are taken from Cantat-Gaudin et al. (2020) (log(t) = 9.85 yr, Z = 0.0152, V - \(M_{V}\) = 11.15 mag, \(A_{V}\) = 0.21 mag). ## 4 Results and Discussion ### Compared to Previous Works The four well-known W UMa variable stars identified by Hoffmeister (1964), V1 - V4, are easily detected in our observations. The suspected variable NSV 395 was not included in our field of view. All the variables identified by Kaluzny & Shara (1987) were detected in our observations, but only the light curves of three variables (V5, V6, V12) showed large enough variability. NSV 15158 (V25) was confirmed in this work. Figure 3: The phased light curves of periodic variable stars. The variations of the variables discovered by Xin, Zhang, & Deng (2002) were also detected in this work, except for two outside the field of view. None of the faint variables found by Kafka & Honeycutt (2003) were confirmed in our catalog. Among the eight variables identified by Zhang et al. (2004), three were outside the field of view and three showed no variations in their light curves; only two variables (V9, V15) were confirmed in this work. It is difficult to observe the variations of the BY Draconis-type variables discovered by Mochejska et al. (2008) due to the telescope's limitations. The eclipsing binary (V13) and one BY Draconis-type star (V21) of Mochejska et al. (2008) were detected in this work. Among the eighteen variables of Popov et al. (2013), eleven were outside the field of view, and the light curves of two were flat in our observations; the other five were confirmed in this work. The four suspected variables lie within our observed field, but NSV 15164 is saturated and the light curves of the other three show no changes. 
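The absolute magnitudes plotted in the CMD of Figure 6 follow from \(M_{G}=G-5\log_{10}(d/10\,\mathrm{pc})-A_{G}\). A minimal sketch, using V1's catalogued G magnitude and distance from Table 2 and ignoring extinction for simplicity:

```python
import math

def abs_mag_g(g_mag, distance_pc, a_g=0.0):
    """M_G = G - 5*log10(d / 10 pc) - A_G."""
    return g_mag - 5.0 * math.log10(distance_pc / 10.0) - a_g

# V1: G = 16.525 mag at d = 1587 pc (Table 2), extinction ignored here.
print(round(abs_mag_g(16.525, 1587.0), 2))  # 5.52
```

In the actual CMD, the small \(A_{G}\) term from Eq. (1) is subtracted as well before comparing with the Padova isochrone.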
None of the variables discovered by the TAROT Suspected Variable Star Catalog (TSVSC1) are confirmed by this work. Of the two variables detected by ASAS-SN, the detached eclipsing binary (V20) was detected, but no variations were found in the light curve of the other one in this work. All 24 detected known variables are listed in Table 2. Among the 24 detected known variables in the field of NGC 188, well-phased light curves were recorded for the first 17 variables listed in Table 2, as shown in Figure 3. As shown in Figure 4, the observational durations of the other seven variables are not long enough to determine their periods. All the memberships of the detected variables are listed in Table 2. We did not further investigate known variable stars whose data are consistent with previous conclusions. V16 and V17 were found to be variable stars by Popov et al. (2013). Because of the low amplitudes of their light variations, the classifications of these two stars remain unknown. V16 is a certain foreground field star and is a spectroscopic variable star classified by Tian et al. (2020) based on the data of LAMOST DR4. V17 is a certain blue straggler star (BSS) based on its location in the CMD, and it was classified as a single-lined BSS with rapid rotation by Geller et al. (2009). V21 was classified as a BY Draconis-type variable by Mochejska et al. (2008). It drew our attention because an eclipse of about 0.5 mag appeared in its light curve in our observations, which implies that it might instead be an Algol-type eclipsing binary. The location of this star in our absolute \(M_{G}\) versus \(G_{BP}-G_{RP}\) CMD also reinforces this view. More observations of this star are needed to determine its period and properties. Figure 4: The light curves of long-period variable stars. ### Classification of New Variables One new periodic variable star (V18) is identified in this work. 
The phased light curves are shown in Figure 3, and the basic parameters of the variable are listed in Table 2. As discussed above, this is a field star with definite brightness variations. Based on its distance, V18 is a field star mixed with the cluster members. We considered the period, amplitude, light-curve shape, effective temperature, and position on the CMD to classify the variable stars obtained in this study. We checked the star's position on the CMD, compared it with the statistical positions of periodic variable stars on the CMDs given by Gaia Collaboration et al. (2019), and found that it could be a W UMa binary. ## 5 Conclusions In this paper, we have presented a time-series V-band photometric survey of the open cluster NGC 188, with particular emphasis on variable stars. The results of this study are the following: i). We detected 25 variable stars in a \(55\times 45\ arcmin^{2}\) field of view around the cluster, including one new variable. Their memberships are determined following CG20 and reconfirmed with Gaia DR3. Most results are consistent with CG20, except V9, which is a possible cluster member in our analysis. Our results suggest that 15 variables are cluster members while the other 10 stars belong to the field star population. ii). Based on the behaviors and periods of the light curves as well as their positions on the CMD, we discussed the classifications of the 25 variable stars. Most results for the known variables coincide with the VSX catalog, except V21 (V0769 Cep), which is more likely an EA-type eclipsing binary than a BY Draconis-type variable star. Figure 5: (a) spatial distribution of the stars in the field of NGC 188 and the 25 variable stars; (b) proper motion distribution; (c) histogram of parallax (\(\omega\)); (d) observed color-magnitude diagram without reddening considered. In the panels, light grey dots represent the Basic Sources of Gaia DR3. Red and blue dots represent the variable members and non-members, respectively. The new variable is likely a W UMa eclipsing star. The detection and analysis of the variable stars in the old open cluster NGC 188 yield valuable samples, especially for the study of W UMa stars. ###### Acknowledgements. We are grateful to an anonymous referee for valuable comments which have improved the paper significantly. This work has been financially supported by the Resource Sharing Platform Construction Project of Xinjiang Uygur Autonomous Region (No. PT2306) and the Chinese Academy of Sciences (CAS) "Light of West China" Program (No. 2020-XBQNXZ-016). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Figure 6: CMD for cluster members of NGC 188 with the positions of the 16 cluster variables marked in red. The cluster members are those provided by CG20 with probabilities over 70%. The cluster parameters used in the Padova theoretical isochrone fitting come from CG20 and Sun et al. (2022). The CCD photometric data of NGC 188 were obtained with the Nanshan 1 m telescope administered by Xinjiang Astronomical Observatory.
2306.04723
Intrinsic Dimension Estimation for Robust Detection of AI-Generated Texts
Rapidly increasing quality of AI-generated content makes it difficult to distinguish between human and AI-generated texts, which may lead to undesirable consequences for society. Therefore, it becomes increasingly important to study the properties of human texts that are invariant over different text domains and varying proficiency of human writers, can be easily calculated for any language, and can robustly separate natural and AI-generated texts regardless of the generation model and sampling method. In this work, we propose such an invariant for human-written texts, namely the intrinsic dimensionality of the manifold underlying the set of embeddings for a given text sample. We show that the average intrinsic dimensionality of fluent texts in a natural language is hovering around the value $9$ for several alphabet-based languages and around $7$ for Chinese, while the average intrinsic dimensionality of AI-generated texts for each language is $\approx 1.5$ lower, with a clear statistical separation between human-generated and AI-generated distributions. This property allows us to build a score-based artificial text detector. The proposed detector's accuracy is stable over text domains, generator models, and human writer proficiency levels, outperforming SOTA detectors in model-agnostic and cross-domain scenarios by a significant margin.
Eduard Tulchinskii, Kristian Kuznetsov, Laida Kushnareva, Daniil Cherniavskii, Serguei Barannikov, Irina Piontkovskaya, Sergey Nikolenko, Evgeny Burnaev
2023-06-07T18:38:04Z
http://arxiv.org/abs/2306.04723v2
# Intrinsic Dimension Estimation for Robust Detection of AI-Generated Texts ###### Abstract Rapidly increasing quality of AI-generated content makes it difficult to distinguish between human and AI-generated texts, which may lead to undesirable consequences for society. Therefore, it becomes increasingly important to study the properties of human texts that are invariant over text domains and varying proficiency of human writers, can be easily calculated for any language, and can robustly separate natural and AI-generated texts regardless of the generation model and sampling method. In this work, we propose such an invariant of human texts, namely the intrinsic dimensionality of the manifold underlying the set of embeddings of a given text sample. We show that the average intrinsic dimensionality of fluent texts in natural language is hovering around the value \(9\) for several alphabet-based languages and around \(7\) for Chinese, while the average intrinsic dimensionality of AI-generated texts for each language is \(\approx 1.5\) lower, with a clear statistical separation between human-generated and AI-generated distributions. This property allows us to build a score-based artificial text detector. The proposed detector's accuracy is stable over text domains, generator models, and human writer proficiency levels, outperforming SOTA detectors in model-agnostic and cross-domain scenarios by a significant margin. ## 1 Introduction Modern large language models (LLMs) generate human-looking texts increasingly well, which may also lead to worrisome consequences (Fagni et al., 2021; Adelani et al., 2020; Stokel-Walker, 2022), so the ability to detect AI-generated texts (artificial text detection, ATD) becomes crucial for media, education, politics, creative industries and other spheres of human social activities. 
A straightforward idea would be to train a classifier to detect artificial text; many such classifiers exist (Zellers et al., 2019; Gehrmann et al., 2019; Solaiman et al., 2019), but most of them are designed to detect samples of individual generation models, either using the model itself (Mitchell et al., 2023) or training on a dataset of its generations. This leads to poor generalization to new models and unknown data domains. Another idea, known as _watermarking_, is to inject some detectable artifacts into model generations; Kirchenbauer et al. (2023) propose to intentionally inject a statistical skew that can be detected in a text sample. However, later works showed that the watermark detector can be broken by adversarial attacks, e.g. by text perturbations or paraphrasing (He et al., 2023). Since text generation is constantly evolving, Sadasivan et al. (2023) claim that perfect artificial text detection is impossible; Krishna et al. (2023) address this statement and propose a retrieval-based detector that could be implemented by text generation service providers: they should store the hash value of every text generated by their model and retrieve it by request. This approach works even for a perfect human-like text generator, but it does not apply to publicly available models, of which there are already plenty. In this work, we show that the _intrinsic dimension_ of text samples can serve as a helpful score function that separates artificial and human-written texts in a very general setting, without additional knowledge about the generator. The only assumption is that generation is good enough to create fluent grammatical samples of length \(\approx 200\) words.
We propose a method based on persistent homology dimension theory, which allows us to estimate the dimension of text samples with high accuracy, and show that the proposed dimension-based classifier outperforms other artificial text detectors by a large margin in the general-purpose setup, for a very wide class of generators. Many works have estimated the intrinsic dimension of data representations (Pope et al., 2021; Barannikov et al., 2021), neural network weights (Ansuini et al., 2019), or parameters needed to adapt to some downstream task (Aghajanyan et al., 2021), but these objects are very complex. Even if we are certain that a dataset fits into some surface in a high-dimensional feature space, it is not easy to estimate its dimension due to various kinds of noise (natural irregularities, measurement noise, numerical approximations) and the ambiguity of estimating a surface from a sparse set of points. We estimate the geometry of every text sample as a separate object. Since texts generated by modern LLMs are fluent and usually do not contain grammatical, syntactical, or local semantic inconsistencies, we are interested in global sample geometry rather than properties that could be detected in short text spans. We show that the persistent dimension estimator provides an excellent way to deal with textual data: it turns out that real texts have a higher intrinsic dimension than artificial ones (Fig. 1). We propose an efficient method to implement the estimator and evaluate its classification ability in various settings, proving its robustness for artificial text detection and showing that it works equally well across a number of different languages.
Our main contributions are: (1) we propose to estimate the intrinsic dimensionality of natural language texts with the persistent homology dimension estimator and develop an efficient algorithm for computing it; (2) we show that the intrinsic dimension serves as a good score for artificial text detection for modern LLMs; in cross-domain and cross-model settings our method outperforms other general-purpose classifiers by a large margin, is robust to adversarial attacks, and works for all considered languages; (3) we show that our text detector reduces the bias against non-native speakers in comparison to available ATD models; (4) we release a multilingual dataset of generations produced by GPT3.5 and natural texts from the same domain in order to enable further ATD research. Below, Section 2 reviews related work, Section 3 introduces intrinsic dimension and its estimation with persistent homology, Section 4 applies it to artificial text detection, Section 5 presents our experimental evaluation, Section 6 discusses the limitations, and Section 7 concludes the paper.

## 2 Related work

Artificial text detection (ATD) becomes increasingly important with modern LLMs. GPT-2 (Radford et al., 2018) was accompanied by a work by Solaiman et al. (2019) on potential dangers and defences against them; the best ATD classifier there was based on supervised fine-tuning of RoBERTa (Liu et al., 2019). Supervised approaches can work well for other generative models and data domains (Krishna et al., 2023; Guo et al., 2023; He et al., 2023) but they do not generalize to other text domains, generation models, and even sampling strategies (Bakhtin et al., 2019; Solaiman et al., 2019). In the zero-shot setting, Solaiman et al. (2019) threshold the average log-probability score of a sample calculated by some pre-trained language model (LM).

Figure 1: Real and artificial text have different intrinsic dimension: (a-b) idea; (c) actual results.
DetectGPT (Mitchell et al., 2023) treats log-probability calculation as a function, estimates its curvature in a small neighbourhood of the text, and shows that this curvature score is smaller for artificial texts (there are "flat" regions around them); however, DetectGPT needs the likelihood to come from the same LM as the sample. We focus on developing an ATD model that could generalize to unseen text generation models and domains. Zellers et al. (2019) detect generated texts perfectly with a discriminator model built on top of the generator, but the quality drops significantly when the generator changes, even with supervised adaptation; a similar "model detects itself" setup was adopted by Mitchell et al. (2023). Solaiman et al. (2019) show that a simple score-based approach by the likelihood score works well in the "model detects itself" setting but does not generalize to a different generator and discriminator; as for transferability, they show that a supervised classifier generalizes well when it is trained on the output of a more complex model and transferred to a less complex one, but not in the reverse direction. Bakhtin et al. (2019) consider different types of generalization: in-domain (train and test generators are the same), cross-corpus (train and test generators fine-tuned on different corpora), and cross-architecture (train and test generators have different architectures but the same training corpora); their model shows good in-domain generalization ability, handles cross-architecture generalization relatively well, but loses quality in cross-corpus generalization. Mitchell et al. (2023) demonstrate the stability of their method over text domains compared to supervised models, which are better on in-domain data but lose efficiency in a cross-domain setting. Finally, Krishna et al. (2023) show all methods failing dramatically against the DIPPER paraphrase attack (except for a lower-performing approach developed for text quality ranking, Krishna et al.
(2022)). We also note Liang et al. (2023), who show the bias of artificial text detectors against non-native speakers and show that all existing detectors can be broken by generating texts with controllable complexity. Geometrical and topological methods have shown their usefulness for analysing the intrinsic dimensionality of data representations. Some works focus on data manifolds (Pope et al., 2021; Barannikov et al., 2021), while others consider hidden representations of neural networks and investigate them through the lens of intrinsic dimensionality. Ansuini et al. (2019) apply TwoNN to internal representations in CNNs, establishing a connection to the model's generalization ability. Birdal et al. (2021) show that the generalization error of these models can be bounded via persistent homology dimension. Vision transformers were also investigated in (Xue et al., 2022) and (Magai and Ayzenberg, 2022). Moreover, intrinsic dimensionality was connected to the generalization of Transformer-based LLMs (Aghajanyan et al., 2021). Valeriani et al. (2023) analyze the intrinsic dimensionality of large Transformer-based models. Topological properties of the inner representations of Transformer-based models (Vaswani et al., 2017), including BERT (Devlin et al., 2019) and HuBERT (Hsu et al., 2021), were successfully applied for solving a wide variety of tasks, from artificial text detection (Kushnareva et al., 2021) and acceptability judgement (Cherniavskii et al., 2022) to speech processing (Tulchinskii et al., 2022).

## 3 Intrinsic dimension and persistent homology dimension

Informally speaking, the intrinsic dimension of some subset \(S\subset\mathbb{R}^{n}\) is the number of degrees of freedom that a point moving inside \(S\) has. This means that in a small neighbourhood of every point, \(S\) can be described as a function of \(d\) parameters, \(d\leq n\), and this number cannot be reduced.
This idea is formalized in the notion of a \(d\)-dimensional _manifold_ in \(\mathbb{R}^{n}\): it is a subset \(M\subset\mathbb{R}^{n}\) where for each point \(x\in M\) there exists an open neighborhood which is equivalent to an open ball in \(\mathbb{R}^{d}\) for some value \(d\). Importantly, if \(M\) is a connected set then \(d\) should be the same for all its points, so we can talk about the dimension of the entire manifold. Data representations often use excessive numbers of features, some of which are highly correlated. This overparametrization has been noticed many times (Hein and Audibert, 2005; Kuleshov et al., 2017; Pope et al., 2021), and the idea that real data lies (approximately) on some low-dimensional manifold in the feature space is known as the _manifold hypothesis_(Goodfellow et al., 2016). However, there are obstacles to estimating the intrinsic dimension of a dataset. First, a real dataset can be a combination of sets of different dimensions. Second, data can be noisy and contain outliers. Moreover, real data can have a complicated hierarchical structure, so different approximation methods lead to different intrinsic dimension values. For an analogy, consider the observations of a single spiral galaxy that consists of separate points (stars, planets etc.) but forms a compact \(3\)-dimensional manifold. At some level of approximation the galaxy looks like a disk, which is \(2\)-dimensional, but if we take a closer look we discover the structure of a \(3\)-dimensional core and basically \(1\)-dimensional arms. Moreover, if we add observations over time, the dataset will consist of \(1\)-dimensional trajectories of individual points that exactly correspond to well-defined mathematical trajectories (the noise here comes only from measurement errors); these trajectories form an approximate \(3\)-dimensional cylinder in \(4\)-dimensional space with a much higher level of noise around its borders. 
As a result, the dimension of the entire object can be estimated by any number from 1 to 4 depending on the detector's sensitivity to noise and outliers, its preference for global or local features, and the way it averages the values over a non-uniform distribution of the points. Thus, it is natural that there exist several different methods for intrinsic dimension (ID) estimation, and we have to choose the one most suitable for the task at hand. For example, many ID estimators are based on constructing a global mapping of the data into a lower-dimensional linear subspace, with either linear projection (e.g., PCA), kernel-based methods, or distance-preserving nonlinear transformations. However, in our preliminary experiments these types of dimension estimation seemed to lose the key information needed for artificial text detection. We focus on the _persistent homology dimension_ estimator (PHD) (Schweinhart, 2021), which belongs to the class of _fractal dimension_ approaches. Consider a ball of radius \(r\) inside a \(d\)-dimensional manifold \(M\). As \(r\) grows, the volume of the ball increases proportionally to \(r^{d}\). Let \(x_{1},...,x_{N}\) be points uniformly sampled from \(M\). Then the expected number of points in a ball of radius \(r\) also changes as \(r^{d}\) with \(r\). Naturally, real datasets usually do not correspond to the uniform distribution of points, but this issue can be overcome by considering the asymptotic behaviour of the number of points in an \(r\)-ball as \(r\to 0\). In this case, it suffices for the data distribution to be smooth and therefore close to uniform in the neighbourhood of every point.
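As a crude illustration of this \(r^{d}\) scaling argument (this is not one of the estimators evaluated below; the function name and radii are our own choices), the local dimension around a point can be recovered from how the number of neighbours grows between two small radii:

```python
import numpy as np

def ball_count_dimension(points, center_idx, r1, r2):
    """Crude local ID estimate from the r^d scaling of ball counts.

    If the expected number of points within radius r grows as r^d,
    then d ~ log(N(r2)/N(r1)) / log(r2/r1) for small r1 < r2.
    """
    dist = np.linalg.norm(points - points[center_idx], axis=1)
    n1, n2 = np.sum(dist < r1), np.sum(dist < r2)
    return np.log(n2 / n1) / np.log(r2 / r1)

# 20k uniform samples from the unit square: a 2-dimensional manifold
rng = np.random.default_rng(0)
pts = rng.random((20_000, 2))
center = int(np.argmin(np.linalg.norm(pts - 0.5, axis=1)))  # point far from the boundary
print(ball_count_dimension(pts, center, r1=0.05, r2=0.1))   # close to 2
```

Note that the estimate is only meaningful when both radii are small relative to the manifold and the centre point is away from the boundary, which is exactly the smoothness/uniformity caveat above.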
Accurate straightforward estimation of \(d\) based on the above observation is not sample-efficient, but there exist several approximate approaches, including the MLE dimension that evaluates the data likelihood (Levina and Bickel, 2004), the TwoNN dimension that uses the expected ratio of distances from the given point to its two nearest neighbours (Facco et al., 2017), and MADA (Farahmand et al., 2007) that uses the first-order expansion of the probability mass function. We also report MLE-based results, as its performance is comparable to PHD in some tasks. We propose to use the _persistent homology dimension_ (PHD), which has several appealing properties compared to other fractal intrinsic dimension estimators. First, the above methods operate locally, while PHD combines local and global properties of the dataset. Second, according to our experiments, this method is sample-efficient and robust to noise (see below). Third, it has a solid theoretical background that connects topological data analysis, combinatorics, and fractal geometry (Adams et al., 2020; Birdal et al., 2021; Jaquette and Schweinhart, 2020; Schweinhart, 2021). The formal definition of PHD is based on the concept of _persistent homology_ for a set of points in a metric space, which is the basic notion of topological data analysis (TDA) (Chazal and Michel, 2017; Barannikov, 1994, 2021). TDA tries to recover the underlying continuous shape for a set of points by filling in the gaps between them that are smaller than some threshold \(\mathrm{t}\), and studying the topological features of the resulting object as \(\mathrm{t}\) increases. Each persistent homology \(\mathrm{PH}_{i}\) in a sequence \(\mathrm{PH}_{0},\mathrm{PH}_{1},\ldots\) is defined by the set of _topological features_ of dimension \(i\): \(0\)-dimensional features are connected components, \(1\)-dimensional features are non-trivial cycles, \(2\)-dimensional features are tunnels, etc.
For each feature we calculate its "lifespan", a pair \((\mathrm{t}_{\mathrm{birth}},\mathrm{t}_{\mathrm{death}})\), where \(\mathrm{t}_{\mathrm{birth}}\) is the minimal threshold where the feature arises, and \(\mathrm{t}_{\mathrm{death}}\) is the threshold where it is destroyed. Following Adams et al. (2020), we introduce the persistent homology dimension as follows. Consider a set of points \(X=\{x_{1},\ldots,x_{N}\}\subset\mathbb{R}^{n}\). We define the \(\alpha\)_-weighted sum_ as \(E_{\alpha}^{i}(X)=\sum_{\gamma\in\mathrm{PH}_{i}(X)}|I(\gamma)|^{\alpha}\), where \(I(\gamma)=\mathrm{t}_{\mathrm{death}}(\gamma)-\mathrm{t}_{\mathrm{birth}}(\gamma)\) is the lifespan of feature \(\gamma\). For \(i=0\), \(E_{\alpha}^{i}\) can be expressed in terms of the minimal spanning tree (MST) of \(X\): its edges map to lifespans of \(0\)-dimensional features \(\gamma\in\mathrm{PH}_{0}(X)\) (Bauer, 2021; Birdal et al., 2021). Thus, the definition of \(E_{\alpha}^{0}(X)\) is equivalent to \(E_{\alpha}^{0}(X)=\sum_{e\in\mathrm{MST}(X)}|e|^{\alpha}\), where \(|e|\) is the length of edge \(e\). There is a classical result on the growth rate of \(E_{\alpha}^{0}(X)\) (Steele, 1988): if \(x_{i}\), \(0<i<\infty\), are independent random variables with a distribution having compact support in \(\mathbb{R}^{d}\), then with probability one \(E_{\alpha}^{0}(X)\sim Cn^{\frac{d-\alpha}{d}}\) as \(n\rightarrow\infty\), where \(X=\{x_{1},\ldots,x_{n}\}\) and the equivalence means that the ratio of the terms tends to one. It shows that \(E_{\alpha}^{0}\) tends to infinity with \(n\) if and only if \(\alpha<d\).
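Since the \(0\)-dimensional lifespans coincide with the MST edge lengths, \(E_{\alpha}^{0}(X)\) can be computed without any persistent homology machinery. A minimal sketch (ours, not the authors' released code) using SciPy:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def e_alpha_0(points, alpha=1.0):
    """E_alpha^0(X) = sum over MST edges e of |e|^alpha.

    Equals the alpha-weighted sum of 0-dimensional persistence
    lifespans of the point cloud X.
    """
    dist = squareform(pdist(points))      # dense pairwise Euclidean distances
    mst = minimum_spanning_tree(dist)     # sparse matrix holding the N-1 MST edges
    return float(np.sum(mst.data ** alpha))

# Three collinear points whose MST has edges of length 1 and 2:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
print(e_alpha_0(pts, alpha=1.0))  # 3.0
print(e_alpha_0(pts, alpha=2.0))  # 5.0
```

For \(\alpha=1\) this is simply the total MST length, which is the quantity whose growth rate the Steele result describes.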
Now one can define the intrinsic dimension based on the MST as the minimal value of \(\alpha\) for which the score is bounded for finite samples of points from \(M\) (Schweinhart, 2021): \[\mathrm{dim}_{\mathrm{MST}}(M)=\inf\{d\mid\exists C\text{ such that }E_{d}^{0}(X)\leq C\text{ for every finite }X\subset M\},\] and a sequence of PH dimensions as \[\dim_{\mathrm{PH}}^{i}(M)=\inf\{d\mid\exists C\text{ such that }E_{d}^{i}(X)\leq C\text{ for every finite }X\subset M\}.\] We now see that \(\dim_{\mathrm{MST}}(M)=\dim_{\mathrm{PH}}^{0}(M)\) for any manifold \(M\). This fact, together with the growth rate result above, provides a sample-efficient way to estimate \(\dim_{\mathrm{PH}}^{0}(M)\) (Birdal et al., 2021): sample subsets \(X_{n_{i}}=\{x_{1},\ldots,x_{n_{i}}\}\subset M\) of \(n_{i}\) elements for a growing sequence of \(n_{i}\), for every subset find its MST and calculate \(E_{\alpha}^{0}(X_{n_{i}})\), and then estimate the exponent of the growth rate of the resulting sequence by linear regression between \(\log E_{\alpha}^{0}(X_{n_{i}})\) and \(\log n_{i}\), since we know that \(\log E_{\alpha}^{0}(X_{n_{i}})\sim(1-\frac{\alpha}{d})\log n_{i}+\tilde{C}\) as \(n_{i}\to\infty\). Next, we show empirically that our method of ID estimation via PHD approximates the real dimension of a manifold well and is well suited for the conditions mentioned earlier: the presence of noise and a small number of samples. To compare with other ID estimators, we use a benchmark by Campadelli et al. (2015) designed specifically for the evaluation of ID estimation methods and the _scikit-dimensions_ library (Bac et al., 2021) with efficient implementations of \(12\) approaches to ID estimation, popular for different tasks. We evaluated many of these approaches on artificial datasets from Bac et al. (2021), \(1000\) samples each, without noise.
Choosing three "winners"--MLE, TwoNN, and MADA--we evaluated their sample efficiency and noise tolerance in comparison with our implementation of the PHD estimator. Fig. 2 shows the results: PHD is the only method tolerant to noise, and it does not degrade when the data is scarce. It outperforms all other methods in the noisy setup for any sample size. The second-best method is MLE, which performs relatively well on small samples (\(200\)-\(500\)) in noisy settings and has a small variance. Below we will show that, as a result, MLE is also applicable to artificial text detection, but it lags a little behind PHD on average.

## 4 Methodology

We consider consistent text samples of medium size, with length \(\approx 300\) tokens; we assume that each text contains a complete thought or is devoted to a single topic. We estimate the dimension of each text sample, considering it as a separate manifold. To do this, we obtain contextualized embeddings for every token in the text by a pretrained Transformer encoder. In our experiments, we use RoBERTa-base (Liu et al., 2019) for English and XLM-R (Goyal et al., 2021) for other languages. Each embedding is a numerical vector of a fixed length, so we view it as a point in the Euclidean space. We drop the artificial first and last tokens (<CLS> and <SEP>) and evaluate the persistent homology dimension of the resulting point cloud using the growth rate theorem (see Section 3). Given a set of points \(S\), \(|S|=n\), we first sample subsets \(S_{i}\subset S,i=1,\ldots,k\), whose sizes \(n_{1},\ldots,n_{k}\) are uniformly distributed in \([1,n]\). For each \(S_{i}\) we calculate its persistence score \(E_{1}^{0}(S_{i})\) (just \(E(S_{i})\) below); this can be done with a classical MST algorithm in linear time. Then we prepare a dataset consisting of \(k\) pairs \(D=\{(\log n_{i},\log E(S_{i}))\}\) and apply linear regression to approximate this set by a line.
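This sampling-and-regression scheme, with the dimension then recovered from the slope \(\kappa\) of the fitted line as \(\alpha/(1-\kappa)\) (i.e. \(1/(1-\kappa)\) for \(\alpha=1\)), can be sketched as follows. This is our own illustrative implementation, not the released code; function names, subset sizes, and round counts are assumptions:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_score(points, alpha=1.0):
    # E_alpha^0: sum of MST edge lengths raised to alpha
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return np.sum(mst.data ** alpha)

def phd_estimate(points, alpha=1.0, k=10, rounds=7, seed=0):
    """Persistent homology dimension of a point cloud (sketch).

    Samples k subset sizes, averages the persistence score over
    several random subsets per size, fits log E against log n, and
    recovers d from the slope kappa via d = alpha / (1 - kappa).
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    sizes = np.linspace(n // 5, n, k).astype(int)
    log_n, log_e = [], []
    for m in sizes:
        scores = [mst_score(points[rng.choice(n, m, replace=False)], alpha)
                  for _ in range(rounds)]
        log_n.append(np.log(m))
        log_e.append(np.log(np.mean(scores)))
    kappa = np.polyfit(log_n, log_e, 1)[0]
    return alpha / (1.0 - kappa)

# Sanity check on a known manifold: uniform samples from the unit square
rng = np.random.default_rng(1)
square = rng.random((800, 2))
print(round(phd_estimate(square), 1))  # close to the true dimension 2
```

The averaging over several random subsets per size corresponds to the stabilization rounds described below; without it, the regression can be thrown off by a single unusually dense subset.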
Now the dimension \(d\) can be estimated as \(\frac{1}{1-\kappa}\), where \(\kappa\) is the slope of the fitted line. In general, our method for PHD calculation is similar to the computational scheme proposed by Birdal et al. (2021). But since we are dealing with sets that are much smaller and less uniformly distributed, their algorithm becomes unstable, with variance up to 35% of the value between different random seeds; moreover, if one of the subsets \(S_{i}\) slips into a local density peak and has an unusually low persistence score, the algorithm may even produce a meaningless answer (e.g., negative \(d\)).

Figure 2: A comparison of ID estimators with noise on artificial datasets; lower is better.

To overcome this issue, we add several rounds of sampling and averaging to improve the stability of the calculation. We estimate the expectation \(\mathbb{E}_{s\subset S,|s|=n_{i}}[E(s)]\) for a given \(n_{i}\) instead of a direct calculation of \(E(S_{i})\) for a single sample. For that, we perform the whole process of computing \(d\) several times, averaging the results. Details of our sampling scheme can be found in the Appendix. Finally, we construct a simple single-feature classifier for artificial text detection with PHD as the feature, training a logistic regression on some dataset of real and generated texts.

## 5 Experiments

**Datasets**. Our main dataset of human texts is Wiki40b (Guo et al., 2020). We measured the intrinsic dimension of fiction stories on the target split of the WritingPrompts dataset (Fan et al., 2018), a collection of short texts written by Reddit users. For multilingual text detection experiments, we generated a new WikiM dataset for 10 languages with GPT3.5-turbo. We use the header and first sentence from a Wikipedia page as the prompt and ask the model to continue.
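The single-feature classifier mentioned in Section 4 amounts to fitting a logistic regression on PHD values alone. A sketch with synthetic dimension scores standing in for real PHD estimates (the means, spreads, and labels below are invented for illustration only):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in PHD values: human texts clustered higher than generated ones
human = rng.normal(9.5, 1.0, size=500)
fake = rng.normal(8.0, 1.0, size=500)

X = np.concatenate([human, fake]).reshape(-1, 1)   # a single feature: PHD
y = np.concatenate([np.ones(500), np.zeros(500)])  # 1 = human, 0 = generated

clf = LogisticRegression().fit(X, y)
print(clf.predict([[9.6], [7.8]]))  # high PHD -> human, low PHD -> generated
```

With one feature, the fitted logistic regression reduces to a threshold on the PHD score, which is why the experiments below refer to it as a one-feature thresholding classifier.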
In cross-domain and paraphrase robustness experiments, we use the Wiki and Reddit datasets (3k samples each) (Krishna et al., 2023) that use two consecutive sentences (Wiki) or the question (Reddit) as a prompt and generate texts by GPT2-XL, OPT13b, and GPT3.5-davinci-003. Following their pipeline for Reddit, we have also generated a StackExchange dataset by GPT3.5-davinci-003 as the third domain. We select questions posted after 2019-08-01 from non-technical categories, where both the question and the answer have a rating greater than 10, and clean them by removing HTML artifacts. In order to assess the bias in our estimator, we use the data provided by Liang et al. (2023). **Intrinsic dimensionality of real and generated texts**. First, we observe an intriguing fact: the intrinsic dimension of natural texts is mostly concentrated between values \(\mathbf{9}\) and \(\mathbf{10}\), while the dimension of generated texts is lower and approximately equal to \(\mathbf{8}\), regardless of the generator. This is illustrated in Figure 3. Table 1 shows that this value is stable across different text genres but slightly varies for different languages: it is approximately equal to \(\mathbf{9\pm 1}\) for most European languages, slightly larger for Italian and Spanish (\(\approx\mathbf{10\pm 1}\)), and lower for Chinese and Japanese (\(\approx\mathbf{7\pm 1}\)); details are shown in Fig. 4. But we always observe a clear difference between this distribution and generated texts in the same language (see Appendix for more experiments). Next, we check how the PHD estimation depends on the base model that we use for text embedding calculation. Fig. 5 demonstrates that PHD changes slightly with the change of the base LM, decreasing for models with fewer parameters. RoBERTa-base embeddings provide the best variance for PHD estimation, so we use this model for all further experiments in English, and XLM-R of the same size for multilingual experiments.
|     | Wikipedia articles | Fiction stories (Reddit) | Question answering (Stack Exchange) |
| --- | --- | --- | --- |
| PHD | \(9.491\pm 1.010\) | \(9.212\pm 1.288\) | \(9.594\pm 1.29\) |
| MLE | \(11.827\pm 0.768\) | \(11.553\pm 1.197\) | \(12.131\pm 1.004\) |

Table 1: Intrinsic dimensions of English texts of different genres.

Figure 3: Boxplots of PHD distributions for different generating models in comparison to human-written text on Wikipedia data. Embeddings are obtained from RoBERTa-base LM.

**Artificial text detection**. We show that intrinsic dimension can lead to a robust method of artificial text detection. In all experiments below, we use the one-feature thresholding classifier (see Section 4). **Comparison with universal detectors**. First, we show that our detector is the best among general-purpose methods designed to detect texts of any domain, generated by any AI model, without access to the generator itself. Such methods are needed, e.g., for plagiarism detection. To be applicable in real life, the algorithm should provide a high artificial text detection rate while avoiding false accusations of real authors. Besides, it should be resistant to adversaries who transform the content generated by popular AI models to reduce the chance of being caught. Here we adopt the experimental settings of Krishna et al. (2023) and use the baseline results presented there. We compare PHD and MLE with two general-purpose detectors: GPTZero (Tian, 2023), targeted to detect the texts generated by contemporary LLMs (GPT-3, GPT-4, ChatGPT, BARD), and the OpenAI detector (OpenAI, 2023), announced together with the ChatGPT model in order to reduce the expected social harm.
Our third baseline is DetectGPT (Mitchell et al., 2023), a state-of-the-art thresholding classifier that evaluates text samples by the probability curvature obtained via the generator model. It works best when the base model coincides with the generator model ("model detects itself"), but the authors claim that it can generalize to the cross-model setup with reasonable quality. RankGen (Krishna et al., 2022) is a method originally developed for ranking hypotheses during text generation; it demonstrates a surprising ability to handle adversarial attacks. Following Krishna et al. (2023), we report the detection accuracy with the false positive rate (FPR) fixed at 1%. Table 2 shows that our PHD-based classifier outperforms all baselines by a large margin: \(+10\%\) for GPT-\(3.5\), \(+14\%\) for OPT. Note that DetectGPT uses GPT-2 as the base model, which explains its results for GPT-2. PHD is also invulnerable to the DIPPER paraphrasing attack (Krishna et al., 2023). When generated texts are transformed by DIPPER, they lose some characteristic features of the generator, which causes a dramatic drop in quality for most detectors; but for the PHD classifier the accuracy of artificial text detection even increases slightly after this perturbation. Interestingly, the MLE dimension estimator also works quite well for this task, and even achieves \(6\%\) better detection for GPT-\(3.5\) generations; but its adversarial robustness is significantly worse.

Figure 4: Boxplots of PHD distributions in different languages on Wikipedia data. Embeddings are obtained from XLM-RoBERTa-base (multilingual).

Figure 5: Boxplots of PHD distributions obtained by different LMs on English Wikipedia data.

**Cross-domain and cross-model performance**. Table 3 shows that our ID estimation is stable across text domains; consequently, our proposed PHD text detector is robust to domain transfer.
We compare the cross-domain ability of PHD with a supervised classifier obtained by fine-tuning RoBERTa-base with a linear classification head on its \(CLS\) token, a supervised classification approach used previously for artificial text detection with very high in-domain accuracy (Solaiman et al., 2019; Guo et al., 2023; He et al., 2023). We split data into train / validation / test sets in proportion 80%/10%/10%. Table 3 reports the results of the classifier's transfer between three datasets of different text styles--long-form answers collected from Reddit, Wikipedia-style texts, and answers from StackExchange--using data generated by GPT-3.5 (davinci-003). Although supervised classification is virtually perfect on in-domain data, it fails in cross-domain evaluation, while the PHD classifier is not influenced by domain transfer. On average, the PHD classifier slightly outperforms the supervised baseline, while being much more stable. Table 3 also reports cross-model transfer ability, where the classifier is trained on the output of one generation model and tested on another. We consider generations of GPT-2, OPT, and GPT-3.5 (davinci-003) in the Wiki domain and observe that the PHD classifier, again, is perfectly stable. This time, the RoBERTa-base supervised classifier handles the domain shift much better and outperforms PHD on average, but it has a higher cross-domain generalization gap. This means that we can expect the PHD classifier to be more robust to entirely new AI models. **PHD-based classification for other languages**. Table 4 presents the results of PHD-based artificial text detection for Wikipedia-style texts generated by ChatGPT in \(10\) languages. Text embeddings were obtained with XLM-RoBERTa-base, the multi-language version of RoBERTa. As the quality metric, we report the area under the ROC curve (ROC-AUC).
We see that both ID classifiers provide solid results for all considered languages, with an average quality of \(0.78\) for PHD and \(0.8\) for MLE; MLE performs better for almost all languages. The worst quality is on Chinese and Japanese (PHD 0.71 and 0.74, MLE 0.65 and 0.75 respectively), the best is for Spanish and Italian (PHD 0.83, MLE 0.85 for both).

| **Language:** | **cn-zh** | **en** | **fr** | **de** | **it** | **jp** | **pl** | **ru** | **es** | **uk** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PHD | 0.709 | 0.781 | 0.790 | 0.767 | 0.831 | 0.737 | 0.794 | 0.777 | 0.833 | 0.768 |
| MLE | 0.650 | 0.770 | 0.804 | 0.788 | 0.852 | 0.753 | 0.850 | 0.816 | 0.853 | 0.821 |

Table 4: Quality of artificial text detection in different languages (ROC-AUC) for ChatGPT text.

| Generator | DetectGPT | OpenAI | GPTZero | RankGen | PHD (ours) | MLE (ours) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-2 | 70.3* | 21.6 | 13.9 | 13.5 | **25.2** | 23.8 |
| + DIPPER | 4.6 | 14.8 | 1.2 | **28.5** | 27.6 | 19.7 |
| OPT | 14.3 | 11.3 | 8.7 | 3.2 | **28.0** | 26.7 |
| + DIPPER | 0.3 | 10.0 | 1.0 | 13.5 | **30.2** | 22.1 |
| GPT-3.5 | 0.0 | 30.0 | 7.1 | 1.2 | 40.0 | **46.7** |
| + DIPPER | 0.0 | 15.6 | 1.8 | 7.3 | **41.2** | 33.3 |

Table 2: Artificial text detection (accuracy at 1% FPR) for open-ended generation using Wikipedia prompts. DIPPER was run with Lex=60, Order=60.

| Train \ Eval | Wikipedia (RoBERTa-cls) | Reddit (RoBERTa-cls) | StackExchange (RoBERTa-cls) | Reddit (PHD) | Wikipedia (PHD) | StackExchange (PHD) |
| --- | --- | --- | --- | --- | --- | --- |
| Wikipedia | 0.990 | 0.535 | 0.690 | 0.843 | 0.781 | 0.795 |
| Reddit | 0.388 | 0.997 | 0.457 | 0.855 | 0.776 | 0.773 |
| StackExchange | 0.525 | 0.473 | 0.999 | 0.834 | 0.778 | 0.800 |

| Train \ Eval | GPT2 (RoBERTa-cls) | OPT (RoBERTa-cls) | GPT3.5 (RoBERTa-cls) | GPT2 (PHD) | OPT (PHD) | GPT3.5 (PHD) |
| --- | --- | --- | --- | --- | --- | --- |
| GPT2 | 0.992 | 0.993 | 0.933 | 0.769 | 0.759 | 0.832 |
| OPT | 0.988 | 0.997 | 0.967 | 0.769 | 0.763 | 0.837 |
| GPT3.5 | 0.937 | 0.982 | 0.990 | 0.759 | 0.757 | 0.843 |

Table 3: Cross-domain and cross-model accuracy of PHD and RoBERTa-based classifiers on data from three different domains and three different models; classes are balanced in training and evaluation.

Note that the best and worst classified languages are those with the largest and smallest ID values in Fig. 4; we leave the investigation of this phenomenon for further research. **Non-native speaker bias**. Finally, we show how our model helps to mitigate the bias present in ML-based artificial text detectors. We follow Liang et al. (2023), who demonstrate that current artificial text detectors are often too hard on texts written by non-native speakers. We use OpenAI and GPTZero as the baselines (see Appendix for more results) together with the PHD and MLE classifiers, with thresholds chosen on data unrelated to this task, as the equal-error classifier on introductions of Wikipedia articles (real vs GPT-3.5-turbo), where they achieved an EER of 26.8% for PHD and 22.5% for MLE. On the left, Fig. 6 shows false positive rates (FPR) for three sets of student essays: TOEFL essays by non-native speakers (red), the same texts processed by GPT-4 asked to improve the text (grey), and native speakers (blue).
First, blue bars are almost invisible for all detectors because the FPR for native speakers is very small (\(<1\%\)), while non-native speakers can be wrongly accused by OpenAI and GPTZero in \(58\%\) and \(52\%\) of the cases respectively. The PHD classifier reduces this discriminating rate by 2x, showing an FPR of \(26\%\) for non-native speakers. After GPT-4 polishing, this rate further decreases to \(7.7\%\), compared to \(19\%\) for GPTZero. Interestingly, OpenAI also deals with GPT-4-polished texts surprisingly well: its FPR drops by 15x. The MLE detector also demonstrates less biased behaviour than the baselines, although it is worse than PHD. On the right, Fig. 6 shows the true positive rates (TPR) of these methods on essays generated by ChatGPT. Red bars show that our classifiers greatly outperform the baselines. Grey bars demonstrate the robustness of ID detectors to changes in generation style via prompt design. If the adversary asks ChatGPT to generate a text with some predefined level of complexity ("use simple words", or "more complex words"), baseline systems fail to correctly recognize such texts, while both ID classifiers keep a high detection rate. ## 6 Limitations and broader impact We see three main limitations of our method. First, it is stochastic in nature. PH dimensions of texts from the same generator vary widely, and the estimation algorithm is stochastic as well, which adds noise, while rerunning the estimation several times to average it out would slow down the method. Second, "out of the box" our method can detect only "good" (fluent) generators with a relatively low generation temperature. The PH dimension of "bad" or high-temperature generators is actually higher on average than that of real texts, so the detector would need to be recalibrated. Third, we have tested only on several relatively high-resource languages, and we do not know how the method transfers to low-resource languages; this is a direction for future work. 
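The detectors above threshold an intrinsic-dimension estimate computed from a point cloud of text embeddings. As a rough, self-contained sketch of how such an MLE-based dimension estimate works on a point cloud (our illustration of the classical Levina-Bickel maximum-likelihood estimator on synthetic data, not the authors' implementation, which runs on transformer embeddings of texts):

```python
import numpy as np

def mle_intrinsic_dimension(points, k=10):
    """Levina-Bickel MLE of the intrinsic dimension of a point cloud.

    points: (n, d) array; k: number of nearest neighbours per point.
    """
    sq = (points ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * points @ points.T, 0.0)
    dists = np.sqrt(d2)
    np.fill_diagonal(dists, np.inf)          # exclude each point itself
    knn = np.sort(dists, axis=1)[:, :k]      # T_1 <= ... <= T_k per point
    # Local inverse estimate at x: (1/(k-1)) * sum_{j<k} log(T_k / T_j).
    inv_local = np.log(knn[:, -1:] / knn[:, :-1]).mean(axis=1)
    # Average the inverse estimates over points, then invert.
    return 1.0 / inv_local.mean()

# Sanity check: a 2-D Gaussian cloud isometrically embedded in R^20
# should have estimated dimension close to 2, far below the ambient 20.
rng = np.random.default_rng(0)
coords = rng.normal(size=(500, 2))
frame, _ = np.linalg.qr(rng.normal(size=(20, 2)))  # orthonormal 2-frame in R^20
cloud = coords @ frame.T
est = mle_intrinsic_dimension(cloud)
```

With k around 10-20 the estimate is mildly biased, but it clearly separates low-dimensional from full-dimensional clouds; the PHD estimator discussed in the paper is a persistent-homology-based alternative with its own (stochastic) sampling scheme.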
Nevertheless, our method provides a new tool for recognizing fake content without discriminating against non-native speakers, a tool that is also much more robust to model change and domain change than known tools. ## 7 Conclusion We have introduced a novel way of estimating the intrinsic dimension of a text sample. We find that this dimension is approximately the same for all human-written samples in a given language, while texts produced by modern LLMs have a lower dimension on average, which allows us to construct an artificial text detector. Our comprehensive experimental study proves the robustness of this classifier to domain shift, model shift, and adversarial attacks. We believe that we have discovered a new and interesting feature of neural text representations that should be studied further. Figure 6: Comparison of GPT detectors in non-standard environments. Left: bias against non-native English writing samples. Right: decrease in performance due to prompt design.
2303.01085
Many partitions of mass assignments
In this paper, extending the recent work of authors with Calles Loperena and Dimitrijevi\'c Blagojevi\'c, we give a general and complete treatment of a problem of partition of mass assignments with prescribed arrangements of hyperplanes on Euclidean vector bundles. Using a new configuration test map scheme, as well as an alternative topological framework, we are able to reprove known results, extend them to arbitrary bundles as well as to put various types of constraints on the solutions. Moreover, the developed topological methods allow us to give new proofs and extend results of Guth and Katz, Schnider, and Sober\'on and Takahashi. In this way we place all these results under one ``roof''.
Pavle V. M. Blagojevic, Michael C. Crabb
2023-03-02T09:14:01Z
http://arxiv.org/abs/2303.01085v2
# Many partitions of mass assignments ###### Abstract. In this paper, extending the recent work of the authors with Calles Loperena & Dimitrijevic Blagojevic, we give a general and complete treatment of a problem of partition of mass assignments with prescribed arrangements of hyperplanes on Euclidean vector bundles. Using a new configuration test map scheme, as well as an alternative topological framework, we are able to reprove known results, extend them to arbitrary bundles, and put various types of constraints on the solutions. Moreover, the developed topological methods allow us to give new proofs of and extend results of Guth & Katz, Schnider, and Soberon & Takahashi. In this way we place all these results under one "roof". The research by Pavle V. M. Blagojevic leading to these results has received funding from the Serbian Ministry of Science, Technological development and Innovations. For a Euclidean vector space \(\mathrm{V}\), a unit vector \(u\in\mathrm{V}\) and a real number \(a\in\mathbb{R}\), the associated affine hyperplane is defined by \(H_{u;a}:=\{x\in\mathrm{V}\colon\langle x,u\rangle=a\}\). Furthermore, the oriented affine hyperplane \(H(u;a)\) defines two closed half-spaces by \[H_{u;a}^{u}:=\{x\in\mathrm{V}\colon\langle x,u\rangle-a\geq 0\}\quad\text{and}\quad H_{u;a}^{-u}:=\{x\in\mathrm{V}\colon\langle x,-u\rangle+a\geq 0\}.\] In other words, an oriented affine hyperplane is a triple \(H(u;a)=(H_{u;a},H_{u;a}^{u},H_{u;a}^{-u})\). An _arrangement_ of \(k\) (oriented) _affine hyperplanes_ \(\mathcal{H}\) in \(\mathrm{V}\) is an ordered collection \(\mathcal{H}=\big{(}H(u_{1};a_{1}),\dots,H(u_{k};a_{k})\big{)}\) of \(k\) oriented affine hyperplanes in \(\mathrm{V}\). 
Such an arrangement \(\mathcal{H}\) and a collection of unit normal vectors \((v_{1},\dots,v_{k})\in\{u_{1},-u_{1}\}\times\dots\times\{u_{k},-u_{k}\}\) to the elements of the arrangement \(\mathcal{H}\) determine an _orthant_ as the intersection of the corresponding closed half-spaces: \[\mathcal{O}^{\mathcal{H}}_{(v_{1},\dots,v_{k})}:=H_{u_{1};a_{1}}^{v_{1}}\cap \dots\cap H_{u_{k};a_{k}}^{v_{k}}.\] There are \(2^{k}=\operatorname{card}\big{(}\{u_{1},-u_{1}\}\times\dots\times\{u_{k},-u_{k }\}\big{)}\) orthants determined by the arrangement \(\mathcal{H}\). The orthants are not necessarily distinct or non-empty. The arrangement of hyperplanes \(\mathcal{H}=\big{(}H(u_{1};a_{1}),\dots,H(u_{k};a_{k})\big{)}\) is _orthogonal_ if \(u_{r}\perp u_{s}\) for every \(1\leq r<s\leq k\). Now, we say that an arrangement \(\mathcal{H}=\big{(}H(u_{1};a_{1}),\dots,H(u_{k};a_{k})\big{)}\) in \(\mathrm{V}\)_equiparts_ a collection of masses \(\mathcal{M}\) in \(\mathrm{V}\) if and only if for every mass \(\mu\in\mathcal{M}\) and every \((v_{1},\dots,v_{k})\in\{u_{1},-u_{1}\}\times\dots\times\{u_{k},-u_{k}\}\) holds: \[\mu(\mathcal{O}^{\mathcal{H}}_{(v_{1},\dots,v_{k})})=\frac{1}{2^{k}}\,\mu( \mathbb{R}^{d}).\] Furthermore, a collection of masses \(\mathcal{M}\) in \(\mathrm{V}\)_can be equiparted by an arrangement_ of \(k\) affine hyperplanes if there exists an arrangement \(\mathcal{H}\) of \(k\) oriented affine hyperplanes in \(\mathrm{V}\) which equiparts \(\mathcal{M}\). The GHR problem for masses asks for _the minimal dimension \(d=\Delta(j,k)\) of a Euclidean space \(\mathrm{V}\) in which every collection \(\mathcal{M}\) of \(j\) masses can be equiparted by an arrangement of \(k\) affine hyperplanes_. Some classical results about the function \(\Delta\) include the ham-sandwich theorem \(\Delta(d,1)=d\), and the results of Grunbaum \(\Delta(1,2)=2\) and Hadwiger \(\Delta(2,2)=3\), \(\Delta(1,3)=3\). 
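As a quick numerical illustration of the equipartition condition (our sketch, not part of the paper): for the standard Gaussian mass on \(\mathbb{R}^{2}\), the arrangement of the two coordinate axes, \(\mathcal{H}=(H(e_{1};0),H(e_{2};0))\), equiparts the mass, since each of the \(2^{2}=4\) orthants receives a quarter of the total measure:

```python
import numpy as np

# Monte Carlo check that mu(O) = mu(R^2) / 2^k for k = 2: the two coordinate
# axes split the standard Gaussian mass on R^2 into four equal orthants.
rng = np.random.default_rng(42)
sample = rng.standard_normal((200_000, 2))

fractions = []
for v1 in (+1, -1):          # choice of unit normal v_1 in {u_1, -u_1}
    for v2 in (+1, -1):      # choice of unit normal v_2 in {u_2, -u_2}
        in_orthant = (v1 * sample[:, 0] >= 0) & (v2 * sample[:, 1] >= 0)
        fractions.append(in_orthant.mean())
```

Each of the four empirical fractions comes out within Monte Carlo error of \(1/4\); for a non-symmetric mass, finding an equiparting arrangement is exactly the nontrivial content of the GHR problem.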
Furthermore, Avis and Ramos showed that \(\frac{2^{k}-1}{k}j\leq\Delta(j,k)\), while Peter Mani-Levitska, Sinisa Vrecica & Rade Zivaljevic in [25, Thm. 39] proved that \(\Delta(j,k)\,\leq\,j+(2^{k-1}-1)2^{\lfloor\log_{2}j\rfloor}\). For a complete proof of this result see [6, Lem. 4.2]. The list of known values of the function \(\Delta\) is given in [7]. In our recent paper with Calles Loperena and Dimitrijevic Blagojevic [6], motivated by the work of Patrick Schnider [32] and Ilani Axelrod-Freed & Soberon [2], we studied an extension of the GHR problem for masses to the problem for mass assignments over Grassmann manifolds. Figure 1. An illustration of an oriented affine hyperplane, associated half-spaces, arrangement of two affine hyperplanes, and an orthant \(\mathcal{O}^{\mathcal{H}}_{(u_{1},-u_{2})}\) where \(\mathcal{H}=(H(u_{1},2),H(u_{2},-1))\). ### What is the GHR problem for mass assignments? Let \(M_{+}(X)\) be the space of all finite Borel measures on a topological space \(X\) equipped with the weak topology. That is the minimal topology on \(M_{+}(X)\) with the property that for every bounded and upper semi-continuous function \(f\colon X\longrightarrow\mathbb{R}\), the induced function \(M_{+}(X)\longrightarrow\mathbb{R}\) given by \(\nu\longmapsto\int fd\nu\), is upper semi-continuous. For \(X=\mathrm{V}\), a Euclidean space, the subspace of all masses is denoted by \(M_{+}^{\prime}(\mathrm{V})\subseteq M_{+}(\mathrm{V})\). Let \(E\) be a Euclidean vector bundle over a path-connected space \(B\) with fibre \(E_{b}\) at \(b\in B\). Consider the associated fibre bundle (1) Any cross-section \(\mu\colon B\longrightarrow M_{+}^{\prime}(E)\) of the fibre bundle (1) is called _mass assignment_ on the Euclidean vector bundle \(E\). In particular, \(\mu(b)\) is a mass on \(E_{b}\) for every \(b\in B\). 
More generally, let us now write \(M_{+}(E)\longrightarrow B\) for the locally trivial bundle with fibre at \(b\in B\) the space \(M_{+}(E_{b})\) of finite Borel measures on the fibre \(E_{b}\). A continuous section \(\mu\) will be called a _family of (probability) measures_ on \(E\) if \(\mu_{b}\in M_{+}(E_{b})\) is a (probability) measure for each \(b\in B\). In the following we give an illustrative example of a family of probability measures on \(E\). **Example 1.1**.: Let \(E\) be a Euclidean vector bundle over a path-connected space \(B\). Suppose that \(X\longrightarrow B\) is a finite cover embedded fibrewise in \(E\), and suppose that \(p:X\rightarrow[0,1]\) is a continuous function such that, for each \(b\in B\), \(\sum_{x\in X_{b}}p(x)=1\). For a Borel subset \(A\subseteq E_{b}\), define \(\mu_{b}(A):=\sum_{x\in A\cap X_{b}}p(x)\). Then \(\mu\) defines a family of probability measures on \(E\). The GHR problem for mass assignments on a Euclidean vector bundle \(E\) over \(B\) asks _for all pairs of positive integers \((j,k)\) with the property that for every collection of \(j\) mass assignments \(\mathcal{M}=(\mu_{1},\dots,\mu_{j})\) on \(E\) there exists a point \(b\in B\) such that the collection of \(j\) masses \(\mathcal{M}(b):=(\mu_{1}(b),\dots,\mu_{j}(b))\) on \(E_{b}\) can be equiparted by an arrangement of \(k\) affine hyperplanes in \(E_{b}\)._ If we denote by \(\Delta(E)\) the set of such pairs \((j,k)\), then the GHR problem for mass assignments on \(E\) is a question of describing the set \(\Delta(E)\subseteq\mathbb{N}^{2}\). Recently, with Calles Loperena and Dimitrijevic Blagojevic [6], we studied the GHR problem for mass assignments over tautological vector bundles over Grassmann manifolds. In particular, with appropriate reformulation, the result [6, Thm. 1.5] can be stated as follows. 
**Theorem 1.2**.: _Let \(d\geq 2\) and \(\ell\geq 1\) be integers where \(1\leq\ell\leq d\), and let \(E_{\ell}^{d}\) be the tautological vector bundle over the Grassmann manifold \(\mathrm{G}_{\ell}(\mathbb{R}^{d})\) of all \(\ell\)-dimensional linear subspaces in \(\mathbb{R}^{d}\). Then_ \[\left\{(j,k)\in\mathbb{N}^{2}\,:\,1\leq k\leq\ell,\,2^{\lfloor\log_{2}j\rfloor }(2^{k-1}-1)+j\leq d\right\}\subseteq\Delta(E_{\ell}^{d}).\] In this paper, following the ideas of Barany and Matousek [3] and Crabb [15], we extend mass assignment partition problems in a Euclidean space by affine hyperplane arrangements to mass assignment partition problems on the unit Euclidean sphere by arrangements of equatorial spheres. Additionally, we will restrict, and therefore simplify, our notions of mass and mass assignment. ### What are the GHR problems on spheres and sphere bundles? First, we show how the GHR problem for masses in \(\mathbb{R}^{d}\) induces the corresponding mass partition problem on the unit sphere in \(\mathbb{R}^{d+1}\). Let \(d\geq 1\) be an integer. Embed \(\mathbb{R}^{d}\) into \(\mathbb{R}^{d+1}\) via the embedding \(x\longmapsto(x,-1)\). In this way \(\mathbb{R}^{d}\) coincides with the tangent space to the unit sphere \(S(\mathbb{R}^{d+1})\cong S^{d}\) at the point \(y_{0}:=(0,\ldots,0,-1)\). Let \(p\colon\mathbb{R}^{d}\longrightarrow\Lambda\) be the homeomorphism, between \(\mathbb{R}^{d}\) and the open lower hemisphere \(\Lambda:=\{y\in S(\mathbb{R}^{d+1})\,:\,\langle y,y_{0}\rangle>0\}\) of the sphere \(S(\mathbb{R}^{d+1})\), given by \(x\longmapsto\frac{1}{\sqrt{\|x\|^{2}+1}}(x,-1)\) for \(x\in\mathbb{R}^{d}\). Now, every mass \(\mu\) on the Euclidean space \(\mathbb{R}^{d}\) induces a measure (mass) \(\mu^{\prime}\) on \(S(\mathbb{R}^{d+1})\) defined by \(\mu^{\prime}(A):=\mu(p^{-1}(A\cap\Lambda))\), where \(A\subseteq S(\mathbb{R}^{d+1})\) is an element of the Borel \(\sigma\)-algebra on \(S(\mathbb{R}^{d+1})\). 
In particular, the measure \(\mu^{\prime}\) vanishes on each equatorial sphere of \(S(\mathbb{R}^{d+1})\). Here, an equatorial sphere of \(S(\mathbb{R}^{d+1})\) can always be presented as an intersection of \(S(\mathbb{R}^{d+1})\) and a unique linear hyperplane in \(\mathbb{R}^{d+1}\). Furthermore, every affine hyperplane \(H\) in \(\mathbb{R}^{d}\) is mapped via \(p\) to a part of an equatorial sphere of \(S(\mathbb{R}^{d+1})\). More precisely, \[p(H)=\operatorname{span}(H)\cap\Lambda=\{\lambda\cdot(x,-1)\,:\,\lambda\in \mathbb{R},\,x\in H\}\cap\Lambda,\] where span denotes linear span in \(\mathbb{R}^{d+1}\). Using the transition of masses on \(\mathbb{R}^{d}\) into measures on \(S^{d}\), and of affine hyperplanes in \(\mathbb{R}^{d}\) into equatorial spheres on \(S^{d}\), we can formulate the GHR problem for masses on the sphere as follows: _determine the minimal dimension \(d=\Delta_{S}(j,k)\) of a unit Euclidean sphere \(S^{d}\) in which every collection of \(j\) masses can be equiparted by an arrangement of \(k\) equatorial spheres._ Here, the notions of masses and equipartition of masses are naturally extended from the affine to the spherical setup. Motivated by this spherical extension of the classical problem and with a desire to simplify the treatment of the mass assignments, we restate the GHR problem for mass assignments in the following way. Let \(E\) be a Euclidean vector bundle over a path-connected space \(B\), and let \(S(E)\) denote the unit sphere bundle associated to \(E\). 
Now, _we are looking for all pairs of positive integers \((j,k)\) with the property that for every collection of \(j\) continuous real valued functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\), there exists a point \(b\in B\) and an arrangement \(\mathcal{H}^{b}=(H^{b}_{1},\ldots,H^{b}_{k})\) of \(k\) linear hyperplanes in the fibre \(E_{b}\) of \(E\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H^{b}_{1}\cup\cdots\cup H^{b}_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] Here integration is assumed to be with respect to the measure on the sphere \(S(E_{b})\) induced by the metric. Once again, \(\Delta_{S}(E)\) denotes the set of all such pairs. Since the Euclidean and spherical partition problems are tightly related, we will not make a particular distinction between them. Figure 2. An illustration of the transition of a mass on \(\mathbb{R}^{2}\) into a measure on \(S^{2}\). From now on, instead of a mass assignment we consider a real valued continuous function on the sphere bundle, and instead of an affine hyperplane we take a linear hyperplane, which induces an equatorial sphere. ## 2. Statements of the main results After collecting the first family of results for tautological bundles (Theorem 1.2) it is natural to ask various follow-up questions: * Why not consider partitions of mass assignments on arbitrary vector bundles instead of only tautological vector bundles? * Can we constrain our choice of desired partitions on the given vector bundle by forcing normals of hyperplanes into chosen fixed vector subbundles of the ambient vector bundle? 
* What about partitions with pairwise orthogonal hyperplanes, as was considered in the classical case? * And finally, how can we fit all these questions into a common framework? In the following we present multiple answers to the questions we just asked. The interconnection between the main results of the paper is shown in Figure 3. We begin the list of our results with the full generalisation of [6, Thm. 1.1]. In other words, the old result becomes a special case of the next theorem in the case of tautological vector bundles. For an \(n\)-dimensional Euclidean vector bundle \(E\) over a compact and connected ENR (Euclidean Neighbourhood Retract) \(B\) and an integer \(k\geq 1\) we denote by \[R_{k}(B):=H^{*}(B;\mathbb{F}_{2})[x_{1},\ldots,x_{k}]\] the ring of polynomials in \(k\) variables \(x_{1},\ldots,x_{k}\) of degree \(1\) with coefficients in the cohomology ring \(H^{*}(B;\mathbb{F}_{2})\) of the base space. Note that by definition an ENR is locally path-connected, and so the assumption of connectedness for an ENR is equivalent to the assumption of being path-connected. Classically, we denote by \(w_{i}(E)\), \(i\geq 0\), the Stiefel-Whitney classes of the vector bundle \(E\). In addition, we consider the ideal \[\mathcal{I}_{k}(E):=\Big{(}\sum_{s=0}^{n}w_{n-s}(E)\,x_{r}^{s}\,:\,1\leq r \leq k\Big{)}\ \subseteq\ R_{k}(B),\] and the element \[e_{k}(B):=\prod_{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{k}-\{0\}}( \alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k})\ \in\ R_{k}(B).\] Figure 3. The main results of the paper and connections between them. Now a generalisation of Theorem 1.2, which is proved in Section 4.1, can be stated as follows. **Theorem 2.1**.: _Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), and let \(k\geq 1\) and \(j\geq 1\) be integers. If the element \(e_{k}(B)^{j}\) does not belong to the ideal \(\mathcal{I}_{k}(E)\), then \((j,k)\in\Delta_{S}(E)\). 
In other words, if \(e_{k}(B)^{j}+\mathcal{I}_{k}(E)\neq\mathcal{I}_{k}(E)\) in \(R_{k}(B)/\mathcal{I}_{k}(E)\), then for every collection of \(j\) continuous real valued functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\), there exists a point \(b\in B\) and an arrangement \(\mathcal{H}^{b}=(H^{b}_{1},\ldots,H^{b}_{k})\) of \(k\) linear hyperplanes in the fibre \(E_{b}\) of \(E\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H^{b}_{1}\cup\cdots\cup H^{b}_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] The first generalisation of Theorem 2.1 is obtained by a restriction of the family of the arrangements in which we are looking for our partition. Concretely, we ask for \(i\)-th hyperplane in the arrangement to have its normal vector in a specific vector subbundle. For that we modify our setup as follows. Let \(k\geq 1\) be an integer, and let \(E(1),\ldots,E(k)\) be Euclidean vector bundles over a compact and connected ENR \(B\). Denote by \(n_{i}\) the dimension of the vector bundle \(E(i)\) for \(1\leq i\leq k\). 
We consider the ideal in \(R_{k}(B)\): \[\mathcal{I}_{k}(E(1),\ldots,E(k)):=\Big{(}\sum_{s=0}^{n_{r}}w_{n_{r}-s}(E(r)) \,x_{r}^{s}\,:\,1\leq r\leq k\Big{)}\ \subseteq\ R_{k}(B),\] and set \[\iota_{k}(E(1),\ldots,E(k)):=\max\big{\{}j:e_{k}(B)^{j}\notin\mathcal{I}_{k}( E(1),\ldots,E(k))\big{\}}.\] Finally, we say that an arrangement of \(k\) linear hyperplanes \(\mathcal{H}^{b}=(H^{b}_{1},\ldots,H^{b}_{k})\) in the fibre \(E_{b}\) is determined by the collection of vector subbundles \(E(1),\ldots,E(k)\) if a unit normal of the linear hyperplane \(H^{b}_{i}\) belongs to the fibre \(E(i)_{b}\), for every \(1\leq i\leq k\). Now, the generalisation, proved in Section 4.2, says the following. **Theorem 2.2**.: _Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), \(k\geq 1\) and \(j\geq 1\) integers, and let \(E(1),\ldots,E(k)\) be vector subbundles of \(E\) of dimensions \(n_{1},\ldots,n_{k}\), respectively. If \(j\leq\iota_{k}(E(1),\ldots,E(k))\), then for every collection of \(j\) continuous real valued functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\), there exists a point \(b\in B\) and an arrangement \(\mathcal{H}^{b}=(H^{b}_{1},\ldots,H^{b}_{k})\) of \(k\) linear hyperplanes in fibre \(E_{b}\) of \(E\) determined by the collection of vector subbundles \(E(1),\ldots,E(k)\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H^{b}_{1}\cup\cdots\cup H^{b}_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] After a generalisation and an extension of [6, Thm. 
1.1], it is natural to ask whether the algebraic criteria from Theorem 2.1 and Theorem 2.2 can be substituted by appropriate numerical criteria. In other words, is there an appropriate generalisation of Theorem 1.2 in the case of an arbitrary vector bundle? We start our discussion with the case \(k=1\), the ham-sandwich case. Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\) and let \(k=1\). Since the ideal \(\mathcal{I}_{1}(E)=\Big{(}\sum_{s=0}^{n}w_{n-s}(E)\,x_{1}^{s}\Big{)}\) and \(e_{1}(B)^{n-1}=x_{1}^{n-1}\notin\mathcal{I}_{1}(E)\), we conclude that \[\iota_{1}(E)=\max\big{\{}j:x_{1}^{j}\notin\mathcal{I}_{1}(E)\big{\}}\geq n-1.\] The equality \(\iota_{1}(E)=n-1\) is attained in the case when the base space \(B\) is a point. Indeed, when \(B=\mathrm{pt}\) the vector bundle \(E\) is trivial, \(w(E)=1\), and so \(\mathcal{I}_{1}(E)=(x_{1}^{n})\), implying that \(\iota_{1}(E)=\max\big{\{}j:x_{1}^{j}\notin(x_{1}^{n})\big{\}}=n-1\). We have just proved the following ham-sandwich type result for Euclidean vector bundles. **Corollary 2.3**.: _Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\). If \(j\leq n-1\) then for every collection \(\varphi_{1},\dots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) of \(j\) continuous real valued functions, there exists a point \(b\in B\) and a hyperplane \(H^{b}\) in \(E_{b}\) such that for the connected components \(\mathcal{O}^{\prime}\) and \(\mathcal{O}^{\prime\prime}\) of the complement \(E_{b}-H^{b}\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\dots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] The previous result is general, holding for all vector bundles, and therefore rather crude, since it must in particular contain the classical ham-sandwich theorem. 
It is natural to ask how the topology of the vector bundle \(E\) affects the upper bound for the number of functions we can equipart in a fibre. In other words, can we say more about the number \(\iota_{1}(E)\)? Indeed, the following proposition, proved in Section 5.1, explains a connection between the topology of \(E\) and the number \(\iota_{1}(E)\). **Proposition 2.4**.: _Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\). Then_ \[\iota_{1}(E)=\max\big{\{}j:0\neq w_{j-n+1}(-E)\in H^{j-n+1}(B;\mathbb{F}_{2}) \big{\}}.\] As a special case of the previous result we recover the ham-sandwich result for the tautological vector bundle [6, Cor. 1.2]. **Corollary 2.5**.: _Let \(d\geq 2\) and \(\ell\geq 1\) be integers where \(1\leq\ell\leq d\), and let \(E_{\ell}^{d}\) be the tautological vector bundle over the Grassmann manifold \(\mathrm{G}_{\ell}(\mathbb{R}^{d})\) of all \(\ell\)-dimensional linear subspaces in \(\mathbb{R}^{d}\). Then_ \[\iota_{1}(E_{\ell}^{d})=d-1.\] Proof.: We have that \[w(-E_{\ell}^{d})=1+w_{1}((E_{\ell}^{d})^{\perp})+\dots+w_{d-\ell}((E_{\ell}^{ d})^{\perp}),\] where the orthogonal complement is considered inside the trivial vector bundle \(\mathrm{G}_{\ell}(\mathbb{R}^{d})\times\mathbb{R}^{d}\). Since \(w_{d-\ell}(-E_{\ell}^{d})=w_{d-\ell}((E_{\ell}^{d})^{\perp})\neq 0\) and \(w_{i}(-E_{\ell}^{d})=w_{i}((E_{\ell}^{d})^{\perp})=0\) for \(i\geq d-\ell+1\) (consult for example [23, p. 523]), we have from Proposition 2.4 that \[\iota_{1}(E_{\ell}^{d})=\max\big{\{}j:0\neq w_{j-\ell+1}(-E_{\ell}^{d})\big{\}} =d-1.\] The following spherical version of the result of Axelrod-Freed and Soberon [2, Thm. 1.3], which was previously conjectured by Schnider [33, Conj. 2.4], is a direct consequence of our Theorem 2.2 and Corollary 2.5. 
**Corollary 2.6**.: _Let \(d\geq 2\) and \(\ell\geq 1\) be integers where \(1\leq\ell\leq d\), and let \(W\) be an arbitrary \((\ell-1)\)-dimensional vector subspace of \(\mathbb{R}^{d}\)._ _If \(j\leq d-1\) then for any collection of continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E_{\ell}^{d})\longrightarrow\mathbb{R}\), there exist_ \(V\in G_{\ell}(\mathbb{R}^{d})\) _which contains_ \(W\)_, and_ \(U\in G_{\ell-1}(\mathbb{R}^{d})\) _which is contained in_ \(V\)_, such that for the connected components \(\mathcal{O}^{\prime}\) and \(\mathcal{O}^{\prime\prime}\) of the complement \(V-U\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(V)}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(V)}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S (V)}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(V)}\varphi_{j}.\] Proof.: Consider the vector bundle \(E=E(1)=H(W^{\perp})\oplus\underline{W}\) over \(\mathbb{P}(W^{\perp})\) where \(\underline{W}=\mathbb{P}(W^{\perp})\times W\) is the trivial vector bundle over \(\mathbb{P}(W^{\perp})\). According to Theorem 2.2 in the case \(k=1\) we have: if \(j\leq\iota_{1}(E)\), then for any \(j\) continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) there exists a line \(L\in\mathbb{P}(W^{\perp})\) and a linear hyperplane \(U\) in \(V:=L\oplus W\) such that for the connected components \(\mathcal{O}^{\prime}\) and \(\mathcal{O}^{\prime\prime}\) of the complement \(V-U\) the following equalities hold: \[\int_{\mathcal{O}^{\prime}\cap S(V)}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(V)}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S (V)}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(V)}\varphi_{j}.\] Since \(w(E)=w(H(W^{\perp})\oplus\underline{W})=w(H(W^{\perp}))\) and \(H(W^{\perp})\cong E_{1}^{d-\ell+1}\), Corollary 2.5 implies that \(\iota_{1}(E)=\iota_{1}(E_{1}^{d-\ell+1})=d-1\). 
This concludes the proof of the corollary. Further on, if \(E(1)\) is an \(n_{1}\) dimensional vector subbundle of the vector bundle \(E\) then \[\iota_{1}(E(1))\leq\iota_{1}(E).\] Indeed, if \(E(1)^{\perp}\) is the orthogonal complement vector bundle of \(E(1)\) in \(E\) then \[x_{1}^{n}+w_{1}(E)x_{1}^{n-1}+\cdots+w_{n}(E)=\\ \big{(}x_{1}^{n_{1}}+w_{1}(E(1))x_{1}^{n_{1}-1}+\cdots+w_{n_{1}}( E(1))\big{)}\\ \big{(}x_{1}^{n-n_{1}}+w_{1}(E(1)^{\perp})x_{1}^{n-n_{1}-1}+ \cdots+w_{n-n_{1}}(E(1)^{\perp})\big{)}.\] Consequently, \(x_{1}^{j}\notin\mathcal{I}_{1}(E(1))\) implies \(x_{1}^{j}\notin\mathcal{I}_{1}(E)\). Recall, that \[e_{k}(\mathrm{pt})=\prod_{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{k} -\{0\}}(\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k})\ \in\ R_{k}(\mathrm{pt})\cong\mathbb{F}_{2}[x_{1},\ldots,x_{k}].\] Now, for positive integers \(m_{1},\ldots,m_{k}\) we define \[\iota_{k}(m_{1},\ldots,m_{k}):=\max\big{\{}j:e_{k}(\mathrm{pt})^{j}\notin(x_{1 }^{m_{1}},\ldots,x_{k}^{m_{k}})\big{\}}.\] For example, if \(E=\underline{\mathbb{R}^{n}}\) is a trivial \(n\) dimensional real vector bundle over \(B=\mathrm{pt}\), then \[\iota_{k}(\underline{\mathbb{R}^{n_{1}}},\ldots,\underline{\mathbb{R}^{n_{k}} })=\iota_{k}(n_{1},\ldots,n_{k}).\] Notice that the equality holds for all integers \(n\geq\max\{n_{1},\ldots,n_{k}\}\). 
Indeed, since \(w(\underline{\mathbb{R}^{n_{1}}})=\cdots=w(\underline{\mathbb{R}^{n_{k}}})=1\), it follows that \[(x_{1}^{n_{1}},\ldots,x_{k}^{n_{k}})=\mathcal{I}_{k}(\underline{\mathbb{R}^{n_ {1}}},\ldots,\underline{\mathbb{R}^{n_{k}}}).\] In general, the following inequality always holds \[\iota_{k}(n_{1},\ldots,n_{k})\leq\iota_{k}(E(1),\ldots,E(k)).\] In fact, the condition \(e_{k}(\operatorname{pt})^{j}\notin(x_{1}^{n_{1}},\ldots,x_{k}^{n_{k}})\), for some integer \(j\), implies the existence of a monomial \(x_{1}^{m_{1}}\cdots x_{k}^{m_{k}}\), in the additive presentation of \(e_{k}(\operatorname{pt})^{j}\) with respect to the monomial base of \(\mathbb{F}_{2}[x_{1},\ldots,x_{k}]\), with the property that \(m_{1}\leq n_{1}-1,\ldots,m_{k}\leq n_{k}-1\). Since the ideal \(\mathcal{I}_{k}(E(1),\ldots,E(k))\) is generated by polynomials \(x_{1}^{n_{i}}+w_{1}(E(i))x_{1}^{n_{i}-1}+\cdots+w_{n_{i}}(E(i))\), \(1\leq i\leq k\), the existence of the monomial \(x_{1}^{m_{1}}\cdots x_{k}^{m_{k}}\) in the presentation of \(e_{k}(\operatorname{pt})^{j}\) implies that \(e_{k}(B)^{j}\notin\mathcal{I}_{k}(E(1),\ldots,E(k))\). Actually, we can say more, as the following proposition illustrates. For the proof see Section 5.2. **Proposition 2.7**.: _Let \(k\geq 1\) be an integer, and let \(E(1),\ldots,E(k)\) be Euclidean vector bundles over a compact and connected ENR \(B\). Denote by \(n_{i}\) the dimension of the vector bundle \(E(i)\) for \(1\leq i\leq k\). If_ \[0\neq w_{\iota_{1}(E(1))-n_{1}+1}(-E(1))\cdots w_{\iota_{1}(E(k))-n_{k}+1}(-E (k))\in H^{*}(B;\mathbb{F}_{2}),\] _then_ \[\iota_{k}(\iota_{1}(E(1))+1,\ldots,\iota_{1}(E(k))+1)=\iota_{k}(E(1),\ldots,E (k)).\] A direct consequence of the previous proposition, in the case when \(E\) is a tautological vector bundle, is the following corollary [6, Lem. 4.1]. For a proof see Section 5.3. 
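The quantity \(\iota_{k}(m_{1},\ldots,m_{k})\) is computable by brute force: as just described, a monomial lies in \((x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}})\) iff some exponent is at least \(m_{i}\), so \(e_{k}(\operatorname{pt})^{j}\) survives iff its \(\mathbb{F}_{2}\)-expansion contains a monomial with all exponents \(\leq m_{i}-1\). A small sketch (our code, not from the paper), which reproduces hand-checked small cases such as \(\iota_{1}(m)=m-1\) and, since \(e_{2}=x_{1}^{2}x_{2}+x_{1}x_{2}^{2}\), \(\iota_{2}(3,3)=1\):

```python
from itertools import product

def multiply(p, q):
    """Multiply two polynomials over F_2, stored as {exponent tuple: 1}."""
    out = {}
    for e1 in p:
        for e2 in q:
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) ^ 1        # coefficients add mod 2
    return {e: c for e, c in out.items() if c}

def e_k(k):
    """e_k(pt): product over nonzero alpha in F_2^k of alpha_1 x_1 + ... + alpha_k x_k."""
    poly = {(0,) * k: 1}
    for alpha in product((0, 1), repeat=k):
        if any(alpha):
            lin = {tuple(int(t == i) for t in range(k)): 1
                   for i, a in enumerate(alpha) if a}
            poly = multiply(poly, lin)
    return poly

def iota(ms):
    """iota_k(m_1,...,m_k): largest j with e_k(pt)^j not in (x_1^{m_1},...,x_k^{m_k}).

    A power survives iff it contains a monomial with all exponents < m_i;
    multiplication only grows exponents, so the first failing power is final.
    """
    k = len(ms)
    base = e_k(k)
    survives = lambda p: any(all(e < m for e, m in zip(expo, ms)) for expo in p)
    power, j = {(0,) * k: 1}, -1              # power = e^0 = 1
    while survives(power):
        j += 1
        power = multiply(power, base)
    return j
```

For instance \(\iota_{2}(5,5)=2\): \(e_{2}^{2}=x_{1}^{4}x_{2}^{2}+x_{1}^{2}x_{2}^{4}\) still survives modulo \((x_{1}^{5},x_{2}^{5})\), while every monomial of \(e_{2}^{3}\) has an exponent \(\geq 5\).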
**Corollary 2.8**.: _Let \(d\geq 2\), \(k\geq 1\), and \(\ell\geq 1\) be integers where \(1\leq k\leq\ell\leq d\), and let \(E_{\ell}^{d}\) be the tautological vector bundle over the Grassmann manifold \(\operatorname{G}_{\ell}(\mathbb{R}^{d})\) of all \(\ell\)-dimensional linear subspaces in \(\mathbb{R}^{d}\). Then_ \[\iota_{k}(d,\ldots,d)=\iota_{k}(E_{\ell}^{d},\ldots,E_{\ell}^{d}).\] The next corollary is a spherical version of [6, Thm. 1.4]. **Corollary 2.9**.: _Let \(d\geq 2\), \(k\geq 1\), and \(\ell\geq 1\) be integers where \(1\leq k\leq\ell\leq d\), and let \(E=E_{\ell}^{d}\) be the tautological vector bundle over the Grassmann manifold \(\operatorname{G}_{\ell}(\mathbb{R}^{d})\) of all \(\ell\)-dimensional linear subspaces in \(\mathbb{R}^{d}\). If \(j=2^{t}+r\) where \(0\leq r\leq 2^{t}-1\) and \(d\geq 2^{t+k-1}+r\), then \((j,k)\in\Delta_{S}(E_{\ell}^{d})\). In other words, if \(j=2^{t}+r\) where \(0\leq r\leq 2^{t}-1\) and \(d\geq 2^{t+k-1}+r\), then for every collection of \(j\) continuous real valued functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\), there exists a point \(b\in B\) and an arrangement \(\mathcal{H}^{b}=(H_{1}^{b},\ldots,H_{k}^{b})\) of \(k\) linear hyperplanes in the fibre \(E_{b}\) of \(E\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H_{1}^{b}\cup\cdots\cup H_{k}^{b})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] Proof.: From Theorem 2.1 we have that \((j,k)\in\Delta_{S}(E_{\ell}^{d})\) if \(e_{k}(B)^{j}\notin\mathcal{I}_{k}(E_{\ell}^{d})=\mathcal{I}_{k}(E_{\ell}^{d}, \ldots,E_{\ell}^{d})\). 
Stated differently \((j,k)\in\Delta_{S}(E_{\ell}^{d})\) if \[j\leq\iota_{k}(E_{\ell}^{d},\ldots,E_{\ell}^{d})=\iota_{k}(d,\ldots,d)=\max \big{\{}j^{\prime}:e_{k}(\operatorname{pt})^{j^{\prime}}\notin(x_{1}^{d}, \ldots,x_{k}^{d})\big{\}}.\] Here the first equality comes from Corollary 2.8 while the second one is just the definition of \(\iota_{k}(d,\ldots,d)\). Since \(j=2^{t}+r\) where \(0\leq r\leq 2^{t}-1\) and \(d\geq 2^{t+k-1}+r\), then according to [6, Lem. 4.2] we have that \(e_{k}(\operatorname{pt})^{j}\notin(x_{1}^{d},\ldots,x_{k}^{d})\). Thus, indeed \(j\leq\iota_{k}(E_{\ell}^{d},\ldots,E_{\ell}^{d})\) and the proof of the corollary is complete. We proceed with the next consequence of Proposition 2.7. In this case the base space of the vector bundle will be the real flag manifold, and so the following statement is an extension of Corollary 2.8. For the relevant background on the real flag manifold, associated canonical vector bundles, and a proof of the corollary see Section 6.1. **Corollary 2.10**.: _Let \(k\geq 1\) and \(d\geq 2\) be integers, and let \(0=n_{0}<n_{1}<\dots<n_{k-1}<n_{k}<n_{k+1}=d\) be a strictly increasing sequence of integers. For a real \(d\)-dimensional vector space \(V=\mathbb{R}^{d}\) let \(E_{1},\dots,E_{k+1}\) denote the canonical vector bundles over the flag manifold \(\operatorname{Flag}_{n_{1},\dots,n_{k}}(V)\), with \(\dim(E_{i})=n_{i}-n_{i-1}\) for \(1\leq i\leq k+1\). Set \(E(i):=\bigoplus_{1\leq r\leq i}E_{r}\) for all \(1\leq i\leq k\). Then_ \[\iota_{k}(d,\dots,d)=\iota_{k}(E(1),\dots,E(k)).\] The previous corollary, in the language of GHR problem for the mass assignments, with the help of Theorem 2.2 and the proof of Corollary 2.9, gives the following consequence. For a proof see Section 6.2. 
**Corollary 2.11**.: _Let \(k\geq 1\) and \(d\geq 2\) be integers, let \(0=n_{0}<n_{1}<\dots<n_{k-1}<n_{k}<n_{k+1}=d\) be a strictly increasing sequence of integers, and let \(V=\mathbb{R}^{d}\) be a real \(d\)-dimensional vector space. Let \(E_{1},\dots,E_{k+1}\) be canonical vector bundles over the flag manifold \(\operatorname{Flag}_{n_{1},\dots,n_{k}}(V)\), let \(E(i):=\bigoplus_{1\leq r\leq i}E_{r}\) for all \(1\leq i\leq k\), and let \(E:=E(k)\). Assume that \(j=2^{t}+r\) is an integer with \(0\leq r\leq 2^{t}-1\) and \(d\geq 2^{t+k-1}+r\). Then for every collection of \(j\) continuous real valued functions \(\varphi_{1},\dots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\), there exists a point \(b:=(W_{1},\dots,W_{k+1})\in\operatorname{Flag}_{n_{1},\dots,n_{k}}(V)\) and an arrangement \(\mathcal{H}^{b}=(H^{b}_{1},\dots,H^{b}_{k})\) of \(k\) linear hyperplanes in \(E_{b}=\bigoplus_{1\leq r\leq k}W_{r}=W^{\perp}_{k+1}\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H^{b}_{1}\cup\dots\cup H^{b}_{k})\) the following statements hold_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\dots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j},\] _and in addition_ \[H^{b}_{1}\supseteq\bigoplus_{2\leq r\leq k+1}W_{r},\ H^{b}_{2}\supseteq\bigoplus_{3\leq r\leq k+1}W_{r},\ \dots,\ H^{b}_{k}\supseteq W_{k+1}.\] Here \((W_{1},\dots,W_{k+1})\in\operatorname{Flag}_{n_{1},\dots,n_{k}}(V)\) means that \(\dim W_{i}=n_{i}-n_{i-1}\) for \(1\leq i\leq k+1\), and \(W_{i^{\prime}}\perp W_{i^{\prime\prime}}\) for all \(1\leq i^{\prime}<i^{\prime\prime}\leq k+1\). For more details on flag manifolds see Section 6.
We conclude our collection of results related to flags inside a real vector space with the spherical version of a result by Axelrod-Freed and Soberon [2, Thm. 1.2]. For the so-called Fairy Bread Sandwich theorem we give a new proof in Section 6.3 based on the CS / TM scheme presented in Section 3.5. **Theorem 2.12**.: _Let \(d\geq 1\) and \(k\geq 1\) be integers with \(d\geq k\), and let \(V=\mathbb{R}^{d+1}\) be a real vector space. Fix a permutation \((j_{k},\dots,j_{d})\) of the set \(\{k,\dots,d\}\), and take an arbitrary collection of functions \(\varphi_{a,b}\colon S(E^{d+1}_{a+1})\longrightarrow\mathbb{R}\), \(k\leq a\leq d\), \(1\leq b\leq j_{a}\), from the sphere bundle of the tautological vector bundle \(E^{d+1}_{a+1}\) over the Grassmann manifold \(G_{a+1}(V)\) to the real numbers. There exists a flag \((V_{k},\dots,V_{d})\in\operatorname{Flag}_{k,\dots,d}(V)\) such that for every \(k\leq a\leq d\) and every \(1\leq b\leq j_{a}\) the following statement holds_ \[\int_{\{v\in V_{a+1}:\langle v,u_{a}\rangle\geq 0\}\cap S(V_{a+1})}\varphi_{a,b}=\int_{\{v\in V_{a+1}:\langle v,u_{a}\rangle\leq 0\}\cap S(V_{a+1})}\varphi_{a,b}.\] _Here the unit vectors \(u_{k},\dots,u_{d}\) are determined, up to a sign, by the equality \(V_{r}=\{v\in V_{r+1}:\langle v,u_{r}\rangle=0\}\), \(k\leq r\leq d\), and with \(V_{d+1}=V\); this means that \(u_{r}\) is a unit normal vector to \(V_{r}\), considered as a hyperplane inside \(V_{r+1}\)._ Returning back to Proposition 2.7 we observe that the numbers \(\iota_{k}(m_{1},\ldots,m_{k})\), in many cases, decide the existence of equipartitions of mass assignments. Hence, we collect several properties of these numbers with proofs given in Section 7.1. **Proposition 2.13**.: _Let \(k\geq 1\) be an integer and let \(m_{1},\ldots,m_{k}\) be a sequence of positive integers._ 1. _If_ \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq m\) _and_ \(m_{k}\geq 2^{k-1}m+1\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\)_._ 2.
_If_ \(m_{i}\geq 2^{i-1}m+1\) _for all_ \(1\leq i\leq k\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\)_._ 3. _If_ \(m\geq 1\)_, then_ \(\iota_{k}(m+1,2m+1,2^{2}m+1\ldots,2^{k-1}m+1)=m\)_._ 4. _Let_ \(m\geq 1\) _and_ \(1\leq r\leq k-1\) _be integers. If_ \(\iota_{k-r}(m_{1},\ldots,m_{k-r})\geq m\) _and_ \(\iota_{r}(m_{k-r+1},\ldots,m_{k})\geq 2^{k-r}m\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\)_._ 5. _If_ \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2m\) _and_ \(m_{k}\geq m+1\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\)_._ 6. _Let_ \(k=2\)_._ \(m\leq\iota_{2}(m_{1},m_{2})\) _if and only if there is an integer_ \(i\) _such that_ \(0\leq i\leq m\)_,_ \(\binom{m}{i}=1\mod 2\)_, and_ \(2m-m_{2}+1\leq i\leq m_{1}-m-1\)_._ 7. _If_ \(1\leq r\leq 2^{t}\)_, then_ \(\iota_{2}(2^{t}+2r,2^{t+1}+r)\geq 2^{t}+r-1\)_._ Using the fact that \(e_{k}(\operatorname{pt})\) is the top Dickson polynomial in variables \(x_{1},\ldots,x_{k}\) we can prove even more. For a proof of the proposition which follows see Section 7.2. **Proposition 2.14**.: _Let \(k\geq 1\) be an integer and let \(m_{1},\ldots,m_{k}\) be positive integers._ 1. _If_ \(0\leq r\leq 2^{t}-1\)_,_ \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2^{t}+2r\) _and_ \(m_{k}\geq 2^{t+k-1}+r+1\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq 2^{t}+r\)_._ 2. _If_ \(0\leq r\leq 2^{t}-1\)_,_ \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2^{t+1}+r\) _and_ \(m_{k}\geq 2^{t+k-1}+r+1\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq 2^{t}+r\)_._ 3. _If_ \(0\leq r\leq 2^{t}-1\)_,_ \(m_{i}\geq 2^{t+k-1}+r+1\) _for all_ \(1\leq i\leq k\)_, then_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq 2^{t}+r\)_._ 4. _If_ \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\)_, then_ \(\iota_{k}(2m_{1},\ldots,2m_{k})\geq 2m\)_._ The statement (3) in the previous proposition is equivalent to [6, Lem. 4.2]. We continue with results on partitions by orthogonal arrangements -- the orthogonal GHR problem for mass assignments. 
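Many of the numerical statements in Propositions 2.13 and 2.14 lend themselves to direct machine verification. As an illustration (a small Python sketch, with helper names of our choosing), the following code computes \(\iota_{k}(n_{1},\ldots,n_{k})\) straight from the definition, using the fact that a polynomial over \(\mathbb{F}_{2}\) lies in the monomial ideal \((x_{1}^{n_{1}},\ldots,x_{k}^{n_{k}})\) exactly when each of its monomials is divisible by some \(x_{i}^{n_{i}}\); reduction therefore amounts to discarding such monomials.

```python
from itertools import product

def f2_mul(p, q):
    """Multiply two polynomials over F_2, each stored as a set of exponent tuples."""
    out = set()
    for a in p:
        for b in q:
            out ^= {tuple(x + y for x, y in zip(a, b))}  # XOR: coefficients live in F_2
    return out

def e_pt(k):
    """e_k(pt): the product of all nonzero linear forms in x_1, ..., x_k over F_2."""
    poly = {(0,) * k}  # the constant 1
    for alpha in product([0, 1], repeat=k):
        if any(alpha):
            lin = {tuple(int(i == j) for j in range(k)) for i in range(k) if alpha[i]}
            poly = f2_mul(poly, lin)
    return poly

def iota(ns):
    """iota_k(n_1,...,n_k) = max j with e_k(pt)^j not in (x_1^{n_1}, ..., x_k^{n_k})."""
    k, e = len(ns), e_pt(len(ns))
    def normal_form(p):  # drop every monomial divisible by some x_i^{n_i}
        return {m for m in p if all(m[i] < ns[i] for i in range(k))}
    j, p = 0, normal_form(e)
    while p:  # p is the (nonzero) normal form of e_k(pt)^{j+1}
        j += 1
        p = normal_form(f2_mul(p, e))
    return j
```

For example, `iota([3, 5])` returns \(2\) and `iota([2, 3, 5])` returns \(1\), matching statement (3) of Proposition 2.13 for \(m=2\) and \(m=1\), while `iota([4, 5])` returns \(2\), illustrating statement (7) with \(t=r=1\).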
Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), and let \(k\geq 1\) be an integer. Recall that we denoted by \(R_{k}(B)\) the cohomology ring \(H^{*}(B;\mathbb{F}_{2})[x_{1},\ldots,x_{k}]\), and by \(e_{k}(B)\) the cohomology class \(\prod_{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{k}-\{0\}}(\alpha_{1}x_ {1}+\cdots+\alpha_{k}x_{k})\). We consider the following ideals in \(R_{k}(B)\) \[\mathcal{J}_{k}(E):=(f_{1},\ldots,f_{k})\qquad\text{and}\qquad\mathcal{J}_{k} ^{\prime}(E):=(\overline{f}_{1},\ldots,\overline{f}_{k})\] where \[f_{i}:=\sum_{0\leq r_{1}+\cdots+r_{i}\leq n-i+1}w_{n-i+1-(r_{1}+\cdots+r_{i})} (E)\,x_{1}^{r_{1}}\cdots x_{i}^{r_{i}},\] and \[\overline{f}_{i}:=\sum_{0\leq r_{1}+\cdots+r_{k}\leq n-i+1}w_{n-i+1-(r_{1}+ \cdots+r_{k})}(E)\,x_{1}^{r_{1}}\cdots x_{k}^{r_{k}},\] for \(1\leq i\leq k\). The first result on orthogonal partitions is an analogue of Theorem 2.1 and Theorem 2.2. For the proof see Section 8. **Theorem 2.15**.: _Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), and let \(k\geq 1\) and \(j\geq 1\) be integers. Then the following statements are true:_ 1. \(\mathcal{J}_{k}(E)=\mathcal{J}_{k}^{\prime}(E)\)_._ 2. 
_If the element_ \(e_{k}(B)^{j}\) _does not belong to the ideal_ \(\mathcal{J}_{k}(E)=\mathcal{J}_{k}^{\prime}(E)\)_, then for every collection of_ \(j\) _continuous real valued functions_ \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\)_, there exists a point_ \(b\in B\) _and an orthogonal arrangement_ \(\mathcal{H}^{b}=(H_{1}^{b},\ldots,H_{k}^{b})\) _of_ \(k\) _linear hyperplanes in the fibre_ \(E_{b}\) _of_ \(E\) _such that for every pair of connected components_ \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) _of the arrangement complement_ \(E_{b}-(H_{1}^{b}\cup\cdots\cup H_{k}^{b})\) _the following equalities hold_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j}.\] In the case when \(B=\operatorname{pt}\) the previous theorem directly implies the result of Blagojevic and Karasev [9, Thm. 2.1 and Prop. 3.4], with a better description of the set of generators of the relevant ideal. **Corollary 2.16**.: _Let \(V\) be a Euclidean vector space of dimension \(n\), and let \(k\geq 1\) and \(j\geq 1\) be integers.
If_ \[e_{k}(\operatorname{pt}):=\prod_{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{k}-\{0\}}(\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k})\not\in\\ \Big{(}\sum_{r_{1}+\cdots+r_{i}=n-i+1}x_{1}^{r_{1}}\cdots x_{i}^{r_{i}}\ :\ 1\leq i\leq k\Big{)}=\\ \Big{(}\sum_{r_{1}+\cdots+r_{k}=n-i+1}x_{1}^{r_{1}}\cdots x_{k}^{r_{k}}\ :\ 1\leq i\leq k\Big{)},\] _then for every collection of \(j\) continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(V)\longrightarrow\mathbb{R}\), there exists an orthogonal arrangement \(\mathcal{H}=(H_{1},\ldots,H_{k})\) of \(k\) linear hyperplanes in \(V\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(V-(H_{1}\cup\cdots\cup H_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(V)}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(V)}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S(V)}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(V)}\varphi_{j}.\] In the case of a vector space we collect some numerical results. For that we denote by \[\omega_{k}(n):=\max\Big{\{}j:e_{k}(\operatorname{pt})^{j}\notin\Big{(}\sum_{r_{1}+\cdots+r_{k}=n-i+1}x_{1}^{r_{1}}\cdots x_{k}^{r_{k}}\ :\ 1\leq i\leq k\Big{)}\Big{\}}.\] Using a computer algebra system, like Wolfram Mathematica, we collected concrete values of \(\omega_{k}(n)\) for \(2\leq k\leq 4\) and \(3\leq n\leq 10\).

## 3. From a partition problem to a topological question: The CS / TM schemes

In this section, based on the work of Crabb [15], we develop an alternative configuration test map scheme (CS / TM) to the one presented in [6, Sec. 2].
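(As an aside, closing the discussion of the numbers \(\omega_{k}(n)\): since membership in this ideal is no longer a monomial condition, one Gröbner-basis computation per pair \((k,n)\) suffices. The sketch below is a minimal illustration in Python with SymPy, substituting for Mathematica, with helper names of our choosing; it tests membership of the powers \(e_{k}(\operatorname{pt})^{j}\) in the ideal of Corollary 2.16 over \(\mathbb{F}_{2}\).)

```python
from itertools import product
from math import prod
from sympy import symbols, groebner, Poly

def monomials(deg, nvars):
    """Exponent tuples of all monomials of total degree deg in nvars variables."""
    if nvars == 1:
        yield (deg,)
        return
    for r in range(deg + 1):
        for rest in monomials(deg - r, nvars - 1):
            yield (r,) + rest

def omega(k, n):
    """omega_k(n): largest j with e_k(pt)^j outside the ideal of Corollary 2.16."""
    xs = symbols(f"x1:{k + 1}")
    # i-th generator: the sum of all monomials of degree n - i + 1 in x_1, ..., x_i
    gens = [sum(prod(x ** r for x, r in zip(xs[:i], e))
                for e in monomials(n - i + 1, i))
            for i in range(1, k + 1)]
    G = groebner(gens, *xs, modulus=2, order="grevlex")
    # e_k(pt): the product of all nonzero F_2-linear forms, expanded mod 2
    e = Poly(prod(sum(a * x for a, x in zip(alpha, xs))
                  for alpha in product([0, 1], repeat=k) if any(alpha)),
             *xs, modulus=2)
    j, p = 0, e
    while not G.contains(p.as_expr()):  # ideal membership via the Groebner basis
        j += 1
        p = p * e
    return j
```

For \(k=1\) the ideal reduces to \((x_{1}^{n})\), so `omega(1, n)` returns \(n-1\); small values such as `omega(2, 3)` and `omega(2, 4)` can be checked against quotient-ring computations done by hand.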
This will be done in two steps, first for the classical GHR mass partition problem, and then for the mass assignment partition problem. The new approach allows for a systematic study of mass assignment partition questions, even with additional constraints.

### The GHR problem for masses

In this part we reformulate the typical product \(\operatorname{CS}/\operatorname{TM}\) scheme for the classical GHR problem. The reformulation of the scheme naturally gives rise to a convenient \(\operatorname{CS}/\operatorname{TM}\) scheme for the GHR problem for mass assignments. Let \(\operatorname{V}\) be a Euclidean vector space of dimension \(d\geq 1\). The unit sphere of the vector space \(\operatorname{V}\) will be denoted by \(S(\operatorname{V}):=\{v\in\operatorname{V}:\|v\|=1\}\) and the corresponding real projective space by \(\mathbb{P}(\operatorname{V})\). The associated Hopf line bundle is \(H(V):=\{(L,v)\in\mathbb{P}(\operatorname{V})\times\operatorname{V}:v\in L\}\). In particular, \(S(\operatorname{V})\cong S^{d-1}\) is the space of all oriented \(1\)-dimensional vector subspaces of \(V\) and \(\mathbb{P}(\operatorname{V})\cong\mathbb{R}\mathbb{P}^{d-1}\) is the space of all \(1\)-dimensional vector subspaces of \(V\). The canonical homeomorphism \(\mathbb{P}(\operatorname{V})=\operatorname{G}_{1}(\operatorname{V})\cong\operatorname{G}_{d-1}(\operatorname{V})\), \(L\longmapsto L^{\perp}\), identifies the projective space \(\mathbb{P}(\operatorname{V})\) with the space of all linear hyperplanes in \(V\), the Grassmann manifold \(\operatorname{G}_{d-1}(\operatorname{V})\). The space of all arrangements of \(k\) linear hyperplanes in \(\operatorname{V}\) can be identified with the product space \(\mathbb{P}(\operatorname{V})^{\times k}=\mathbb{P}(\operatorname{V})\times\cdots\times\mathbb{P}(\operatorname{V})\).
On the other hand, the space of all arrangements of \(k\) oriented linear hyperplanes in \(\operatorname{V}\) is the \(2^{k}\)-fold covering \(S(H(\operatorname{V}))^{\times k}=S(H(\operatorname{V}))\times\cdots\times S(H (\operatorname{V}))\) of \(\mathbb{P}(\operatorname{V})^{\times k}\), whose total space, in particular, is just the product of spheres \(S(\operatorname{V})^{\times k}=S(\operatorname{V})\times\cdots\times S( \operatorname{V})\). In other words, we have a fibre bundle \(S(H(\operatorname{V}))^{\times k}\longrightarrow\mathbb{P}(\operatorname{V})^ {\times k}\) with a discrete fibre \[\bigl{(}S(H(\operatorname{V}))^{\times k}\bigr{)}_{(L_{1},\ldots,L_{k})}=S(L_ {1})\times\cdots\times S(L_{k})\] at \((L_{1},\ldots,L_{k})\in\mathbb{P}(\operatorname{V})^{\times k}\). Here, \(S(H(\operatorname{V}))\) denotes the sphere bundle of the Hopf line bundle \(H(\operatorname{V})\) with fibres homeomorphic to a zero dimensional sphere. We denote by \(A_{k}(\operatorname{V})\) the \(2^{k}\)-dimensional real vector bundle over \(\mathbb{P}(\operatorname{V})^{\times k}\) with fibre at \((L_{1},\ldots,L_{k})\in\mathbb{P}(\operatorname{V})^{\times k}\) defined to be the vector space \(\operatorname{Map}\big{(}\prod_{i=1}^{k}S(L_{i}),\mathbb{R}\big{)}\) of all maps \(\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\). Each vector space \(\operatorname{Map}\big{(}\prod_{i=1}^{k}S(L_{i}),\mathbb{R}\big{)}\) is equipped with the natural \((\mathbb{Z}/2)^{k}\)-action given by the antipodal actions on the \(0\)-dimensional spheres \(S(L_{1}),\ldots,S(L_{k})\). 
The vector bundle \(A_{k}(\operatorname{V})\) is isomorphic to the vector bundle \[q_{1}^{*}\bigl{(}H(\operatorname{V})\oplus\underline{\mathbb{R}}\bigr{)}\otimes\cdots\otimes q_{k}^{*}\bigl{(}H(\operatorname{V})\oplus\underline{\mathbb{R}}\bigr{)},\] where \(q_{i}\colon\mathbb{P}(\operatorname{V})^{\times k}\longrightarrow\mathbb{P}(\operatorname{V})\) is the projection on the \(i\)-th factor, \(\underline{\mathbb{R}}\) denotes the trivial line bundle, in this case, over \(\mathbb{P}(\operatorname{V})\), and \(q_{i}^{*}\bigl{(}H(\operatorname{V})\oplus\underline{\mathbb{R}}\bigr{)}\) is the pullback vector bundle. In particular, the vector bundle \[A_{k}(\operatorname{V})\cong q_{1}^{*}\bigl{(}H(\operatorname{V})\oplus\underline{\mathbb{R}}\bigr{)}\otimes\cdots\otimes q_{k}^{*}\bigl{(}H(\operatorname{V})\oplus\underline{\mathbb{R}}\bigr{)}\] has a trivial line subbundle given by all constant maps \(\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\), which we also denote by \(\underline{\mathbb{R}}\). Next, let us consider a continuous function \(\varphi\colon S(\operatorname{V})\longrightarrow\mathbb{R}\) on the sphere \(S(\operatorname{V})\). It induces a section \(s_{\varphi}\colon\mathbb{P}(\operatorname{V})^{\times k}\longrightarrow A_{k}(\operatorname{V})\) of the vector bundle \(A_{k}(\operatorname{V})\) which is given by \[(L_{1},\ldots,L_{k})\longmapsto\bigl{(}s_{\varphi}(L_{1},\ldots,L_{k})\colon\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\bigr{)}\] for \((L_{1},\ldots,L_{k})\in\mathbb{P}(\operatorname{V})^{\times k}\), where \[s_{\varphi}(L_{1},\ldots,L_{k})(v_{1},\ldots,v_{k}):=\int_{\mathcal{O}_{v_{1},\ldots,v_{k}}\cap S(\operatorname{V})}\varphi\] for \((v_{1},\ldots,v_{k})\in\prod_{i=1}^{k}S(L_{i})\).
Here, \(\mathcal{O}_{v_{1},\ldots,v_{k}}\) denotes the following intersection of open half-spaces in \(\operatorname{V}\): \[\mathcal{O}_{v_{1},\ldots,v_{k}}:=\{u\in\operatorname{V}\,:\,\langle u,v_{1}\rangle>0\}\cap\cdots\cap\{u\in\operatorname{V}\,:\,\langle u,v_{k}\rangle>0\}.\] The integration is with respect to the measure on the sphere \(S(\mathrm{V})\) induced by the metric. Observe that each subset \(\mathcal{O}_{v_{1},\ldots,v_{k}}\) is actually a (path) connected component of the arrangement complement \(\mathrm{V}-(L_{1}^{\perp}\cup\cdots\cup L_{k}^{\perp})\). We have introduced all necessary notions to state and prove the \(\mathrm{CS}\) / \(\mathrm{TM}\) scheme theorem for the spherical version of the classical GHR problem. This theorem relates to similar results in [25, Prop. 6], [10, Prop. 2.2], [8, Prop. 2.1]. **Theorem 3.1**.: _Let \(\mathrm{V}\) be a Euclidean vector space, and let \(k\geq 1\) and \(j\geq 1\) be integers. If the Euler class of the vector bundle \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish, then for every collection of \(j\) continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(\mathrm{V})\longrightarrow\mathbb{R}\) there exists an arrangement of \(k\) linear hyperplanes \(H_{1},\ldots,H_{k}\) in \(\mathrm{V}\) with the property that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(\mathrm{V}-(H_{1}\cup\cdots\cup H_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(\mathrm{V})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(\mathrm{V})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S(\mathrm{V})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(\mathrm{V})}\varphi_{j}.\] _In other words,_ \[\mathrm{e}\left(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\right)\neq 0\quad\Longrightarrow\quad\Delta_{S}(j,k)\leq\dim(\mathrm{V}).\]
Proof.: Let us assume that the Euler class of the vector bundle \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish. Then, in particular, every section of the vector bundle \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) has a zero. Let \(\varphi_{1},\ldots,\varphi_{j}\colon S(\mathrm{V})\longrightarrow\mathbb{R}\) be an arbitrary collection of \(j\) continuous functions on the sphere \(S(\mathrm{V})\). Such a collection induces a section \(s\colon\mathbb{P}(\mathrm{V})^{\times k}\longrightarrow A_{k}(\mathrm{V})^{\oplus j}\) of the vector bundle \(A_{k}(\mathrm{V})^{\oplus j}\) defined by \[(L_{1},\ldots,L_{k})\longmapsto\big{(}s_{\varphi_{r}}(L_{1},\ldots,L_{k})\colon\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\big{)}_{1\leq r\leq j}\;.\] Recall that we have already defined the functions \(s_{\varphi_{r}}\), for \(1\leq r\leq j\), by \[s_{\varphi_{r}}(L_{1},\ldots,L_{k})(v_{1},\ldots,v_{k})=\int_{\mathcal{O}_{v_{1},\ldots,v_{k}}\cap S(\mathrm{V})}\varphi_{r}\] for \((v_{1},\ldots,v_{k})\in\prod_{i=1}^{k}S(L_{i})\). Let \(\Pi\colon A_{k}(\mathrm{V})^{\oplus j}\longrightarrow\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) denote the map of vector bundles induced by the canonical projection(s). Then the section \(\Pi\circ s\) of the vector bundle \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) has a zero. Hence, there is a point \((L_{1},\ldots,L_{k})\in\mathbb{P}(\mathrm{V})^{\times k}\) in the base space with the property that \(s(L_{1},\ldots,L_{k})\) belongs to the trivial subbundle \(\underline{\mathbb{R}}^{\oplus j}\) of the bundle \(A_{k}(\mathrm{V})^{\oplus j}\).
In other words, \[\int_{\mathcal{O}^{\prime}\cap S(\mathrm{V})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(\mathrm{V})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S(\mathrm{V})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(\mathrm{V})}\varphi_{j}\] for all pairs of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(\mathrm{V}-(L_{1}^{\perp}\cup\cdots\cup L_{k}^{\perp})\). This completes the proof of the theorem. The non-vanishing of the mod \(2\) Euler class of the vector bundle \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) was studied over the years by many authors. For example, Mani-Levitska, Vrecica & Zivaljevic [25, Thm. 39] gave a sufficient condition for the non-vanishing of the mod \(2\) Euler class of \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\), with a complete proof of this result given only now in [6, Lem. 4.3]. It says that: _If \(\dim(\mathrm{V})\leq j+(2^{k-1}-1)2^{\lfloor\log_{2}j\rfloor}\), then the top Stiefel-Whitney class of the vector bundle \(\big{(}A_{k}(\mathrm{V})/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish._ Now we focus our attention on the partition problems of mass assignments and the corresponding solution schemes.

### The GHR problem for mass assignments

The scheme we give in this section is derived from the scheme for the spherical version of the classical problem presented in Section 3.1. Due to a transition from a Euclidean space to a sphere, the new scheme differs from the one used in [6, Sec. 2]. Let \(E\) be a Euclidean vector bundle over a compact and connected ENR base space \(B\). The associated unit sphere bundle of \(E\) is \[S(E)=\{(b,v):b\in B,\,v\in S(E_{b})\}.\] Next, let \(\mathbb{P}(E)\) denote the projective bundle of \(E\), that is \[\mathbb{P}(E)=\{(b,L):b\in B,\,L\in\mathbb{P}(E_{b})\}.\] In particular, \(S(E)/(\mathbb{Z}/2)\cong\mathbb{P}(E)\).
Here the fibrewise antipodal action on the sphere bundle is assumed. Further on, let \(H(E)\) be the Hopf bundle associated to the vector bundle \(E\). That is the line bundle \[H(E):=\{(b,L,v):b\in B,\,L\in\mathbb{P}(E_{b}),\,v\in L\}\] over the projective bundle \(\mathbb{P}(E)\). The space of all arrangements of \(k\) linear hyperplanes which belong to one fibre of \(E\) is the total space of the pullback \[\mathbb{P}(E)\times_{B}\cdots\times_{B}\mathbb{P}(E):=d^{*}(\mathbb{P}(E)\times\cdots\times\mathbb{P}(E))=d^{*}(\mathbb{P}(E)^{\times k})\] of the product bundle \(\mathbb{P}(E)^{\times k}\) via the diagonal embedding \(d\colon B\longrightarrow B^{\times k}\), \(x\longmapsto(x,\ldots,x)\). In other words, there is a pullback diagram in which \(D\colon d^{*}(\mathbb{P}(E)^{\times k})\longrightarrow\mathbb{P}(E)^{\times k}\) denotes the pullback map covering the diagonal embedding \(d\). Let us denote by \(\Pi_{i}\colon\mathbb{P}(E)^{\times k}\longrightarrow\mathbb{P}(E)\), \((b,(L_{1},\ldots,L_{k}))\longmapsto(b,L_{i})\), the projection on the \(i\)-th factor, and by \(\Theta_{i}\) the composition \(\Pi_{i}\circ D\colon d^{*}(\mathbb{P}(E)^{\times k})\longrightarrow\mathbb{P}(E)\), where \(1\leq i\leq k\). Now, the space of all arrangements of \(k\) oriented linear hyperplanes which belong to one fibre of \(E\) is the total space of the pullback \[S(E)\times_{B}\cdots\times_{B}S(E):=d^{*}(S(E)\times\cdots\times S(E))=d^{*}(S(E)^{\times k}).\] The quotient map \(d^{*}(S(E)^{\times k})\longrightarrow d^{*}(\mathbb{P}(E)^{\times k})\), induced by taking orbits of the natural fibrewise free action of \((\mathbb{Z}/2)^{k}\) on \(d^{*}(S(E)^{\times k})\), is a \(2^{k}\)-fold cover map with a fibre \(S(L_{1})\times\cdots\times S(L_{k})\) at \((L_{1},\ldots,L_{k})\in\mathbb{P}(E_{b})^{\times k}\) for some \(b\in\mathrm{B}\). Recall that each sphere \(S(L_{1}),\ldots,S(L_{k})\) is just a \(0\)-dimensional sphere.
Like in the classical case, the covering \(d^{*}(S(E)^{\times k})\longrightarrow d^{*}(\mathbb{P}(E)^{\times k})\) induces a \(2^{k}\)-dimensional real vector bundle \(A_{k}(E)\) over \(d^{*}(\mathbb{P}(E)^{\times k})\) with fibre at \((L_{1},\ldots,L_{k})\in\mathbb{P}(E_{b})^{k}\), for some \(b\in B\), defined to be the vector space \(\operatorname{Map}\big{(}\prod_{i=1}^{k}S(L_{i}),\mathbb{R}\big{)}\) of all real valued functions on \(\prod_{i=1}^{k}S(L_{i})\). Each fibre is equipped with the natural \((\mathbb{Z}/2)^{k}\)-action given by antipodal actions on the \(0\)-dimensional spheres. There is an isomorphism of vector bundles \[A_{k}(E)\cong\Theta_{1}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)}\otimes\cdots\otimes\Theta_{k}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)},\] where \(\underline{\mathbb{R}}\) denotes the trivial line bundle over \(\mathbb{P}(E)\), and \(\Theta_{i}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)}\) is the pullback vector bundle. In particular, the vector bundle \(A_{k}(E)\) has a trivial line subbundle determined by all constant maps \(\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\). Let us now consider a continuous function \(\varphi\colon S(E)\longrightarrow\mathbb{R}\). Such a map induces a section \(s_{\varphi}\colon d^{*}(\mathbb{P}(E)^{\times k})\longrightarrow A_{k}(E)\) of the vector bundle \(A_{k}(E)\) by \[(b,(L_{1},\dots,L_{k}))\longmapsto\big{(}s_{\varphi}(b,(L_{1},\dots,L_{k}))\colon\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\big{)}\] for \(b\in B\) and \((L_{1},\dots,L_{k})\in\mathbb{P}(E_{b})^{\times k}\), where \[s_{\varphi}(b,(L_{1},\dots,L_{k}))(v_{1},\dots,v_{k}):=\int_{\mathcal{O}_{b,v_{1},\dots,v_{k}}\cap S(E_{b})}\varphi\] for \((v_{1},\dots,v_{k})\in\prod_{i=1}^{k}S(L_{i})\).
Here, \(\mathcal{O}_{b,v_{1},\dots,v_{k}}\) denotes the subset of \(E_{b}\) defined by \[\mathcal{O}_{b,v_{1},\dots,v_{k}}:=\{u\in E_{b}\,:\,\langle u,v_{1}\rangle>0\}\cap\dots\cap\{u\in E_{b}\,:\,\langle u,v_{k}\rangle>0\}.\] Once again, the integration is assumed to be with respect to the measure on the sphere \(S(E_{b})\) induced by the metric on \(E_{b}\). Now we can state the CS / TM scheme theorem for the GHR problem for mass assignments, which is analogous to Theorem 3.1. **Theorem 3.2**.: _Let \(E\) be a Euclidean vector bundle over a compact and connected ENR base space \(B\), and let \(k\geq 1\) and \(j\geq 1\) be integers._ _If the Euler class of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish, then for every collection of \(j\) continuous functions \(\varphi_{1},\dots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) there exists a point \(b\in B\) and an arrangement of \(k\) linear hyperplanes \(H_{1},\dots,H_{k}\) in the fibre \(E_{b}\) with the property that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H_{1}\cup\dots\cup H_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\dots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j}.\] _In other words,_ \[\operatorname{e}\big{(}\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}\neq 0\quad\Longrightarrow\quad(j,k)\in\Delta_{S}(E).\] Proof.: Our proof follows in the footsteps of the proof of Theorem 3.1. Assume that the Euler class of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish. Consequently, every section of \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) has a zero.
Consider a collection \(\varphi_{1},\dots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) of continuous functions on the sphere bundle \(S(E)\), and the associated section \(s=(s_{\varphi_{1}},\dots,s_{\varphi_{j}})\) of the vector bundle \(A_{k}(E)^{\oplus j}\). Denote by \(\Pi\colon A_{k}(E)^{\oplus j}\longrightarrow\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) the canonical projection. Then, from the assumption on the Euler class, the section \(\Pi\circ s\) of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) has a zero. In other words, there exists a point \((b,(L_{1},\dots,L_{k}))\in d^{*}(\mathbb{P}(E)^{\times k})\) with the property that \(s(b,(L_{1},\dots,L_{k}))\) is contained in the trivial vector subbundle \(\underline{\mathbb{R}}^{\oplus j}\) of the vector bundle \(A_{k}(E)^{\oplus j}\). This means that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(L_{1}^{\perp}\cup\dots\cup L_{k}^{\perp})\) the following equalities hold \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\dots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j}.\] Hence, we have proved the theorem.

### The GHR problem for mass assignments plus constraints

In this section we extend the CS / TM schemes presented in Section 3.2 to incorporate an additional constraint. More precisely, we require the normals of the hyperplanes to belong to specific, not necessarily equal, vector subbundles. Fix an integer \(k\geq 1\). Let \(E\) be an \(n\)-dimensional Euclidean vector bundle over a compact and connected ENR \(B\), and let \(E(i)\) be a vector subbundle of \(E\), for \(1\leq i\leq k\).
Following the notation from Section 3.2 we denote by \(\mathbb{P}(E(i))\) the projective bundle of \(E(i)\), that is \[\mathbb{P}(E(i))=\{(b,L):b\in B,\,L\in\mathbb{P}(E(i)_{b})\}.\] In particular, \(S(E(i))/(\mathbb{Z}/2)\cong\mathbb{P}(E(i))\). Furthermore, let \(H(E(i))\) be the Hopf bundle associated to the vector bundle \(E(i)\), or in other words \[H(E(i)):=\{(b,L,v):b\in B,\,L\in\mathbb{P}(E(i)_{b}),\,v\in L\}.\] The space of all arrangements of \(k\) linear hyperplanes which belong to one fibre of \(E\) and are determined by the collection of vector subbundles \(E(1),\ldots,E(k)\) can be seen as the total space of the pullback bundle \[\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k)):=d^{*}\big{(}\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\big{)}\] via the diagonal embedding \(d\colon B\longrightarrow B^{\times k}\). We denote by \[D\colon d^{*}\big{(}\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\big{)}\longrightarrow\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\] the pullback map between the bundles. Furthermore, let \[\Pi_{i}\colon\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\longrightarrow\mathbb{P}(E(i))\] be the projection on the \(i\)-th factor, \((b,(L_{1},\ldots,L_{k}))\longmapsto(b,L_{i})\), and let \(\Theta_{i}:=\Pi_{i}\circ D\).
The space of all arrangements of \(k\) oriented linear hyperplanes which belong to one fibre of \(E\) and are given by the collection of vector subbundles \(E(1),\ldots,E(k)\) is the total space of the pullback \[S(E(1))\times_{B}\cdots\times_{B}S(E(k)):=d^{*}\big{(}S(E(1)) \times\cdots\times S(E(k))\big{)}.\] The quotient map \[d^{*}\big{(}S(E(1))\times\cdots\times S(E(k))\big{)}\longrightarrow d^{*} \big{(}\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\big{)},\] induced by taking orbits of the natural fibrewise free action of the group \((\mathbb{Z}/2)^{k}\), is a \(2^{k}\)-fold cover map with a typical fibre \(S(L_{1})\times\cdots\times S(L_{k})\) where \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k) _{b})\) for some \(b\in\mathrm{B}\). This covering induces a \(2^{k}\)-dimensional real vector bundle \(A_{k}(E(1),\ldots,E(k))\) over \(d^{*}\big{(}\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\big{)}\) with fibre at \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k) _{b})\), for some \(b\in B\), defined to be the vector space \(\mathrm{Map}\,\big{(}\prod_{i=1}^{k}S(L_{i}),\mathbb{R}\big{)}\) of all real valued functions on \(\prod_{i=1}^{k}S(L_{i})\). There is an isomorphism of vector bundles \[A_{k}(E(1),\ldots,E(k))\cong\Theta_{1}^{*}\big{(}H(E(1))\oplus\underline{ \mathbb{R}(1)}\big{)}\otimes\cdots\otimes\Theta_{k}^{*}\big{(}H(E(k))\oplus \underline{\mathbb{R}(k)}\big{)},\] where \(\underline{\mathbb{R}(i)}\) denotes the trivial line bundle over \(\mathbb{P}(E(i))\), and \(\Theta_{i}^{*}\big{(}H(E(i))\oplus\underline{\mathbb{R}(i)}\big{)}\) is the pullback vector bundle. In particular, the vector bundle \(A_{k}(E(1),\ldots,E(k))\) has a trivial line bundle determined by all constant maps \(\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\), or in other words the vector subbundle \(\underline{\mathbb{R}(1)}\otimes\cdots\otimes\underline{\mathbb{R}(k)}\). 
Clearly, \(A_{k}(E)=A_{k}(E,\ldots,E)\). Now we consider a continuous function \(\varphi\colon S(E)\longrightarrow\mathbb{R}\). It induces a section \(s_{\varphi}\colon d^{*}\big{(}\mathbb{P}(E(1))\times\cdots\times\mathbb{P}(E(k))\big{)}\longrightarrow A_{k}(E(1),\ldots,E(k))\) of the vector bundle \(A_{k}(E(1),\ldots,E(k))\) by \[(b,(L_{1},\ldots,L_{k}))\longmapsto\big{(}s_{\varphi}(b,(L_{1},\ldots,L_{k}))\colon\prod_{i=1}^{k}S(L_{i})\longrightarrow\mathbb{R}\big{)}\] for \(b\in B\) and \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k)_{b})\), where \[s_{\varphi}(b,(L_{1},\ldots,L_{k}))(v_{1},\ldots,v_{k}):=\int_{\mathcal{O}_{b,v_{1},\ldots,v_{k}}\cap S(E_{b})}\varphi\] for \((v_{1},\ldots,v_{k})\in\prod_{i=1}^{k}S(L_{i})\). Recall that \(\mathcal{O}_{b,v_{1},\ldots,v_{k}}\) denotes the set \[\mathcal{O}_{b,v_{1},\ldots,v_{k}}:=\{u\in E_{b}\,:\,\langle u,v_{1}\rangle>0\}\cap\cdots\cap\{u\in E_{b}\,:\,\langle u,v_{k}\rangle>0\}.\] The CS / TM scheme theorem for the GHR problem for mass assignments with constraints is as follows. **Theorem 3.3**.: _Let \(E\) be a Euclidean vector bundle over a compact and connected ENR base space \(B\), let \(k\geq 1\) and \(j\geq 1\) be integers, and let \(E(1),\ldots,E(k)\) be vector subbundles of \(E\). 
If the Euler class of the vector bundle \(\big{(}A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish, then for every collection of \(j\) continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) there exists a point \(b\in B\) and an arrangement of \(k\) linear hyperplanes \(H_{1},\ldots,H_{k}\) in the fibre \(E_{b}\) determined by the collection of vector subbundles \(E(1),\ldots,E(k)\) with the property that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H_{1}\cup\cdots\cup H_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] Proof.: A proof is a slight modification of the proof of Theorem 3.2, so we do not repeat it. ### The orthogonal GHR problem for mass assignments The scheme for the partitions with orthogonal arrangements is just a "restriction" of the scheme presented in Section 3.2. 
For a Euclidean vector bundle \(E\) over a compact and connected ENR base space \(B\), and integers \(k\geq 1\) and \(j\geq 1\), we proved the following: If the Euler class of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) over \(d^{*}(\mathbb{P}(E)^{\times k})\) does not vanish, then for every collection of \(j\) continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) there exists a point \(b\in B\) and an arrangement of \(k\) linear hyperplanes \(H_{1},\ldots,H_{k}\) in the fibre \(E_{b}\) with the property that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H_{1}\cup\cdots\cup H_{k})\) the following holds \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j}.\] Since we are interested in partitions by specifically orthogonal arrangements the space of all possible solutions becomes the following subspace of \(X_{k}(E):=d^{*}(\mathbb{P}(E)^{\times k})\): \[Y_{k}(E):=\{(b,(L_{1},\ldots,L_{k}))\in X_{k}(E):L_{r}\perp L_{s}\text{ for all }1\leq r<s\leq k\}.\] In addition, let us denote by \(q_{k}\) the inclusion \(Y_{k}(E)\hookrightarrow X_{k}(E)\). Thus, the vector bundle we are interested in is the restriction bundle \(B_{k}(E):=A_{k}(E)|_{Y_{k}(E)}\). In particular, there is an isomorphism of vector bundles \[B_{k}(E)\cong\Psi_{1}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)}\otimes\cdots\otimes\Psi_{k}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)},\] where \(\Psi_{i}=\Theta_{i}\circ q_{k}\) for \(1\leq i\leq k\). Recall \(H(E)\) and \(\underline{\mathbb{R}}\) are here the Hopf line and trivial line bundle over \(\mathbb{P}(E)\), respectively. 
Now, we get the CS / TM scheme theorem for the GHR problem for mass assignments by orthogonal arrangements directly from the proof of Theorem 3.2. **Theorem 3.4**.: _Let \(E\) be a Euclidean vector bundle over a compact and connected ENR base space \(B\), and let \(k\geq 1\) and \(j\geq 1\) be integers._ _If the Euler class of the vector bundle \(\big{(}B_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) does not vanish, then for every collection of \(j\) continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) there exists a point \(b\in B\) and an orthogonal arrangement of \(k\) linear hyperplanes \(H_{1},\ldots,H_{k}\) in the fibre \(E_{b}\) with the property that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H_{1}\cup\cdots\cup H_{k})\) the following statement holds_ \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime \prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime} \cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})} \varphi_{j}.\] The proof of this result is a copy of the proof of Theorem 3.2 with \(Y_{k}(E)\) in place of \(X_{k}(E)\) and the vector bundle \(B_{k}(E)\) in place of the vector bundle \(A_{k}(E)\). ### The Fairy Bread Sandwich theorem Fix integers \(d\geq 1\) and \(k\geq 1\) with \(d\geq k\), and the real vector space \(V=\mathbb{R}^{d+1}\). Let \((j_{k},\ldots,j_{d})\) be a permutation of the set \(\{k,\ldots,d\}\), and let \(\varphi_{a,b}\colon S(E_{a+1}^{d+1})\longrightarrow\mathbb{R}\), \(k\leq a\leq d\), \(1\leq b\leq j_{a}\), be a collection of functions from the sphere bundle of the tautological vector bundle \(E_{a+1}^{d+1}\) over the Grassmann manifold \(G_{a+1}(V)\) to the real numbers. 
The space of all potential solutions of the partition problem considered in Theorem 2.12 is the following flag manifold \[\operatorname{Flag}_{k,\ldots,d}(V)=\big{\{}(V_{k},\ldots,V_{d})\in\prod_{i=k}^{d}G_{i}(V):0\subseteq V_{k}\subseteq\cdots\subseteq V_{d}\subseteq V\big{\}}\cong\big{\{}(W_{k},\ldots,W_{d+1})\in G_{k}(V)\times G_{1}(V)^{d-k+1}:W_{i^{\prime}}\perp W_{i^{\prime\prime}}\text{ for all }k\leq i^{\prime}<i^{\prime\prime}\leq d+1\big{\}}.\] We used the homeomorphism between these two presentations \[(W_{k},\ldots,W_{d+1})\longmapsto\big{(}W_{k},(W_{k}\oplus W_{k+1}),\ldots,(W_{k}\oplus W_{k+1}\oplus\cdots\oplus W_{d})\big{)} \tag{2}\] to identify the corresponding elements. More detail on flag manifolds can be found in Section 6. For every \(k+1\leq i\leq d+1\) we define a \(2\)-dimensional real vector bundle \(K_{i}\) over \(\operatorname{Flag}_{k,\ldots,d}(V)\) whose fibre over the point \((W_{k},\ldots,W_{d+1})\stackrel{(2)}{=}(V_{k},V_{k+1},\ldots,V_{d})\in\operatorname{Flag}_{k,\ldots,d}(V)\) is the real vector space \(\operatorname{Map}(S(W_{i}),\mathbb{R})\). The vector bundle \(K_{i}\) decomposes into the direct sum \[K_{i}\cong E_{i}\oplus\underline{\mathbb{R}},\] where \(E_{i}\), as in Section 6, denotes the canonical line bundle associated to the flag manifold \(\operatorname{Flag}_{k,\ldots,d}(V)\) and \(\underline{\mathbb{R}}\) is the trivial line bundle which corresponds to constant functions. Take an integer \(k+1\leq i\leq d+1\), and let \(\varphi\colon S(E_{i}^{d+1})\longrightarrow\mathbb{R}\) be a continuous real valued function. 
It induces a section \(s_{i,\varphi}\) of the vector bundle \(K_{i}\) defined, in analogy with the sections from Section 3.2, at the point \((W_{k},\ldots,W_{d+1})\stackrel{(2)}{=}(V_{k},\ldots,V_{d})\in\operatorname{Flag}_{k,\ldots,d}(V)\) by the function \(s_{i,\varphi}(V_{k},\ldots,V_{d})\colon S(W_{i})\longrightarrow\mathbb{R}\), where \[s_{i,\varphi}(V_{k},\ldots,V_{d})(v):=\int_{\{u\in S(V_{i-1}\oplus W_{i})\,:\,\langle u,v\rangle>0\}}\varphi\] for \(v\in S(W_{i})\).

## 4. Proofs of Theorems 2.1 and 2.2

For the proofs of the theorems we recall and show the following classical fact; see for example [13, Satz und Def. VI.6.4], [24, Thm. 17.2.5 and Def. 17.2.6] and [17, (1.13)].

**Lemma 4.1**.: _Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), and let \(\mathbb{P}(E)\) denote the associated projective bundle of \(E\). Then there is an isomorphism of \(H^{*}(B;\mathbb{F}_{2})\)-algebras_ \[H^{*}(B;\mathbb{F}_{2})[x]/\big{(}\sum_{s=0}^{n}w_{n-s}(E)\,x^{s}\big{)}\longrightarrow H^{*}(\mathbb{P}(E);\mathbb{F}_{2})\] _which maps \(x\) to the mod \(2\) Euler class of the Hopf line bundle \(H(E)\)._

The importance of the previous claim compels us to present two different proofs. First recall that for \(m\geq 2\) \[H^{*}(\mathbb{P}(\mathbb{R}^{m});\mathbb{F}_{2})=H^{*}(\mathbb{R}\mathrm{P}^{m-1};\mathbb{F}_{2})\cong\mathbb{F}_{2}[x]/(x^{m}),\] where \(x=\mathrm{e}(H(\mathbb{R}^{m}))\) is the mod \(2\) Euler class of the Hopf line bundle \(H(\mathbb{R}^{m})\). In the case when \(m=\infty\) we have \[H^{*}(\mathbb{P}(\mathbb{R}^{\infty});\mathbb{F}_{2})=H^{*}(\mathbb{R}\mathrm{P}^{\infty};\mathbb{F}_{2})\cong\mathbb{F}_{2}[x],\] where \(x=\mathrm{e}(H)\) is the mod \(2\) Euler class of the Hopf line bundle \(H:=H(\mathbb{R}^{\infty})\). 
Second, we point out that for an \(n\)-dimensional vector bundle \(E\) over a compact and connected ENR \(B\) we can define its Stiefel-Whitney classes in the following way. Consider the projections \[p_{1}\colon B\times\mathbb{P}(\mathbb{R}^{\infty})\longrightarrow B\qquad\text{and}\qquad p_{2}\colon B\times\mathbb{P}(\mathbb{R}^{\infty})\longrightarrow\mathbb{P}(\mathbb{R}^{\infty}),\] and the mod \(2\) Euler class of the vector bundle \(p_{1}^{*}E\otimes p_{2}^{*}H\) which lives in the cohomology \[H^{*}(B\times\mathbb{P}(\mathbb{R}^{\infty});\mathbb{F}_{2})\cong H^{*}(B;\mathbb{F}_{2})\otimes H^{*}(\mathbb{P}(\mathbb{R}^{\infty});\mathbb{F}_{2})\cong H^{*}(B;\mathbb{F}_{2})\otimes\mathbb{F}_{2}[x].\] Hence, there exist classes \(w_{i}\in H^{i}(B;\mathbb{F}_{2})\), \(0\leq i\leq n\), such that \[\mathrm{e}(p_{1}^{*}E\otimes p_{2}^{*}H)=\sum_{i=0}^{n}w_{i}\times x^{n-i}. \tag{3}\] Here "\(\times\)" denotes the cohomology cross product; see for example [12, Thm. VI.3.2]. Then we define the \(i\)-th Stiefel-Whitney class of \(E\) to be \(w_{i}\) for \(0\leq i\leq n\) and \(0\) otherwise, that is \(w_{i}(E)=w_{i}\) for \(0\leq i\leq n\) and \(w_{i}(E)=0\) for \(i\geq n+1\); consult for example [24, Thm. 17.2.5 and Def. 17.2.6]. Hence, the relation (3) becomes \[\mathrm{e}(p_{1}^{*}E\otimes p_{2}^{*}H)=\sum_{i=0}^{n}w_{i}(E)\times x^{n-i}. \tag{4}\] Let us now consider a real line bundle \(L\) over a compact ENR \(B^{\prime}\), and let \(p_{1}^{\prime}\colon B\times B^{\prime}\longrightarrow B\) and \(p_{2}^{\prime}\colon B\times B^{\prime}\longrightarrow B^{\prime}\) be the projections. The line bundle \(L\) is isomorphic to a pullback bundle \(f^{*}H\) of the Hopf line bundle \(H\) for some continuous map \(f\colon B^{\prime}\longrightarrow\mathbb{P}(\mathbb{R}^{\infty})\). In particular, the mod \(2\) Euler class of \(L\) is \(\mathrm{e}(L)=f^{*}(x)\). 
Consequently, first \[p_{1}^{\prime*}E\otimes p_{2}^{\prime*}L\cong(\mathrm{id}\times f)^{*}\big{(}p_{1}^{*}E\otimes p_{2}^{*}H\big{)}. \tag{5}\] Second, the naturality of the mod \(2\) Euler class and the description of the map \(\mathrm{id}\times f\) on the level of cohomology imply that \[\mathrm{e}(p_{1}^{\prime*}E\otimes p_{2}^{\prime*}L)\stackrel{(5)}{=}(\mathrm{id}\times f)^{*}\,\mathrm{e}(p_{1}^{*}E\otimes p_{2}^{*}H)\stackrel{(4)}{=}\sum_{i=0}^{n}w_{i}(E)\times\mathrm{e}(L)^{n-i}. \tag{6}\]

_The first proof of Lemma 4.1._ Let \(p\colon\mathbb{P}(E)\longrightarrow B\) be the projection of the projective bundle, and set \(x=\mathrm{e}(H(E))\). The restrictions of the classes \(1,x,\ldots,x^{n-1}\) to a fibre \(\mathbb{P}(E_{b})\cong\mathbb{R}\mathrm{P}^{n-1}\) form an \(\mathbb{F}_{2}\)-basis of \(H^{*}(\mathbb{R}\mathrm{P}^{n-1};\mathbb{F}_{2})\), so by the Leray-Hirsch theorem [24, Thm. 17.1.1] the cohomology \(H^{*}(\mathbb{P}(E);\mathbb{F}_{2})\) is a free \(H^{*}(B;\mathbb{F}_{2})\)-module with basis \(1,x,\ldots,x^{n-1}\). On the other hand, the vector bundle \(p^{*}E\otimes H(E)\cong\mathrm{Hom}(H(E),p^{*}E)\) has a nowhere-vanishing section, given at the point \((b,L)\in\mathbb{P}(E)\) by the inclusion \(L\hookrightarrow E_{b}\). Consequently, its mod \(2\) Euler class vanishes, and the relation (6), applied to the line bundle \(H(E)\) over \(B^{\prime}=\mathbb{P}(E)\) and restricted along the map \((p,\mathrm{id})\colon\mathbb{P}(E)\longrightarrow B\times\mathbb{P}(E)\), yields \[0=\mathrm{e}(p^{*}E\otimes H(E))=\sum_{s=0}^{n}w_{n-s}(E)\,x^{s}\] in \(H^{*}(\mathbb{P}(E);\mathbb{F}_{2})\). The two facts together give the claimed isomorphism of \(H^{*}(B;\mathbb{F}_{2})\)-algebras.

_The second proof of Lemma 4.1._ Let \(L\) be the trivial line bundle over \(B\) equipped with the fibrewise antipodal \(\mathbb{Z}/2\)-action. Then the vector bundle \(E\otimes L\) is equipped with a fibrewise free \(\mathbb{Z}/2\)-action. 
We consider the long exact sequence in \(\mathbb{Z}/2\)-equivariant Borel cohomology of the pair \((D(E\otimes L),S(E\otimes L))\), that is, the disc bundle modulo the sphere bundle: \[\cdots\longrightarrow H^{*}_{\mathbb{Z}/2}(D(E\otimes L),S(E\otimes L);\mathbb{F}_{2})\longrightarrow H^{*}_{\mathbb{Z}/2}(D(E\otimes L);\mathbb{F}_{2})\longrightarrow H^{*}_{\mathbb{Z}/2}(S(E\otimes L);\mathbb{F}_{2})\longrightarrow\cdots.\] In other words, the long exact sequence in singular cohomology with coefficients in the field \(\mathbb{F}_{2}\): \[\cdots\longrightarrow H^{*}(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}D(E\otimes L),\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}S(E\otimes L))\longrightarrow H^{*}(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}D(E\otimes L))\longrightarrow H^{*}(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}S(E\otimes L))\longrightarrow\cdots.\] Since \(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L)\longrightarrow\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}B\) is a vector bundle, we can transform the previous exact sequence into the exact sequence of the pair, the disc bundle modulo the sphere bundle of the vector bundle \(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L)\): \[\cdots\longrightarrow H^{*}\big{(}D(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L)),S(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))\big{)}\longrightarrow H^{*}\big{(}D(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))\big{)}\longrightarrow H^{*}\big{(}S(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))\big{)}\longrightarrow\cdots.\] Applying the Thom isomorphism to the vector bundle \(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L)\) the previous exact sequence can be rewritten as the Gysin sequence of the sphere bundle: \[\cdots\longrightarrow H^{*}(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}B)\xrightarrow{\cdot
e(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))}H^{*}(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}B)\longrightarrow H^{*}\big{(}S(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))\big{)}\longrightarrow\cdots.\] Bearing in mind that the action of \(\mathbb{Z}/2\) on \(B\) is trivial, and so \(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}B\cong\operatorname{B}(\mathbb{Z}/2)\times B\cong\mathbb{R}\mathrm{P}^{\infty}\times B\), we get the sequence \[\cdots\longrightarrow H^{*}(B;\mathbb{F}_{2})[x]\xrightarrow{\cdot\,\sum_{i=0}^{n}w_{i}(E)\cdot x^{n-i}}H^{*}(B;\mathbb{F}_{2})[x]\longrightarrow H^{*}\big{(}S(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))\big{)}\longrightarrow\cdots,\] where \(x=\operatorname{e}(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}L)\). In order to complete the proof we need to show that \[H^{*}_{\mathbb{Z}/2}(S(E\otimes L))\cong H^{*}\big{(}S(\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}(E\otimes L))\big{)}\cong H^{*}\big{(}\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}S(E\otimes L)\big{)}\cong H^{*}(\mathbb{P}(E)).\] For that we consider the map \[\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}S(E\otimes L)\longrightarrow S(E\otimes L)/(\mathbb{Z}/2)\cong\mathbb{P}(E),\] induced by the projection \(\operatorname{E}(\mathbb{Z}/2)\times S(E\otimes L)\longrightarrow S(E\otimes L)\). Since the action of \(\mathbb{Z}/2\) on \(S(E\otimes L)\) is free, this is a fibre bundle with a contractible fibre \(S^{\infty}\). Now the Leray-Hirsch theorem [24, Thm. 17.1.1] says that \(H^{*}\big{(}\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}S(E\otimes L)\big{)}\) is a \(H^{*}(\mathbb{P}(E))\)-module freely generated by \(1\in H^{0}\big{(}\operatorname{E}(\mathbb{Z}/2)\times_{\mathbb{Z}/2}S(E\otimes L)\big{)}\). This concludes the second proof of the claim. Now we proceed with the proofs of Theorem 2.1 and Theorem 2.2. 
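Before doing so, the formula of Lemma 4.1 can be checked by hand in the smallest nontrivial case. Take \(B=\mathbb{R}\mathrm{P}^{1}\) and \(E=\gamma\oplus\underline{\mathbb{R}}\), with \(\gamma\) the tautological line bundle, so that \(n=2\) and \(w(E)=1+y\) for the generator \(y\in H^{1}(B;\mathbb{F}_{2})\). The lemma predicts \(H^{*}(\mathbb{P}(E);\mathbb{F}_{2})\cong\mathbb{F}_{2}[y,x]/(y^{2},\,x^{2}+yx)\). The following Python sketch (illustrative only, for this one example) reduces powers of \(x\) with the rewriting rules \(y^{2}\to 0\) and \(x^{2}\to yx\):

```python
def reduce_f2(poly):
    """Reduce a set of monomials (a, b) ~ y^a x^b modulo y^2 = 0 and x^2 = y*x over F_2."""
    poly = set(poly)
    changed = True
    while changed:
        changed = False
        for a, b in list(poly):
            if a >= 2:                     # y^2 = 0 kills the monomial
                poly ^= {(a, b)}
                changed = True
            elif b >= 2:                   # x^2 = y*x : raise the y-exponent, lower the x-exponent
                poly ^= {(a, b), (a + 1, b - 1)}
                changed = True
    return poly

x_pow = {(0, 1)}                           # the class x = e(H(E))
powers = []
p = {(0, 0)}                               # start from 1
for _ in range(4):
    p = reduce_f2({(a + c, b + d) for (a, b) in p for (c, d) in x_pow})
    powers.append(p)
print(powers)   # x, then x^2 = yx, then x^3 = 0, x^4 = 0
```

The surviving monomials \(1,y,x,yx\) give total \(\mathbb{F}_{2}\)-dimension \(4=2\cdot 2\), matching the free-module structure over \(H^{*}(\mathbb{R}\mathrm{P}^{1};\mathbb{F}_{2})\) that both proofs of the lemma produce.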
### Proof of Theorem 2.1 Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), and let the integers \(k\geq 1\) and \(j\geq 1\) be fixed. Assume that \(e_{k}(B)^{j}\) does not belong to the ideal \(\mathcal{I}_{k}(E)\). The proof of the theorem relies on the criterion from Theorem 3.2, that is: \[\operatorname{e}\big{(}\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j }\big{)}\neq 0\quad\Longrightarrow\quad(j,k)\in\Delta_{S}(E).\] Observe that the mod \(2\) Euler class of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\), or in other words the top Stiefel-Whitney class, lives in the cohomology of the pullback bundle \(H^{*}(d^{*}(\mathbb{P}(E)^{\times k});\mathbb{F}_{2})\). We will prove that * \(H^{*}(d^{*}(\mathbb{P}(E)^{\times k});\mathbb{F}_{2})\cong R_{k}(B)/\mathcal{I }_{k}(E)\), and * \(w_{(2^{k}-1)j}\big{(}\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j} \big{)}=e_{k}(B)^{j}+\mathcal{I}_{k}(E)\in R_{k}(B)/\mathcal{I}_{k}(E)\). Assuming these two claims to be true, the criterion from Theorem 3.2 yields: \[e_{k}^{j}+\mathcal{I}_{k}(E)\neq\mathcal{I}_{k}(E)\text{ in }R_{k}(B)/\mathcal{I}_{k}(E) \quad\Longrightarrow\quad(j,k)\in\Delta_{S}(E).\] Thus, the proof of Theorem 2.1 is finished, up to a proof of the two facts we listed. First, we compute the cohomology of the pullback bundle \(d^{*}(\mathbb{P}(E)^{\times k})\) which is the ambient of the Stiefel-Whitney classes of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\). 
**Claim 4.2**.: _There is an isomorphism of \(H^{*}(B;\mathbb{F}_{2})\)-algebras_ \[R_{k}(B)/\mathcal{I}_{k}(E)=H^{*}(B;\mathbb{F}_{2})[x_{1},\dots,x_{k}]\,/\,\big{(}\sum_{s=0}^{n}w_{n-s}(E)\,x_{r}^{s}\,:\,1\leq r\leq k\big{)}\longrightarrow H^{*}(d^{*}(\mathbb{P}(E)^{\times k});\mathbb{F}_{2})\] _mapping \(x_{r}\) to the mod \(2\) Euler class of the pullback vector bundle \(\Theta^{*}_{r}\big{(}H(E)\big{)}\) for all \(1\leq r\leq k\)._ Proof.: The proof proceeds by induction on \(j\) where \(1\leq j\leq k\). If \(j=1\), then the statement reduces to Lemma 4.1. Let \(j\geq 2\), and assume that there is an isomorphism \[H^{*}(B;\mathbb{F}_{2})[x_{1},\dots,x_{j-1}]\,/\,\big{(}\sum_{s=0}^{n}w_{n-s}(E)\,x_{r}^{s}\,:\,1\leq r\leq j-1\big{)}\longrightarrow H^{*}(d^{*}(\mathbb{P}(E)^{\times(j-1)});\mathbb{F}_{2}) \tag{8}\] which maps each class \(x_{r}\) to the mod \(2\) Euler class of the pullback vector bundle \(\Theta^{*}_{r}\big{(}H(E)\big{)}\), where \(1\leq r\leq j-1\). The pullback bundle \(d^{*}(\mathbb{P}(E)^{\times(j-1)})\) is a bundle over \(B\) with the corresponding projection map \(p\colon d^{*}(\mathbb{P}(E)^{\times(j-1)})\longrightarrow B\). Then \(d^{*}(\mathbb{P}(E)^{\times j})\) is isomorphic to the pullback bundle \(p^{*}(\mathbb{P}(E))\cong\mathbb{P}(p^{*}(E))\) over \(d^{*}(\mathbb{P}(E)^{\times(j-1)})\). Recall that \(\mathbb{P}(E)\) is the projective bundle associated to \(E\), and therefore a bundle over \(B\). 
Hence, over \(d^{*}(\mathbb{P}(E)^{\times(j-1)})\) there are bundle isomorphisms \[d^{*}(\mathbb{P}(E)^{\times j})\cong p^{*}(\mathbb{P}(E))\cong\mathbb{P}(p^{*}(E)).\] Consequently, by Lemma 4.1, we get an isomorphism of \(H^{*}(d^{*}(\mathbb{P}(E)^{\times(j-1)});\mathbb{F}_{2})\)-algebras \[H^{*}(d^{*}(\mathbb{P}(E)^{\times(j-1)});\mathbb{F}_{2})[x_{j}]/\big{(}\sum_{s=0}^{n}w_{n-s}(E)\,x_{j}^{s}\big{)}\longrightarrow H^{*}(\mathbb{P}(p^{*}(E));\mathbb{F}_{2})\cong H^{*}(d^{*}(\mathbb{P}(E)^{\times j});\mathbb{F}_{2}) \tag{9}\] which maps \(x_{j}\) to the mod \(2\) Euler class of the Hopf line bundle \(H(p^{*}(E))\). Now, the induction hypothesis (8) in combination with the isomorphism (9) completes the proof of the claim. Finally, we evaluate the Stiefel-Whitney class \(w_{(2^{k}-1)j}\big{(}\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}\). **Claim 4.3**.: _The mod \(2\) Euler class of the vector bundle \(\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\) is equal to:_ \[w_{(2^{k}-1)j}\big{(}\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}=e_{k}(B)^{j}+\mathcal{I}_{k}(E)=\prod_{(\alpha_{1},\dots,\alpha_{k})\in\mathbb{F}_{2}^{k}-\{0\}}(\alpha_{1}x_{1}+\dots+\alpha_{k}x_{k})^{j}+\mathcal{I}_{k}(E)\in R_{k}(B)/\mathcal{I}_{k}(E).\] Proof.: Recall the isomorphism of vector bundles \[A_{k}(E)\cong\Theta_{1}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)}\otimes\dots\otimes\Theta_{k}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)},\] where \(\underline{\mathbb{R}}\) is the trivial line bundle over \(\mathbb{P}(E)\), and \(\Theta_{i}^{*}\big{(}H(E)\oplus\underline{\mathbb{R}}\big{)}\) is a pullback vector bundle. Now the claim follows from the distributivity of the tensor product over the direct sum, the fact that the pullback of a trivial bundle is again a trivial bundle, and the equality \(w(\alpha\otimes\beta)=1+(w_{1}(\alpha)+w_{1}(\beta))\) which holds (only) for line bundles \(\alpha\) and \(\beta\) (see [28, Prob. 7-A]). 
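The product expression in Claim 4.3 can be expanded mechanically. A short Python sketch (hand-rolled \(\mathbb{F}_{2}\) polynomial arithmetic, illustrative only) does this for small \(k\), representing a polynomial as the set of its monomial exponent tuples:

```python
from itertools import product

def mul(p, q):
    """Multiply two F_2 polynomials given as sets of exponent tuples."""
    result = set()
    for a in p:
        for b in q:
            m = tuple(x + y for x, y in zip(a, b))
            result ^= {m}          # coefficients live in F_2: addition is XOR
    return result

def e_k(k):
    """Expand prod over nonzero alpha in F_2^k of (alpha_1 x_1 + ... + alpha_k x_k)."""
    poly = {tuple(0 for _ in range(k))}    # the constant polynomial 1
    for alpha in product((0, 1), repeat=k):
        if any(alpha):
            linear = {tuple(1 if i == j else 0 for i in range(k))
                      for j in range(k) if alpha[j] == 1}
            poly = mul(poly, linear)
    return poly

print(sorted(e_k(2)))   # -> [(1, 2), (2, 1)], i.e. x1*x2^2 + x1^2*x2
```

Every monomial of `e_k(k)` has total degree \(2^{k}-1\), matching the degree \((2^{k}-1)j\) of the top Stiefel-Whitney class in Claim 4.3.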
Note that \(\alpha\otimes\beta\) is also a line bundle. With the claims verified, the proof of Theorem 2.1 is now complete. ### Proof of Theorem 2.2 The proof we present is an extension of the proof of Theorem 2.1, and therefore it is just outlined. Let \(k\geq 1\) and \(j\geq 1\) be fixed integers. We consider a Euclidean vector bundle \(E\) of dimension \(n\) over a compact and connected ENR \(B\), and, in addition, \(k\) vector subbundles \(E(1),\dots,E(k)\) of \(E\) of dimensions \(n_{1},\dots,n_{k}\), respectively. Assume that \(j\leq\iota_{k}(E(1),\dots,E(k))=\max\big{\{}j:e_{k}(B)^{j}\notin\mathcal{I}_{k}(E(1),\dots,E(k))\big{\}}\). The proof of the theorem uses the criterion from Theorem 3.3. That is, if the Euler class \(\mathrm{e}\left(\big{(}A_{k}(E(1),\dots,E(k))/\underline{\mathbb{R}}\big{)}^{\oplus j}\right)\) does not vanish, then for every collection of \(j\) continuous functions \(\varphi_{1},\dots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) there exists a point \(b\in B\) and an arrangement of \(k\) linear hyperplanes \(H_{1},\dots,H_{k}\) in the fibre \(E_{b}\), determined by the collection of vector subbundles \(E(1),\dots,E(k)\), with the property that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H_{1}\cup\dots\cup H_{k})\) the following equalities hold \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\dots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j}.\] The mod \(2\) Euler class of \(\big{(}A_{k}(E(1),\dots,E(k))/\underline{\mathbb{R}}\big{)}^{\oplus j}\), or in other words the top Stiefel-Whitney class, lives in \(H^{*}\big{(}d^{*}\big{(}\mathbb{P}(E(1))\times\dots\times\mathbb{P}(E(k))\big{)};\mathbb{F}_{2}\big{)}\). 
Therefore, we prove that * \(H^{*}\big{(}d^{*}\big{(}\mathbb{P}(E(1))\times\dots\times\mathbb{P}(E(k))\big{)};\mathbb{F}_{2}\big{)}\cong R_{k}(B)/\mathcal{I}_{k}(E(1),\dots,E(k))\), and that * \(w_{(2^{k}-1)j}\big{(}\big{(}A_{k}(E(1),\dots,E(k))/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}=e_{k}(B)^{j}+\mathcal{I}_{k}(E(1),\dots,E(k))\). If these two statements are assumed to be true, then the criterion from Theorem 3.3, in combination with the theorem assumption \(j\leq\iota_{k}(E(1),\ldots,E(k))\), implies that \[w_{(2^{k}-1)j}\big{(}\big{(}A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}=\\ e_{k}(B)^{j}+\mathcal{I}_{k}(E(1),\ldots,E(k))\neq\mathcal{I}_{k}(E(1),\ldots,E(k)).\] Hence, \(\mathrm{e}\left(\big{(}A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\big{)}^{\oplus j}\right)\neq 0\), and the proof of Theorem 2.2 is complete. Indeed, the remaining claims are verified in the same way as in the proofs of Claim 4.2 and Claim 4.3. ## 5. Proofs of Propositions 2.4 and 2.7 We prove the main facts about the integers \(\iota_{1}(E)\) and \(\iota_{k}(E(1),\ldots,E(k))\) stated in Proposition 2.4 and Proposition 2.7, as well as two related consequences, Corollary 2.8 and Corollary 2.9. ### Proof of Proposition 2.4 Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\). Since \(k=1\) we simplify notation by taking \(x=x_{1}\). Hence, \(e_{1}(B)=x\) and \(\mathcal{I}_{1}(E)=\big{(}x^{n}+w_{1}(E)x^{n-1}+\cdots+w_{n}(E)\big{)}\). Set \[a:=\iota_{1}(E)=\max\big{\{}j:x^{j}\notin\mathcal{I}_{1}(E)\big{\}}\] and \[b:=\max\big{\{}j:0\neq w_{j-n+1}(-E)\in H^{j-n+1}(B;\mathbb{F}_{2})\big{\}}.\] In particular, we have that \(w_{b-n+1}(-E)\neq 0\) and that \(w_{r}(-E)=0\) for all \(r\geq b-n+2\). Now, we prove that \(a=b\). 
Using the Euclidean algorithm in the polynomial ring \(R_{1}(B)=H^{*}(B;\mathbb{F}_{2})[x]\) we have that \[x^{b}=(x^{n}+w_{1}(E)x^{n-1}+\cdots+w_{n}(E))q+d_{n-1}x^{n-1}+\cdots+d_{1}x+d_{0}\] where \(q\in R_{1}(B)\), and for \(0\leq i\leq n-1\) it holds that \[d_{i}=w_{b-i}(-E)+w_{1}(E)w_{b-i-1}(-E)+\cdots+w_{n-i-1}(E)w_{b-n+1}(-E)\in H^{*}(B;\mathbb{F}_{2}),\] as demonstrated by Crabb and Jaworowski [16, Proof of Prop. 4.1]. Since \(w_{r}(-E)=0\) for \(r\geq b-n+2\) it follows that \(d_{i}=w_{n-i-1}(E)w_{b-n+1}(-E)\) for \(0\leq i\leq n-1\), and so \[x^{b}=(x^{n}+w_{1}(E)x^{n-1}+\cdots+w_{n}(E))q+\\ w_{b-n+1}(-E)\big{(}x^{n-1}+w_{1}(E)x^{n-2}+\cdots+w_{n-1}(E)\big{)}.\] Consequently, from \(w_{b-n+1}(-E)\neq 0\) it follows that \(x^{b}\notin\mathcal{I}_{1}(E)\) and accordingly \(b\leq a\). Let us now assume that \(b<a\), or in other words \(b-n+2\leq a-n+1\). Recall that \(w_{r}(-E)=0\) for \(r\geq b-n+2\), and in particular for \(r\geq a-n+1\). Once again we have \[x^{a}=(x^{n}+w_{1}(E)x^{n-1}+\cdots+w_{n}(E))q^{\prime}+d_{n-1}^{\prime}x^{n-1}+\cdots+d_{1}^{\prime}x+d_{0}^{\prime}\] where \[d_{i}^{\prime}=w_{a-i}(-E)+w_{1}(E)w_{a-i-1}(-E)+\cdots+w_{n-i-1}(E)w_{a-n+1}(-E)=0,\] for all \(0\leq i\leq n-1\). Hence, \(x^{a}\in\mathcal{I}_{1}(E)\), which is a contradiction with the definition of the integer \(a\). Therefore, \(b\geq a\). We proved that \(a=b\), or in other words that \[\iota_{1}(E)=\max\big{\{}j:0\neq w_{j-n+1}(-E)\in H^{j-n+1}(B;\mathbb{F}_{2})\big{\}},\] as claimed. ### Proof of Proposition 2.7 Let \(k\geq 1\) be an integer, and let \(E(1),\ldots,E(k)\) be Euclidean vector bundles over a compact and connected ENR \(B\). Set \(n_{i}\) to be the dimension of the vector bundle \(E(i)\) for \(1\leq i\leq k\). 
Let us denote by \[a_{i}:= \iota_{1}(E(i))=\max\big{\{}j:0\neq w_{j-n_{i}+1}(-E(i))\in H^{j-n_{i}+1}(B;\mathbb{F}_{2})\big{\}},\] \[a:= \iota_{k}(a_{1}+1,\ldots,a_{k}+1)=\max\big{\{}j:e_{k}(\text{pt})^{j}\notin(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\big{\}},\] \[b:= \iota_{k}(E(1),\ldots,E(k))=\max\big{\{}j:e_{k}(B)^{j}\notin\mathcal{I}_{k}(E(1),\ldots,E(k))\big{\}},\] where \(1\leq i\leq k\), and \[\mathcal{I}_{k}(E(1),\ldots,E(k))=\Big{(}\sum_{s=0}^{n_{r}}w_{n_{r}-s}(E(r))\,x_{r}^{s}\::\:1\leq r\leq k\Big{)}\ \subseteq\ R_{k}(B).\] In the definition of \(a_{i}\) we used the characterization from Proposition 2.4. In particular, we have that \(w_{r}(-E(i))=0\) for all \(r\geq a_{i}-n_{i}+2\). With the notation we just introduced the assumption of the proposition reads \[w_{a_{1}-n_{1}+1}(-E(1))\cdots w_{a_{k}-n_{k}+1}(-E(k))\neq 0,\] while the claim of the proposition becomes \(a=b\). The main ingredients of our proof of Proposition 2.7 are contained in the next two claims, where the first claim is used for the proof of the second. **Claim 5.1**.: \(x_{1}^{a_{1}}\cdots x_{k}^{a_{k}}\notin\mathcal{I}_{k}(E(1),\ldots,E(k))\)_._ Proof.: For simplicity set \(\mathcal{I}:=\mathcal{I}_{k}(E(1),\ldots,E(k))\). Once again we use [16, Proof of Prop. 
4.1] and get that for all \(1\leq i\leq k\): \[x_{i}^{a_{i}}=(x_{i}^{n_{i}}+w_{1}(E(i))x_{i}^{n_{i}-1}+\cdots+w_{n_{i}}(E(i)) )\cdot q_{i}+d_{n_{i}-1,i}x_{i}^{n_{i}-1}+\cdots+d_{0,i},\] where for \(0\leq s\leq n_{i}-1\): \[d_{s,i}=w_{a_{i}-s}(-E(i))+w_{1}(E(i))\,w_{a_{i}-s-1}(-E(i))+ \cdots+\\ w_{n_{i}-s-1}(E(i))\,w_{a_{i}-n_{i}+1}(-E(i)).\] More precisely, since \(w_{r}(-E(i))=0\) for all \(r\geq a_{i}-n_{i}+2\), we have that \[d_{s,i}=w_{n_{i}-s-1}(E(i))\,w_{a_{i}-n_{i}+1}(-E(i)).\] Consequently, \[x_{i}^{a_{i}}+\mathcal{I}=w_{a_{i}-n_{i}+1}(-E(i))\big{(}x_{i}^{n_{i}-1}+ \cdots+w_{n_{i}-1}(E(i))\big{)}+\mathcal{I},\] and so \[x_{1}^{a_{1}}\cdots x_{k}^{a_{k}}+\mathcal{I}=\prod_{i=1}^{k}w_{a_{i}-n_{i}+1} (-E(i))\cdot\prod_{i=1}^{k}\big{(}x_{i}^{n_{i}-1}+\cdots+w_{n_{i}-1}(E(i)) \big{)}+\mathcal{I}.\] From the assumption of the proposition \(\prod_{i=1}^{k}w_{a_{i}-n_{i}+1}(-E(i))\neq 0\) we have that \(x_{1}^{a_{1}}\cdots x_{k}^{a_{k}}+\mathcal{I}\neq\mathcal{I}\), or in other words \(x_{1}^{a_{1}}\cdots x_{k}^{a_{k}}\notin\mathcal{I}\), as claimed. **Claim 5.2**.: \[(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})=\\ \ker\Big{(}\mathbb{F}_{2}[x_{1},\ldots,x_{k}]\longrightarrow H^{ *}(B;\mathbb{F}_{2})[x_{1},\ldots,x_{k}]/\mathcal{I}_{k}(E(1),\ldots,E(k)) \Big{)}.\] Proof.: The ring homomorphism we consider \[h\colon\mathbb{F}_{2}[x_{1},\ldots,x_{k}]\longrightarrow H^{*}(B;\mathbb{F}_{2})[ x_{1},\ldots,x_{k}]/\mathcal{I}_{k}(E(1),\ldots,E(k))\] is induced by the coefficient inclusion \(\mathbb{F}_{2}\hookrightarrow H^{0}(B;\mathbb{F}_{2})\hookrightarrow H^{*}(B; \mathbb{F}_{2})\). Furthermore, denote by \(\mathcal{J}:=\ker(h)\). Like in the proof of the previous claim we use [16, Proof of Prop. 
4.1] and for every \(1\leq i\leq k\) get that \[x_{i}^{a_{i}+1}=(x_{i}^{n_{i}}+w_{1}(E(i))x_{i}^{n_{i}-1}+\cdots+w_{n_{i}}(E(i)))\cdot q_{i}+\\ d_{n_{i}-1,i}^{\prime}x_{i}^{n_{i}-1}+\cdots+d_{0,i}^{\prime}\ \in\ H^{*}(B;\mathbb{F}_{2})[x_{1},\ldots,x_{k}].\] Here for \(0\leq s\leq n_{i}-1\): \[d_{s,i}^{\prime}=w_{a_{i}+1-s}(-E(i))+w_{1}(E(i))\,w_{a_{i}-s}(-E(i))+\cdots+\\ w_{n_{i}-s-1}(E(i))\,w_{a_{i}-n_{i}+2}(-E(i)).\] In this case the fact that \(w_{r}(-E(i))=0\) for all \(r\geq a_{i}-n_{i}+2\) implies that \(d_{s,i}^{\prime}=0\) for all \(0\leq s\leq n_{i}-1\) and all \(1\leq i\leq k\). Consequently, \(x_{i}^{a_{i}+1}\in\mathcal{I}_{k}(E(1),\ldots,E(k))\), or in other words, \(x_{i}^{a_{i}+1}\in\mathcal{J}\), for all \(1\leq i\leq k\). Hence, \((x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\subseteq\mathcal{J}\). Assume that \[0\neq p=\sum_{(c_{1},\ldots,c_{k})\in C}\alpha_{c_{1},\ldots,c_{k}}x_{1}^{c_{1}}\cdots x_{k}^{c_{k}}\in\mathcal{J}-(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1}),\] where \(C\subseteq\mathbb{Z}_{\geq 0}^{k}\) is a finite set of multi-exponents of the polynomial \(p\), and \(\alpha_{c_{1},\ldots,c_{k}}\in\mathbb{F}_{2}\) are the coefficients. After a possible modification of \(p\), by taking away monomials which already belong to the ideal \((x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\), we can assume that the set of exponents satisfies \[\emptyset\neq C\subseteq[0,a_{1}]\times\cdots\times[0,a_{k}].\] That is, no monomial in the representation of \(p\) belongs to \((x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\). Let \(s_{k}:=\min\{s\in\mathbb{Z}_{\geq 0}:\alpha_{c_{1},\ldots,c_{k-1},s}\neq 0\}\). Then \[x_{k}^{a_{k}-s_{k}}p\in\mathcal{J}-(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\] with all monomials having degree of \(x_{k}\) at least \(a_{k}\). 
Taking away all monomials in \(x_{k}^{a_{k}-s_{k}}p\) which already belong to \((x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\) we get a polynomial which still belongs to \(\mathcal{J}-(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})\). Now, repeat the procedure iteratively with the variables \(x_{k-1},\ldots,x_{1}\), respectively. At the end we get that \[x_{1}^{a_{1}}\cdots x_{k}^{a_{k}}\in\mathcal{J}-(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1}).\] We reached a direct contradiction with Claim 5.1: indeed, the claim says that \(h(x_{1}^{a_{1}}\cdots x_{k}^{a_{k}})\neq 0\), or equivalently \(x_{1}^{a_{1}}\cdots x_{k}^{a_{k}}\notin\ker(h)=\mathcal{J}\). Finally, we complete the proof of Proposition 2.7 as follows. According to the definition of \(a\) we have that \[e_{k}(\mathrm{pt})^{a}\notin(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})=\ker(h)\quad\text{and}\quad e_{k}(\mathrm{pt})^{a+1}\in(x_{1}^{a_{1}+1},\ldots,x_{k}^{a_{k}+1})=\ker(h).\] Consequently, \[e_{k}(B)^{a}+\mathcal{I}_{k}(E(1),\ldots,E(k)) =h(e_{k}(\mathrm{pt})^{a})\neq 0,\] \[e_{k}(B)^{a+1}+\mathcal{I}_{k}(E(1),\ldots,E(k)) =h(e_{k}(\mathrm{pt})^{a+1})=0.\] From the definition of \(b\) we conclude that \(a=b\), as claimed. This argument completes the proof of Proposition 2.7. ### Proof of Corollary 2.8 In order to prove the statement, according to Proposition 2.7, it is enough to check whether \(\left(w_{d-\ell}(-E_{\ell}^{d})\right)^{k}\neq 0\), because \(\iota_{1}(E_{\ell}^{d})=d-1\), as demonstrated in Corollary 2.5. Since \(k\leq\ell\) it suffices to prove that \(\left(w_{d-\ell}(-E_{\ell}^{d})\right)^{\ell}\neq 0\). Indeed, Giambelli's formula [23, p. 523][22, Prop. 9.5.37] implies the equality \[\left(w_{d-\ell}(-E_{\ell}^{d})\right)^{\ell}=\det\left(w_{d-\ell+i-j}(-E_{\ell}^{d})\right)_{1\leq i,j\leq\ell}=[d-\ell,d-\ell,\ldots,d-\ell]\neq 0.\] Here \([d-\ell,d-\ell,\ldots,d-\ell]\) denotes a Schubert class. 
Note that \(w_{r}(-E_{\ell}^{d})=0\) for all \(r>d-\ell\), and that we assume \(w_{r}(-E_{\ell}^{d})=0\) for \(r<0\). ### Proof of Corollary 2.9 From Theorem 2.1 we have that \((j,k)\in\Delta_{S}(E_{\ell}^{d})\) if \(e_{k}(B)^{j}\notin\mathcal{I}_{k}(E_{\ell}^{d})=\mathcal{I}_{k}(E_{\ell}^{d},\ldots,E_{\ell}^{d})\), or in other words if \[j\leq\iota_{k}(E_{\ell}^{d},\ldots,E_{\ell}^{d})=\iota_{k}(d,\ldots,d)=\max\big{\{}j^{\prime}:e_{k}(\operatorname{pt})^{j^{\prime}}\notin(x_{1}^{d},\ldots,x_{k}^{d})\big{\}}.\] Here the first equality comes from Corollary 2.8 while the second one is just the definition of \(\iota_{k}(d,\ldots,d)\). Since \(j=2^{t}+r\) where \(0\leq r\leq 2^{t}-1\) and \(d\geq 2^{t+k-1}+r+1\), then according to [6, Lem. 4.2] we have that \(e_{k}(\operatorname{pt})^{j}\notin(x_{1}^{d},\ldots,x_{k}^{d})\). Thus, indeed \(j\leq\iota_{k}(E_{\ell}^{d},\ldots,E_{\ell}^{d})\) and the proof of the corollary is complete. ## 6. Proofs of Corollaries 2.10 and 2.11 and Theorem 2.12 Before going into the proofs we recall the notion of a real flag manifold by introducing it in two equivalent ways. Furthermore, we give a description of the cohomology ring with coefficients in \(\mathbb{F}_{2}\). Let \(k\geq 1\) and \(d\geq 2\) be integers. Consider a strictly increasing sequence of positive integers \((n_{1},\ldots,n_{k})\) bounded by \(d\), meaning \(1\leq n_{1}<\cdots<n_{k-1}<n_{k}\leq d-1\). Set in addition \(n_{0}=0\) and \(n_{k+1}=d\). Let \(V\) be a real vector space of dimension \(d\). The real _flag manifold_, of type \((n_{1},\ldots,n_{k})\), in \(V\) is the space \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\) of all flags \(0\subseteq V_{1}\subseteq\cdots\subseteq V_{k}\subseteq V\) in \(V\) with the property that \(\dim(V_{i})=n_{i}\) for every \(1\leq i\leq k\). 
Alternatively, we can say that \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\) is a collection of all \((k+1)\)-tuples of vector spaces \((W_{1},\ldots,W_{k+1})\) with the property that * \(\dim(W_{i})=n_{i}-n_{i-1}\) for all \(1\leq i\leq k+1\), and * \(W_{i^{\prime}}\perp W_{i^{\prime\prime}}\) for all \(1\leq i^{\prime}<i^{\prime\prime}\leq k+1\). In other words \[\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V) =\big{\{}(V_{1},\ldots,V_{k})\in\prod_{i=1}^{k}G_{n_{i}}(V):0\subseteq V_{1}\subseteq\cdots\subseteq V_{k}\subseteq V\big{\}}\] \[\cong\big{\{}(W_{1},\ldots,W_{k+1})\in\prod_{i=1}^{k+1}G_{n_{i}-n_{i-1}}(V):W_{i^{\prime}}\perp W_{i^{\prime\prime}}\text{ for all }1\leq i^{\prime}<i^{\prime\prime}\leq k+1\big{\}}\] \[\cong\frac{\operatorname{O}(d)}{\operatorname{O}(n_{1}-n_{0})\times\operatorname{O}(n_{2}-n_{1})\times\cdots\times\operatorname{O}(n_{k+1}-n_{k})}.\] The homeomorphism between these two presentations is given by \[(W_{1},\ldots,W_{k+1})\longmapsto\big{(}W_{1},(W_{1}\oplus W_{2}),\ldots,(W_{1}\oplus W_{2}\oplus\cdots\oplus W_{k})\big{)}.\] The flag manifold \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\) is indeed a compact \(\delta\)-dimensional manifold where \(\delta:=\sum_{1\leq i^{\prime}<i^{\prime\prime}\leq k+1}(n_{i^{\prime}}-n_{i^{\prime}-1})(n_{i^{\prime\prime}}-n_{i^{\prime\prime}-1})\). In the case when \(k=d-1\), and consequently \(n_{i}=i\) for all \(1\leq i\leq k=d-1\), the flag manifold \(\operatorname{Flag}_{1,2,\ldots,d-1}(V)\) is called the _complete flag manifold_. Furthermore, the flag manifold \(\operatorname{Flag}_{n_{1}}(V)\) coincides with the Grassmann manifold \(\operatorname{G}_{n_{1}}(V)\cong\operatorname{G}_{n_{1}}(\mathbb{R}^{d})\). 
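As a quick sanity check (an illustration only, not part of the paper), the dimension formula for \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\) above agrees with the dimension computed from the homogeneous-space presentation \(\operatorname{O}(d)/\big(\operatorname{O}(n_{1}-n_{0})\times\cdots\times\operatorname{O}(n_{k+1}-n_{k})\big)\), since \(\dim\operatorname{O}(n)=n(n-1)/2\). A minimal Python sketch (function names are ours):

```python
from itertools import combinations

def flag_dim(d, ns):
    # delta = sum over pairs of block sizes (n_{i'} - n_{i'-1})(n_{i''} - n_{i''-1})
    seq = [0] + list(ns) + [d]
    blocks = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return sum(a * b for a, b in combinations(blocks, 2))

def flag_dim_homogeneous(d, ns):
    # dim O(d) - sum_i dim O(n_i - n_{i-1}), with dim O(n) = n(n-1)/2
    seq = [0] + list(ns) + [d]
    blocks = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return d * (d - 1) // 2 - sum(b * (b - 1) // 2 for b in blocks)
```

For instance, the complete flag manifold \(\operatorname{Flag}_{1,2,3}(\mathbb{R}^{4})\) and the Grassmannian \(\operatorname{Flag}_{2}(\mathbb{R}^{5})=\mathrm{G}_{2}(\mathbb{R}^{5})\) both have dimension \(6\).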
Over the flag manifold \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\) we have \(k+1\) canonical vector bundles \(E_{1},\ldots,E_{k+1}\) given by \[E_{i}:=\big{\{}((W_{1},\ldots,W_{k+1}),w)\in\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\times V:w\in W_{i}\big{\}},\] where \(1\leq i\leq k+1\). In particular, \(E_{1}\oplus\cdots\oplus E_{k+1}\) is isomorphic to the trivial vector bundle \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\times V\). Now, the classical result of Armand Borel [11, Thm. 11.1] says that \[H^{*}(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V);\mathbb{F}_{2})\cong\\ \mathbb{F}_{2}[w_{1}(E_{1}),\ldots,w_{n_{1}-n_{0}}(E_{1}),\ldots,w_{1}(E_{k+1}),\ldots,w_{n_{k+1}-n_{k}}(E_{k+1})]\,/\,I_{n_{1},\ldots,n_{k}},\] where the ideal \(I_{n_{1},\ldots,n_{k}}\) is generated by the identity \[\big{(}1+w_{1}(E_{1})+\cdots+w_{n_{1}-n_{0}}(E_{1})\big{)}\cdots\big{(}1+w_{1}(E_{k+1})+\cdots+w_{n_{k+1}-n_{k}}(E_{k+1})\big{)}=1.\] In particular, in the case of the complete flag manifold, equivalently when \(k=d-1\), we have that \[H^{*}(\operatorname{Flag}_{1,\ldots,d-1}(V);\mathbb{F}_{2})\quad\cong\quad\mathbb{F}_{2}[w_{1}(E_{1}),w_{1}(E_{2}),\ldots,w_{1}(E_{d})]\,/\,I_{1,\ldots,d-1}. \tag{10}\] In this case \(E_{1},\ldots,E_{d}\) are all line bundles. Here, the ideal \(I_{1,\ldots,d-1}\) is generated by the identity \(\prod_{i=1}^{d}(1+w_{1}(E_{i}))=1\), which implies that a generating set for \(I_{1,\ldots,d-1}\) is the set of all elementary symmetric polynomials in \(w_{1}(E_{1}),w_{1}(E_{2}),\ldots,w_{1}(E_{d})\) as variables. Thus, \[I_{1,\ldots,d-1}=\Big{(}\sigma_{r}(w_{1}(E_{1}),w_{1}(E_{2}),\ldots,w_{1}(E_{d})):1\leq r\leq d\Big{)}, \tag{11}\] where \(\sigma_{1},\ldots,\sigma_{d}\) denote elementary symmetric polynomials. Flag manifolds of different types admit continuous maps between each other, induced by a choice of a subflag. 
In particular, for any type \((n_{1},\ldots,n_{k})\) there is a continuous map \[\alpha_{n_{1},\ldots,n_{k}}\colon\operatorname{Flag}_{1,\ldots,d-1}(V)\longrightarrow\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V),\] given by the selection of a subflag \[0\subseteq V_{1}\subseteq V_{2}\subseteq\cdots\subseteq V_{d-1}\subseteq V\ \longmapsto\ 0\subseteq V_{n_{1}}\subseteq V_{n_{2}}\subseteq\cdots\subseteq V_{n_{k}}\subseteq V.\] An important feature of this map is that the induced map in cohomology \[\alpha_{n_{1},\ldots,n_{k}}^{*}\colon H^{*}(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V);\mathbb{F}_{2})\longrightarrow H^{*}(\operatorname{Flag}_{1,\ldots,d-1}(V);\mathbb{F}_{2})\] is injective; consult for example [23, pp. 523-524]. ### Proof of Corollary 2.10 We apply Proposition 2.7. Thus, we need to compute first \(\iota_{1}(E(i))\) for all \(1\leq i\leq k\). From Proposition 2.4 we have that \[\iota_{1}(E(i))=\max\big{\{}j:0\neq w_{j-\dim E(i)+1}(-E(i))\in H^{*}(B;\mathbb{F}_{2})\big{\}},\] where in our situation \(B:=\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\). Let \(1\leq i\leq k\). Consider the commutative triangle of flag manifolds in which all maps are induced by a selection of the corresponding subflags: the map \(\beta_{i}\colon\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\longrightarrow\operatorname{Flag}_{n_{i}}(V)=\mathrm{G}_{n_{i}}(V)\) satisfies \(\beta_{i}\circ\alpha_{n_{1},\ldots,n_{k}}=\alpha_{n_{i}}\). Since the induced maps in cohomology \(\alpha_{n_{1},\ldots,n_{k}}^{*}\) and \(\alpha_{n_{i}}^{*}\) are injective, it follows that the induced map \(\beta_{i}^{*}\) is also injective. Now, from the injectivity of \(\beta_{i}^{*}\) and the fact \(E(i)=\beta_{i}^{*}E_{n_{i}}^{d}\), in combination with Corollary 2.5 we have that \(\iota_{1}(E(i))=\iota_{1}(E_{n_{i}}^{d})=d-1\). Here, like before, \(E_{n_{i}}^{d}\) denotes the tautological bundle over the Grassmann manifold \(\mathrm{G}_{n_{i}}(V)\). 
To conclude the proof of the corollary we verify the criterion from Proposition 2.7, that is, we prove that the following product does not vanish \[u:=\prod_{i=1}^{k}w_{\iota_{1}(E(i))-n_{i}+1}(-E(i))=\prod_{i=1}^{k}w_{d-n_{i}}(-E(i))\in H^{*}(B;\mathbb{F}_{2}).\] From the fact that \(E(i)\oplus E_{i+1}\oplus\cdots\oplus E_{k+1}=B\times V\) is a trivial vector bundle we get the following equality of total Stiefel-Whitney classes: \[w(-E(i))=w(E_{i+1}\oplus\cdots\oplus E_{k+1})=w(E_{i+1})\cdots w(E_{k+1}).\] Therefore, \[w_{d-n_{i}}(-E(i))=w_{d-n_{i}}(E_{i+1}\oplus\cdots\oplus E_{k+1})=w_{n_{i+1}-n_{i}}(E_{i+1})\cdots w_{n_{k+1}-n_{k}}(E_{k+1}),\] because \(\dim(E_{i+1}\oplus\cdots\oplus E_{k+1})=d-n_{i}\) and \(\dim(E_{r})=n_{r}-n_{r-1}\) for every \(1\leq r\leq k+1\). In particular, each Stiefel-Whitney class \(w_{n_{r}-n_{r-1}}(E_{r})\) is the mod 2 Euler class \(\mathrm{e}(E_{r})\) of the vector bundle \(E_{r}\). We calculate as follows: \[u=\prod_{i=1}^{k}w_{d-n_{i}}(-E(i))=\prod_{i=1}^{k}\prod_{r=i+1}^{k+1}w_{n_{r}-n_{r-1}}(E_{r})=w_{n_{2}-n_{1}}(E_{2})\cdot w_{n_{3}-n_{2}}(E_{3})^{2}\cdots w_{n_{k+1}-n_{k}}(E_{k+1})^{k}.\] Thus, it remains to show that the class \[w_{n_{2}-n_{1}}(E_{2})\cdot w_{n_{3}-n_{2}}(E_{3})^{2}\cdots w_{n_{k+1}-n_{k}}(E_{k+1})^{k}\] does not vanish in \(H^{*}(B;\mathbb{F}_{2})\). 
For that we apply the homomorphism \(\alpha_{n_{1},\ldots,n_{k}}^{*}\) to the class \(u\) and land in the cohomology of the complete flag manifold \(H^{*}(\mathrm{Flag}_{1,\ldots,d-1}(V);\mathbb{F}_{2})\), that is \[\alpha_{n_{1},\ldots,n_{k}}^{*}(u)=\alpha_{n_{1},\ldots,n_{k}}^{*}(w_{n_{2}-n_{1}}(E_{2})\cdot w_{n_{3}-n_{2}}(E_{3})^{2}\cdots w_{n_{k+1}-n_{k}}(E_{k+1})^{k})=w_{1}(E_{n_{1}+1})\cdots w_{1}(E_{n_{2}})\cdot w_{1}(E_{n_{2}+1})^{2}\cdots w_{1}(E_{n_{3}})^{2}\cdots w_{1}(E_{n_{k}+1})^{k}\cdots w_{1}(E_{n_{k+1}})^{k}.\] The vector bundles on the farthest right hand side of the last equality are canonical line bundles over the complete flag manifold. Here we used the isomorphisms \[\alpha_{n_{1},\ldots,n_{k}}^{*}E_{2}\cong E_{n_{1}+1}\oplus\cdots\oplus E_{n_{2}},\ \ldots,\ \alpha_{n_{1},\ldots,n_{k}}^{*}E_{k+1}\cong E_{n_{k}+1}\oplus\cdots\oplus E_{n_{k+1}}.\] Now, we observe that the monomial in the cohomology of the complete flag manifold \[w_{1}(E_{n_{1}+1})\cdots w_{1}(E_{n_{2}})\,w_{1}(E_{n_{2}+1})^{2}\cdots w_{1}(E_{n_{3}})^{2}\cdots w_{1}(E_{n_{k}+1})^{k}\cdots w_{1}(E_{n_{k+1}})^{k}\] divides the monomial \[w_{1}(E_{1})^{0}w_{1}(E_{2})^{1}w_{1}(E_{3})^{2}\cdots w_{1}(E_{d})^{d-1}.\] Thus, in order to prove that \(\alpha_{n_{1},\ldots,n_{k}}^{*}(u)\neq 0\) and consequently conclude \(u\neq 0\) it suffices to show that \[0\neq w_{1}(E_{1})^{0}w_{1}(E_{2})^{1}w_{1}(E_{3})^{2}\cdots w_{1}(E_{d})^{d-1}\ \in\ H^{*}(\mathrm{Flag}_{1,\ldots,d-1}(V);\mathbb{F}_{2})\cong\mathbb{F}_{2}[w_{1}(E_{1}),w_{1}(E_{2}),\ldots,w_{1}(E_{d})]\,/\,I_{1,\ldots,d-1}.\] Recall that the ideal \(I_{1,\ldots,d-1}=\Big{(}\sigma_{r}(w_{1}(E_{1}),\ldots,w_{1}(E_{d})):1\leq r\leq d\Big{)}\) is generated by elementary symmetric polynomials. 
Hence \[w_{1}(E_{1})^{0}w_{1}(E_{2})^{1}\cdots w_{1}(E_{d})^{d-1}\neq 0\Longleftrightarrow w_{1}(E_{\pi(1)})^{0}w_{1}(E_{\pi(2)})^{1}\cdots w_{1}(E_{\pi(d)})^{d-1}\neq 0\] for every permutation \(\pi\in\mathfrak{S}_{d}\). For the sake of brevity we prove that \[w_{1}(E_{d})^{0}w_{1}(E_{d-1})^{1}\cdots w_{1}(E_{1})^{d-1}\neq 0 \tag{12}\] in \(H^{*}(\operatorname{Flag}_{1,\ldots,d-1}(V);\mathbb{F}_{2})\). The proof of (12) proceeds by induction as follows. First, obviously \(w_{1}(E_{1})^{d-1}\neq 0\) in \[H^{*}(\operatorname{Flag}_{1}(V);\mathbb{F}_{2})\cong H^{*}(\mathbb{P}(V);\mathbb{F}_{2})\cong\mathbb{F}_{2}[x]/(x^{d}),\] where \(x\) corresponds to \(w_{1}(E_{1})\). Next, let \(1\leq k\leq d-2\) and assume that \[w_{1}(E_{k})^{d-k}w_{1}(E_{k-1})^{d-k+1}\cdots w_{1}(E_{1})^{d-1}\neq 0 \tag{13}\] in \(H^{*}(\operatorname{Flag}_{1,\ldots,k}(V);\mathbb{F}_{2})\). Finally, the map \[\operatorname{Flag}_{1,\ldots,k+1}(V)\longrightarrow\operatorname{Flag}_{1,\ldots,k}(V),\] given by \[0\subseteq V_{1}\subseteq\cdots\subseteq V_{k}\subseteq V_{k+1}\ \longmapsto\ 0\subseteq V_{1}\subseteq\cdots\subseteq V_{k},\] is the projective bundle of the vector bundle \((E_{1}\oplus\cdots\oplus E_{k})^{\perp}\) over the flag manifold \(\operatorname{Flag}_{1,\ldots,k}(V)\), that is \(\mathbb{P}\big{(}(E_{1}\oplus\cdots\oplus E_{k})^{\perp}\big{)}\). From Lemma 4.1 we have that \[H^{*}(\operatorname{Flag}_{1,\ldots,k+1}(V);\mathbb{F}_{2})\cong H^{*}\big{(}\mathbb{P}\big{(}(E_{1}\oplus\cdots\oplus E_{k})^{\perp}\big{)};\mathbb{F}_{2}\big{)}\cong H^{*}(\operatorname{Flag}_{1,\ldots,k}(V);\mathbb{F}_{2})[x]/\Big{(}\sum_{s=0}^{d-k}w_{d-k-s}\ x^{s}\Big{)},\] where \(w_{d-k-s}=w_{d-k-s}(-(E_{1}\oplus\cdots\oplus E_{k}))\) and \(x=w_{1}(E_{k+1})\). 
Thus, from assumption (13), that is \(w_{1}(E_{k})^{d-k}w_{1}(E_{k-1})^{d-k+1}\cdots w_{1}(E_{1})^{d-1}\neq 0\) in \(H^{*}(\operatorname{Flag}_{1,\ldots,k}(V);\mathbb{F}_{2})\), we obtain \[w_{1}(E_{k+1})^{d-k-1}w_{1}(E_{k})^{d-k}w_{1}(E_{k-1})^{d-k+1}\cdots w_{1}(E_{1})^{d-1}\neq 0\] in \(H^{*}(\operatorname{Flag}_{1,\ldots,k+1}(V);\mathbb{F}_{2})\). Consequently (12) holds. This concludes the argument and completes the proof of the corollary. Let us also point out that the non-vanishing of the class \(u\) can also be deduced using [14, Rem. 2.8]. ### Proof of Corollary 2.11 Like in the previous section we assume that \(k\geq 1\) and \(d\geq 2\) are integers, and that \(0=n_{0}<n_{1}<\cdots<n_{k}<n_{k+1}=d\) is a strictly increasing sequence of integers. We take \(V=\mathbb{R}^{d}\) and denote by \(E_{1},\ldots,E_{k+1}\) the canonical vector bundles over the flag manifold \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\). Furthermore, \(E(i):=\bigoplus_{1\leq r\leq i}E_{r}\) for all \(1\leq i\leq k\), and \(E:=E(k)\). In addition, we assume that \(j=2^{t}+r\) is an integer, with \(0\leq r\leq 2^{t}-1\), and \(d=\dim(V)\geq 2^{t+k-1}+r+1\). In order to prove the existence of the desired partition we use Theorem 2.2. 
More precisely, if \(j\leq\iota_{k}(E(1),\ldots,E(k))\), then the theorem guarantees the existence of a point \(b:=(W_{1},\ldots,W_{k+1})\) in the base space \(\operatorname{Flag}_{n_{1},\ldots,n_{k}}(V)\) of the vector bundle \(E\) and an arrangement \(\mathcal{H}^{b}=(H^{b}_{1},\ldots,H^{b}_{k})\) of \(k\) linear hyperplanes in the fiber \(E_{b}\) such that for every pair of connected components \((\mathcal{O}^{\prime},\mathcal{O}^{\prime\prime})\) of the arrangement complement \(E_{b}-(H^{b}_{1}\cup\cdots\cup H^{b}_{k})\) holds \[\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{1}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{1}\quad,\ldots,\quad\int_{\mathcal{O}^{\prime}\cap S(E_{b})}\varphi_{j}=\int_{\mathcal{O}^{\prime\prime}\cap S(E_{b})}\varphi_{j},\] and in addition \[(H^{b}_{1})^{\perp}\subseteq E(1)_{b},\ (H^{b}_{2})^{\perp}\subseteq E(2)_{b},\ \ldots\,(H^{b}_{k})^{\perp}\subseteq E(k)_{b}.\] Since \(E(i)_{b}=\bigoplus_{1\leq r\leq i}(E_{r})_{b}=\bigoplus_{1\leq r\leq i}W_{r}\) for every \(1\leq i\leq k\), we have that \[(H_{i}^{b})^{\perp}\subseteq E(i)_{b}\implies H_{i}^{b}\supseteq(E(i)_{b})^{\perp}=\big{(}\bigoplus_{1\leq r\leq i}W_{r}\big{)}^{\perp}=\bigoplus_{i+1\leq r\leq k+1}W_{r}.\] Hence, for the proof of Corollary 2.11 it suffices to verify that \(j=2^{t}+r\leq\iota_{k}(E(1),\ldots,E(k))\) when \(d=\dim(V)\geq 2^{t+k-1}+r+1\). We have from Corollary 2.10 that \(\iota_{k}(E(1),\ldots,E(k))=\iota_{k}(d,\ldots,d)\), so we need to show that \[j=2^{t}+r\leq\iota_{k}(d,\ldots,d)=\max\big{\{}j^{\prime}:e_{k}(\operatorname{pt})^{j^{\prime}}\notin(x_{1}^{d},\ldots,x_{k}^{d})\big{\}}.\] Since \(d\geq 2^{t+k-1}+r+1\), using [6, Lem. 4.2], we get that \(e_{k}(\operatorname{pt})^{j}\notin(x_{1}^{d},\ldots,x_{k}^{d})\), and consequently \(j\leq\iota_{k}(d,\ldots,d)\). This completes the proof of the corollary. ### Proof of Theorem 2.12 Fix integers \(d\geq 1\) and \(k\geq 1\) with \(d\geq k\), and let \(V=\mathbb{R}^{d+1}\). 
Let \((j_{k},\ldots,j_{d})\) be a permutation of the set \(\{k,\ldots,d\}\), and take an arbitrary collection of functions \(\varphi_{a,b}\colon S(E_{a+1}^{d+1})\longrightarrow\mathbb{R}\), \(k\leq a\leq d\), \(1\leq b\leq j_{a}\), from the sphere bundle of the tautological vector bundle \(E_{a+1}^{d+1}\) over the Grassmann manifold \(G_{a+1}(V)\) to the real numbers. According to Theorem 3.5, for the existence of the desired partition it suffices to prove the non-vanishing of the Euler class of the vector bundle \[E=E_{k+1}^{\oplus j_{k}}\oplus E_{k+2}^{\oplus j_{k+1}}\oplus\cdots\oplus E_{d+1}^{\oplus j_{d}}.\] For this we show that the related mod \(2\) Euler class, which lives in the cohomology ring \(H^{*}(\operatorname{Flag}_{k,\ldots,d}(V);\mathbb{F}_{2})\), is not zero. As already discussed at the beginning of Section 6 we have that \[w(E)=(1+w_{1}(E_{k+1}))^{j_{k}}\cdots(1+w_{1}(E_{d+1}))^{j_{d}}\] implying that the mod \(2\) Euler class of \(E\) is \(\operatorname{e}(E)=w_{1}(E_{k+1})^{j_{k}}\cdots w_{1}(E_{d+1})^{j_{d}}\). Applying the map \(\alpha_{k,\ldots,d}^{*}\), with the usual abuse of notation we have that \[\alpha_{k,\ldots,d}^{*}(\operatorname{e}(E))=w_{1}(E_{k+1})^{j_{k}}\cdots w_{1}(E_{d+1})^{j_{d}}\neq 0\] in \(H^{*}(\operatorname{Flag}_{1,\ldots,d}(V);\mathbb{F}_{2})\), according to (12). Consequently, \(\operatorname{e}(E)\neq 0\) and the proof of the theorem is complete. ## 7. Proofs of Propositions 2.13 and 2.14 In this section we verify the properties of the integers \(\iota_{k}(m_{1},\ldots,m_{k})\) stated in Proposition 2.13 and Proposition 2.14. ### Proof of Proposition 2.13 Let \(k\geq 1\) be an integer and let \(m_{1},\ldots,m_{k}\) be positive integers. 
Recall that \[\iota_{k}(m_{1},\ldots,m_{k})=\max\big{\{}j:e_{k}(\operatorname{pt})^{j}\notin(x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}})\big{\}},\] where \[e_{k}(\operatorname{pt})=\prod_{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{k}-\{0\}}(\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k})\ \in\ R_{k}(\operatorname{pt})\cong\mathbb{F}_{2}[x_{1},\ldots,x_{k}].\] We prove the claims in the order they are listed. **(1)** Assume that \(m_{k}\geq 2^{k-1}m+1\) and in addition that \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq m\). Then \(e_{k-1}(\mathrm{pt})^{m}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}})\). We transform as follows \[e_{k}(\operatorname{pt})^{m}=e_{k-1}(\operatorname{pt})^{m}\prod_{(\alpha_{1},\ldots,\alpha_{k-1})\in\mathbb{F}_{2}^{k-1}}(\alpha_{1}x_{1}+\cdots+\alpha_{k-1}x_{k-1}+x_{k})^{m}=e_{k-1}(\operatorname{pt})^{m}\cdot x_{k}^{2^{k-1}m}+p_{2^{k-1}m-1}\cdot x_{k}^{2^{k-1}m-1}+\cdots+p_{1}\cdot x_{k}+p_{0},\] where \(p_{2^{k-1}m-1},\ldots,p_{1},p_{0}\in\mathbb{F}_{2}[x_{1},\ldots,x_{k-1}]\). Consequently, \[e_{k}(\mathrm{pt})^{m}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}},x_{k}^{2^{k-1}m+1}).\] Since \(m_{k}\geq 2^{k-1}m+1\) we have that \((x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}})\subseteq(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}},x_{k}^{2^{k-1}m+1})\) and thus \(e_{k}(\mathrm{pt})^{m}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}},x_{k}^{m_{k}})\). Therefore, \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\), as claimed. **(2)** We prove the claim by induction on \(k\). For \(k=1\) we assume that \(m_{1}\geq m+1\). Then \[\iota_{1}(m_{1})=\max\big{\{}j:e_{1}(\mathrm{pt})^{j}=x_{1}^{j}\notin(x_{1}^{m_{1}})\big{\}}=m_{1}-1\geq m.\] Now, assume that the claim holds for \(k-1\geq 1\), and assume in addition that \(m_{i}\geq 2^{i-1}m+1\) for all \(1\leq i\leq k\). Then the induction hypothesis gives \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq m\), and consequently, by part (1) of this claim, it follows that \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\). 
**(3)** In this case we have that \(m_{1}=m+1,m_{2}=2m+1,\ldots,m_{k}=2^{k-1}m+1\). According to the part (2) of this claim, it follows that \[\iota_{k}(m+1,2m+1,2^{2}m+1,\ldots,2^{k-1}m+1)\geq m.\] Now, assume that \(\iota_{k}(m_{1},\ldots,m_{k})\geq r\geq 1\) for some sequence of positive integers \(m_{1},\ldots,m_{k}\). Hence, \(e_{k}(\mathrm{pt})^{r}\notin(x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}})\). We expand the transformation from the proof of part (1) of this claim as follows: \[e_{k}(\mathrm{pt})^{r}=e_{k-1}(\mathrm{pt})^{r}\prod_{(\alpha_{1},\ldots,\alpha_{k-1})\in\mathbb{F}_{2}^{k-1}}(\alpha_{1}x_{1}+\cdots+\alpha_{k-1}x_{k-1}+x_{k})^{r}=e_{k-2}(\mathrm{pt})^{r}\cdot\prod_{(\alpha_{1},\ldots,\alpha_{k-2})\in\mathbb{F}_{2}^{k-2}}(\alpha_{1}x_{1}+\cdots+x_{k-1})^{r}\prod_{(\alpha_{1},\ldots,\alpha_{k-1})\in\mathbb{F}_{2}^{k-1}}(\alpha_{1}x_{1}+\cdots+x_{k})^{r}=\cdots=x_{k}^{2^{k-1}r}x_{k-1}^{2^{k-2}r}\cdots x_{2}^{2r}x_{1}^{r}+q.\] Here \(q\) is a polynomial whose additive representation in the monomial basis does not contain the monomial \(x_{k}^{2^{k-1}r}x_{k-1}^{2^{k-2}r}\cdots x_{2}^{2r}x_{1}^{r}\). Since \(e_{k}(\mathrm{pt})^{r}\notin(x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}})\) we conclude that \[m_{k}\geq 2^{k-1}r+1,\ m_{k-1}\geq 2^{k-2}r+1,\ \ldots,\ m_{1}\geq r+1,\] implying that \[m_{k}+m_{k-1}+\cdots+m_{2}+m_{1}\geq(2^{k-1}+2^{k-2}+\cdots+2+1)r+k.\] In particular, \[m_{k}+m_{k-1}+\cdots+m_{2}+m_{1}\geq(2^{k}-1)\iota_{k}(m_{1},\ldots,m_{k})+k.\] Thus, in the case when \(m_{1}=m+1,m_{2}=2m+1,\ldots,m_{k}=2^{k-1}m+1\), we have that \[(2^{k}-1)m+k\geq(2^{k}-1)\iota_{k}(m+1,2m+1,2^{2}m+1,\ldots,2^{k-1}m+1)+k,\] or in other words \(m\geq\iota_{k}(m+1,2m+1,2^{2}m+1,\ldots,2^{k-1}m+1)\). Hence, we showed that \(\iota_{k}(m+1,2m+1,2^{2}m+1,\ldots,2^{k-1}m+1)=m\), as claimed. 
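The exact value in claim (3) is easy to confirm by machine for small parameters. The following Python sketch (an illustration only, not part of the argument; all names are ours) represents \(\mathbb{F}_{2}\)-polynomials as sets of exponent tuples, and uses the fact that membership in the monomial ideal \((x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}})\) is decided monomial by monomial, so reduction simply discards divisible exponent tuples:

```python
from itertools import product

def poly_mul(p, q, k):
    # F_2-polynomials as sets of exponent tuples; repeated monomials cancel mod 2
    r = set()
    for a in p:
        for b in q:
            r ^= {tuple(a[i] + b[i] for i in range(k))}
    return r

def euler_class_pt(k):
    # e_k(pt): the product of all non-zero linear forms in x_1, ..., x_k
    e = {(0,) * k}  # the constant polynomial 1
    for alpha in product((0, 1), repeat=k):
        if any(alpha):
            lin = {tuple(int(i == j) for j in range(k)) for i in range(k) if alpha[i]}
            e = poly_mul(e, lin, k)
    return e

def iota(ms):
    # max j such that e_k(pt)^j is not in the monomial ideal (x_1^{m_1}, ..., x_k^{m_k})
    k = len(ms)
    e = euler_class_pt(k)
    p, j = {(0,) * k}, 0
    while True:
        # dropping divisible monomials preserves the class modulo the ideal
        p = {m for m in poly_mul(p, e, k) if all(m[i] < ms[i] for i in range(k))}
        if not p:
            return j
        j += 1
```

For instance, `iota((3, 5))` and `iota((3, 5, 9))` both return \(m=2\), matching \(\iota_{k}(m+1,2m+1,\ldots,2^{k-1}m+1)=m\), and `iota((m1,))` returns \(m_{1}-1\) as in the proof of part (2).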
**(4)** We start with the following transformation \[e_{k}(\mathrm{pt})^{m} =e_{k-r}(\mathrm{pt})^{m}\prod_{(\alpha_{k-r+1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{r}-\{0\}}\prod_{(\alpha_{1},\ldots,\alpha_{k-r})\in\mathbb{F}_{2}^{k-r}}(\alpha_{1}x_{1}+\cdots+\alpha_{k}x_{k})^{m}\] \[=e_{k-r}(\mathrm{pt})^{m}\prod_{(\alpha_{k-r+1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{r}-\{0\}}\prod_{(\alpha_{1},\ldots,\alpha_{k-r})\in\mathbb{F}_{2}^{k-r}}\] \[\qquad\qquad\big{(}(\alpha_{1}x_{1}+\cdots+\alpha_{k-r}x_{k-r})+(\alpha_{k-r+1}x_{k-r+1}+\cdots+\alpha_{k}x_{k})\big{)}^{m}.\] Hence, \[e_{k}(\mathrm{pt})^{m}=\underbrace{e_{k-r}(\mathrm{pt})^{m}\prod_{(\alpha_{k-r+1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{r}-\{0\}}\big{(}\alpha_{k-r+1}x_{k-r+1}+\cdots+\alpha_{k}x_{k}\big{)}^{2^{k-r}m}}_{=p}+q,\] where the sets of (non-zero) monomials in the additive presentations of the polynomials \(p\) and \(q\) are disjoint. The assumptions \(\iota_{k-r}(m_{1},\ldots,m_{k-r})\geq m\) and \(\iota_{r}(m_{k-r+1},\ldots,m_{k})\geq 2^{k-r}m\) imply that \[e_{k-r}(\mathrm{pt})^{m}\notin\big{(}x_{1}^{m_{1}},\ldots,x_{k-r}^{m_{k-r}}\big{)}\] and \[\prod_{(\alpha_{k-r+1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{r}-\{0\}}\big{(}\alpha_{k-r+1}x_{k-r+1}+\cdots+\alpha_{k}x_{k}\big{)}^{2^{k-r}m}\notin\big{(}x_{k-r+1}^{m_{k-r+1}},\ldots,x_{k}^{m_{k}}\big{)}.\] Therefore, the polynomial \(p\) is the witness that \[e_{k}(\mathrm{pt})^{m}\notin\big{(}x_{1}^{m_{1}},\ldots,x_{k-r}^{m_{k-r}},x_{k-r+1}^{m_{k-r+1}},\ldots,x_{k}^{m_{k}}\big{)},\] and consequently \(\iota_{k}(m_{1},\ldots,m_{k})\geq m\), as claimed. **(5)** The polynomial \(e_{k}(\mathrm{pt})^{m}\) can be presented as follows: \[e_{k}(\mathrm{pt})^{m}=e_{k-1}(\mathrm{pt})^{m}x_{k}^{m}\prod_{(\alpha_{1},\ldots,\alpha_{k-1})\in\mathbb{F}_{2}^{k-1}-\{0\}}(\alpha_{1}x_{1}+\cdots+\alpha_{k-1}x_{k-1}+x_{k})^{m}.\] Hence the lowest power of \(x_{k}\) appearing in \(e_{k}(\mathrm{pt})^{m}\) is \(x_{k}^{m}\), with coefficient \(e_{k-1}(\mathrm{pt})^{2m}\). 
The assumption \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2m\) implies that \[e_{k-1}(\mathrm{pt})^{2m}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}}),\] and since \(m_{k}\geq m+1\) it follows that \(e_{k}(\mathrm{pt})^{m}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}},x_{k}^{m_{k}})\). **(6)** In the case when \(k=2\) we have that \[e_{2}(\mathrm{pt})^{m}=\big{(}x_{1}x_{2}(x_{1}+x_{2})\big{)}^{m}=\sum_{i=0}^{m}\binom{m}{i}x_{1}^{m+i}x_{2}^{2m-i}. \tag{14}\] If \(m\leq\iota_{2}(m_{1},m_{2})\) then \(e_{2}(\mathrm{pt})^{m}\notin(x_{1}^{m_{1}},x_{2}^{m_{2}})\). Hence, there exists a non-zero monomial \(\binom{m}{i}x_{1}^{m+i}x_{2}^{2m-i}\) in presentation (14) of \(e_{2}(\mathrm{pt})^{m}\) which does not belong to the ideal \((x_{1}^{m_{1}},x_{2}^{m_{2}})\). This means that \(\binom{m}{i}=1\mod 2\), \(m+i\leq m_{1}-1\) and \(2m-i\leq m_{2}-1\) for some integer \(0\leq i\leq m\). Conversely, assume that there is an integer \(0\leq i\leq m\) such that \(\binom{m}{i}=1\mod 2\) and \(2m-m_{2}+1\leq i\leq m_{1}-m-1\). Then the polynomial \(e_{2}(\mathrm{pt})^{m}\), when presented in the monomial basis, has the non-zero monomial \(\binom{m}{i}x_{1}^{m+i}x_{2}^{2m-i}\) which does not belong to the ideal \((x_{1}^{m_{1}},x_{2}^{m_{2}})\). Consequently, \(e_{2}(\mathrm{pt})^{m}\notin(x_{1}^{m_{1}},x_{2}^{m_{2}})\), and so \(m\leq\iota_{2}(m_{1},m_{2})\). **(7)** This is a direct consequence of the previous claim with \(m=2^{t}+r-1\), \(m_{1}=2^{t}+2r\), \(m_{2}=2^{t+1}+r\) and \(i=r-1\), because \(\binom{2^{t}+r-1}{r-1}=1\mod 2\) and \[2m-m_{2}+1=r-1\leq i=r-1\leq m_{1}-m-1=r.\] We have completed the proof of the proposition.

### Proof of Proposition 2.14

As before, \(k\geq 1\) is an integer and \(m_{1},\ldots,m_{k}\) are positive integers. In the proof we use the fact that the polynomial \(e_{k}(\mathrm{pt})\) is the top Dickson polynomial in the variables \(x_{1},\ldots,x_{k}\). For more details on Dickson polynomials see for example [36]. 
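Before turning to the individual parts of Proposition 2.14, note that in two variables the expansion (14) makes \(\iota_{2}\) directly computable, which gives a quick machine check of several claims (a sketch of ours; the parameters \(m_{1}=2^{t}+2r\), \(m_{2}=2^{t+1}+r\), \(i=r-1\) are the ones used in part (7) above, and the parity test relies on Lucas' theorem only through `math.comb`).

```python
from math import comb

def iota2(m1, m2):
    # iota_2(m1, m2) via (14): e_2(pt)^j = sum_i C(j, i) x1^(j+i) x2^(2j-i) over F_2,
    # so e_2(pt)^j lies outside (x1^m1, x2^m2) iff some odd C(j, i) has j+i < m1, 2j-i < m2
    def outside(j):
        return any(comb(j, i) % 2 and j + i < m1 and 2 * j - i < m2
                   for i in range(j + 1))
    j = 0
    while outside(j + 1):
        j += 1
    return j

def part7_witness(t, r):
    # the index i = r - 1 used in part (7), with m = 2^t + r - 1, m1 = 2^t + 2r, m2 = 2^(t+1) + r
    m, m1, m2, i = 2 ** t + r - 1, 2 ** t + 2 * r, 2 ** (t + 1) + r, r - 1
    return comb(m, i) % 2 == 1 and 2 * m - m2 + 1 <= i <= m1 - m - 1
```

For instance \(\iota_{2}(m+1,2m+1)=m\), and doubling all data as in part (4) of Proposition 2.14 satisfies \(\lfloor\iota_{2}(2m_{1},2m_{2})/2\rfloor=\iota_{2}(m_{1},m_{2})\).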
**(1)** Let \(D_{k-1},D_{k-2},\ldots,D_{1}\) be the Dickson polynomials in the variables \(x_{1},\ldots,x_{k-1}\) of degrees \(2^{k-1}-1,2^{k-1}-2,\ldots,2^{k-1}-2^{k-2}\), respectively. In particular, \(D_{k-1}=e_{k-1}(\mathrm{pt})\). From [36, Prop. 1.1] we have that \[D(x_{k}) :=\prod_{(\alpha_{1},\ldots,\alpha_{k-1})\in\mathbb{F}_{2}^{k-1}-\{0\}}(\alpha_{1}x_{1}+\cdots+\alpha_{k-1}x_{k-1}+x_{k})\] \[=x_{k}^{2^{k-1}-1}+D_{1}\,x_{k}^{2^{k-2}-1}+\cdots+D_{i}\,x_{k}^{2^{k-1-i}-1}+\cdots+D_{k-2}\,x_{k}+D_{k-1}.\] Here \(D(x_{k})\) is considered a polynomial in \(\mathbb{F}_{2}[x_{1},\ldots,x_{k-1}][x_{k}]\), and furthermore \(e_{k}(\mathrm{pt})=e_{k-1}(\mathrm{pt})x_{k}D(x_{k})\). Let \(0\leq r\leq 2^{t}-1\). We compute in \(\mathbb{F}_{2}[x_{1},\ldots,x_{k-1}][x_{k}]\) as follows: \[D(x_{k})^{2^{t}+r} =\left(x_{k}^{2^{k-1}-1}+\cdots+D_{i}\,x_{k}^{2^{k-1-i}-1}+\cdots+D_{k-2}\,x_{k}+D_{k-1}\right)^{2^{t}+r}\] \[=\left(x_{k}^{2^{t}(2^{k-1}-1)}+\cdots+D_{i}^{2^{t}}\,x_{k}^{2^{t}(2^{k-1-i}-1)}+\cdots+D_{k-2}^{2^{t}}\,x_{k}^{2^{t}}+D_{k-1}^{2^{t}}\right)\cdot\] \[\quad\left(x_{k}^{2^{k-1}-1}+\cdots+D_{i}\,x_{k}^{2^{k-1-i}-1}+\cdots+D_{k-2}\,x_{k}+D_{k-1}\right)^{r}. \tag{15}\] Then the coefficient of \(x_{k}^{2^{t}(2^{k-1}-1)}\) in \(D(x_{k})^{2^{t}+r}\) is \(D_{k-1}^{r}=e_{k-1}(\mathrm{pt})^{r}\), obtained as the product of \(x_{k}^{2^{t}(2^{k-1}-1)}\) from the first factor with \(D_{k-1}^{r}\) from the second factor in (15). Indeed, the only other candidate which might additionally contribute to the coefficient of \(x_{k}^{2^{t}(2^{k-1}-1)}\) is the product \[D_{1}^{2^{t}}x_{k}^{2^{t}(2^{k-2}-1)}\cdot x_{k}^{r(2^{k-1}-1)}=D_{1}^{2^{t}}x_{k}^{2^{t}(2^{k-2}-1)+r(2^{k-1}-1)}\] when \[2^{t}(2^{k-1}-1)=2^{t}(2^{k-2}-1)+r(2^{k-1}-1)\iff 2^{t+k-2}=r(2^{k-1}-1).\] This cannot be, because \(0\leq r\leq 2^{t}-1\). 
Consequently, the coefficient of \(x_{k}^{2^{k-1+t}+r}\) in \[e_{k}(\mathrm{pt})^{2^{t}+r}=e_{k-1}(\mathrm{pt})^{2^{t}+r}\,x_{k}^{2^{t}+r}\,D(x_{k})^{2^{t}+r}\] is equal to \(e_{k-1}(\mathrm{pt})^{2^{t}+2r}\). From the assumption \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2^{t}+2r\) we have that \[e_{k-1}(\mathrm{pt})^{2^{t}+2r}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}}),\] and since \(m_{k}\geq 2^{t+k-1}+r+1\) we conclude that \[e_{k}(\mathrm{pt})^{2^{t}+r}\notin(x_{1}^{m_{1}},\ldots,x_{k-1}^{m_{k-1}},x_{k}^{m_{k}}).\] Thus, \(\iota_{k}(m_{1},\ldots,m_{k})\geq 2^{t}+r\) as claimed. **(2)** The claim follows from the previous part of the proposition because \[2^{t+1}+r=2^{t}+r+2^{t}>2^{t}+r+r=2^{t}+2r.\] **(3)** The proof is by induction on \(k\) for every pair of integers \((2^{t},r)\) with \(1\leq r\leq 2^{t}-1\). In the case \(k=1\), the assumption \(m_{1}\geq 2^{t}+r+1\) implies that \[\iota_{1}(m_{1})=m_{1}-1\geq 2^{t}+r+1-1=2^{t}+r.\] Let us assume that the claim holds for \(k-1\geq 1\) and every pair of integers \((2^{t},r)\) with \(1\leq r\leq 2^{t}-1\) (the induction hypothesis). Take \(m_{i}\geq 2^{t+k-1}+r+1=2^{(t+1)+(k-1)-1}+r+1\) for all \(1\leq i\leq k\). Applying the induction hypothesis to the first \(k-1\) inequalities and the pair \((2^{t+1},r)\) we get that \[\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2^{t+1}+r.\] Now, the inequalities \(\iota_{k-1}(m_{1},\ldots,m_{k-1})\geq 2^{t+1}+r\) and \(m_{k}\geq 2^{t+k-1}+r+1\), together with part (2) of this proposition, imply that \(\iota_{k}(m_{1},\ldots,m_{k})\geq 2^{t}+r\). This completes the proof. **(4)** Since \(e_{k}(\mathrm{pt})^{2}=\prod_{(\alpha_{1},\ldots,\alpha_{k})\in\mathbb{F}_{2}^{k}-\{0\}}(\alpha_{1}x_{1}^{2}+\cdots+\alpha_{k}x_{k}^{2})\), the following equivalence holds \[e_{k}(\mathrm{pt})^{2m}\in(x_{1}^{2m_{1}},\ldots,x_{k}^{2m_{k}})\iff e_{k}(\mathrm{pt})^{m}\in(x_{1}^{m_{1}},\ldots,x_{k}^{m_{k}}).\] This equivalence implies the claim.

## 8. Proof of Theorem 2.15
Let \(E\) be a Euclidean vector bundle of dimension \(n\) over a compact and connected ENR \(B\), and let the integers \(1\leq k\leq n\) and \(j\geq 1\) be fixed. We first prove the equality of the ideals and then a criterion for the existence of orthogonal partitions.

### Proof of Part (1)

We prove the equality of the ideals \[\mathcal{J}_{k}(E):=(f_{1},\ldots,f_{k})=(\overline{f}_{1},\ldots,\overline{f}_{k})=:\mathcal{J}_{k}^{\prime}(E) \tag{16}\] where \[f_{i}:=\sum_{0\leq r_{1}+\cdots+r_{i}\leq n-i+1}w_{n-i+1-(r_{1}+\cdots+r_{i})}(E)\,x_{1}^{r_{1}}\cdots x_{i}^{r_{i}},\] and \[\overline{f}_{i}:=\sum_{0\leq r_{1}+\cdots+r_{k}\leq n-i+1}w_{n-i+1-(r_{1}+\cdots+r_{k})}(E)\,x_{1}^{r_{1}}\cdots x_{k}^{r_{k}},\] for \(1\leq i\leq k\). To prove the equality of the ideals, we first consider the polynomials \[X_{a}[b]:=\sum_{r_{1}+\cdots+r_{b}=n-a+1}x_{1}^{r_{1}}\cdots x_{b}^{r_{b}}\] for \(1\leq a\leq n+1\) and \(1\leq b\leq k\). It is straightforward to see that the following equality holds \[X_{a}[b+1]=X_{a}[b]+x_{b+1}\cdot X_{a+1}[b+1]. \tag{17}\] Indeed, we have that \[X_{a}[b+1]:=\sum_{r_{1}+\cdots+r_{b}+r_{b+1}=n-a+1}x_{1}^{r_{1}}\cdots x_{b}^{r_{b}}x_{b+1}^{r_{b+1}}=\\ \sum_{r_{1}+\cdots+r_{b}+0=n-a+1}x_{1}^{r_{1}}\cdots x_{b}^{r_{b}}x_{b+1}^{0}+x_{b+1}\sum_{r_{1}+\cdots+r_{b+1}=n-a}x_{1}^{r_{1}}\cdots x_{b+1}^{r_{b+1}}=\\ X_{a}[b]+x_{b+1}\cdot X_{a+1}[b+1].\] Next, using induction on \(\ell\geq 0\), we prove the following identity: \[X_{c+s}[c+\ell]=\sum_{c\leq b\leq c+\ell}\Big{(}\sum_{s_{b}+\cdots+s_{c+\ell}=b-c}x_{b}^{s_{b}}\cdots x_{c+\ell}^{s_{c+\ell}}\Big{)}X_{b+s}[b]. \tag{18}\] In the case \(\ell=0\) the equality (18) becomes the identity \(X_{c+s}[c]=X_{c+s}[c]\), and so the induction basis is verified. Now, we assume that the equality (18) holds for the given fixed integer \(\ell\geq 0\). 
For the induction step we compute and use the induction hypothesis as follows: \[X_{c+s}[c+\ell+1]\stackrel{{\eqref{eq:17}}}{{=}}X_{c+s}[c+\ell]+x_{c+\ell+1}\cdot X_{c+s+1}[c+\ell+1]\stackrel{{\eqref{eq:18}}}{{=}}\\ \sum_{c\leq b\leq c+\ell}\Big{(}\sum_{s_{b}+\cdots+s_{c+\ell}=b-c}x_{b}^{s_{b}}\cdots x_{c+\ell}^{s_{c+\ell}}\Big{)}X_{b+s}[b]+x_{c+\ell+1}\cdot X_{c+s+1}[c+\ell+1]=\\ \sum_{c\leq b\leq c+\ell}\Big{(}\sum_{s_{b}+\cdots+s_{c+\ell}=b-c}x_{b}^{s_{b}}\cdots x_{c+\ell}^{s_{c+\ell}}\Big{)}X_{b+s}[b]+\\ x_{c+\ell+1}\cdot\sum_{s_{1}+\cdots+s_{c+\ell+1}=n-c-s}x_{1}^{s_{1}}\cdots x_{c+\ell+1}^{s_{c+\ell+1}}.\] Gathering the two terms on the right-hand side of the previous equality under one sum we get that \[X_{c+s}[c+\ell+1]=\sum_{c\leq b\leq c+\ell+1}\Big{(}\sum_{s_{b}+\cdots+s_{c+\ell+1}=b-c}x_{b}^{s_{b}}\cdots x_{c+\ell+1}^{s_{c+\ell+1}}\Big{)}X_{b+s}[b].\] This completes the induction and the proof of the relation (18). We proceed with a proof of the equality (16). Observe that for \(1\leq i\leq k\): \[f_{i}=\sum_{0\leq s\leq n-i+1}w_{s}(E)X_{s+i}[i]\qquad\text{and}\qquad\overline{f}_{i}=\sum_{0\leq s\leq n-i+1}w_{s}(E)X_{s+i}[k],\] and in particular that \(f_{k}=\overline{f}_{k}\). Now, using the relation (18) we have that \[\overline{f}_{i}=\sum_{0\leq s\leq n-i+1}w_{s}(E)X_{s+i}[k]\stackrel{{\eqref{eq:18}}}{{=}}\\ \sum_{0\leq s\leq n-i+1}w_{s}(E)\Big{(}\sum_{i\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-i}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}X_{s+b}[b]\Big{)}=\\ \sum_{i\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-i}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}\Big{(}\sum_{0\leq s\leq n-i+1}w_{s}(E)X_{s+b}[b]\Big{)}=\\ \sum_{i\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-i}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}f_{b}.\] In summary, \[\overline{f}_{i}=\sum_{i\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-i}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}f_{b}. \tag{19}\] Hence, \((\overline{f}_{1},\ldots,\overline{f}_{k})\subseteq(f_{1},\ldots,f_{k})\). 
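Identities (17) and (18) are purely combinatorial, so they can also be verified mechanically for small parameters. In the sketch below (our own check, not part of the argument), \(X_{a}[b]\) is encoded as a set of exponent tuples and \(\mathbb{F}_{2}\)-addition as set symmetric difference.

```python
def comps(total, length):
    # all tuples of non-negative integers of the given length summing to total
    if length == 0:
        return [()] if total == 0 else []
    return [(h,) + t for h in range(total + 1) for t in comps(total - h, length - 1)]

def X(n, a, b, k):
    # X_a[b] = sum over r1+...+rb = n-a+1 of x1^r1 ... xb^rb, padded to k variables
    d = n - a + 1
    return set() if d < 0 else {c + (0,) * (k - b) for c in comps(d, b)}

def times_x(mons, j):
    # multiply a set of monomials by the variable with 0-based index j
    return {tuple(e + (1 if i == j else 0) for i, e in enumerate(m)) for m in mons}

def rhs18(n, c, s, ell, k):
    # right-hand side of identity (18), coefficients over F_2 (requires k >= c + ell)
    acc = set()
    for b in range(c, c + ell + 1):
        for comp in comps(b - c, c + ell - b + 1):
            pref = [0] * k
            for idx, e in enumerate(comp):
                pref[b - 1 + idx] = e
            for mono in X(n, b + s, b, k):
                acc ^= {tuple(x + y for x, y in zip(pref, mono))}
    return acc
```

Identity (17) then reads `X(n, a, b + 1, k) == X(n, a, b, k) ^ times_x(X(n, a + 1, b + 1, k), b)`, and `rhs18` reproduces \(X_{c+s}[c+\ell]\).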
On the other hand, since \(f_{k}=\overline{f}_{k}\) we have that \(f_{k}\in\mathcal{J}_{k}^{\prime}(E)=(\overline{f}_{1},\ldots,\overline{f}_{k})\). Now, for \(1\leq r\leq k-1\) assume that \(f_{r+1},\ldots,f_{k}\in\mathcal{J}_{k}^{\prime}(E)\). Then from the equality (19) it follows that \[\overline{f}_{r}=\sum_{r\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-r}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}f_{b}=\\ f_{r}+\sum_{r+1\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-r}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}f_{b},\] and consequently, by assumption, we have \[f_{r}=\overline{f}_{r}+\sum_{r+1\leq b\leq k}\Big{(}\sum_{s_{b}+\cdots+s_{k}=b-r}x_{b}^{s_{b}}\cdots x_{k}^{s_{k}}\Big{)}f_{b}\ \in\ \mathcal{J}_{k}^{\prime}(E).\] Thus, \((\overline{f}_{1},\ldots,\overline{f}_{k})\supseteq(f_{1},\ldots,f_{k})\). We have completed the proof of the equality (16).

### Proof of Part (2)

For the second part of the theorem assume that the class \(e_{k}(B)^{j}\) does not belong to the ideal \(\mathcal{J}_{k}(E)\). The proof relies on the criterion from Theorem 3.4. In other words, it suffices to prove that \[\mathrm{e}\left(\big{(}B_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\right)\neq 0.\] The mod \(2\) Euler class of the vector bundle \(\big{(}B_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\), or in other words the top Stiefel-Whitney class, lives in the cohomology ring \(H^{*}(Y_{k}(E);\mathbb{F}_{2})\). We show that

* \(H^{*}(Y_{k}(E);\mathbb{F}_{2})\cong R_{k}(B)/\mathcal{J}_{k}(E)\), and that
* \(w_{(2^{k}-1)j}\big{(}\big{(}B_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}=e_{k}(B)^{j}+\mathcal{J}_{k}(E)\in R_{k}(B)/\mathcal{J}_{k}(E)\).

The second claim follows from the first claim, the fact that \(B_{k}(E)\) is the restriction of \(A_{k}(E)\), and the related computation of \(w_{(2^{k}-1)j}\big{(}\big{(}A_{k}(E)/\underline{\mathbb{R}}\big{)}^{\oplus j}\big{)}\) in the proof of Theorem 3.2. 
Thus we need to prove only the first statement, that is, to compute the cohomology ring \(H^{*}(Y_{k}(E);\mathbb{F}_{2})\). First, we give a description of the space \(Y_{k}(E)\) as a projective bundle at the end of the tower of projective bundles (20), where \(E_{1}:=E\) and \(p_{1}\) is the projection. The vector bundles \(E_{2},\ldots,E_{k}\) and the maps \(p_{2},\ldots,p_{k}\) are defined iteratively as follows. Let \(H(E_{1})\) be the Hopf line bundle over \(\mathbb{P}(E_{1})\), and recall that \(p_{1}\colon\mathbb{P}(E_{1})\longrightarrow B\) is the projection map. Then \(H(E_{1})\) is a vector subbundle of the pull-back vector bundle \(p_{1}^{*}E_{1}\), and we set \[E_{2}:=H(E_{1})^{\perp}\] to be the orthogonal complement of \(H(E_{1})\) inside \(p_{1}^{*}E_{1}\). In particular, \(E_{2}\) is an \((n-1)\)-dimensional vector bundle over \(\mathbb{P}(E_{1})\). Set \(p_{2}\colon\mathbb{P}(E_{2})\longrightarrow\mathbb{P}(E_{1})\) to be the projection map. Next, \(H(E_{2})\oplus p_{2}^{*}H(E_{1})\) is a vector subbundle of the pull-back vector bundle \((p_{2}\circ p_{1})^{*}E_{1}\), and so we define \[E_{3}:=\big{(}H(E_{2})\oplus p_{2}^{*}H(E_{1})\big{)}^{\perp},\] and \(p_{3}\) to be the projection map \(\mathbb{P}(E_{3})\longrightarrow\mathbb{P}(E_{2})\). We continue in the same way. Assume that for \(1\leq i\leq k-1\), all the vector bundles \(E_{1},\ldots,E_{i}\), of dimensions \(n,n-1,\ldots,n-i+1\), respectively, and the projection maps \(p_{1},\ldots,p_{i}\) are defined. Notice that \[H(E_{i})\oplus p_{i}^{*}H(E_{i-1})\oplus(p_{i}\circ p_{i-1})^{*}H(E_{i-2})\oplus\cdots\oplus(p_{i}\circ\cdots\circ p_{1})^{*}H(E_{1})\] is a vector subbundle of \((p_{i}\circ\cdots\circ p_{1})^{*}E_{1}\). We define the vector bundle \(E_{i+1}\) as the orthogonal complement \[E_{i+1}:=\Big{(}H(E_{i})\oplus p_{i}^{*}H(E_{i-1})\oplus\cdots\oplus(p_{i}\circ\cdots\circ p_{1})^{*}H(E_{1})\Big{)}^{\perp}. 
\tag{21}\] The map \(p_{i+1}\) is defined to be the standard projection \(\mathbb{P}(E_{i+1})\longrightarrow\mathbb{P}(E_{i})\). It is clear that \(Y_{k}(E)=\mathbb{P}(E_{k})\). Now, we use the tower of projective bundles (20), Lemma 4.1, as well as the proof of Claim 4.2, to describe the cohomology ring \(H^{*}(Y_{k}(E);\mathbb{F}_{2})=H^{*}(\mathbb{P}(E_{k});\mathbb{F}_{2})\). Since \(\mathbb{P}(E_{k})\) is the projective bundle of the \((n-k+1)\)-dimensional vector bundle \(E_{k}\) over \(\mathbb{P}(E_{k-1})\), from Lemma 4.1 we have that \[H^{*}(Y_{k}(E);\mathbb{F}_{2})\cong H^{*}(\mathbb{P}(E_{k-1});\mathbb{F}_{2})[x_{k}]/\Big{(}\sum_{s=0}^{n-k+1}w_{n-k+1-s}(E_{k})\,x_{k}^{s}\Big{)},\] where \(x_{k}\) corresponds to the mod \(2\) Euler class of the Hopf line bundle \(H(E_{k})\). Continuing to apply Lemma 4.1 to the projective bundles \(\mathbb{P}(E_{k-1}),\dots,\mathbb{P}(E_{1})\), we get the following conclusion \[H^{*}(Y_{k}(E);\mathbb{F}_{2})\cong\\ H^{*}(B;\mathbb{F}_{2})[x_{1},\dots,x_{k}]/\Big{(}\sum_{s=0}^{n}w_{n-s}(E_{1})\,x_{1}^{s},\dots,\sum_{s=0}^{n-k+1}w_{n-k+1-s}(E_{k})\,x_{k}^{s}\Big{)}. \tag{22}\] Here \(x_{i}\), for all \(1\leq i\leq k\), with a slight abuse of notation, corresponds to the mod \(2\) Euler class of the Hopf line bundle \(H(E_{i})\), or more precisely to the mod \(2\) Euler class of the pull-back line bundle \((p_{k}\circ\dots\circ p_{i+1})^{*}H(E_{i})\). Set \(f_{i}:=\sum_{s=0}^{n-i+1}w_{n-i+1-s}(E_{i})\,x_{i}^{s}\) for \(1\leq i\leq k\). Then \[H^{*}(Y_{k}(E);\mathbb{F}_{2})\cong H^{*}(B;\mathbb{F}_{2})[x_{1},\dots,x_{k}]/\big{(}f_{1},\dots,f_{k}\big{)}.\] Now we focus on the identification of the Stiefel-Whitney classes of the vector bundles \(E_{1},\dots,E_{k}\) in terms of the Stiefel-Whitney classes of \(E\). Note that \(E_{1}=E\) by definition, and so \(w(E_{1})=w(E)\). 
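The computation of the remaining classes \(w(E_{i})\) inverts total Stiefel-Whitney classes, which rests on the \(\mathbb{F}_{2}\) power-series identity \((1+x_{j})^{-1}=\sum_{r\geq 0}x_{j}^{r}\). A small sanity check of this identity in a truncated polynomial ring (an illustration of ours; the truncation degree and variable count are arbitrary):

```python
N, K = 4, 3  # truncation degree and number of variables for the check

def trunc_mul(p, q):
    # multiply F2 polynomials (sets of exponent tuples), dropping total degree > N
    out = set()
    for a in p:
        for b in q:
            m = tuple(x + y for x, y in zip(a, b))
            if sum(m) <= N:
                out ^= {m}
    return out

one = {(0,) * K}
x = [tuple(int(i == j) for i in range(K)) for j in range(K)]

total = one       # product of the classes (1 + x_j)
inverse = one     # product of the claimed inverses sum_r x_j^r
for j in range(K):
    total = trunc_mul(total, {(0,) * K, x[j]})
    geom = {tuple(r * int(i == j) for i in range(K)) for r in range(N + 1)}
    inverse = trunc_mul(inverse, geom)

check = trunc_mul(total, inverse)  # should be 1 up to the truncation degree
```

The product of \(\prod_{j}(1+x_{j})\) with \(\prod_{j}\sum_{r}x_{j}^{r}\) is indeed \(1\) modulo terms of degree above the truncation.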
Next, from the definition (21) of the vector bundles \(E_{i}\) for \(2\leq i\leq k\), as orthogonal complements inside the pull-back of \(E_{1}\), we get that \[w(E_{i}) =w(E)\cdot w\Big{(}-\big{(}H(E_{i-1})\oplus p_{i-1}^{*}H(E_{i-2})\oplus\dots\oplus(p_{i-1}\circ\dots\circ p_{1})^{*}H(E_{1})\big{)}\Big{)}\] \[=w(E)\cdot w\big{(}-H(E_{i-1})\big{)}\cdot w\big{(}-p_{i-1}^{*}H(E_{i-2})\big{)}\cdots w\big{(}-(p_{i-1}\circ\dots\circ p_{1})^{*}H(E_{1})\big{)}.\] From Lemma 4.1 we also know that \[w(H(E_{i-1}))=1+x_{i-1},\ \ldots,\ w((p_{i-1}\circ\dots\circ p_{1})^{*}H(E_{1}))=1+x_{1}.\] Here we assume the expected identifications of the classes \(x_{1},\ldots,x_{i-1}\) along the sequence of isomorphisms given in Lemma 4.1. Combining the last two observations we have that \[w(E_{i})=w(E)\cdot\frac{1}{1+x_{i-1}}\cdot\frac{1}{1+x_{i-2}}\cdots\frac{1}{1+x_{1}}=w(E)\cdot\sum_{r_{i-1}\geq 0}x_{i-1}^{r_{i-1}}\cdot\sum_{r_{i-2}\geq 0}x_{i-2}^{r_{i-2}}\ \cdots\ \sum_{r_{1}\geq 0}x_{1}^{r_{1}},\] for \(2\leq i\leq k\). Consequently, we have that \[f_{i}=\sum_{s=0}^{n-i+1}w_{n-i+1-s}(E_{i})\,x_{i}^{s}=\sum_{0\leq r_{1}+\cdots+r_{i}\leq n-i+1}w_{n-i+1-(r_{1}+\cdots+r_{i})}(E)\,x_{1}^{r_{1}}\cdots x_{i}^{r_{i}}\] for every \(1\leq i\leq k\). This finishes the proof of the first claim, and so the proof of Theorem 2.15 is complete.

## 9. Even more main results

In this section, we use methods developed in previous sections to give new proofs and generalise results of Guth & Katz [20], Blagojevic, Dimitrijevic Blagojevic & Ziegler [5], Schnider [31], and Soberon & Takahashi [34]. Throughout this section \(B\) will be a compact, connected ENR, and \(E\) will be a Euclidean real vector bundle of dimension \(n\) over \(B\). For an integer \(k\geq 1\), \(E(1),\ldots,E(k)\) will be finite-dimensional non-zero real vector bundles over \(B\) with \(\dim E(i)=n_{i}\). As before, we write \(S(E(i))\) for the sphere bundle of \(E(i)\) with fibre at \(b\in B\) the space of oriented \(1\)-dimensional subspaces of \(E(i)_{b}\). 
Equivalently, \(S(E(i))\) is the unit sphere bundle for a chosen Euclidean structure. Also, we shall use \(V\) for a Euclidean vector space, sometimes regarded as a vector bundle over a point. Recall that \(A_{k}(E(1),\ldots,E(k))\) is the \(2^{k}\)-dimensional real vector bundle over \(\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k))\) with fibre at \((L_{1},\ldots,L_{k})\), where \(L_{i}\in\mathbb{P}(E(i)_{b})\), \(b\in B\), the real vector space of all functions \(S(L_{1})\times\cdots\times S(L_{k})\longrightarrow\mathbb{R}\). As a space of real-valued functions, each fibre of \(A_{k}(E(1),\ldots,E(k))\) can be equipped with a partial order by setting \[f_{1}\leq f_{2}\quad\Longleftrightarrow\quad(\forall x\in S(L_{1})\times\cdots\times S(L_{k}))\ f_{1}(x)\leq f_{2}(x)\] for \(f_{1},f_{2}\) in a fibre of \(A_{k}(E(1),\ldots,E(k))\). Hence, every finite non-empty subset \(S\) of functions has a least upper bound, which we shall denote by \(\max(S)\).

### Partitioning by polynomials

Now we give an extension of the results [20, Thm. 4.1], [19, Thm. 0.3] and [5, Thm. 1.3] to the setting of mass assignments over an arbitrary real vector bundle \(E\). In the case of a vector bundle over a point we recover the original results. For an integer \(d\geq 0\), let \(\mathcal{P}^{d}(E)\) denote the real vector bundle of dimension \(\binom{n+d-1}{d}\) over \(B\) with fibre at \(b\in B\) the vector space of homogeneous polynomial functions \(v\colon E_{b}\longrightarrow\mathbb{R}\) of degree \(d\). It is the dual \((S^{d}E)^{*}\) of the \(d\)th symmetric power of \(E\). If \(d=1\), we can identify \(\mathcal{P}^{1}(E)=E^{*}\) with \(E\) using the inner product. 
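The fibre dimension \(\binom{n+d-1}{d}\) of \(\mathcal{P}^{d}(E)\) is simply the number of degree-\(d\) monomials in \(n\) variables; a one-line check of this standard count (an illustration of ours):

```python
from math import comb
from itertools import combinations_with_replacement

def homogeneous_dim(n, d):
    # number of monomials x_1^{a_1} ... x_n^{a_n} with a_1 + ... + a_n = d:
    # each monomial corresponds to a multiset of d variables out of n
    return sum(1 for _ in combinations_with_replacement(range(n), d))
```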
In what follows, the crucial property of polynomial functions that we shall need is that for a non-zero homogeneous polynomial function \(v\in\mathcal{P}^{d}(V)\), the zero-set \[Z(v)=\{x\in S(V)\,|\,v(x)=0\}\] is a null set with respect to the Lebesgue measure on the Riemannian manifold \(S(V)\). It follows that, for any \(\epsilon>0\), there is an open neighbourhood of \(Z(v)\) in the sphere \(S(V)\) with volume less than \(\epsilon\); consult [35]. Now we extend our discussion from Section 3.3. Assume that \(E(i)\subseteq\mathcal{P}^{d(i)}(E)\) is a vector subbundle of the vector bundle of homogeneous polynomial functions of degree \(d(i)\geq 1\). For \(b\in B\), \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k)_{b})\), and \((v_{1},\ldots,v_{k})\in S(L_{1})\times\cdots\times S(L_{k})\), let us define an analogue of an orthant by \[\mathcal{A}_{b;v_{1},\ldots,v_{k}}:=\{u\in S(E_{b})\,|\,v_{1}(u)>0,\,\ldots,\,v_{k}(u)>0\}.\] We note that any continuous real-valued function on the sphere bundle \(\varphi\colon S(E)\longrightarrow\mathbb{R}\) restricts to a function \(\varphi_{b}\colon S(E_{b})\longrightarrow\mathbb{R}\) which can be integrated over the set \(\mathcal{A}_{b;v_{1},\ldots,v_{k}}\). The first generalization of [20, Thm. 4.1], which is at the same time an extension of our Theorem 2.2, can be stated as follows. 
**Theorem 9.1**.: _Under the hypotheses in the text, for an integer \(j\geq 1\), given continuous functions \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) assume that the \(\mathbb{F}_{2}\)-cohomology Euler class_ \[e(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})^{j}\in H^{(2^{k}-1)j}(\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k));\mathbb{F}_{2})\] _of the vector bundle \(\underline{\mathbb{R}}^{j}\otimes(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})\cong(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})^{\oplus j}\) is non-zero._

_Then there exists a point \(b\in B\) and lines \(L_{i}\in\mathbb{P}(E(i)_{b})\), \(1\leq i\leq k\), such that, for each \(1\leq\ell\leq j\), the function_ \[S(L_{1})\times\cdots\times S(L_{k})\longrightarrow\mathbb{R},\qquad(v_{1},\ldots,v_{k})\longmapsto\int_{\mathcal{A}_{b;\,v_{1},\ldots,v_{k}}}(\varphi_{\ell})_{b}\] _is constant._

Proof.: As in Section 3.3, we define for any continuous function \(\varphi\colon S(E)\longrightarrow\mathbb{R}\) a section \(s_{\varphi}\) of the vector bundle \(A_{k}(E(1),\ldots,E(k))\) by \[s_{\varphi}(b,(L_{1},\ldots,L_{k}))(v_{1},\ldots,v_{k}):=\int_{\mathcal{A}_{b;\,v_{1},\ldots,v_{k}}}\varphi_{b}.\] Continuity of \(s_{\varphi}\) follows from the fact that zero sets of polynomial functions are sets of Lebesgue measure zero on the sphere. The proof then follows the pattern of arguments in the proof of Theorem 3.3. The result remains true if the functions \(\varphi_{\ell}\) are only assumed to be integrable in an appropriate sense. Form the locally trivial bundle \(L^{1}_{B}(S(E);\mathbb{R})\longrightarrow B\) with fibre at \(b\in B\) the Banach space \(L^{1}(S(E_{b});\mathbb{R})\) of all absolutely Lebesgue integrable functions \(S(E_{b})\longrightarrow\mathbb{R}\). If \(\varphi\) is a section of this Banach bundle, then we can integrate \(\varphi_{b}\in L^{1}(S(E_{b});\mathbb{R})\) and the associated section \(s_{\varphi}\) is continuous. 
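To see the section \(s_{\varphi}\) in the simplest situation, take \(B=\mathrm{pt}\), \(E=\mathbb{R}^{3}\) and linear polynomials (\(d=1\)): the integrals are then integrals over spherical orthants. The sketch below (our own numerical illustration; the point count and tolerance are arbitrary choices) approximates them with a deterministic Fibonacci lattice, for \(\varphi\equiv 1\) and the orthogonal directions \(v_{1}(u)=u_{1}\), \(v_{2}(u)=u_{2}\), whose orthant carries exactly one quarter of the normalised measure.

```python
from math import pi, sqrt, cos, sin

def fib_sphere(n):
    # deterministic, roughly uniform points on the unit sphere S^2
    phi = pi * (3 - sqrt(5))  # golden angle
    pts = []
    for i in range(n):
        z = 1 - 2 * (i + 0.5) / n
        r = sqrt(max(0.0, 1 - z * z))
        pts.append((r * cos(phi * i), r * sin(phi * i), z))
    return pts

def orthant_fraction(v1, v2, pts):
    # fraction of the sphere where both polynomial functions are positive
    return sum(1 for p in pts if v1(p) > 0 and v2(p) > 0) / len(pts)

pts = fib_sphere(20000)
quarter = orthant_fraction(lambda p: p[0], lambda p: p[1], pts)
half = sum(1 for p in pts if p[2] > 0) / len(pts)  # a hemisphere, for comparison
```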
Next, we extend our results to probability measures as follows. Let us write \(M_{+}(S(E))\longrightarrow B\) for the locally trivial bundle with fibre at \(b\in B\) the space \(M_{+}(S(E_{b}))\) of all finite Borel measures on the sphere \(S(E_{b})\), see Section 1.2. A continuous section \(\mu\) of \(M_{+}(S(E))\) will be called a _family of probability measures_ on \(S(E)\) if \(\mu_{b}\in M_{+}(S(E_{b}))\) is a probability measure for each \(b\in B\). In this more general context the zero set of a polynomial function can have positive measure. Now, for each \(b\in B\) and every \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k)_{b})\), we have \(2^{k}\) non-negative real numbers \(\mu_{b}(\mathcal{A}_{b;v_{1},\ldots,v_{k}})\in\mathbb{R}\), \((v_{1},\ldots,v_{k})\in S(L_{1})\times\cdots\times S(L_{k})\), the measures of the generalised orthants, with sum less than or equal to \(1\) (the measure of a zero set can be positive). The following proposition allows us to transfer this more general setup into the previously developed topological framework. **Proposition 9.2**.: _Assume that for an integer \(j\geq 1\) there exist families of probability measures \(\mu_{1},\ldots,\mu_{j}\) on \(S(E)\) with the property that, for each \(b\in B\) and every \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k)_{b})\), there is \((v_{1},\ldots,v_{k})\in S(L_{1})\times\cdots\times S(L_{k})\) and some \(\ell\) such that \((\mu_{\ell})_{b}(\mathcal{A}_{b;v_{1},\ldots,v_{k}})>1/2^{k}\). 
Then the vector bundle \(\underline{\mathbb{R}}^{j}\otimes\big{(}A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\big{)}\cong(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})^{\oplus j}\) has a nowhere zero section._

Proof.: For a fixed integer \(1\leq\ell\leq j\), consider the set of points \[U_{\ell}:=\big{\{}\,x=(b;L_{1},\ldots,L_{k})\in\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k)):\\ (\exists(v_{1},\ldots,v_{k})\in S(L_{1})\times\cdots\times S(L_{k}))\ \ (\mu_{\ell})_{b}(\mathcal{A}_{b;v_{1},\ldots,v_{k}})>1/2^{k}\,\big{\}},\] which is an open subspace of the base space \(X:=\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k))\). From the assumption it follows that \(U_{1},\ldots,U_{j}\) form an open cover of the base space \(X\). Using the local triviality of the vector bundles, for every point \(x\in U_{\ell}\) we can manufacture a (continuous) section \(s_{\ell}^{x}\) of \(A_{k}(E(1),\ldots,E(k))\) and an open neighborhood \(U_{\ell}^{x}\) of \(x\) such that for each \(x^{\prime}=(b^{\prime};L_{1}^{\prime},\ldots,L_{k}^{\prime})\in X\) the following holds

1. \(s_{\ell}^{x}(x^{\prime})(v_{1}^{\prime},\ldots,v_{k}^{\prime})\in[0,1]\), for all \((v_{1}^{\prime},\ldots,v_{k}^{\prime})\in S(L_{1}^{\prime})\times\cdots\times S(L_{k}^{\prime})\);
2. if \(s_{\ell}^{x}(x^{\prime})(v_{1}^{\prime},\ldots,v_{k}^{\prime})=1\), then \((\mu_{\ell})_{b^{\prime}}(\mathcal{A}_{b^{\prime};v_{1}^{\prime},\ldots,v_{k}^{\prime}})>1/2^{k}\);
3. if \(x^{\prime}\in U_{\ell}^{x}\), then there is some \((v_{1}^{\prime},\ldots,v_{k}^{\prime})\) such that \(s_{\ell}^{x}(x^{\prime})(v_{1}^{\prime},\ldots,v_{k}^{\prime})=1\).

Since \(X\) is compact and \(U_{1},\ldots,U_{j}\) form an open cover of \(X\), it can be refined to a compact cover \(K_{1},\ldots,K_{j}\) of \(X\) with the property that \(K_{\ell}\subseteq U_{\ell}\) for \(1\leq\ell\leq j\). 
Now, for each \(\ell\), we can choose a finite subset \(S_{\ell}\subseteq U_{\ell}\) such that \(K_{\ell}\subseteq\bigcup_{x\in S_{\ell}}U_{\ell}^{x}\). This allows us to define a continuous section \(s_{\ell}\) of \(A_{k}(E(1),\ldots,E(k))\) as \(s_{\ell}:=\max\left\{s_{\ell}^{x}:x\in S_{\ell}\right\}\). Here the maximum is taken with respect to the partial order on the space of real valued functions which was introduced at the beginning of this section. The properties (i), (ii) and (iii) ensure that at each point \(x\in K_{\ell}\) at least one of the \(2^{k}\) components of \(s_{\ell}(x)\) is equal to \(1\), but that not all are equal to \(1\). Thus, the associated section \(\bar{s}_{\ell}\) of \(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\) has no zeros in \(K_{\ell}\). The sum \((\bar{s}_{1},\ldots,\bar{s}_{j})\) is a nowhere zero section of \(\underline{\mathbb{R}}^{j}\otimes(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})\). Now a generalization of Theorem 9.1 can be stated as follows. **Theorem 9.3**.: _Under the hypotheses in the text, suppose that for an integer \(j\geq 1\), \(\mu_{1},\ldots,\mu_{j}\) are families of probability measures on the sphere bundle \(S(E)\)._

_If the \(\mathbb{F}_{2}\)-cohomology Euler class_ \[e(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})^{j}\in H^{(2^{k}-1)j}(\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k));\,\mathbb{F}_{2})\] _of the vector bundle \(\underline{\mathbb{R}}^{j}\otimes\left(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\right)\) is non-zero, then there exists a point \(b\in B\) and lines \(L_{i}\in\mathbb{P}(E(i)_{b})\), \(1\leq i\leq k\), such that, for each \(1\leq\ell\leq j\) and every \((v_{1},\ldots,v_{k})\in S(L_{1})\times\cdots\times S(L_{k})\),_ \[(\mu_{\ell})_{b}(\mathcal{A}_{b;v_{1},\ldots,v_{k}})\leq\frac{1}{2^{k}}.\] Proof.: Since the Euler class is non-zero, every section of the vector bundle has a zero. So the assertion follows from Proposition 9.2. 
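In the smallest instance \(k=1\), \(B=\mathrm{pt}\), \(j=1\), Theorem 9.3 asserts that for a probability measure on \(S^{1}\) some direction \(v\) gives \(\mu(\mathcal{A}_{\mathrm{pt};v})\leq 1/2\) simultaneously for \(v\) and \(-v\). For a discrete measure this can be found by brute force (the atoms and the angular grid below are arbitrary choices of ours):

```python
from math import cos, sin, pi

atoms = [0, 10, 20, 200, 250]      # atom directions in degrees
weights = [0.2] * len(atoms)       # a discrete probability measure on S^1

def half_mass(theta):
    # mass of the open half-circle {u : <u, v> > 0} for v at angle theta (degrees)
    vx, vy = cos(pi * theta / 180), sin(pi * theta / 180)
    return sum(w for a, w in zip(atoms, weights)
               if cos(pi * a / 180) * vx + sin(pi * a / 180) * vy > 1e-12)

# directions whose two open half-circles both carry at most 1/2 of the mass
good = [t for t in range(360) if half_mass(t) <= 0.5 and half_mass(t + 180) <= 0.5]
```

Here the direction orthogonal to the atom at \(0^{\circ}\) (that is, \(\theta=90^{\circ}\)) leaves mass \(0.4\) in each open half-circle, the remaining \(0.2\) sitting on the boundary.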
As an application we give a spherical version of a generalization in [5, Thm. 1.3] of [20, Thm. 4.1]. **Corollary 9.4**.: _Let \(V\) be a real vector space of dimension \(n\). There is a constant \(C_{n}\) with the property that for integers \(d\geq 1\) and \(j\geq 1\), and probability measures \(\mu_{1},\ldots,\mu_{j}\) on the sphere \(S(V)\), there exists a non-zero homogeneous polynomial function \(v\) of degree \(d\) on \(V\) such that for each component \(\mathcal{O}\) of the complement in \(S(V)\) of the zero set of \(v\)_ \[\mu_{1}(\mathcal{O})<C_{n}\cdot\frac{j}{d^{n-1}},\ldots,\mu_{j}(\mathcal{O})<C_{n}\cdot\frac{j}{d^{n-1}}.\] Proof.: Consider probability measures \(\mu_{1},\ldots,\mu_{j}\) and fix an integer \(k\geq 1\). We shall apply Theorem 9.3 with \(B=\mathrm{pt}\) a point, \(V=\mathbb{R}^{n}\) and \(E(i)=V(i)\subseteq\mathcal{P}^{d(i)}(V)\) a vector subspace of dimension \(n_{i}\leq\binom{n+d(i)-1}{d(i)}\). Let \(r(i)\geq 1\) be the least positive integer such that \(r(i)^{n-1}>2^{i-1}j\), and set \[d(i)=(n-1)r(i)\qquad\text{and}\qquad n_{i}=r(i)^{n-1}.\] Take \(V(i)\) to be the \(n_{i}=r(i)^{n-1}\)-dimensional space of polynomials with basis, in terms of the standard coordinate functions \(\xi_{i}\), the monomials \[(\xi_{1}^{s_{1}}\xi_{2}^{r(i)-s_{1}})(\xi_{2}^{s_{2}}\xi_{3}^{r(i)-s_{2}})\cdots(\xi_{n-1}^{s_{n-1}}\xi_{n}^{r(i)-s_{n-1}}),\] where \(0\leq s_{1},\ldots,s_{n-1}<r(i)\). It follows from Proposition 2.13 that \(\iota_{k}(n_{1},\ldots,n_{k})\geq j\). By Theorem 9.3, there exist homogeneous polynomials \(v_{1},\ldots,v_{k}\) of degrees \(d(1),\ldots,d(k)\), respectively, such that \(\mu_{\ell}(\mathcal{A}_{\text{pt};v_{1},\ldots,v_{k}})\leq 1/2^{k}\) for all \(1\leq\ell\leq j\). The product \(v_{1}\cdots v_{k}\) has degree \(d_{k}=d(1)+\ldots+d(k)\) and each component of the complement of its zero-set is contained in \(\mathcal{A}_{\text{pt};u_{1},\ldots,u_{k}}\) for some choice of signs \(u_{i}=\pm v_{i}\). 
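The quantitative bookkeeping in the remainder of this proof can be checked numerically. The sketch below (ours; it takes \(C_{n}^{\prime}=\big(2(n-1)/(2^{1/(n-1)}-1)\big)^{n-1}\), the constant produced by summing the geometric series over \(i\)) confirms the bound \(d_{k}^{n-1}<C_{n}^{\prime}\,2^{k}j\) for small parameters.

```python
def least_r(i, j, n):
    # least r >= 1 with r^(n-1) > 2^(i-1) * j, defining r(i) in the proof
    r = 1
    while r ** (n - 1) <= 2 ** (i - 1) * j:
        r += 1
    return r

def d_k(k, j, n):
    # degree of the product v_1 ... v_k, with d(i) = (n-1) * r(i)
    return (n - 1) * sum(least_r(i, j, n) for i in range(1, k + 1))

def bound_holds(n, j, k):
    # check d_k^(n-1) < C'_n * 2^k * j
    c_prime = (2 * (n - 1) / (2 ** (1 / (n - 1)) - 1)) ** (n - 1)
    return d_k(k, j, n) ** (n - 1) < c_prime * 2 ** k * j
```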
Since \[2^{\frac{i-1}{n-1}}\cdot j^{\frac{1}{n-1}}<r(i)\leq 2\cdot 2^{\frac{i-1}{n-1}}\cdot j^{\frac{1}{n-1}},\] it follows that \[d_{k}=d(1)+\ldots+d(k)\leq(n-1)(r(1)+\cdots+r(k))\\ \leq 2(n-1)\cdot j^{\frac{1}{n-1}}\cdot\sum_{i=1}^{k}2^{\frac{i-1}{n-1}}=2(n-1)\cdot j^{\frac{1}{n-1}}\cdot\frac{2^{\frac{k}{n-1}}-1}{2^{\frac{1}{n-1}}-1}.\] So \(d_{k}^{n-1}<C_{n}^{\prime}2^{k}j\), where \(C_{n}^{\prime}=\left(\frac{2(n-1)}{2^{\frac{1}{n-1}}-1}\right)^{n-1}\). Now \((d_{k})\) is a strictly increasing sequence. If \(k\) is chosen so that \(d_{k}\leq d<d_{k+1}\), then \(1/2^{k+1}<C_{n}^{\prime}j/d_{k+1}^{n-1}\), and so \[\frac{1}{2^{k}}<C_{n}\cdot\frac{j}{d_{k+1}^{n-1}}\leq C_{n}\cdot\frac{j}{d^{n-1}},\] where \(C_{n}=2C_{n}^{\prime}\). We can multiply \(v_{1}\cdots v_{k}\) by any non-zero polynomial of degree \(d-d_{k}\) to produce the required polynomial of degree \(d\).

### Partitioning by affine functions

In this section we give an extension of our results on the spherical GHR problem for mass assignments to the broader class of partitions by caps, which are not necessarily hemispheres. Let \(V\) be an \(n\)-dimensional real vector space with \(n\geq 2\). Using the inner product, we can identify the vector space \(\mathbb{R}\oplus V\) with the \((n+1)\)-dimensional vector space of affine functions \(V\longrightarrow\mathbb{R}\), where the pair \((t,w)\in\mathbb{R}\oplus V\) determines the function \(u\longmapsto t+\langle u,w\rangle\). A unit vector \(v=(t,w)\in S(\mathbb{R}\oplus V)\) decomposes the sphere \(S(V)\) as the union \(S(V)=C(v)\cup C(-v)\) of two _caps_: \[C(v)=\{u\in S(V):\langle u,w\rangle\geq-t\}\qquad\text{and}\qquad C(-v)=\{u\in S(V):\langle u,w\rangle\leq-t\}\] with intersection \(\{u\in S(V):\langle u,w\rangle=-t\}\). For an illustration see Figure 4. If \(t=0\), the caps are hemispheres. If \(t>\|w\|\), then \(C(-v)=\emptyset\); if \(t<-\|w\|\), then \(C(v)=\emptyset\). 
If \(t=\|w\|\), then \(C(-v)\) is the single point \(-w/\|w\|\), and, if \(t=-\|w\|\), \(C(v)=\{w/\|w\|\}\). The intersection \(C(v)\cap C(-v)\), if \(|t|<\|w\|\), is a sphere of dimension \(n-2\) (if \(n>1\), which we now assume). Now suppose that each vector bundle \(E(i)\) is a subbundle of \(\underline{\mathbb{R}}\oplus E\), regarding a vector \(v\in E(i)_{b}\subseteq\mathbb{R}\oplus E_{b}\) in the fibre at \(b\in B\) as an affine linear function \(E_{b}\longrightarrow\mathbb{R}\). For a point \(b\in B\), a collection of lines \((L_{1},\ldots,L_{k})\in\mathbb{P}(E(1)_{b})\times\cdots\times\mathbb{P}(E(k)_{b})\), and a collection of vectors \((v_{1},\ldots,v_{k})\in S(L_{1})\times\cdots\times S(L_{k})\), we define another analogue of an orthant: \[\mathcal{C}_{b;v_{1},\ldots,v_{k}}:=\{u\in S(E_{b}):v_{1}(u)>0,\,\ldots,\,v_{k}(u)>0\}.\] The corresponding equipartition theorem is proved in the usual way by constructing a section of the vector bundle \(\underline{\mathbb{R}}^{j}\otimes\big{(}A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\big{)}\). **Theorem 9.5**.: _Under the hypotheses in the text, suppose that for an integer \(j\geq 1\), \(\varphi_{1},\ldots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) are continuous functions.
If the \(\mathbb{F}_{2}\)-cohomology Euler class_ \[e(A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}})^{j}\in H^{(2^{k}-1)j}(\mathbb{P}(E(1))\times_{B}\cdots\times_{B}\mathbb{P}(E(k));\mathbb{F}_{2})\] _of the vector bundle \(\underline{\mathbb{R}}^{j}\otimes\big{(}A_{k}(E(1),\ldots,E(k))/\underline{\mathbb{R}}\big{)}\) is non-zero, then there exists a point \(b\in B\) and lines \(L_{i}\in\mathbb{P}(E(i)_{b})\), \(1\leq i\leq k\) such that, for each \(1\leq\ell\leq j\), the function_ \[S(L_{1})\times\cdots\times S(L_{k})\longrightarrow\mathbb{R},\quad(v_{1},\ldots,v_{k})\longmapsto\int_{\mathcal{C}_{b,v_{1},\ldots,v_{k}}}(\varphi_{\ell})_{b}\] _is constant._

### Partitioning by spherical wedges

Next we describe an extension of the results of Schnider [31] and Soberon & Takahashi [34]. Let \(V\) be a vector space of dimension \(n\geq 3\), and let \(U\subseteq V\) be a vector subspace of dimension \(m\geq 2\). Then \(V=U\oplus U^{\perp}\) is the direct sum of \(U\) and its orthogonal complement \(U^{\perp}\) and the unit sphere \(S(V)=S(U\oplus U^{\perp})\) is the join \(S(U)*S(U^{\perp})\).
To be precise, we also think of the join as the space \[S(V)=\{\cos(\theta)x+\sin(\theta)y\,|\,x\in S(U),\,y\in S(U^{\perp}),\,0\leq\theta\leq\pi/2\}.\] As in the previous section, given \(v=(t,w)\in S(\mathbb{R}\oplus U)\), we have the decomposition of the sphere \(S(U)\) as \(C(v)\cup C(-v)\), where \[C(v)=\{u\in S(U):\langle u,w\rangle\geq-t\}\qquad\text{and}\qquad C(-v)=\{u\in S(U):\langle u,w\rangle\leq-t\}.\] This leads to a decomposition of the bigger sphere \(S(V)\) as the union \(W(v,U)\cup W(-v,U)\) of two _wedges_ \[W(v,U) =C(v)*S(U^{\perp})\] \[=\{\cos(\theta)u+\sin(\theta)y:u\in S(U),\,y\in S(U^{\perp}),\,\langle u,w\rangle\geq-t,\,0\leq\theta\leq\pi/2\}\] and \[W(-v,U) =C(-v)*S(U^{\perp})\] \[=\{\cos(\theta)u+\sin(\theta)y:u\in S(U),\,y\in S(U^{\perp}),\,\langle u,w\rangle\leq-t,\,0\leq\theta\leq\pi/2\}.\] The intersection \(W(v,U)\cap W(-v,U)\) is \(S(U^{\perp})\) if \(|t|>\|w\|\), a disc of dimension \(n-m\) if \(|t|=\|w\|\), and a sphere of dimension \(n-2\) if \(|t|<\|w\|\). (The subspace \(\{rx:r\geq 0,\,x\in W(v,U)\}\) of \(V\) is an _\(m\)-cone_ in the sense of [31].)

Figure 4. The halfspaces defining the caps.

For example, take \(U=\mathbb{R}^{2}\), \(U^{\perp}=\mathbb{R}\), \(V=\mathbb{R}^{2}\oplus\mathbb{R}\), so that \(m=2\), \(n=3\). The wedges \(W(v,U)\), where \(v=(t,w)\in S(\mathbb{R}^{2}\oplus\mathbb{R})\) with \(|t|<\|w\|\), are the subsets \[\{(\cos(\theta)\cos(\phi),\,\cos(\theta)\sin(\phi),\,\sin(\theta))\in S(\mathbb{R}^{2}\oplus\mathbb{R}):\alpha\leq\phi\leq\beta,\,-\pi/2\leq\theta\leq\pi/2\}\] where \(0\leq\alpha<\beta<2\pi\). Now suppose that \(F(i)\subseteq E\) is a vector subbundle of dimension \(m_{i}\geq 2\), for every \(1\leq i\leq k\), and that \(E(i)\) is a subbundle of \(\underline{\mathbb{R}}\oplus F(i)\) of dimension \(n_{i}\leq m_{i}+1\).
For a point \(b\in B\), lines \((L_{1},\dots,L_{k})\in\mathbb{P}(E(1)_{b})\times\dots\times\mathbb{P}(E(k)_{b})\), and vectors \((v_{1},\dots,v_{k})\in S(L_{1})\times\dots\times S(L_{k})\), we write \[\mathcal{W}_{b;v_{1},\dots,v_{k}}:=\bigcap_{i=1}^{k}\big{(}S(E_{b})-W(-v_{i},F(i)_{b})\big{)}\] as the intersection of the open subsets \(S(E_{b})-W(-v_{i},F(i)_{b})\subseteq W(v_{i},F(i)_{b})\). As in all the previous partition problems for mass assignments we derive the following result in an almost identical manner. **Theorem 9.6**.: _Under the hypotheses in the text, suppose that \(j\geq 1\) is an integer, \(\varphi_{1},\dots,\varphi_{j}\colon S(E)\longrightarrow\mathbb{R}\) are continuous functions, and \(j\leq\iota_{k}(E(1),\dots,E(k))\). Then there exists a point \(b\in B\) and lines \(L_{i}\in\mathbb{P}(E(i)_{b})\), \(1\leq i\leq k\), such that, for each \(1\leq\ell\leq j\), the function_ \[S(L_{1})\times\dots\times S(L_{k})\longrightarrow\mathbb{R},\qquad(v_{1},\dots,v_{k})\longmapsto\int_{\mathcal{W}_{b;\,v_{1},\dots,v_{k}}}(\varphi_{\ell})_{b}\] _is constant._ In the special case of a vector bundle over a point we get the following corollary. **Corollary 9.7**.: _Suppose that \(j\geq 1\) is an integer, \(\varphi_{1},\dots,\varphi_{j}:S(V)\longrightarrow\mathbb{R}\) are continuous functions and that \(j\leq\iota_{k}(n+1,\dots,n+1)\).
Let \(m_{1},\dots,m_{k}\) be integers in the range \(2\leq m_{i}\leq n\)._ _Then there exist vector subspaces \(U_{1},\dots,U_{k}\subseteq V\) with \(\dim(U_{i})=m_{i}\) and lines \(L_{i}\in\mathbb{P}(\mathbb{R}\oplus U_{i})\), \(1\leq i\leq k\), such that for each \(1\leq\ell\leq j\), the function_ \[S(L_{1})\times\dots\times S(L_{k})\longrightarrow\mathbb{R},\quad(v_{1},\dots,v_{k})\longmapsto\int_{W(v_{1},U_{1})\,\cap\,\dots\,\cap\,W(v_{k},U_{k})}\varphi_{\ell}\] _is constant._ Proof.: Take \(B\) to be the product \(G_{m_{1}}(V)\times\dots\times G_{m_{k}}(V)\) of Grassmann manifolds and \(F(i)\) to be the canonical \(m_{i}\)-dimensional bundle over the \(i\)th factor. Apply Theorem 9.6 with \(n_{i}=m_{i}+1\) and \(E(i)=\underline{\mathbb{R}}\oplus F(i)\). Indeed, since \(\iota_{1}(\underline{\mathbb{R}}\oplus F(i))=n\), we have that \(\iota_{k}(E(1),\dots,E(k))=\iota_{k}(n+1,\dots,n+1)\) by Proposition 2.7. **Remark 9.8**.: The previous Corollary 9.7 can be sharpened by restricting the base space in the following way. Replace the Grassmann manifolds \(G_{m_{i}}(V)\), where \(V=\mathbb{R}^{n}\), by its subspace \(\mathbb{P}(\mathbb{R}^{n-m_{i}+1})\), embedded by taking the direct sum of a line in \(\mathbb{R}^{n-m_{i}+1}\) with \(\mathbb{R}^{m_{i}-1}\) to get a subspace of \(\mathbb{R}^{n}=\mathbb{R}^{n-m_{i}+1}\oplus\mathbb{R}^{m_{i}-1}\) of dimension \(m_{i}\). Then the vector bundle \(E(i)\) restricts to \(\underline{\mathbb{R}}^{m_{i}}\oplus H_{i}\) where \(H_{i}\) is the Hopf line bundle \(H(\mathbb{R}^{n-m_{i}+1})\). We have \(\iota_{1}(\underline{\mathbb{R}}^{m_{i}}\oplus H_{i})=n\), because \(w_{n-m_{i}}(-H_{i})\neq 0\). To illustrate the result given in Corollary 9.7 we spell out the special case \(n=3\), \(j=3\), \(k=1\), \(m_{1}=2\), for which \(\iota_{1}(3+1)=3\). Suppose that \(\varphi_{1},\varphi_{2},\varphi_{3}\colon S^{2}=S(\mathbb{R}^{3})\longrightarrow\mathbb{R}\) are continuous functions.
Then there is a wedge \(W\subseteq\mathbb{R}^{3}\), specified by a plane \(U\) through the origin in \(\mathbb{R}^{3}\) and \((t,w)\in S(\mathbb{R}\oplus U)\), such that \[\int_{W}\varphi_{1}=\frac{1}{2}\int_{S^{2}}\varphi_{1}\,,\qquad\int_{W}\varphi_{2}=\frac{1}{2}\int_{S^{2}}\varphi_{2}\,,\qquad\int_{W}\varphi_{3}=\frac{1}{2}\int_{S^{2}}\varphi_{3}\,.\] Furthermore, the \(k=1\) case of Corollary 9.7 gives [31, Thm. 8], and also the spherical version of [34, Thm. 1.2 and Thm. 3.2]. **Corollary 9.9**.: _Suppose that \(\varphi_{1},\ldots,\varphi_{n}\colon S(\mathbb{R}^{n})\longrightarrow\mathbb{R}\) are continuous functions and \(m\) is an integer, \(2\leq m\leq n\). Write \(V=\mathbb{R}^{n}\) and \(V^{\prime}=\mathbb{R}^{m-1}\subseteq\mathbb{R}^{n-m+1}\oplus\mathbb{R}^{m-1}=V\)._ _Then there exists a vector subspace \(U\subseteq V\) of dimension \(m\) containing the subspace \(V^{\prime}\) and a vector \(v\in S(\mathbb{R}\oplus U)\) such that_ \[\int_{W(v,U)}\varphi_{\ell}=\frac{1}{2}\int_{S(V)}\varphi_{\ell}=\int_{W(-v,U)}\varphi_{\ell}\] _for \(\ell=1,\ldots,n\)._ Proof.: We just need to recall that \(\iota_{1}(n+1)=n\). The sharpening, to give the restriction that \(U\) should contain \(V^{\prime}\), is given by Remark 9.8. The connection between the affine and spherical cases was discussed in Section 1.3. We explain how [34, Thm. 1.2] can be deduced from the case \(m=2\) of our Corollary 9.9.
**Corollary 9.10**.: _For an integer \(n\geq 2\), suppose that \(\psi_{1},\ldots,\psi_{n}\colon\mathbb{R}^{n-1}\longrightarrow\mathbb{R}\) are continuous functions with compact support with the \(n\) integrals \(\int_{\mathbb{R}^{n-1}}\psi_{\ell}\), \(1\leq\ell\leq n\), not all equal to zero._ _Then there exist two distinct parallel hyperplanes in \(\mathbb{R}^{n-1}\) such that the closed region \(S\) sandwiched between them satisfies_ \[\int_{S}\psi_{\ell}=\frac{1}{2}\int_{\mathbb{R}^{n-1}}\psi_{\ell}\,,\] _for all \(1\leq\ell\leq n\)._ (Note that if all the integrals \(\int_{\mathbb{R}^{n-1}}\psi_{\ell}\) are zero, then there is a trivial statement for any two coinciding hyperplanes.) Proof.: Consider the diffeomorphism \[\pi\colon\Lambda=\{(x,y)\in S(\mathbb{R}^{n-1}\oplus\mathbb{R}):y>0\}\longrightarrow\mathbb{R}^{n-1}\,,\qquad\qquad(x,y)\longmapsto\frac{x}{y}\,,\] which maps intersections of linear subspaces of \(\mathbb{R}^{n-1}\oplus\mathbb{R}\) with \(\Lambda\) to affine subspaces of \(\mathbb{R}^{n-1}\). Each density \(\psi_{\ell}\) lifts to a density \(\varphi_{\ell}\) on \(S(\mathbb{R}^{n-1}\oplus\mathbb{R})\) with support in the open upper hemisphere \(\Lambda\). (To be precise, \(\varphi_{\ell}(x,y)=y^{-n}\psi_{\ell}(x/y)\), so that corresponding integrals over subsets of \(\Lambda\) and of \(\mathbb{R}^{n-1}\) agree.) Let \(U\subseteq\mathbb{R}^{n-1}\oplus\mathbb{R}=V\) be a \(2\)-dimensional vector subspace and \(v\in S(\mathbb{R}\oplus U)\) a vector as provided by Corollary 9.9 when \(m=2\). Since some \(\int_{\mathbb{R}^{n-1}}\psi_{\ell}\) is non-zero, both \(S(V)-W(-v,U)\) and \(S(V)-W(v,U)\) have to be non-empty. The intersection \(W(v,U)\cap W(-v,U)\) is, therefore, the union of two discs \[\{a\}\ast S(U^{\perp})\subseteq S(\mathbb{R}\cdot a\oplus U^{\perp})\qquad\text{and}\qquad\{b\}\ast S(U^{\perp})\subseteq S(\mathbb{R}\cdot b\oplus U^{\perp})\] meeting in \(S(U^{\perp})\). Here \(a,b\in S(U)\). The image of the intersection \(\pi(W(v,U)\cap W(-v,U)\cap\Lambda)\) is the union of two affine hyperplanes meeting in \(\pi(S(U^{\perp})\cap\Lambda)\).
We can prescribe the subspace \(V^{\prime}\) in Corollary 9.9 to be the line \(0\oplus\mathbb{R}\subseteq\mathbb{R}^{n-1}\oplus\mathbb{R}\). In that case, \(S(U^{\perp})\cap\Lambda\) is empty, and the two hyperplanes are parallel.

## 10. Concluding remarks: Real flag manifolds

In the final section we make further remarks on particular arguments used in the proofs of our results. For a Euclidean vector space \(V\) of dimension \(n\) and integers \(0=n_{0}<n_{1}<\dots<n_{k}<n\), let \(B:=\operatorname{Flag}_{n_{1},\dots,n_{k}}(V)\) be the manifold of flags \((V_{*}):0=V_{0}\subseteq V_{1}\subseteq\dots\subseteq V_{k}\subseteq V\) with \(\dim V_{i}=n_{i}\). The canonical bundles of dimension \(n_{i}\) over \(B\) are denoted by \(E(i)\), as in the statement of Corollary 2.10. Write \(E\) for the trivial bundle over \(B\) with fibre \(V\). **Proposition 10.1**.: _The \(\mathbb{F}_{2}\)-Euler classes satisfy_ \[\prod_{i=1}^{k}\operatorname{e}(E/E(i))^{n_{i}-n_{i-1}}\neq 0\in H^{d}(B;\,\mathbb{F}_{2})=\mathbb{F}_{2},\] _where \(n_{0}=0\) and the dimension \(d\) is equal to \(\sum_{i=1}^{k}(n-n_{i})(n_{i}-n_{i-1})\)._ Proof.: Let \((U_{*}):0=U_{0}\subseteq U_{1}\subseteq\dots\subseteq U_{k}\) be a fixed flag in \(V\). The general linear group \(G=\operatorname{GL}(V)\) acts transitively on \(B\). If \(H\leq G\) denotes the stabilizer of \((U_{*})\), we have a map \(\pi\colon G\longrightarrow B\) defined by \(\pi(g)=(gU_{*})\) which describes \(B\) as the homogeneous space \(G/H\). The derivative of \(\pi\) at \(1\in G\) is a map from the Lie algebra \(\mathfrak{g}=\operatorname{End}(V)\) onto the tangent space of \(B\) at \((U_{*})\) with kernel the Lie algebra \(\mathfrak{h}\) of \(H\), that is, the space of endomorphisms \(a\) of \(V\) such that \(a(U_{i})\subseteq U_{i}\) for all \(i\).
The tangent bundle of \(B\) is the quotient of the trivial Lie algebra bundle \(B\times\operatorname{End}(V)\) by the subbundle with fibre at \((V_{*})\in B\) the Lie subalgebra \(\mathfrak{h}(V_{*})\) of endomorphisms that preserve the flag. Using the inner product, we can express the quotient \(\operatorname{End}(V)/\mathfrak{h}(V_{*})\) as \(\bigoplus_{i=1}^{k}\operatorname{Hom}(V_{i}^{\perp},V_{i}\cap(V_{i-1}^{\perp}))\), which has dimension \(\sum_{i=1}^{k}(n-n_{i})(n_{i}-n_{i-1})\). Now consider the vector bundle \(E^{\prime}\), defined as a quotient of \(B\times\operatorname{End}(V)\), with the fibre at \((V_{*})\in B\) the quotient of \(\mathfrak{g}=\operatorname{End}(V)\) by the vector subspace \(\mathfrak{h}(V_{*},U_{*})\) of maps \(a\colon V\longrightarrow V\) such that \(a(V_{i})\subseteq U_{i}\) for \(i=1,\dots,k\). In metric terms, \[E^{\prime}=\bigoplus_{i=1}^{k}\operatorname{Hom}(E(i)^{\perp},U_{i}\cap(U_{i-1}^{\perp}))\,,\] and its Euler class \(\operatorname{e}(E^{\prime})\) is equal to \(\prod_{i=1}^{k}\operatorname{e}(E(i)^{\perp})^{n_{i}-n_{i-1}}\in H^{d}(B;\,\mathbb{F}_{2})\). The vector bundle \(E^{\prime}\) over the closed connected \(d\)-dimensional manifold \(B\) has the same dimension \(d\). We shall prove that \(\operatorname{e}(E^{\prime})\) is non-zero by writing down a smooth section \(s\) of \(E^{\prime}\) with exactly one zero and checking that the (mod 2) degree of that zero is equal to 1. The section \(s\) is defined to have the value at \((V_{*})\) given, modulo \(\mathfrak{h}(V_{*},U_{*})\), by the identity endomorphism \(1\in\operatorname{End}(V)\). At a zero of \(s\), \(V_{i}\subseteq U_{i}\) for all \(i\), that is, \(V_{*}=U_{*}\). At this zero, the tangent space of \(B\) coincides with the fibre of \(E^{\prime}\), and we shall show that the derivative of \(s\) is the identity endomorphism of \(\mathfrak{g}/\mathfrak{h}\). To do this, we lift from \(B=G/H\) to \(G\) by the projection \(\pi\).
The pullback \(\pi^{*}E^{\prime}\) is trivialized by the isomorphism \[G\times(\mathfrak{g}/\mathfrak{h})\longrightarrow\pi^{*}E^{\prime}\] taking \((g,a+\mathfrak{h})\), where \(g\in\operatorname{GL}(V)\), \(a\in\operatorname{End}(V)\), to \(((gU_{*}),ag^{-1}+\mathfrak{h}(gU_{*},U_{*}))\). And the section \(s\) lifts to the map \[G\to\mathfrak{g}/\mathfrak{h}\,:\,g\longmapsto g+\mathfrak{h},\] for which the derivative at 1 is, transparently, the projection \(\mathfrak{g}\to\mathfrak{g}/\mathfrak{h}\). This completes the proof. Writing the quotient \(E/E(i)=E(i)^{\perp}\) as the direct sum \(\bigoplus_{j=i}^{k}(E(j+1)/E(j))\), where \(E(k+1)=E\), we can reformulate Proposition 10.1 as follows. **Corollary 10.2**.: _The product of Euler classes_ \[\prod_{i=1}^{k}\operatorname{e}(E(i+1)/E(i))^{n_{i}}\in H^{d}(B;\,\mathbb{F}_{2})\] _is non-zero._ The previous Corollary 10.2 connects with previously given arguments in the following way:

* The case \(k=1\) shows that \(\operatorname{e}(E(1)^{\perp})^{n-n_{1}}\neq 0\), and in particular \(\operatorname{e}(E(1)^{\perp})\neq 0\), as used in the proof of Corollary 2.5.
* The statement \(\operatorname{e}(E(1)^{\perp})^{n-n_{1}}\neq 0\) is the result needed in Section 5.3 for the proof of Corollary 2.8.
* For general \(k\), we have in particular that \(\prod_{i=1}^{k}\operatorname{e}(E(i)^{\perp})\neq 0\). This is what is required in Section 6.1 to prove Corollary 2.10.
* If \(n_{i}=n-k+i-1\) (that is, \(n_{1}=n-k\), \(n_{2}=n-k+1\), \(\ldots\), \(n_{k}=n-1\)), then \(\operatorname{e}(E(1)^{\perp})^{k}\operatorname{e}(E(2)^{\perp})^{k-1}\cdots\operatorname{e}(E(k)^{\perp})^{1}\neq 0\). This is what is needed in Section 6.3 to prove Theorem 2.12. It shows directly that \(\operatorname{e}(E_{k+1})^{k}\cdots\operatorname{e}(E_{d+1})^{d}\neq 0\).
The permutation symmetry of the cohomology then gives \[\operatorname{e}(E_{k+1})^{j_{k}}\cdots\operatorname{e}(E_{d+1})^{j_{d}}\neq 0.\] Thus, the different arguments we offered in the proofs can be seen as direct consequences of Corollary 10.2. To the best of our knowledge these implications were not known until now, and we believe it was worth explaining these connections.

### Acknowledgements

The authors would like to thank Aleksandra Dimitrijevic Blagojevic and Matija Blagojevic for useful discussions and many improvements of the manuscript.
2308.02826
Dynamics of Skyrmion Contraction and Expansion in a Magnetic Film
Contraction and expansion of skyrmions in ferromagnetic films are investigated. In centrosymmetric systems, the dynamics of a collapsing skyrmion is driven by dissipation. The collapse time has a minimum on the damping constant. In systems with broken inversion symmetry, the evolution of skyrmions toward equilibrium size is driven by the Dzyaloshinskii-Moriya interaction. Expressions describing the time dependence of the skyrmion size are derived and their implications for skyrmion-based information processing are discussed.
Eugene M. Chudnovsky
2023-08-05T09:05:54Z
http://arxiv.org/abs/2308.02826v1
# Dynamics of Skyrmion Contraction and Expansion in a Magnetic Film

###### Abstract

Contraction and expansion of skyrmions in ferromagnetic films are investigated. In centrosymmetric systems, the dynamics of a collapsing skyrmion is driven by dissipation. The collapse time has a minimum on the damping constant. In systems with broken inversion symmetry, the evolution of skyrmions toward equilibrium size is driven by the Dzyaloshinskii-Moriya interaction. Expressions describing the time dependence of the skyrmion size are derived and their implications for skyrmion-based information processing are discussed.

ferromagnetic films, skyrmion expansion and collapse, speed of skyrmion-based information processing

## I Introduction

Skyrmions found their way to material science from nuclear physics where they were introduced as models of nucleons [1; 2]. The similarity of the \(\sigma\)-model to the continuous spin-field model of the exchange interaction prompted studies of skyrmions in theories of ferro- and antiferromagnets [3; 4; 5]. Topological arguments that produce skyrmions also arise in Bose-Einstein condensates [6], the quantum Hall effect [7; 8], the anomalous Hall effect [9], liquid crystals [10] and graphene [11]. In recent years, studies of skyrmions in thin magnetic films, besides their fundamental value, have been inspired by the prospect of developing skyrmion-based topologically-protected data storage and information processing [12; 13; 14; 15]. Crucial to this task is the dynamics of the creation and annihilation of skyrmions [16; 17; 18; 19; 20; 21; 22]. Characteristic times involved in such processes will determine the competitiveness of the skyrmion-based computer technology if it ever materializes. In a continuous field theory, the stability of a textbook Belavin-Polyakov skyrmion [3] against contraction or expansion is due to the scale invariance of the 2D exchange model. The atomic lattice breaks this invariance.
In real magnetic films, skyrmions must be stabilized by interactions other than Heisenberg exchange. Most often it is the combined effect of the magnetic field and Dzyaloshinskii-Moriya (DMI) interaction [23; 24; 25]. Other mechanisms of skyrmion stabilization include frustrated exchange [26; 27], magnetic anisotropy [28; 29; 30], disorder [31], and geometrical confinement [32]. The stability of a skyrmion to thermal fluctuations capable of kicking it out of a metastable state has been studied by several authors [33; 34; 35; 36; 37; 38; 39; 40; 41; 42]. A similar effect due to quantum fluctuations [43] has been investigated as well [44; 45; 46]. These works addressed the lifetime of a skyrmion in a metastable state but did not study its evolution in time after it goes over or under the energy barrier. The root problem of that kind is the collapse of a BP skyrmion due to effects unaccounted for in a continuous-field 2D exchange model, such as the Landau-Lifshitz (LL) damping [47]. In addition, the nonlinearity of the BP model on a lattice, and the emission of spin waves caused by it, leads to the effective dissipation of the skyrmion motion that emulates the LL damping [48]. This problem will be reviewed here, and the formulas describing skyrmion collapse in the case of an arbitrary strength of the LL damping will be derived. Then we will move to the dynamics of a nucleated skyrmion in the model with DMI, which allows it to expand toward the equilibrium size determined by the magnetic field. Formulas describing the time evolution of such a skyrmion will be derived. Characteristic times required for a skyrmion to stabilize will be estimated. They are important for evaluating the prospect of building skyrmion-based information technology. We will argue that the use of high magnetic fields is crucial for that purpose. The article is organized as follows. Skyrmion on a lattice is considered in Section II. Dissipation is introduced in Section III.
The model with DMI is studied in Section IV. Section V contains a brief summary of the results and comments on their applications.

## II Skyrmion on a lattice

The BP skyrmion field \({\bf s}({\bf r})\) with \({\bf s}^{2}=1\), satisfying the condition \({\bf s}=-\hat{z}\) at infinity, is given by \[{\bf s}_{BP}=\left(\frac{2\lambda r\cos(\phi+\gamma)}{\lambda^{2}+r^{2}},\,\frac{2\lambda r\sin(\phi+\gamma)}{\lambda^{2}+r^{2}}\,,\frac{\lambda^{2}-r^{2}}{\lambda^{2}+r^{2}}\right), \tag{1}\] where \(\lambda\) can be interpreted as the size of the skyrmion and \(\gamma\) is a chirality angle. The first image of the Neel skyrmion (\(\gamma=0\)) known to the author appeared in Ref. [49]. The solution with \(\gamma=\pi/2\) (Bloch skyrmion) is shown in Fig. 1. The exchange energy, \[{\cal H}_{ex}=\frac{J}{2}\int dxdy\,\left(\partial_{i}{\bf s}\cdot\partial_{i}{\bf s}\right), \tag{2}\] of the BP skyrmion is independent of \(\lambda\) and \(\gamma\) and equals \[E_{ex}=4\pi J. \tag{3}\] When the 2D exchange model is studied on a square lattice, the Hamiltonian acquires an additional term [50] \[{\cal H}_{lat}=-\frac{Ja^{2}}{24}\int dxdy\left(\partial_{i}^{2}{\bf s}\cdot\partial_{i}^{2}{\bf s}\right) \tag{4}\] proportional to the square of the lattice spacing \(a\). At \(\lambda\gg a\) Eq. (1) still provides a good approximation for the skyrmion field. The lattice, however, breaks the scale invariance and contributes \[E_{lat}=-\frac{2\pi}{3}J\left(\frac{a}{\lambda}\right)^{2} \tag{5}\] to the energy of the skyrmion, favoring smaller skyrmions over larger skyrmions. With the interactions considered so far, the Lagrangian of the skyrmion, up to a constant, is \[{\cal L}={\cal L}_{WZ}-{\cal H}_{lat}, \tag{6}\] where \[{\cal L}_{WZ}=\hbar s\int\frac{rdr}{a^{2}}d\phi\,\dot{\Phi}(\cos\Theta+1) \tag{7}\] is the Wess-Zumino (geometrical) term [5]. Equations of motion generated by such a Lagrangian are equivalent to the Landau-Lifshitz equations of motion for the spin field [5].
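Before proceeding, the scale invariance of the exchange energy, Eqs. (2)-(3), is easy to confirm numerically. Below is a small sketch of our own (in units \(J=a=1\), with the \(\gamma=0\) profile of Eq. (1)): it differentiates the BP field by central differences and integrates the exchange density, recovering \(E_{ex}=4\pi J\) for any \(\lambda\).

```python
import numpy as np
from scipy.integrate import quad

def s_bp(x, y, lam):
    # Belavin-Polyakov profile, Eq. (1) with gamma = 0
    r2 = x * x + y * y
    d = lam ** 2 + r2
    return np.array([2 * lam * x / d, 2 * lam * y / d, (lam ** 2 - r2) / d])

def exchange_density(r, lam, h=1e-5):
    # (d_x s)^2 + (d_y s)^2 at the point (r, 0), by central differences;
    # for the BP profile this density is rotationally invariant
    dx = (s_bp(r + h, 0.0, lam) - s_bp(r - h, 0.0, lam)) / (2 * h)
    dy = (s_bp(r, h, lam) - s_bp(r, -h, lam)) / (2 * h)
    return dx @ dx + dy @ dy

def e_exchange(lam):
    # E_ex = (J/2) int (grad s)^2 dx dy = pi int_0^inf density(r) r dr  (J = 1)
    val, _ = quad(lambda r: exchange_density(r, lam) * r, 0.0, np.inf)
    return np.pi * val

for lam in (0.5, 2.0):
    print(e_exchange(lam) / (4 * np.pi))  # ~ 1 for every lambda
```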
Here \(s\) is the spin of the unit cell (we chose \(s=1\)), \(\hbar s/a^{2}\) is the 2D density of the spin angular momentum, \(\Theta\) and \(\Phi\) are spherical coordinates of the spin field, and \(\phi\) and \(r\) are polar coordinates in the 2D plane hosting the skyrmion. For the BP skyrmion \(\tan\Phi=\tan(\phi+\gamma)\), so that \(\Phi=\phi+\gamma\) and \(\dot{\Phi}=\dot{\gamma}\). Eq. (7) then gives \[{\cal L}_{WZ}=4\pi\hbar\dot{\gamma}\left(\frac{\lambda}{a}\right)^{2}\ln\left( \frac{L}{\lambda}\right), \tag{8}\] \[{\cal L}=4\pi\hbar\dot{\gamma}\left(\frac{\lambda}{a}\right)^{2}\ln\left( \frac{L}{\lambda}\right)+\frac{2\pi}{3}J\left(\frac{a}{\lambda}\right)^{2}, \tag{9}\] where \(L\) is the cutoff determined by the size of the 2D system. The Euler-Lagrange equations are \[\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{\gamma}}=\frac{\partial{\cal L }}{\partial\gamma},\qquad\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{ \lambda}}=\frac{\partial{\cal L}}{\partial\lambda}. \tag{10}\] They give \[\lambda={\rm const},\quad\dot{\gamma}=\frac{J}{6\hbar\ln(L/\lambda\sqrt{e})} \left(\frac{a}{\lambda}\right)^{4}. \tag{11}\] Thus, within our simple model, the size of the skyrmion does not change even though smaller skyrmions on a lattice have lower energy. So far the only effect of the lattice has been on the chirality angle that now changes linearly with time at a speed inversely proportional to the fourth power of the skyrmion size. It is easy to see that this situation does not change even in the presence of the magnetic field, \({\bf H}=-H\hat{z}\) opposite to the total spin of the skyrmion, \[\Sigma_{BP}=\int\frac{dxdy}{a^{2}}\left(1+\hat{z}\cdot{\bf s}_{BP}\right)=4 \pi\left(\frac{\lambda}{a}\right)^{2}\ln\left(\frac{L}{\lambda}\right). \tag{12}\] In practice, such a field is needed to provide the stabilizing boundary condition \({\bf s}=-\hat{z}\) at infinity. Similar to the lattice, it favors smaller skyrmions over larger skyrmions. 
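The logarithmic factor in Eq. (12) is likewise easy to confirm numerically for \(L\gg\lambda\). A sketch of our own in Python (the values of \(a\), \(\lambda\) and the cutoff \(L\) are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

a, lam, L = 1.0, 4.0, 1000.0  # lattice constant, skyrmion size, cutoff with L >> lam

# Sigma_BP = int (1 + s_z) dx dy / a^2 with s_z = (lam^2 - r^2)/(lam^2 + r^2), Eq. (12)
integrand = lambda r: (1.0 + (lam ** 2 - r ** 2) / (lam ** 2 + r ** 2)) * 2 * np.pi * r / a ** 2
sigma, _ = quad(integrand, 0.0, L)

approx = 4 * np.pi * (lam / a) ** 2 * np.log(L / lam)  # right-hand side of Eq. (12)
print(sigma / approx)  # ~ 1 for L >> lam
```

The same logarithm then carries over to the Zeeman energy of Eq. (13).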
The corresponding Zeeman Hamiltonian is \[{\cal H}_{Z}=g\mu_{B}\Sigma_{BP}H=4\pi g\mu_{B}H\left(\frac{\lambda}{a}\right)^{2}\ln\left(\frac{L}{\lambda}\right), \tag{13}\] with \(g\) being the gyromagnetic factor. (In the presence of the field, the integration cutoff \(L\gg\lambda\) is determined by the lateral size of the 2D system or by \(\delta_{H}=\sqrt{J/(g\mu_{B}H)}\), whichever is greater [45].) The resulting modification of the Lagrangian is \[{\cal L}_{H}=4\pi\hbar\left(\dot{\gamma}-\omega_{H}\right)\left(\frac{\lambda}{a}\right)^{2}\ln\left(\frac{L}{\lambda}\right)+\frac{2\pi}{3}J\left(\frac{a}{\lambda}\right)^{2}, \tag{14}\] where \[\omega_{H}=\frac{g\mu_{B}H}{\hbar} \tag{15}\] is the Larmor frequency of spin \(s=1\) in the magnetic field. The solutions of the equations of motion are now \[\lambda={\rm const},\quad\dot{\gamma}=\omega_{H}+\frac{J}{6\hbar\ln(L/\lambda\sqrt{e})}\left(\frac{a}{\lambda}\right)^{4}. \tag{16}\] This solution resembles dynamical skyrmions stabilized by the magnetic anisotropy, introduced in Ref. [28].

Figure 1: Belavin-Polyakov skyrmion with \(\gamma=\pi/2\).

Notice that the second of Eq. (16) could have been written immediately from Eq. (11) by observing that in the problem of spin precession, the application of the magnetic field is equivalent to switching to a coordinate frame rotating with the angular velocity \(\mathbf{\omega}_{H}=g\mu_{B}\mathbf{H}/\hbar\). Less trivial is the fact that the effect of the discreteness of the atomic lattice is similar to the effect of the field in generating spin precession. The impossibility to induce skyrmion collapse by the magnetic field directed along the \(z\)-axis is related to the rotational invariance with respect to that axis, which preserves the \(z\)-component of the spin angular momentum.
It is similar to the well-known impossibility to induce the motion of the domain wall in a uniaxial ferromagnet by the field directed along the anisotropy axis, even though it would have decreased the energy [5].

## III Inclusion of dissipation

We shall now include dissipation in this simple problem. The case of a weak dissipation was studied in Ref. [50]. Here we consider an arbitrary dissipation strength. The effect of damping on the skyrmion can be described by the dissipation function [51] \[F=\frac{\hbar}{2}\eta\int\frac{dxdy}{a^{2}}\,\dot{\mathbf{s}}^{2}, \tag{17}\] where \(\eta\) is the dimensionless Landau-Lifshitz damping parameter. With the help of Eq. (1) one obtains for a BP skyrmion \[\dot{\mathbf{s}}^{2}=\frac{4r^{2}}{(\lambda^{2}+r^{2})^{2}}\left(\dot{\lambda}^{2}+\lambda^{2}\dot{\gamma}^{2}\right). \tag{18}\] Substitution of this expression into Eq. (17) and integration over coordinates give \[F=4\pi\hbar\eta\left(\frac{\dot{\lambda}^{2}+\lambda^{2}\dot{\gamma}^{2}}{a^{2}}\right)\ln\left(\frac{L}{\lambda\sqrt{e}}\right). \tag{19}\] In the presence of dissipation, the Euler-Lagrange equations must be replaced with \[\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\gamma}}=\frac{\partial\mathcal{L}}{\partial\gamma}-\frac{\partial F}{\partial\dot{\gamma}},\qquad\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\lambda}}=\frac{\partial\mathcal{L}}{\partial\lambda}-\frac{\partial F}{\partial\dot{\lambda}}. \tag{20}\] Since \(\mathcal{L}\) does not depend on \(\gamma\) and \(\dot{\lambda}\), the above equations reduce to \[\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\gamma}}=-\frac{\partial F}{\partial\dot{\gamma}},\qquad\frac{\partial\mathcal{L}}{\partial\lambda}=\frac{\partial F}{\partial\dot{\lambda}}\,.
\tag{21}\] For the smallest skyrmions, whose collapse is dominated by the discreteness of the atomic lattice, they give \[\frac{d\lambda}{dt}=-\eta\lambda\dot{\gamma},\qquad\dot{\gamma}-\eta\frac{\dot{\lambda}}{\lambda}=\frac{Ja^{4}}{6\hbar\lambda^{4}\ln(L/(\lambda\sqrt{e}))}, \tag{22}\] which results in \[\dot{\lambda}=-\frac{1}{6}\left(\frac{\eta}{1+\eta^{2}}\right)\frac{Ja^{4}}{\hbar\lambda^{3}\ln(L/(\lambda\sqrt{e}))}, \tag{23}\] \[\left(\frac{\lambda}{a}\right)^{4}\ln\left(\frac{L}{\lambda e^{1/4}}\right)=\left(\frac{\lambda_{0}}{a}\right)^{4}\ln\left(\frac{L}{\lambda_{0}e^{1/4}}\right)-\frac{2\eta Jt}{3\hbar(1+\eta^{2})}, \tag{24}\] where \(\lambda_{0}\) is the initial skyrmion size. According to Eq. (24) the lifetime of the skyrmion is proportional to the fourth power of the initial skyrmion size, \[t_{c}=\frac{3\hbar(1+\eta^{2})}{2\eta J}\left(\frac{\lambda_{0}}{a}\right)^{4}\ln\left(\frac{L}{\lambda_{0}e^{1/4}}\right). \tag{25}\]
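Equations (23)-(25) can be cross-checked by direct numerical integration. A sketch of our own in Python (units \(\hbar=J=a=1\); the values of \(\eta\), \(L\), \(\lambda_{0}\) are arbitrary illustrative choices): by Eq. (24), the combination \(\lambda^{4}\ln(L/\lambda e^{1/4})+2\eta t/[3(1+\eta^{2})]\) should stay constant along the trajectory of Eq. (23).

```python
import numpy as np
from scipy.integrate import solve_ivp

eta, L, lam0 = 0.5, 100.0, 5.0  # damping, cutoff, initial size (hbar = J = a = 1)

def rhs(t, y):
    # Eq. (23): dlam/dt = -(1/6) (eta/(1+eta^2)) / (lam^3 ln(L/(lam sqrt(e))))
    lam = y[0]
    return [-(eta / (1 + eta ** 2)) / (6 * lam ** 3 * np.log(L / (lam * np.sqrt(np.e))))]

def invariant(lam, t):
    # Eq. (24): lam^4 ln(L/(lam e^{1/4})) + 2 eta t / (3 (1+eta^2)) = const
    return lam ** 4 * np.log(L / (lam * np.e ** 0.25)) + 2 * eta * t / (3 * (1 + eta ** 2))

# Eq. (25): collapse time
tc = 3 * (1 + eta ** 2) / (2 * eta) * lam0 ** 4 * np.log(L / (lam0 * np.e ** 0.25))

sol = solve_ivp(rhs, (0.0, 0.9 * tc), [lam0], rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0.0, 0.9 * tc, 50)
lams = sol.sol(ts)[0]
drift = max(abs(invariant(l, t) - invariant(lam0, 0.0)) for l, t in zip(lams, ts))
print(drift / invariant(lam0, 0.0))  # ~ 0: Eq. (24) integrates Eq. (23)
```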
## IV Inclusion of DMI To break with the slow dynamics of skyrmions exhibited by the pure exchange model we should now introduce DMI that breaks the inversion symmetry together with the rotational symmetry, \[\mathcal{H}_{DMI}=A\int\frac{dxdy}{a}\left[(\mathbf{s}\times\partial_{x} \mathbf{s})\cdot\hat{x}+(\mathbf{s}\times\partial_{y}\mathbf{s})\cdot\hat{y} \right]. \tag{27}\] Here \(A\) is a parameter having dimensionality of energy that describes the strength of the DMI. For certainty, we chose it to be positive. Since DMI is formed by the combined effect of the exchange and spin-orbit interactions, \(A\) is typically small compared to the exchange energy \(J\) but large compared to the Zeeman energy. It was tested numerically [45] that the change in the skyrmion shape due to the DMI is insignificant, especially for small skyrmions studied here. Substituting Eq. (1) into Eq. (27), one obtains \[\mathcal{H}_{DMI}=-4\pi A\left(\frac{\lambda}{a}\right)\sin\gamma \tag{28}\] for the DMI energy of the BP skyrmion. With the account of contributions from the atomic lattice, DMI, and the magnetic field, the total energy of the skyrmion, up to a constant \(4\pi J\) provided by the exchange, is \[\mathcal{H} = -\frac{2\pi}{3}J\left(\frac{a}{\bar{\lambda}}\right)^{2}-4\pi A \left(\frac{\lambda}{a}\right)\sin\gamma \tag{29}\] \[+ 4\pi g\mu_{B}H\left(\frac{\lambda}{a}\right)^{2}\ln\left(\frac{L }{\bar{\lambda}}\right).\] It favors a Bloch skyrmion with \(\gamma=\pi/2\) depicted in Fig. 1. The dependence of the energy on the skyrmion size is shown schematically in Fig. 2. Notice that only \(\lambda>a\) has physical meaning. 
To estimate the spatial and time scales involved, it is convenient to use the reduced form of the energy \[\bar{\mathcal{H}}=\frac{\mathcal{H}}{4\pi A}=-\frac{\kappa}{2\bar{\lambda}^{2}}-\bar{\lambda}\sin\gamma+\frac{1}{2}\delta\bar{\lambda}^{2}, \tag{30}\] where we replaced the logarithm with a constant \(l\) and introduced dimensionless \[\kappa=\frac{J}{3A}\gg 1,\qquad\delta(H)=\frac{2g\mu_{B}lH}{A}\ll 1,\qquad\bar{\lambda}=\frac{\lambda}{a}. \tag{31}\] The energy has a minimum at \(H<H_{c}\), which corresponds to \(\delta<\delta_{c}=2^{1/3}3/(8\kappa^{1/3})\). As \(H\) decreases, the equilibrium skyrmion size at the minimum goes up from \(\bar{\lambda}_{c}=(4\kappa)^{1/3}\) at the field threshold \(H=H_{c}\), approaching \(\bar{\lambda}_{H}=1/\delta(H)\) at \(H\ll H_{c}\). The reduced Lagrangian is \[\bar{\mathcal{L}}=\frac{\mathcal{L}}{4\pi A}=\frac{1}{2}\dot{\gamma}_{\tau}\bar{\lambda}^{2}+\frac{\kappa}{2\bar{\lambda}^{2}}+\bar{\lambda}\sin\gamma-\frac{1}{2}\delta\bar{\lambda}^{2}, \tag{32}\] where \[\dot{\gamma}_{\tau}=\frac{d\gamma}{d\tau},\qquad\tau=\frac{A}{2\hbar l}t. \tag{33}\] Skyrmions nucleated to the left of the energy maximum in Fig. 2 collapse. If a skyrmion of size \(\bar{\lambda}_{0}<\bar{\lambda}_{H}\) nucleates to the right of the energy maximum, it begins to expand toward the energy minimum corresponding to \(\bar{\lambda}=\bar{\lambda}_{H}\). We are interested in the characteristic time of such an expansion. Noticing that it is dominated by the DMI, we can start by considering the dynamics of a small skyrmion due to the DMI alone, described by the Lagrangian \[\bar{\mathcal{L}}=\frac{1}{2}\dot{\gamma}_{\tau}\bar{\lambda}^{2}+\bar{\lambda}\sin\gamma. \tag{34}\] The corresponding equations of motion are \[\dot{\bar{\lambda}}_{\tau}=\cos\gamma,\qquad\dot{\gamma}_{\tau}\bar{\lambda}=-\sin\gamma. \tag{35}\] They have an exact solution \[\bar{\lambda}=\sqrt{\bar{\lambda}_{0}^{2}+\tau^{2}},\qquad\tan\gamma=\frac{\bar{\lambda}_{0}}{\tau}.
\tag{36}\] As the skyrmion expands from \(\bar{\lambda}=\bar{\lambda}_{0}\) at \(t=0\) to \(\bar{\lambda}\gg\bar{\lambda}_{0}\) at \(\tau\gg\bar{\lambda}_{0}\), the chirality angle changes from \(\pi/2\) to \(0\). According to Eq. (36), the expansion of the skyrmion towards the equilibrium size \(\bar{\lambda}_{H}\) requires time \(\tau_{H}\sim\bar{\lambda}_{H}\). Translated into real time, it corresponds to \[t\sim\frac{2\hbar l}{A}\tau_{H}=\frac{2\hbar l}{A\delta(H)}=\frac{1}{\omega_{H}}. \tag{37}\] Damping can be added to the problem along the lines of the previous section. Equations (35) become \[\dot{\bar{\lambda}}_{\tau}+\eta\dot{\gamma}_{\tau}\bar{\lambda}=\cos\gamma,\qquad\dot{\gamma}_{\tau}\bar{\lambda}-\eta\dot{\bar{\lambda}}_{\tau}=-\sin\gamma. \tag{38}\] They possess the same solution for skyrmion expansion induced by the DMI as before, \(\bar{\lambda}=\sqrt{\bar{\lambda}_{0}^{2}+\tau_{\eta}^{2}}\), but with \(\tau_{\eta}=\tau/\sqrt{1+\eta^{2}}\) instead of \(\tau\). This means that dissipation reduces the speed of the expansion by a factor \(\sqrt{1+\eta^{2}}\), which is insignificant at weak damping.

## V Conclusion

We have studied the collapse of skyrmions in centrosymmetric ferromagnetic films and the expansion of skyrmions in 2D films with broken inversion symmetry. These problems determine the times needed for deleting and creating skyrmions if they are to be used as topologically protected memory units for information processing. In systems with rotational invariance, the collapse of skyrmions is rather slow and is determined entirely by the damping. The fastest collapse occurs at the (Landau-Lifshitz) damping constant \(\eta=1\). In systems with DMI, the dynamics is faster. To nucleate a skyrmion, one should overcome an energy barrier \(U\sim 4\pi J\).
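Equations (38) are straightforward to check numerically. The sketch below integrates them with a fourth-order Runge-Kutta scheme; the initial chirality offset \(\gamma(0)=\pi/2+\arctan\eta\) is our own inference (not stated explicitly above), chosen so that the closed-form expansion law \(\bar{\lambda}=\sqrt{\bar{\lambda}_{0}^{2}+\tau_{\eta}^{2}}\) holds exactly along the trajectory:

```python
import numpy as np

def rhs(y, eta):
    """Eq. (38) solved for the derivatives; the 2x2 system has determinant lam*(1+eta^2)."""
    lam, gam = y
    dlam = (np.cos(gam) + eta * np.sin(gam)) / (1.0 + eta**2)
    dgam = (eta * np.cos(gam) - np.sin(gam)) / (lam * (1.0 + eta**2))
    return np.array([dlam, dgam])

def rk4(y0, eta, tau_max, n_steps):
    """Classical fourth-order Runge-Kutta integration of (lam, gamma)."""
    h = tau_max / n_steps
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        k1 = rhs(y, eta)
        k2 = rhs(y + 0.5 * h * k1, eta)
        k3 = rhs(y + 0.5 * h * k2, eta)
        k4 = rhs(y + h * k3, eta)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

eta, lam0, tau_max = 0.3, 2.0, 50.0
# assumed initial chirality pi/2 + arctan(eta): with this offset the damped
# trajectory reproduces lam = sqrt(lam0^2 + tau_eta^2) with tau_eta = tau/sqrt(1+eta^2)
lam_num, _ = rk4([lam0, np.pi / 2 + np.arctan(eta)], eta, tau_max, 20000)
lam_exact = np.sqrt(lam0**2 + tau_max**2 / (1.0 + eta**2))
```

The numerical and closed-form sizes agree to integrator accuracy, confirming that damping only rescales the time axis of the expansion.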
It can be achieved by using a spin-polarized current [16], local heating [17], a magnetic dipole [19], or a tip of the magnetic force microscope [20], as well as by temperature [21].

Figure 2: Schematic representation of the skyrmion energy vs size in a model with DMI and magnetic field.

At high temperatures, nucleation of skyrmions should occur naturally with a probability proportional to \(\exp(-U/T)\). Very small skyrmions collapse. Below a threshold field, a skyrmion above the critical size, nucleated in a film with broken inversion symmetry, expands until it reaches the equilibrium size \(\lambda_{H}\). The expansion of a skyrmion driven by the DMI is barely affected by damping. It requires time \(t\sim 1/\omega_{H}\). This time is in the picosecond range for fields in excess of one tesla, which would be sufficiently short for a functional skyrmion-based computer memory.

## VI Acknowledgements

This work has been supported by the Grant No. DE-FG02-93ER45487 funded by the U.S. Department of Energy, Office of Science.
2310.17907
Twist- and gate-tunable proximity spin-orbit coupling, spin relaxation anisotropy, and charge-to-spin conversion in heterostructures of graphene and transition-metal dichalcogenides
We present a DFT-based investigation of the twist-angle dependent proximity spin-orbit coupling (SOC) in graphene/TMDC structures. We find that for Mo-based TMDCs the proximity valley-Zeeman SOC exhibits a maximum at around 15--20{\deg}, and vanishes at 30{\deg}, while for W-based TMDCs we find an almost linear decrease of proximity valley-Zeeman SOC when twisting from 0{\deg} to 30{\deg}. The induced Rashba SOC is rather insensitive to twisting, while acquiring a nonzero Rashba phase angle, $\varphi \in [-20;40]${\deg}, for twist angles different from 0{\deg} and 30{\deg}. This finding contradicts earlier tight-binding predictions that the Rashba angle can be 90{\deg} in the studied systems. In addition, we study the influence of several tunability knobs on the proximity SOC for selected twist angles. By applying a transverse electric field in the limits of $\pm 2$ V/nm, mainly the Rashba SOC can be tuned by about 50\%. The interlayer distance provides a giant tunability, since the proximity SOC can be increased by a factor of 2--3, when reducing the distance by about 10\%. Encapsulating graphene between two TMDCs, both twist angles are important to control the interference of the individual proximity SOCs, allowing to precisely tailor the valley-Zeeman SOC in graphene, while the Rashba SOC becomes suppressed. Finally, based on our effective Hamiltonians with fitted parameters, we calculate experimentally measurable quantities such as spin lifetime anisotropy and charge-to-spin conversion efficiencies. The spin lifetime anisotropy can become giant, up to $10^4$, in encapsulated structures. The charge-to-spin conversion, which is due to spin-Hall and Rashba-Edelstein effects, can lead to twist-tunable non-equilibrium spin-density polarizations that are perpendicular and parallel to the applied charge current.
Klaus Zollner, Simão M. João, Branislav K. Nikolić, Jaroslav Fabian
2023-10-27T05:53:26Z
http://arxiv.org/abs/2310.17907v1
Twist- and gate-tunable proximity spin-orbit coupling, spin relaxation anisotropy, and charge-to-spin conversion in heterostructures of graphene and transition-metal dichalcogenides

###### Abstract

Proximity-induced phenomena in van der Waals heterostructures have emerged as a platform to tailor the electronic, spin, optical, and topological properties in two dimensional materials. A crucial degree of freedom, which has only recently been recognized, is the relative twist angle between the monolayers. While partial results exist in the literature, we present here a comprehensive first-principles based investigation of the twist-angle dependent proximity spin-orbit coupling (SOC) in graphene in contact with, or encapsulated by, monolayer transition metal dichalcogenides (TMDCs) MoS\({}_{2}\), MoSe\({}_{2}\), WS\({}_{2}\), and WSe\({}_{2}\). Crucially, our commensurate supercells comprise monolayers with strains of less than 2.5%, minimizing band-offset artifacts. We confirm earlier DFT results that for Mo-based TMDCs the proximity valley-Zeeman SOC exhibits a maximum at around 15-20\({}^{\circ}\), and vanishes at 30\({}^{\circ}\) for symmetry reasons. Although such a maximum was also predicted by tight-binding simulations for W-based TMDCs, we find an almost linear decrease of proximity valley-Zeeman SOC in graphene/WSe\({}_{2}\) and graphene/WS\({}_{2}\) when twisting from 0\({}^{\circ}\) to 30\({}^{\circ}\). We also refine previous DFT simulations and show that the induced Rashba SOC is rather insensitive to twisting, while acquiring a nonzero Rashba phase angle \(\varphi\), which measures the deviation of the electron spin from the in-plane direction transverse to the momentum, for twist angles different from 0\({}^{\circ}\) and 30\({}^{\circ}\). The Rashba phase angle \(\varphi\) varies from \(-20^{\circ}\) to 40\({}^{\circ}\), with the largest variation (40\({}^{\circ}\)) found for MoS\({}_{2}\) at a twist angle of 20\({}^{\circ}\).
This finding contradicts earlier tight-binding predictions that the Rashba angle can be 90\({}^{\circ}\) in the studied systems. In addition, we study the influence of a transverse electric field, vertical and lateral shifts, and TMDC encapsulation on the proximity SOC for selected twist angles. Within our investigated electric field limits of \(\pm\)2 V/nm, mainly the Rashba SOC can be tuned by about 50%. The interlayer distance provides a giant tunability, since the proximity-induced SOC can be increased by a factor of 2-3, when reducing the distance by only about 10%. When encapsulating graphene between two TMDCs, both twist angles are important to control the interference of the individual proximity-induced SOCs, allowing to precisely tailor the proximity-induced valley-Zeeman SOC in graphene, while the Rashba SOC becomes suppressed. Finally, based on our effective Hamiltonians with fitted parameters to low-energy _ab initio_ band structures, we calculate experimentally measurable quantities such as spin lifetime anisotropy and charge-to-spin conversion efficiencies. The spin lifetime anisotropy--being the ratio between out-of-plane and in-plane spin lifetimes--can become giant (up to 100), depending on the TMDC, twist angle, transverse electric field, and the interlayer distance. The charge-to-spin conversion can be divided into three components which are due to spin-Hall and Rashba-Edelstein effects with non-equilibrium spin-density polarizations that are perpendicular and parallel to the applied charge current. All conversion efficiencies are highly tunable by the twist angle and the Fermi level. spintronics, transition-metal dichalcogenides, heterostructures, proximity spin-orbit coupling ## I Introduction Van der Waals (vdW) heterostructures based on two-dimensional (2D) materials are emerging as an important platform for investigating novel solid state phenomena [1; 2; 3; 4; 5; 6; 7; 8]. 
While 2D materials exhibit extraordinary physical properties on the atomic scale, we can combine different monolayers to form artificial vdW crystals with customized electronic, optical, magnetic, or topological properties [1; 2; 5; 9]. The prime examples are heterostructures based on monolayer graphene, where proximity interactions, such as spin-orbit coupling (SOC) [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], exchange coupling [28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43], and superconductivity [44] can be induced via neighboring layers. Importantly, the proximity-induced interactions can be controlled by gating, doping, straining, lateral stacking, and twisting. Particularly interesting for spintronics [45] are graphene/transition-metal dichalcogenide (TMDC) bilayers [10; 11; 46; 47]. First-principles calculations [10] and experiments [48; 49; 50; 20; 51] on graphene/TMDC structures have already demonstrated that proximity SOC can be tuned by the application of a transverse electric field. Recent DFT simulations show a potential tunability via controlled alloying of the TMDC [52]; this should be experimentally realizable given the impressive progress in TMDC growth techniques [53]. Since proximity effects are short-ranged and originate from the wavefunction overlap of different layers, the vdW distance also plays an important role. Recent experiments have shown that external pressure, which reduces the interlayer distance, can significantly boost proximity interactions [19; 54]. The proximity coupling of graphene with TMDCs has already led to fascinating experimental findings, such as optical spin injection [10; 55; 56], gate tunable charge-to-spin conversion [20; 48; 49; 57], giant spin relaxation anisotropy [58; 59; 60; 61; 62; 21], and field-effect spin transistor operation [63]. Recently, the relative twist angle between the monolayers has emerged as another important control knob.
In general, vdW heterostructures composed of twisted monolayers [64; 65; 66; 67] promise great tunability of electronic, optical, and magnetic properties. For example, magic-angle twisted bilayer graphene exhibits magnetism and superconductivity due to strong correlations [68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82]. In twisted TMDCs, a strong trapping potential for excitons can arise due to the emerging moire pattern [83; 84]. In graphene/Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\) bilayers, twisting allows one to reverse the proximity-induced exchange splitting of the Dirac bands [29]. Finally, gating and twisting are two efficient control knobs to tune the valley splitting in TMDC/CrI\({}_{3}\) heterostructures [85]. All the above demonstrates that the twist angle has a highly non-trivial influence on physical observables. There have already been theoretical [86; 87; 88; 89; 90] and experimental [91] studies investigating the impact of twisting on the electronic properties and proximity-induced SOC in graphene/TMDC heterostructures [91]. Tight-binding studies have predicted that the relative rotation of the monolayers can greatly enhance the proximity SOC, with an expected maximum at around 15-20\({}^{\circ}\), for graphene in contact with MoS\({}_{2}\), MoSe\({}_{2}\), WS\({}_{2}\), and WSe\({}_{2}\)[87; 88]. However, tight-binding calculations have to rely on some input parameters. For example, the position of the Dirac point within the TMDC band gap seems rather crucial for predicting twist-angle dependent proximity SOC [87]. In a systematic DFT investigation, Naimer _et al._[13] showed that strain in twisted graphene/TMDC supercells (the study used up to 10% strain in graphene) affects the proximity effects due to strain-induced band offsets, prompting the application of a transverse displacement field to remove these artifacts.
This _ad hoc_ procedure has produced qualitatively similar results to the aforementioned tight-binding studies for Mo-based TMDCs, but has found that the valley-Zeeman proximity coupling for W-based TMDCs decreases with increasing twist angle from 0\({}^{\circ}\) to 30\({}^{\circ}\), not exhibiting a global maximum. This DFT study [13] also found specific values for the Rashba phase angles, predicted on symmetry grounds to be different from zero (the reference angle at which the in-plane spin is perpendicular to the momentum) away from 0\({}^{\circ}\) and 30\({}^{\circ}\)[87; 88]. Pezo _et al._[86] also considered large-scale supercells of graphene on strained (up to 3.5%) MoTe\({}_{2}\) and WSe\({}_{2}\), employing twist angles around 0\({}^{\circ}\), 15\({}^{\circ}\), and 30\({}^{\circ}\), predicting strong variations of the proximity SOC, although the limited set of twist angles was insufficient to uncover systematic trends. Finally, Lee _et al._[89] performed DFT investigations of twisted graphene/WSe\({}_{2}\) heterostructures with small strain (less than 2%) finding a nearly constant valley-Zeeman SOC up to about 18\({}^{\circ}\), followed by a linear decrease to 30\({}^{\circ}\); the Rashba SOC was found to be nearly constant for all the investigated twist angles. There is already evidence from weak antilocalization experiments [91] on twisted graphene/WSe\({}_{2}\) structures showing small (\(\sim\) 0.05 meV) valley-Zeeman and finite (\(\sim\) 0.5 meV) Rashba SOC at 30\({}^{\circ}\), in agreement with theory. In contrast, samples with 15\({}^{\circ}\) twist angle show larger SOC values, with Rashba \(\sim\) 1.5 meV and valley-Zeeman \(\sim\) 0.4 meV.
In this paper, we aim to provide a comprehensive DFT-based picture of proximity SOC in twisted graphene/TMDC heterostructures by considering only small-strain supercells (less than 2.5% of strain in graphene and zero strain in TMDCs) for all four semiconducting TMDC monolayers MoS\({}_{2}\), MoSe\({}_{2}\), WS\({}_{2}\), and WSe\({}_{2}\). In addition to providing systematic dependencies of the valley-Zeeman and Rashba SOC on the twist angles, we also address the effects of a transverse electric field, encapsulation, and lateral and vertical shifts. We confirm earlier DFT studies that upon twisting from 0\({}^{\circ}\) to 30\({}^{\circ}\), the induced valley-Zeeman SOC decreases almost linearly to zero for W-based TMDCs, while for Mo-based TMDCs it exhibits a maximum at around 15-20\({}^{\circ}\). The induced Rashba SOC stays rather constant upon twisting, and acquires a phase angle \(\varphi\neq 0\), due to symmetry breaking, for twist angles different from 0\({}^{\circ}\) and 30\({}^{\circ}\). For WSe\({}_{2}\) our results also correspond to the findings of Ref. [89], but we additionally cover the twist angle behavior for graphene on MoS\({}_{2}\), MoSe\({}_{2}\), and WS\({}_{2}\). Within our investigated electric field limits of \(\pm\)2 V/nm, mainly the Rashba SOC can be tuned by about 50%. The interlayer distance, correlating to external pressure in experiments [19; 54], provides a giant tunability, since the proximity-induced SOC can be increased by a factor of 2-3, when reducing the distance by only about 10%. When encapsulating graphene between two TMDCs, both twist angles are important to control the interference of the individual proximity-induced SOCs, allowing to precisely tailor the valley-Zeeman SOC, while the Rashba SOC becomes suppressed. 
More precisely, when the twist angles of the encapsulating TMDC layers are equal, say both are 0\({}^{\circ}\), the induced valley-Zeeman SOC is roughly doubled, since the layer-resolved proximity effect is additive on the graphene sublattices. In contrast, when the twist angles differ by 60\({}^{\circ}\), the sublattices are effectively exchanged and the effective valley-Zeeman SOC becomes suppressed. The Rashba SOC is always suppressed due to the nearly restored \(z\)-mirror symmetry in encapsulated structures. Finally, combining the first-principles calculations, low energy model Hamiltonian, fitted parameters, and real-space transport calculations, we make specific predictions for experimentally measurable quantities such as spin lifetime anisotropy and charge-to-spin conversion efficiency. We find that the spin lifetime anisotropy--the ratio between out-of-plane and in-plane spin lifetimes--can become giant, up to 100, especially in graphene on MoS\({}_{2}\) and WS\({}_{2}\) as the valley-Zeeman dominates over the Rashba SOC, pinning the spin to the out-of-plane direction. Our calculated anisotropies are in agreement with experiments [21, 59, 92] and further tunability is provided by twisting, an external electric field, and the interlayer distance. The real-space transport calculations reveal that twisted heterostructures provide a tunable charge-to-spin conversion via spin-Hall and Rashba-Edelstein effects. With gating and twisting, it is possible to tailor not only the magnitude but also the direction of the non-equilibrium spin-density, making graphene/TMDC heterostructures a versatile platform for creating and detecting spin polarized currents without the need for conventional ferromagnets. The manuscript is organized as follows. In Sec. II, we first address the structural setup and summarize the calculation details for obtaining the electronic structures of the twisted graphene/TMDC bilayers. In Sec.
III, we introduce the model Hamiltonian that captures the proximitized Dirac bands, which is used to fit the first-principles results. In Sec. IV, we show and discuss exemplary calculated electronic structures, along with the model Hamiltonian fits. We also address the influence of the twist angle, transverse electric field, and the interlayer distance on the proximity SOC. In Sec. V, we briefly discuss TMDC-encapsulated graphene structures, where proximity SOC can be enhanced or suppressed due to interference of the encapsulating layers. In Sec. VI, we address some open questions and discuss the origin of our findings in more detail. In Sec. VII and Sec. VIII we analyze experimentally relevant quantities, which are the twist-angle and gate tunability of the spin-lifetime anisotropy and charge-to-spin conversion efficiencies. Finally, in Sec. IX we conclude the manuscript.

## II Geometry setup and computational details

The graphene/TMDC heterostructures, for which we consider several twist angles between the two monolayers, are set up with the atomic simulation environment (ASE) [93] and the CellMatch code [94], implementing the coincidence lattice method [67, 95]. Within this method, a graphene/TMDC heterostructure contains a (\(n\),\(m\)) graphene supercell and a (\(n^{\prime}\),\(m^{\prime}\)) TMDC supercell, where the integers \(n\), \(m\), \(n^{\prime}\), and \(m^{\prime}\) define the corresponding supercell lattice vectors. Monolayers of graphene and TMDCs are based on hexagonal unit cells, with experimental lattice constants [96, 97, 98, 99] of \(a=2.46\) A (graphene), \(a=3.288\) A (MoSe\({}_{2}\)), \(a=3.282\) A (WSe\({}_{2}\)), \(a=3.15\) A (MoS\({}_{2}\)), and \(a=3.153\) A (WS\({}_{2}\)), which additionally need to be strained in the twisted heterostructures, in order to form commensurate supercells for periodic density functional theory (DFT) calculations.
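The coincidence-lattice bookkeeping can be sketched as follows: a \((n,m)\) supercell vector of a hexagonal lattice has length \(a\sqrt{n^{2}+nm+m^{2}}\), and matching the graphene and TMDC supercell vectors fixes both the residual strain and the twist angle. The \(4\times 4\) graphene on \(3\times 3\) MoSe\({}_{2}\) cell used in the example below is our guess for the \(0^{\circ}\) structure (only the \(3\times 3\) TMDC part is stated later in the text):

```python
import math

def supercell(n, m, a):
    """Length and in-plane angle of the supercell vector n*a1 + m*a2
    of a hexagonal lattice with a1 = a(1,0), a2 = a(1/2, sqrt(3)/2)."""
    length = a * math.sqrt(n * n + n * m + m * m)
    angle = math.degrees(math.atan2(m * math.sqrt(3) / 2, n + m / 2))
    return length, angle

def match(n, m, np_, mp_, a_gr=2.46, a_tmdc=3.28):
    """Strain applied to graphene (in %) and twist angle (in degrees)
    for a (n,m) graphene / (np_,mp_) TMDC commensurate cell."""
    L_gr, th_gr = supercell(n, m, a_gr)
    L_t, th_t = supercell(np_, mp_, a_tmdc)
    strain = (L_t / L_gr - 1.0) * 100.0   # TMDC left unstrained
    twist = abs(th_gr - th_t)
    return strain, twist

# 4x4 graphene on 3x3 MoSe2: 4*2.46 = 3*3.28 = 9.84 A, i.e. zero strain at 0 degrees
strain, twist = match(4, 0, 3, 0)
```

Scanning over small index pairs with this kind of routine reproduces the trade-off described above: only a limited set of twist angles keeps both the strain below a few percent and the atom count manageable.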
Since MoSe\({}_{2}\) and WSe\({}_{2}\) have nearly the same lattice constant, we set them to 3.28 A in the following. We do the same for MoS\({}_{2}\) and WS\({}_{2}\), where we use 3.15 A. In Table S1 and Table S2 we summarize the main structural information for the twist angles we consider. In total, we investigate 12 different angles between 0\({}^{\circ}\) and 30\({}^{\circ}\), for each graphene/TMDC heterostructure. These angles are especially suitable for DFT calculations, since the strain applied to the monolayers is below 2.5%. We already know that biaxial strain strongly influences the band gap of monolayer TMDCs [100] and therefore we leave them nearly unstrained in the heterostructures. The residual strain is applied to the graphene lattice, which mainly influences the Fermi velocity of Dirac states [13]. In addition, the number of atoms is kept below 250. Otherwise, other angles could also be investigated, but beyond reasonable strain limits and above a computationally feasible number of atoms in the structure. The electronic structure calculations and structural relaxations of the graphene/TMDC heterostructures are performed by DFT [101] with Quantum ESPRESSO [102].

Figure 1: 3D view of graphene above a TMDC (MoSe\({}_{2}\)), where we define the interlayer distance, \(\mathrm{d_{int}}\). We twist graphene by an angle \(\vartheta\) around the \(z\) axis with respect to the TMDC. The twist-angle evolution of the proximitized Dirac states is sketched. Red (blue) bands are polarized spin up (spin down), while grey bands are in-plane polarized. At 0\({}^{\circ}\), the proximity-induced SOC in the Dirac states is of Valley-Zeeman and Rashba type. At around 19.1\({}^{\circ}\), the Valley-Zeeman SOC is maximized, leading to a band inversion. At 30\({}^{\circ}\), Valley-Zeeman SOC vanishes and only Rashba SOC remains.

Self-consistent calculations are carried out with a \(k\)-point sampling of \(n_{k}\times n_{k}\times 1\).
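The computational settings quoted here and below (70/560 Ry cutoffs, noncollinear SOC, DFT-D2 vdW corrections, dipole corrections) map onto a Quantum ESPRESSO `pw.x` input roughly as in the following schematic fragment; the atom counts and the \(k\)-grid are placeholders, and this is our illustration, not the authors' actual input file:

```
&CONTROL
  calculation = 'relax'
  tefield     = .true.        ! dipole correction [108]
  dipfield    = .true.
/
&SYSTEM
  ibrav     = 0
  nat       = 103             ! placeholder: depends on the twist angle
  ntyp      = 3
  ecutwfc   = 70              ! wavefunction cutoff (Ry)
  ecutrho   = 560             ! charge-density cutoff (Ry)
  noncolin  = .true.          ! noncollinear spins
  lspinorb  = .true.          ! spin-orbit coupling
  vdw_corr  = 'grimme-d2'     ! DFT-D2 van der Waals correction
  edir      = 3               ! dipole correction along z (vacuum direction)
/
&ELECTRONS
  conv_thr = 1.0d-8
/
K_POINTS automatic
  18 18 1 0 0 0               ! n_k x n_k x 1; n_k depends on the supercell size
```

Fully relativistic PAW pseudopotentials with the PBE functional, as described in the text, would be supplied in the `ATOMIC_SPECIES` card.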
The number \(n_{k}\) is listed in Table S1 and Table S2 for all twist angles and depends on the number of atoms in the heterostructure. In addition, \(n_{k}\) is limited by our computational power. Nevertheless, for large supercells the heterostructure Brillouin Zone is small and only a few \(k\)-points are necessary to get converged results. We use an energy cutoff of 560 Ry for the charge density and of 70 Ry for the wavefunctions, together with fully relativistic pseudopotentials within the projector augmented wave method [103] and the Perdew-Burke-Ernzerhof exchange correlation functional [104]. Spin-orbit coupling (SOC) is included in the calculations. For the relaxation of the heterostructures, we add DFT-D2 vdW corrections [105; 106; 107] and use a quasi-Newton algorithm based on a trust-radius procedure. Dipole corrections [108] are also included to get correct band offsets and internal electric fields. In order to simulate quasi-2D systems, we add a vacuum of about 20 A to avoid interactions between periodic images in our slab geometry. To get proper interlayer distances and to capture possible moire reconstructions, we allow all atoms to move freely within the heterostructure geometry during relaxation. Relaxation is performed until every component of each force is reduced below \(5\times 10^{-4}\) [\(\mathrm{Ry}/a_{0}\)], where \(a_{0}\) is the Bohr radius. After relaxation of the graphene/TMDC heterostructures, we calculate the mean interlayer distances, \(d_{\mathrm{int}}\), and the standard deviations, \(\Delta z_{\mathrm{grp}}\), from the \(z\) coordinates of the C atoms of graphene. The standard deviations represent the amount of rippling of graphene. The results are summarized in Table S1 and Table S2. The interlayer distances are nearly independent of the twist angle and range from about 3.3 to 3.4 A. The graphene itself stays nearly flat, as the rippling stays below about 3 pm. In Fig.
1, we show the general structural setup of our graphene/TMDC heterostructures, where the graphene resides above the TMDC. When we apply the transverse electric field (modeled by a zigzag potential), a positive field points along \(z\) direction from the TMDC towards graphene. ## III Model Hamiltonian From our first-principles calculations we obtain the low energy Dirac band structure of the spin-orbit proximitized graphene. We then extract realistic parameters for an effective Hamiltonian describing graphene's low energy Dirac bands. The Hamiltonian together with the fitted parameters provide an effective description for the low-energy physics, which is relevant for studying transport [119; 114; 110; 33; 89; 110], topology [111; 112], or spin relaxation [16; 58; 60; 113]. Due to the short-range nature of the proximity effects in van der Waals heterostructures, the effective model parameters are transferable and can be employed for bilayer and trilayer graphene heterostructures [114; 115; 116]. The band structure of spin-orbit proximitized graphene can be modeled by symmetry-derived Hamiltonians [117]. 
For graphene in heterostructures with \(C_{3}\) symmetry, the effective low energy Hamiltonian is \[\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{\Delta}+\mathcal{H}_{\mathrm{I}}+\mathcal{H}_{\mathrm{R}}+E_{D}, \tag{1}\] \[\mathcal{H}_{0}=\hbar v_{\mathrm{F}}(\tau k_{x}\sigma_{x}-k_{y}\sigma_{y})\otimes s_{0}, \tag{2}\] \[\mathcal{H}_{\Delta}=\Delta\sigma_{z}\otimes s_{0}, \tag{3}\] \[\mathcal{H}_{\mathrm{I}}=\tau(\lambda_{\mathrm{I}}^{\mathrm{A}}\sigma_{+}+\lambda_{\mathrm{I}}^{\mathrm{B}}\sigma_{-})\otimes s_{z}, \tag{4}\] \[\mathcal{H}_{\mathrm{R}}=-\lambda_{\mathrm{R}}\,\mathrm{e}^{-\mathrm{i}\varphi\frac{s_{z}}{2}}(\tau\sigma_{x}\otimes s_{y}+\sigma_{y}\otimes s_{x})\,\mathrm{e}^{\mathrm{i}\varphi\frac{s_{z}}{2}}. \tag{5}\] Here \(v_{\mathrm{F}}\) is the Fermi velocity and the in-plane wave vector components \(k_{x}\) and \(k_{y}\) are measured from \(\pm\)K, corresponding to the valley index \(\tau=\pm 1\). The Pauli spin matrices are \(s_{i}\), acting on spin space \((\uparrow,\downarrow)\), and \(\sigma_{i}\) are pseudospin matrices, acting on sublattice space (\(\mathrm{C_{A}}\), \(\mathrm{C_{B}}\)), with \(i=\{0,x,y,z\}\) and \(\sigma_{\pm}=\frac{1}{2}(\sigma_{z}\pm\sigma_{0})\). The staggered potential gap is \(\Delta\), arising from sublattice asymmetry. The parameters \(\lambda_{\mathrm{I}}^{\mathrm{A}}\) and \(\lambda_{\mathrm{I}}^{\mathrm{B}}\) describe the sublattice-resolved intrinsic SOC and \(\lambda_{\mathrm{R}}\) stands for the Rashba SOC. In addition, a phase angle \(\varphi\) can be present in the usual Rashba term, which leads to a rotation of the spin-orbit field around the \(z\)-axis [87; 88]. When the intrinsic SOC parameters satisfy \(\lambda_{\mathrm{I}}^{\mathrm{A}}=-\lambda_{\mathrm{I}}^{\mathrm{B}}\), it is also called valley-Zeeman or Ising type SOC, while in the case of \(\lambda_{\mathrm{I}}^{\mathrm{A}}=\lambda_{\mathrm{I}}^{\mathrm{B}}\), it is called Kane-Mele type SOC [118].
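A minimal numerical sketch of Eqs. (1)-(5) as an explicit \(4\times 4\) matrix is given below. The default parameter values loosely mimic a graphene/WSe\({}_{2}\) structure at \(0^{\circ}\) and are illustrative only; \(\hbar v_{\mathrm{F}}\approx 5.43\times 10^{3}\) meV A corresponds to \(v_{\mathrm{F}}\approx 8.25\times 10^{5}\) m/s:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp, sm = (sz + s0) / 2, (sz - s0) / 2       # sigma_+/- of Eq. (4)

def hamiltonian(kx, ky, tau=1, hvF=5.43e3, Delta=0.59, lamA=1.17, lamB=-1.16,
                lamR=0.53, phi=0.0, ED=0.0):
    """Eqs. (1)-(5) in meV; basis (A up, A down, B up, B down); k in 1/Angstrom.
    Default parameters are illustrative, WSe2-like values."""
    H0 = hvF * (tau * kx * np.kron(sx, s0) - ky * np.kron(sy, s0))
    HDelta = Delta * np.kron(sz, s0)
    HI = tau * np.kron(lamA * sp + lamB * sm, sz)
    # exp(-i*phi*s_z/2) acts on spin space only
    U = np.kron(s0, np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)]))
    HR = -lamR * U @ (tau * np.kron(sx, sy) + np.kron(sy, sx)) @ U.conj().T
    return H0 + HDelta + HI + HR + ED * np.eye(4)

bands_at_K = np.linalg.eigvalsh(hamiltonian(0.0, 0.0))   # four levels at +K
```

For \(\lambda_{\mathrm{R}}=0\) the spectrum at \(k=0\) reduces to \(\Delta\pm\lambda_{\mathrm{I}}^{\mathrm{A}}\) and \(-\Delta\mp\lambda_{\mathrm{I}}^{\mathrm{B}}\), which is a convenient sanity check of the implementation.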
Charge transfer between the monolayers in the DFT calculation is captured by the Dirac point energy, \(E_{D}\), which adjusts the Dirac point with respect to the Fermi level. The basis states are \(|\Psi_{\mathrm{A}},\uparrow\rangle\), \(|\Psi_{\mathrm{A}},\downarrow\rangle\), \(|\Psi_{\mathrm{B}},\uparrow\rangle\), and \(|\Psi_{\mathrm{B}},\downarrow\rangle\), resulting in four eigenvalues \(\varepsilon_{1/2}^{\mathrm{CB/VB}}\). For each considered heterostructure, we calculate the proximitized low energy Dirac bands in the vicinity of the K point. To extract the fit parameters from the first-principles data, we employ a least-squares routine [119], taking into account band energies, splittings, and spin expectation values.

## IV First-principles results and discussion

### Twist angle dependence of proximity SOC

In Fig. 2(a), we show the calculated global band structure of the graphene/MoSe\({}_{2}\) heterostructure for a twist angle of \(0^{\circ}\), as an exemplary case. The Dirac states of graphene are nicely preserved within the band gap of the TMDC, and are located about 0.61 eV (\(-0.85\) eV) above (below) the relevant \(K\) point valence (conduction) band edge of the TMDC, see Table S3. Actually in Fig. 2(a), the conduction band edge of the TMDC is located close to the \(M\) point. However, we note that we use a lattice constant of 3.28 A for MoSe\({}_{2}\), and not the exact experimental one of 3.288 A. Already at such small tensile strain, MoSe\({}_{2}\) becomes an indirect band gap semiconductor, with the conduction band edge at the \(Q\) side valley [100]. In addition, the relevant \(K\) points of TMDC band edges are backfolded to the \(\Gamma\) point due to the \(3\times 3\) MoSe\({}_{2}\) supercell we use for the \(0^{\circ}\) case. In Figs. 2(b)-(g), we summarize the low-energy band properties of the graphene Dirac states near the Fermi level. Due to proximity-induced SOC, the Dirac bands split into four states, \(\varepsilon_{1/2}^{\rm CB/VB}\).
The magnitude of the splitting is on the order of 0.7 meV. By fitting the low-energy Dirac dispersion to our model Hamiltonian, we find that proximity-induced intrinsic SOCs are of valley-Zeeman type, \(\lambda_{\mathrm{I}}^{\rm A}\approx-\lambda_{\mathrm{I}}^{\rm B}\approx 0.23\) meV. In addition, a Rashba SOC is present, \(\lambda_{\rm R}\approx 0.25\) meV, being of the same magnitude. The obtained SOC parameters are giant compared to the intrinsic SOC of pristine graphene, being about 20-40 \(\mu\)eV [120; 121]. In addition, Dirac states display an orbital gap, which results from the potential asymmetry of the sublattices (connected to the rippling of graphene), characterized by parameter \(\Delta\). The Dirac states, band splittings, and spin expectation values are perfectly reproduced by our model Hamiltonian employing the parameters in Table 1. The results for \(0^{\circ}\) are in good agreement with earlier calculations of proximity SOC in graphene/TMDC heterostructures [11]. Before we show and discuss the twist-angle dependence of proximity SOC, we first want to address how strain affects the dispersion. Since the lattice constant of the TMDC is fixed for all twist angles, the main changes are in the graphene Dirac states and band offsets. From literature, we know that the Dirac states of graphene are quite robust against biaxial strain [122; 123], apart from a renormalization of the Fermi velocity. From recent studies [13; 29], we already know that band offsets are tunable by strain. In Fig. 3, we plot the position of the Dirac point with respect to the TMDC valence (conduction) band edge, \(E_{D}-E_{V}\) (\(E_{D}-E_{C}\)), as defined in Fig. 2(a), as a function of the strain applied to graphene.

Figure 3: The calculated position of the Dirac point with respect to the TMDC valence (conduction) band edge, \(E_{D}-E_{V}\) (\(E_{D}-E_{C}\)), as a function of the biaxial strain in graphene for the different TMDCs. The data are summarized in Table S3.
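The least-squares extraction of the model parameters can be sketched as follows. In this toy version (our construction, not the routine of Ref. [119], which also fits spin expectation values and band splittings), synthetic "DFT" band energies are generated from the model itself at known parameters and then recovered by `scipy.optimize.least_squares`:

```python
import numpy as np
from scipy.optimize import least_squares

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def bands(k, p, hvF=5.43e3, tau=1):
    """Four model eigenvalues (meV) at kx = k, ky = 0 near +K,
    with p = (Delta, lamA, lamB, lamR) and phi fixed to zero."""
    Delta, lamA, lamB, lamR = p
    H = (hvF * tau * k * np.kron(sx, s0) + Delta * np.kron(sz, s0)
         + tau * np.kron(lamA * (sz + s0) / 2 + lamB * (sz - s0) / 2, sz)
         - lamR * (tau * np.kron(sx, sy) + np.kron(sy, sx)))
    return np.linalg.eigvalsh(H)

ks = np.linspace(-5e-4, 5e-4, 21)                  # 1/Angstrom, window around K
p_true = np.array([0.59, 1.17, -1.16, 0.53])       # meV, WSe2-like target values
target = np.array([bands(k, p_true) for k in ks])  # stands in for DFT band energies

def residuals(p):
    return (np.array([bands(k, p) for k in ks]) - target).ravel()

fit = least_squares(residuals, x0=[0.4, 1.0, -1.0, 0.4])
```

With a reasonable starting guess the fit drives the residual to zero and recovers the target parameters; in practice, including spin expectation values in the residual removes remaining sign ambiguities (e.g. of \(\lambda_{\rm R}\)).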
Figure 2: (a) DFT-calculated band structure of the graphene/MoSe\({}_{2}\) heterostructure along the high-symmetry path M-K-\(\Gamma\) for a twist angle of \(0^{\circ}\). The color of the lines corresponds to the \(s_{z}\) spin expectation value. We also indicate the position of the Dirac point with respect to the TMDC valence (conduction) band edge, \(E_{D}-E_{V}\) (\(E_{D}-E_{C}\)). (b)-(e) The spin expectation values of the 4 low-energy bands as labeled in (f). (f) Zoom to the calculated low-energy bands (symbols) near the Fermi level around the \(K\) point, corresponding to the band structure in (a), with a fit to the model Hamiltonian (solid lines). (g) The energy splitting of the low energy Dirac bands. The different twist angles provide different strain, and the plotted information is summarized in Tables S1, S2, and S3. We find a linear dependence of the band offsets with respect to the graphene strain, as in a previous study [13]. In experiment, one can expect that both graphene and the TMDCs are nearly unstrained due to the weak vdW bonding, and that only the zero strain band offsets are relevant. For our exemplary case of MoSe\({}_{2}\), we find the Dirac cone roughly in the middle of the TMDC band gap. From Fig.
3 we can extract the zero strain \begin{table} \begin{tabular}{l c c c c c c c c} TMDC & \(\vartheta\) [\({}^{\circ}\)] & \(\Delta\) [meV] & \(v_{\rm F}/10^{5}[\frac{m}{s}]\) & \(\lambda_{\rm I}^{\rm A}\) [meV] & \(\lambda_{\rm I}^{\rm B}\) [meV] & \(\lambda_{\rm R}\) [meV] & \(\varphi\) [\({}^{\circ}\)] & \(E_{\rm D}\) [meV] \\ \hline MoSe\({}_{2}\) & 0.0000 & 0.4917 & 8.2538 & 0.2422 & -0.2258 & 0.2550 & 0 & 1.8970 \\ & 2.6802 & 0.4346 & 8.2382 & 0.2213 & -0.2120 & 0.2664 & -2.2919 & 0.0024 \\ & 3.8858 & -0.3121 & 8.1250 & -0.1860 & 0.1954 & 0.2859 & -4.1254 & -0.0311 \\ & 5.2087 & -1.1162 & 8.5072 & -0.2920 & 0.2166 & 0.2448 & -1.3751 & 1.9400 \\ & 8.2132 & -0.6569 & 8.3124 & -0.3046 & 0.2434 & 0.2613 & -2.8076 & 0.0046 \\ & 12.2163 & -0.7117 & 8.4028 & -0.5062 & 0.3877 & 0.2136 & 2.8190 & 0.1276 \\ & 14.3916 & 0.4097 & 8.0799 & 0.3838 & -0.4240 & 0.3247 & -7.9644 & 0.0592 \\ & 19.1066 & 0.1163 & 8.0073 & 0.5627 & -0.5827 & 0.3326 & 4.7156 & 1.0680 \\ & 22.4987 & -0.0826 & 8.2585 & -0.5181 & 0.5041 & 0.2912 & 31.8860 & -0.1366 \\ & 25.2850 & -0.0173 & 7.9727 & -0.3393 & 0.3320 & 0.3110 & 29.5139 & 0.0445 \\ & 30.0000 & 0.0040 & 8.3109 & 0.0013 & -0.0055 & 0.2398 & 0 & 0.2514 \\ \hline WSe\({}_{2}\) & 0.0000 & 0.5878 & 8.2500 & 1.1722 & -1.1572 & 0.5303 & 0 & 1.2931 \\ & 2.6802 & 0.5438 & 8.2687 & 1.0775 & -1.0650 & 0.5475 & -1.3522 & -0.0502 \\ & 3.8858 & -0.4079 & 8.2968 & -0.9045 & 0.9120 & 0.5592 & -3.1055 & -0.0509 \\ & 5.2087 & -1.3110 & 8.3911 & -1.1868 & 1.0555 & 0.5979 & -1.3293 & 1.6139 \\ & 8.2132 & -0.8307 & 8.3230 & -1.0482 & 0.9122 & 0.6210 & -3.4092 & 1.0818 \\ & 12.2163 & -0.8494 & 8.4755 & -1.2914 & 0.9973 & 0.6129 & -1.8794 & -0.0278 \\ & 14.3916 & 0.4444 & 8.0440 & 0.6371 & -0.7484 & 0.8339 & -17.3382 & 0.0158 \\ & 19.1066 & 0.0876 & 7.8914 & 0.5899 & -0.6420 & 0.8215 & -19.6129 & 2.2178 \\ & 22.4987 & -0.0813 & 8.2654 & -0.7106 & 0.6654 & 0.6441 & 3.8985 & -0.0464 \\ & 25.2850 & -0.0037 & 7.9577 & -0.2522 & 0.2382 & 0.5237 & 18.6102 &
0.0107 \\ & 30.0000 & -0.0093 & 8.3185 & -0.0165 & 0.0128 & 0.6197 & 0 & 1.1670 \\ \hline MoS\({}_{2}\) & 1.0445 & -0.7794 & 8.3275 & -0.2990 & 0.2672 & 0.0737 & 6.1881 & -0.1036 \\ & 6.5868 & 0.4420 & 8.0126 & 0.2445 & -0.2647 & 0.0854 & 21.1428 & 0.2847 \\ & 8.9483 & 0.3782 & 7.9692 & 0.2244 & -0.2460 & 0.0953 & 17.5330 & -0.0681 \\ & 12.8385 & -0.2796 & 7.9358 & -0.2393 & 0.2140 & 0.1106 & 8.2508 & -0.0696 \\ & 14.4649 & 0.3765 & 8.1134 & 0.3053 & -0.3565 & 0.1245 & 15.0692 & 1.1699 \\ & 16.1021 & -0.3058 & 8.2297 & -0.4126 & 0.3517 & 0.1287 & 14.3244 & 0.0450 \\ & 22.4109 & -0.0546 & 8.0486 & -0.1347 & 0.1216 & 0.0718 & 37.4152 & 0.0025 \\ & 27.6385 & -0.0002 & 8.1439 & -0.0410 & 0.0373 & 0.0843 & 32.8887 & 0.1104 \\ & 29.2649 & 0.0011 & 8.0021 & 0.0027 & -0.0049 & 0.0395 & 18.4498 & 0.0020 \\ \hline WS\({}_{2}\) & 1.0445 & -0.9678 & 8.1209 & -1.1390 & 1.0407 & 0.2131 & 5.3688 & -0.0787 \\ & 6.5868 & 0.6485 & 8.0248 & 0.7849 & -0.8638 & 0.2337 & 16.8970 & 1.6459 \\ & 8.9483 & 0.5615 & 7.9988 & 0.6581 & -0.7354 & 0.2705 & 9.8609 & 0.5747 \\ & 12.8385 & -0.3525 & 7.9563 & -0.5200 & 0.4531 & 0.3206 & -4.9620 & 0.0493 \\ & 14.4649 & 0.4676 & 8.1248 & 0.5635 & -0.6826 & 0.3678 & -1.3236 & 0.3962 \\ & 16.1021 & -0.3602 & 8.1780 & -0.6841 & 0.5536 & 0.3956 & -4.8474 & 0.0075 \\ & 22.4109 & -0.0472 & 8.0434 & -0.0158 & -0.0082 & 0.1777 & 2.4793 & 0.3277 \\ & 27.6385 & 0.0025 & 8.2009 & 0.0059 & -0.0113 & 0.2410 & 18.7310 & 1.8203 \\ & 29.2649 & -0.0007 & 8.0090 & -0.0212 & 0.0194 & 0.1462 & 9.0129 & 0.3090 \\ \end{tabular} \end{table} Table 1: Fit parameters of the model Hamiltonian, Eq. (1), for the graphene/TMDC heterostructures for different twist angles \(\vartheta\). 
We summarize the Fermi velocity \(v_{\rm F}\), the staggered potential gap \(\Delta\), the sublattice-resolved intrinsic SOC parameters \(\lambda_{\rm I}^{\rm A}\) and \(\lambda_{\rm I}^{\rm B}\), the Rashba SOC parameter \(\lambda_{\rm R}\), the phase angle \(\varphi\), and the position of the Dirac point, \(E_{\rm D}\), with respect to the Fermi level. band offsets and the rates \(\gamma\) at which the band offsets change via straining, by fitting the data with a linear dependence. The extrapolated values are summarized in Table 2. We find that for lighter (heavier) elements in the TMDC, the Dirac cone is located closer to the conduction (valence) band edge, as is the case for MoS\({}_{2}\) (WSe\({}_{2}\)). Especially the zero strain band offsets should also be useful for tight-binding models of graphene/TMDC bilayers [87; 88], where the position of the Dirac point within the TMDC band gap enters as an unknown parameter. In addition, although the strain in graphene is kept below \(\pm 2.5\%\) in our heterostructure calculations, we observe variations in the band offsets of several hundreds of meV. The reason is that the rates \(\gamma\approx-80\) meV/% are quite large, but similar for all TMDCs, and band offsets can be massively tuned by straining. In particular, tensile (compressive) strain will shift the Dirac states closer to the TMDC valence (conduction) band edge. Our calculated zero strain band offsets show that the Dirac cone is clearly located within the TMDC band gap, which is in agreement with experiments [124; 125]. The tunability of the band offset with straining graphene is expected, since the individual workfunctions of the layers determine the band alignment, and the workfunction of graphene shows a significant strain dependence within our strain limits [126]. In particular, the workfunction of graphene increases (decreases) with positive (negative) strain [126], shifting the Dirac point towards more negative (positive) energy, which is consistent with our observations in Fig. 3. In contrast to Ref. [13], our heterostructures have smaller strain, so we do not compensate the strain-related band offsets with an electric field.
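The linear extrapolation behind Table 2 amounts to a one-line polynomial fit. A short sketch of that step; the strain grid below is made up for illustration, and only the MoS\({}_{2}\) zero-strain offset and rate are taken from Table 2:

```python
import numpy as np

def extrapolate_band_offset(strain_percent, offset_eV):
    """Linear model offset(strain) = offset0 + gamma*strain, the form
    used to produce Table 2.  Returns the zero-strain band offset (eV)
    and the rate gamma converted to meV per percent of strain."""
    gamma_eV, offset0 = np.polyfit(strain_percent, offset_eV, 1)
    return offset0, 1000.0 * gamma_eV

# Synthetic round-trip check built from the MoS2 numbers quoted in
# Table 2 (zero-strain E_D - E_V = 1.3360 eV, gamma = -78.95 meV/%);
# the strain values themselves are hypothetical.
strain = np.array([-2.0, -1.0, 0.5, 1.5, 2.5])
offset = 1.3360 - 0.07895 * strain
```

On exactly linear data the fit recovers the intercept and slope, i.e., the zero-strain offset and \(\gamma\).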
Also, we perform structural relaxation at each twist angle, which leads to rippling and a twist-dependent interlayer distance. As we show, both effects influence the proximity-induced SOC, so that electric-field compensation would not necessarily make the results more representative. We demonstrate this by comparing \(0^{\circ}\) graphene/MoSe\({}_{2}\) and graphene/WSe\({}_{2}\) heterostructures with different strains and setup conditions [127]. We believe that the field correction as in Ref. [13] makes sense to be applied only in the scenario of a flat graphene layer and a fixed interlayer distance, to extract the bare twist-angle dependence while disregarding other effects. Otherwise, all these effects (band offset, rippling, and interlayer distance), which are in some way connected to strain and which affect proximity SOC, would be difficult to disentangle. Figure 4: (a) Zoom to the calculated low-energy bands (symbols) of the graphene/MoSe\({}_{2}\) heterostructure near the Fermi level around the \(K\) point, for a twist angle of \(0^{\circ}\) and with a fit to the model Hamiltonian (solid lines). The color of the lines/points corresponds to the \(s_{z}\) spin expectation value. (b) and (c) The same as (a), but for twist angles of \(19.1^{\circ}\) and \(30^{\circ}\). (d) The calculated spin-orbit field, in the vicinity of the \(K\) point, of the spin-up valence band from the low-energy dispersion shown in (a). The color represents the \(s_{z}\) spin expectation value, while the arrows represent \(s_{x}\) and \(s_{y}\) spin expectation values. The dashed white lines represent the edges of the hexagonal Brillouin zone, with the \(K\) point at the center. (e) and (f) The same as (d), but for twist angles of \(19.1^{\circ}\) and \(30^{\circ}\). Now we turn to the most important result, which is the twist-angle dependence of proximity-induced SOC. In Fig.
4, we show the calculated low energy Dirac states for the graphene/MoSe\({}_{2}\) heterostructure for three different twist angles, \(0^{\circ}\), \(19.1^{\circ}\), and \(30^{\circ}\), as exemplary cases. As already mentioned, the Dirac states are split due to proximity SOC. In the case of \(0^{\circ}\), the splitting is moderate, caused by nearly equal valley-Zeeman and Rashba SOC (\(\lambda_{\rm I}^{\rm A}\approx-\lambda_{\rm I}^{\rm B}\approx 0.23\) meV, \(\lambda_{\rm R}\approx 0.25\) meV). This can also be seen in the calculated spin-orbit field of one of the Dirac bands. Overall, spins have an out-of-plane component due to intrinsic SOCs, while Rashba SOC is responsible for the vortex-like in-plane components. Both components are nearly equal away from the \(K\) point, see also Fig. 2. For \(19.1^{\circ}\), the splitting is maximized, a band inversion can be obtained, and valley-Zeeman SOC dominates over the Rashba one (\(\lambda_{\rm I}^{\rm A}\approx-\lambda_{\rm I}^{\rm B}\approx 0.57\) meV, \(\lambda_{\rm R}\approx 0.33\) meV). The band inversion is due to the fact that the sublattice potential asymmetry \(\Delta\) is small compared to the magnitude of the intrinsic SOCs. The spin-orbit field shows almost only an out-of-plane component, while in-plane components are suppressed. For \(30^{\circ}\), the splitting is minimal, valley-Zeeman SOC vanishes, and Rashba SOC dominates (\(\lambda_{\rm I}^{\rm A}\approx-\lambda_{\rm I}^{\rm B}\approx 0\) meV, \(\lambda_{\rm R}\approx 0.24\) meV). In fact, the valley-Zeeman SOC should completely vanish at \(30^{\circ}\), due to a mirror plane symmetry, restoring the sublattice symmetry [89]. However, due to the small rippling in graphene from structural relaxations, this symmetry is not fully restored and small, but finite, intrinsic SOCs arise even at \(30^{\circ}\). The spin-orbit field almost solely shows vortex-like in-plane components, while an out-of-plane component is only present right at the \(K\) point.
Such a twist-angle tunability of SOC and the corresponding spin-orbit fields will have a huge impact on spin transport and relaxation [58], as we will discuss later. For all the investigated twist angles and the different TMDCs, our model Hamiltonian can faithfully describe the low-energy Dirac states, with the fit parameters summarized in Table 1. For structures from Tables S1 and S2, which satisfy \(n-m=3\cdot l\), \(l\in\mathbb{Z}\), the Dirac states of graphene from both \(K\) and \(K^{\prime}\) fold back to the \(\Gamma\) point. Consequently, we cannot apply our fitting routine employing the model Hamiltonian, Eq. (1), for some twist angles, which are then absent in Table 1. Note that, when graphene sublattices (C\({}_{\rm A}\) and C\({}_{\rm B}\)) are interchanged in the geometry, the parameter \(\Delta\) changes sign, while parameters \(\lambda_{\rm I}^{\rm A}\) and \(\lambda_{\rm I}^{\rm B}\) are interchanged as well. Such an exchange of sublattices corresponds to an additional \(60^{\circ}\) twist applied to graphene above the TMDC. Therefore twist angles \(\vartheta\) and \(\vartheta+60^{\circ}\) cannot be distinguished from the geometries. In Table 1, the fit parameters show such a sign change for the investigated twist angles. This is connected to the setup of the heterostructure supercells for different angles, since 1) the starting point stacking of the non-rotated layers is arbitrary, 2) the origin of the rotation axis can be chosen randomly, 3) the lattice vectors, defining the periodic heterostructure supercell, can be imposed differently on the moire structure from the twisted layers. Consequently, one would have to consider several structures for each twist angle to obtain well justified results (in terms of value and sign). Considering subsequent lateral shifts (see below) is particularly helpful to see how the proximity SOC changes for different atomic registries. 
However, it is enough to consider only angles between \(0^{\circ}\) and \(30^{\circ}\), since the parameters for the other angles can be obtained by symmetry considerations [13]. From the experimental point of view, e.g., in spin transport or spin-charge conversion experiments that consider twisted graphene/TMDC heterostructures, only the magnitude and type of proximity SOC play a role, since a well-defined manufacturing process with atomically precise control of stacking and twisting of two different monolayers is not yet possible. Due to this and the mentioned sign issue from the DFT results, in Fig. 5 we plot the absolute values of the valley-Zeeman and Rashba SOC as a function of the twist angle for all TMDCs, as summarized in Table 1. Note that the valley-Zeeman SOC is defined as \(\lambda_{\rm VZ}=(\lambda_{\rm I}^{\rm A}-\lambda_{\rm I}^{\rm B})/2\). We find a clear and strong twist-angle dependence of the proximity-induced SOC. The heavier the elements in the TMDC, the larger is the proximity SOC. For untwisted structures (\(0^{\circ}\)), both valley-Zeeman and Rashba SOC are finite. At \(30^{\circ}\), the valley-Zeeman SOC vanishes and Rashba SOC dominates, independent of the TMDC. \begin{table} \begin{tabular}{l c c c} TMDC & \(E_{D}-E_{V}\) [eV] & \(E_{D}-E_{C}\) [eV] & \(\gamma\) [meV/\%] \\ \hline MoS\({}_{2}\) & 1.3360 & -0.3817 & -78.95 \\ WS\({}_{2}\) & 0.9473 & -0.7531 & -77.04 \\ MoSe\({}_{2}\) & 0.6159 & -0.8458 & -77.35 \\ WSe\({}_{2}\) & 0.2446 & -1.1606 & -75.72 \\ \end{tabular} \end{table} Table 2: Zero strain band offsets \(E_{D}-E_{V}\) and \(E_{D}-E_{C}\) and the rates \(\gamma\) at which the band offsets change via straining, extrapolated by fitting the data in Fig. 3 with linear functions. Figure 5: Calculated twist-angle dependence of the valley-Zeeman and Rashba SOC for the different TMDCs. The data are summarized in Table 1.
While the Rashba SOC stays rather constant upon twisting, the valley-Zeeman SOC shows a marked twist-angle dependence, different for Mo- and W-based TMDCs. For WS\({}_{2}\) and WSe\({}_{2}\), the valley-Zeeman SOC gradually decreases when twisting from \(0^{\circ}\) to \(30^{\circ}\). This finding is consistent with Ref. [89]. In contrast, for MoS\({}_{2}\) and MoSe\({}_{2}\), the valley-Zeeman SOC exhibits a maximum at around \(15^{\circ}\) to \(20^{\circ}\).

### Influence of vertical and lateral shifts

How sensitive is the proximity-induced SOC with respect to the atomic registry (stacking) and the interlayer distance? Recent experiments have shown that one can tune proximity SOC by external pressure, thereby reducing the interlayer distance between graphene and the TMDC [19; 54]. In particular, applying external pressure of about 1.8 GPa to a graphene/WSe\({}_{2}\) heterostructure, thereby diminishing the interlayer distance by about 9%, leads to a 2-fold enhancement of the proximity-induced Rashba SOC, as found by magnetotransport experiments [19]. In this section, we study how variations of the interlayer distance influence proximity SOC. For selected twist angles we vary \(d_{\rm int}\) in steps of 0.1 Å, starting from the relaxed equilibrium distances listed in Tables S1 and S2, keeping the rest of the geometry (rippling of graphene and the TMDC) fixed. In addition, we study how lateral shifts, which essentially change the exact stacking of graphene above the TMDC, influence proximity SOC. For the lateral shifts, we use crystal coordinate notation, i.e., we shift graphene above the TMDC by fractions \(x\) and \(y\) of the supercell lattice vectors. We perform structural relaxations in the case of lateral shifts before we calculate the proximitized low energy Dirac bands, since the stacking may influence the graphene rippling and the interlayer distance.
Since Mo- and W-based TMDCs produce different trends in the twist-angle dependence of proximity SOC, we focus on MoSe\({}_{2}\) and WSe\({}_{2}\) only. In addition, we consider only three selected twist angles, namely \(0^{\circ}\), \(19.1^{\circ}\) and \(30^{\circ}\). In Table S4 and Table S5 we summarize the fit results when tuning the interlayer distance or changing the stacking. By reducing the interlayer distance, we find that Dirac states are pushed towards the TMDC valence band edge. In addition, the sublattice asymmetry, represented by the staggered potential \(\Delta\), increases when decreasing the distance. Most importantly, the induced valley-Zeeman and Rashba SOC depend strongly on the distance, as summarized in Fig. 6. By reducing the interlayer distance, the SOC can be heavily increased, in agreement with experiments [19; 54]. In particular, the proximity-induced SOC can be increased by a factor of 2-3, when reducing the distance by only about 10%. The only exception is the valley-Zeeman SOC for the \(30^{\circ}\) structures, which is absent (or at least very small in our case due to rippling) due to symmetry. In contrast, the precise atomic registry (stacking) has negligible influence on the magnitude of proximity SOC in graphene/TMDC heterostructures. This probably results from the fact that the considered heterostructure supercells are large compared to the monolayer unit cells, such that an averaging effect takes place.

### Gate tunability of proximity SOC

In experiment, gating is a tool to further control and tailor the proximity SOC in graphene-based heterostructures [48; 49; 50; 20]. For example, in Ref. [50] it has been shown that a gate voltage can be employed to control the spin-charge conversion efficiency in graphene/MoTe\({}_{2}\) heterostructures. We wish to answer the question: How does a transverse electric field affect proximity SOC for different twist angles?
Again, we focus only on MoSe\({}_{2}\) and WSe\({}_{2}\) and twist angles of \(0^{\circ}\), \(19.1^{\circ}\) and \(30^{\circ}\). The positive field direction is indicated in Fig. 1. The fit results are summarized in Table S6 for graphene/MoSe\({}_{2}\) and Table S7 for graphene/WSe\({}_{2}\) bilayers. In general, the electric field simply shifts the Dirac cone up or down in energy within the TMDC band gap, as can be seen from the band offsets. The tunability is about 100 meV per V/nm of applied field. Since the band offsets change, the interlayer coupling, and with it the proximity SOC, changes as well. In Fig. 7 we show how the valley-Zeeman and Rashba SOC are affected by the external transverse electric field. Figure 6: Calculated interlayer distance dependence of the valley-Zeeman and Rashba SOC for MoSe\({}_{2}\) and WSe\({}_{2}\) structures for selected twist angles. The data are summarized in Table S4 and Table S5. We find that for MoSe\({}_{2}\), the field barely influences the valley-Zeeman SOC, while the Rashba one can be tuned in a linear fashion, similar for all the different twist angles we consider. More precisely, within our field limits of \(\pm 2\) V/nm, the Rashba SOC can be tuned by about 50%. In particular, recalling that the ratio between valley-Zeeman and Rashba SOC determines the spin relaxation anisotropy [58], the electric field will lead to an enormous tunability of the latter. In the case of WSe\({}_{2}\), the behavior is rather similar, but the \(19.1^{\circ}\) twist angle is an exception. For this angle, also the valley-Zeeman SOC is highly tunable by the field. Moreover, we find that the valley-Zeeman SOC increases, while the Rashba one decreases, for positive field amplitudes, and vice versa for negative fields.

## V Encapsulated geometries

Maximizing the proximity SOC in graphene is advantageous, for example, in spin-charge conversion experiments [17; 48; 49; 107; 109].
We have already seen that proximity-induced SOC is maximized for WSe\({}_{2}\) at 0\({}^{\circ}\) and for MoSe\({}_{2}\) at 19.1\({}^{\circ}\). Can we further enhance proximity SOC by encapsulating graphene between two TMDC monolayers? We consider the graphene/WSe\({}_{2}\) heterostructure with 0\({}^{\circ}\) twist angle and place another WSe\({}_{2}\) monolayer on top. The top WSe\({}_{2}\) layer is considered to have a relative twist angle of 0\({}^{\circ}\) and 0+60\({}^{\circ}\) with respect to the subjacent graphene/WSe\({}_{2}\) bilayer, see Fig. 8. Similarly, we consider the graphene/MoSe\({}_{2}\) heterostructure with 19.1\({}^{\circ}\) twist angle and place another MoSe\({}_{2}\) monolayer on top, with a relative twist angle of 19.1\({}^{\circ}\) and 19.1+60\({}^{\circ}\). We also perform a structural relaxation on the encapsulated structures, as above, before we proceed to calculate the proximitized Dirac dispersion. The structural information for the encapsulated structures is summarized in Table 3. The relaxed top and bottom graphene/TMDC interlayer distances are nearly identical for the different cases we consider, and coincide with the non-encapsulated geometries. In addition, the intrinsic dipole of the trilayer structure is strongly diminished, but still finite due to a small asymmetry in the interlayer distances. The rippling of the graphene layer is small for symmetric encapsulation, when the twist angles of the top and bottom monolayers are the same, and large for asymmetric encapsulation, when the top TMDC monolayer has an additional 60\({}^{\circ}\) twist. The calculated band offsets are also nearly identical to the non-encapsulated structures. We expect that symmetric encapsulation will boost proximity SOC in graphene, while for asymmetric encapsulation the proximity SOC in graphene will nearly vanish. The reason is the valley-Zeeman type of SOC combined with the interchange of the graphene sublattices upon 60\({}^{\circ}\) rotation.
For example, the induced SOC from the bottom WSe\({}_{2}\) is \(\lambda_{\rm I}^{\rm A}\approx-\lambda_{\rm I}^{\rm B}\approx 1.2\) meV in the case of 0\({}^{\circ}\) twist angle. If the top WSe\({}_{2}\) layer has the same alignment to graphene as the bottom WSe\({}_{2}\) layer, the induced SOC will be the same and we can expect a doubling of valley-Zeeman SOC. However, if the top WSe\({}_{2}\) layer is rotated by 60\({}^{\circ}\) with respect to the underlying graphene/WSe\({}_{2}\) bilayer, the graphene sublattices are effectively interchanged with respect to the top WSe\({}_{2}\) layer. Hence, bottom and top TMDC layers induce opposite valley-Zeeman SOC, which in total leads to a cancellation. In Table 4, we summarize the fit results for the TMDC encapsulated geometries, while in Fig. 8, we explicitly show the results for WSe\({}_{2}\)-encapsulated graphene and the different twist angle scenarios. Indeed, symmetric encapsulation strongly enhances and roughly doubles the proximity-induced intrinsic SOC parameters, compared to non-encapsulated geometries. In contrast, the Rashba SOC is drastically reduced, since TMDC encapsulation nearly restores the \(z\)-mirror symmetry. Also the dipole (intrinsic electric field) of the structures is almost zero. For asymmetric encapsulation, the proximity-induced intrinsic and Rashba SOC is strongly reduced, as expected. Actually, for perfectly symmetric encapsulation, the Rashba SOC should exactly vanish. Also the valley-Zeeman SOC should vanish in encapsulated structures where inversion symmetry is restored. However, our heterostructures still show a finite structural asymmetry after atomic relaxation, leading to finite values of proximity SOC. In conclusion, TMDC encapsulation will only boost proximity SOC in graphene, if both TMDC layers offer the valley-Zeeman SOC in an additive way. 
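The additivity argument above can be put into a two-line estimate. This is a naive sketch, not the DFT calculation: the layer contributions are taken from Table 1 (graphene/WSe\({}_{2}\) at \(0^{\circ}\)), and an extra \(60^{\circ}\) twist of the top layer is modeled as a pure interchange of the graphene sublattices:

```python
def stack_intrinsic_soc(bottom, top, top_rotated_60=False):
    """Back-of-the-envelope additivity estimate for TMDC/graphene/TMDC
    stacks: each TMDC layer contributes its sublattice-resolved
    intrinsic SOC pair (lamA, lamB); a 60-degree twist of the top layer
    interchanges the graphene sublattices, so its pair enters swapped.
    All values in meV."""
    lamA_b, lamB_b = bottom
    lamA_t, lamB_t = (top[1], top[0]) if top_rotated_60 else top
    return lamA_b + lamA_t, lamB_b + lamB_t

# Monolayer graphene/WSe2 values at 0 degrees from Table 1 (in meV):
wse2 = (1.1722, -1.1572)
symmetric = stack_intrinsic_soc(wse2, wse2)                        # additive
asymmetric = stack_intrinsic_soc(wse2, wse2, top_rotated_60=True)  # cancels
```

The symmetric stack gives roughly twice the monolayer values (naive 2.34 meV versus 2.61 meV from the full DFT result in Table 4), while the \(+60^{\circ}\) stack nearly cancels, in line with Table 4.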
In other words, both twist angles are important control knobs to tailor the interference of the individual proximity effects, as also discussed in Ref. [128]. Figure 7: Calculated electric field dependence of the valley-Zeeman and Rashba SOC for MoSe\({}_{2}\) and WSe\({}_{2}\) structures for selected twist angles. The data are summarized in Table S6 and Table S7.

## VI Physics behind the spin-orbit proximity effect

There are several open questions related to the presented DFT and simulation results that we wish to address: Why is the proximity-induced SOC of valley-Zeeman (sublattice-odd) and not Kane-Mele (sublattice-even) type? What is the exact origin of the proximity-induced SOC? Why is the twist-angle dependence so different for different TMDCs and not as universal as predicted by recent tight-binding studies [87; 88]? Which atomic type (transition-metal or chalcogen) contributes most to the proximity-induced SOC? Why is the electric field tunability of valley-Zeeman SOC so pronounced for WSe\({}_{2}\) and a twist angle of 19.1\({}^{\circ}\)? We start by addressing the question of which atomic type contributes most to proximity SOC. We already know that the different transition-metal and chalcogen atoms provide very different contributions to the TMDC spin splittings [100], which should also influence proximity effects. Therefore, we have turned off SOC on different atoms by employing non-relativistic pseudopotentials, and recalculated the proximitized Dirac bands for different TMDCs and twist angles. The fit results are summarized in the SM [127]. We find, as expected, that the heavier the element (Mo or W, S or Se), the larger the contribution to the proximity-induced SOC. In particular, the contribution of W, Mo, Se, and S atoms to the proximity-induced valley-Zeeman SOC is roughly 1.2, 0.3, 0.1, and 0.01 meV for small (0 to 8\({}^{\circ}\)) twist angles. Remarkably, this can be drastically different for other twist angles.
For example, at 19.1\({}^{\circ}\) the contribution of Se atoms to the valley-Zeeman SOC is roughly twice as large as the one from W or Mo atoms. The reason is that the graphene Dirac cone couples to different \(k\)-points within the TMDC Brillouin zone for different twist angles. At different \(k\)-points, the TMDC bands have a different atomic and orbital decomposition [100]. Therefore, for different twist angles different atomic contributions and orbitals are involved. Why is the proximity SOC of valley-Zeeman type? The graphene Dirac states at \(K\) are split as if an external magnetic field were present, see Fig. 4. In particular, for \(0^{\circ}\), spin down states are shifted to lower energies compared to spin up, see Fig. 4(a), hence a Zeeman-like band splitting. Due to time-reversal symmetry the Dirac states at \(K^{\prime}\) are energetically the same, but have the opposite spin. Hence, the charge carriers effectively experience the opposite magnetic field, i.e., a valley-dependent Zeeman-like spin splitting arises. What causes this splitting in the first place? As we find from the projected band structures for different twist angles, the Dirac states predominantly couple to high-energy TMDC bands, see for example Fig. 9(a) and SM [127]. Considering a particular twist angle, the Dirac states at \(K\) couple differently to the spin up and spin down TMDC band manifolds. For simplicity, imagine that the coupling of Dirac states is only to TMDC conduction band states and that the coupling to the spin down manifold is stronger than to the spin up one. According to second-order perturbation theory, coupled energy levels repel. When the coupling to spin down is stronger, the spin down Dirac states would be pushed to lower energies compared to spin up, explaining the Zeeman-like splitting for a given valley. Due to time-reversal symmetry, the other valley shows the opposite behavior.
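The level-repulsion argument can be checked with a two-level toy model; all numbers below are illustrative and not taken from the DFT data:

```python
import numpy as np

def dirac_level_shift(E_dirac, E_band, t):
    """Two-level toy model of the level repulsion described above: a
    Dirac state at E_dirac couples with strength t to a remote TMDC
    band at E_band (all in meV); returns the perturbed eigenvalue
    adiabatically connected to the Dirac state."""
    evals = np.linalg.eigvalsh(np.array([[E_dirac, t], [t, E_band]]))
    return evals[np.argmin(np.abs(evals - E_dirac))]

# Illustrative numbers only: conduction manifold 800 meV above the
# Dirac point, spin-down coupled more strongly than spin-up.
E_up = dirac_level_shift(0.0, 800.0, t=30.0)
E_dn = dirac_level_shift(0.0, 800.0, t=45.0)
splitting = E_up - E_dn   # Zeeman-like spin splitting at this valley
```

The exact shift is close to the second-order perturbation-theory estimate \(-t^{2}/(E_{\rm band}-E_{\rm Dirac})\), and the unequal couplings indeed push spin down below spin up, i.e., a meV-scale Zeeman-like splitting emerges from a sub-eV-scale remote band.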
Of course, in our heterostructures the coupling is also to TMDC valence bands and there is a delicate balance between the coupling to the spin up and spin down manifolds, where one outweighs the other. This is similar to recent considerations in twisted graphene/Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\) heterostructures [29]. In particular, for \(30^{\circ}\) twist angle, the Dirac states of graphene are folded to the \(\Gamma\)-M high-symmetry line of the TMDC Brillouin zone, see Fig. 9, where TMDC bands are spin degenerate, and proximity-induced valley-Zeeman SOC vanishes [127]. Regarding the electric field tunability of valley-Zeeman SOC for WSe\({}_{2}\) and a twist angle of 19.1\({}^{\circ}\), we first have to consider the location in the TMDC Brillouin zone where the Dirac cone folds back, see Fig. 9(b) and SM [127]. \begin{table} \begin{tabular}{l c c c c c c c c} TMDC & \(\vartheta_{b}\) (\(\vartheta_{t}\)) [\({}^{\circ}\)] & \(\Delta\) [meV] & \(v_{\rm F}/10^{5}[\frac{m}{s}]\) & \(\lambda_{\rm I}^{\rm A}\) [meV] & \(\lambda_{\rm I}^{\rm B}\) [meV] & \(\lambda_{\rm R}\) [meV] & \(\varphi\) [\({}^{\circ}\)] & \(E_{\rm D}\) [meV] \\ \hline MoSe\({}_{2}\) & 19.1 (19.1) & 0.1049 & 7.8918 & 1.1320 & -1.1357 & 0.0057 & 4.1166 & -0.5327 \\ & 19.1 (19.1+60) & -0.2099 & 7.8872 & -0.0488 & 0.0066 & -0.0187 & -5.2934 & -0.7439 \\ \hline WSe\({}_{2}\) & 0.0 (0.0) & 0.0399 & 8.1670 & 2.6068 & -2.6201 & 0.0334 & 0 & -0.5580 \\ & 0.0 (0.0+60) & 0.2623 & 8.1523 & 0.0106 & -0.0002 & 0.0042 & 0 & -3.2908 \\ \end{tabular} \end{table} Table 4: Fit parameters of the model Hamiltonian, Eq. (1), for the TMDC/graphene/TMDC heterostructures. We summarize the relative twist angles \(\vartheta_{b}\) (\(\vartheta_{t}\)) of graphene with respect to bottom (top) TMDC layer, the Fermi velocity \(v_{\rm F}\), the staggered potential gap \(\Delta\), the sublattice-resolved intrinsic SOC parameters \(\lambda_{\rm I}^{\rm A}\) and \(\lambda_{\rm I}^{\rm B}\), the Rashba SOC parameter \(\lambda_{\rm R}\), the phase angle \(\varphi\), and the position of the Dirac point, \(E_{\rm D}\), with respect to the Fermi level. \begin{table} \begin{tabular}{l c c c c c c} TMDC & \(\vartheta_{b}\) (\(\vartheta_{t}\)) [\({}^{\circ}\)] & \(d_{\rm b}\) (\(d_{\rm t}\)) [Å] & \(\Delta z_{\rm grp}\) [pm] & dipole [debye] & \(E_{D}-E_{V}\) [eV] & \(E_{D}-E_{C}\) [eV] \\ \hline MoSe\({}_{2}\) & 19.1 (19.1) & 3.4114 (3.4152) & 0.0020 & 0.0008 & 0.5196 & -0.9346 \\ & 19.1 (19.1+60) & 3.4222 (3.4083) & 0.5701 & -0.0057 & 0.5180 & -0.9394 \\ \hline WSe\({}_{2}\) & 0.0 (0.0) & 3.3489 (3.3609) & 0.1847 & 0.0099 & 0.1821 & -1.2108 \\ & 0.0 (0.0+60) & 3.3410 (3.3419) & 3.7920 & 0.0135 & 0.1739 & -1.2246 \\ \end{tabular} \end{table} Table 3: Structural information and calculated band offsets for the TMDC/graphene/TMDC heterostructures. We summarize the relative twist angles \(\vartheta_{b}\) (\(\vartheta_{t}\)) of graphene with respect to bottom (top) TMDC layer, the relaxed interlayer distances \(d_{\rm b}\) (\(d_{\rm t}\)), the rippling of the graphene layer \(\Delta z_{\rm grp}\), the calculated dipole of the structures, and the position of the Dirac point with respect to the TMDC valence (conduction) band edge, \(E_{D}-E_{V}\) (\(E_{D}-E_{C}\)).
In particular, the graphene \(K\) point folds near the WSe\({}_{2}\) \(Q\) side-valley, see Fig. 9(f), where the spin splitting of the first TMDC conduction band is very large (\(\sim 200\) meV). Moreover, the electric field results in Table S7 show that the closer the Dirac point shifts towards the TMDC conduction band, the larger is the proximity-induced valley-Zeeman SOC. Considering a coupling of Dirac states to the energetically closest TMDC bands, for this particular twist angle, we come to the conclusion that mainly the first conduction band is responsible for the spin splitting of Dirac states. The contributions from the first two WSe\({}_{2}\) valence bands seem to cancel each other, due to opposite spin splittings. Another supporting factor is that at the \(Q\) valley, the TMDC conduction band wave function is strongly delocalized across the TMDC layer, see Fig. 9(e), allowing for a more efficient wavefunction overlap between the layers and an enhanced transfer of the SOC to the graphene layer. Therefore, a coupling to the Dirac states should be enhanced once the energy difference is reduced by applying an external electric field. In contrast, for MoSe\({}_{2}\) the spin splittings of the relevant bands at the \(Q\) valley are very different in magnitude compared to WSe\({}_{2}\), see Fig. 9(d), and therefore the electric field dependence is not as pronounced for the same twist angle. This also relates to the question of why our twist-angle results are not universal for all the TMDCs, as the tight-binding studies suggest [87; 88]. Even though the individual TMDCs are very similar, there are profound differences such as atomic and orbital decompositions of bands, leading to different spin splittings across the Brillouin zone. On top of that, our DFT calculations capture the full picture, including monolayer dispersions, spin-orbit effects, and interlayer interactions.
In contrast, the tight-binding description of the heterostructure [88] employs assumptions for the interlayer interactions and a specific parametrization of the TMDC monolayer dispersion based on first-principles results [129], which does not perfectly reproduce band energies or spin splittings. In any case, both the DFT and the tight-binding descriptions have advantages and drawbacks, but both help to gain insight into the physics of proximity-induced SOC in graphene/TMDC heterostructures.

## VII Spin relaxation anisotropy

An experimentally verifiable fingerprint of the proximity-induced SOC in graphene/TMDC heterostructures is the anisotropy of the spin lifetimes [58; 59; 60; 61; 62]. The intrinsic SOC parameters provide a spin-orbit field that points out of the monolayer plane, while the Rashba SOC creates, in the simplest case, a vortex-like in-plane spin-orbit field. Depending on the interplay of both SOCs, spins pointing in different directions relax on different timescales, creating a spin lifetime anisotropy. The spin relaxation anisotropy, \(\xi\), which is defined as the ratio between the out-of-plane (\(\tau_{s,z}\)) and in-plane (\(\tau_{s,x}\)) spin relaxation times, can be easily calculated from the fitted parameters via [58] \[\xi=\frac{\tau_{s,z}}{\tau_{s,x}}=\left(\frac{\lambda_{\text{VZ}}}{\lambda_{\text{R}}}\right)^{2}\left(\frac{\tau_{\rm iv}}{\tau_{\rm p}}\right)+\frac{1}{2}. \tag{6}\] A similar expression has been derived in Ref. [60]. Here, the ratio between the valley-Zeeman and the Rashba SOC strength predominantly determines the anisotropy, but the ratio between intervalley (\(\tau_{\rm iv}\)) and momentum (\(\tau_{\rm p}\)) scattering times also plays a role. In the following, we assume \(\tau_{\rm iv}/\tau_{\rm p}=5\), as in Ref. [58]. In Fig. 10, we summarize the calculated anisotropies as a function of 1) the twist angle, 2) the applied electric field, and 3) the interlayer distance, employing the results from above. 
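To make Eq. (6) concrete, the short sketch below evaluates the anisotropy for illustrative parameter values; the SOC strengths used here are placeholders, not fitted values from Table 1, and the scattering-time ratio is fixed to \(\tau_{\rm iv}/\tau_{\rm p}=5\) as in the text.

```python
def spin_lifetime_anisotropy(lambda_vz, lambda_r, tau_iv_over_tau_p=5.0):
    """Spin relaxation anisotropy xi = tau_{s,z}/tau_{s,x} of Eq. (6).

    lambda_vz, lambda_r : valley-Zeeman and Rashba SOC strengths (same units)
    tau_iv_over_tau_p   : ratio of intervalley to momentum scattering times
    """
    return (lambda_vz / lambda_r) ** 2 * tau_iv_over_tau_p + 0.5

# Rashba limit (vanishing valley-Zeeman SOC, e.g. 30 deg twist): xi = 1/2
print(spin_lifetime_anisotropy(0.0, 0.5))   # -> 0.5
# Valley-Zeeman SOC twice the Rashba SOC: xi = 4 * 5 + 0.5 = 20.5
print(spin_lifetime_anisotropy(1.0, 0.5))   # -> 20.5
```

Note how strongly the anisotropy scales with the ratio of the two couplings: a factor-of-two dominance of valley-Zeeman SOC already pins \(\xi\) far above the isotropic value of 1.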
The anisotropy is extraordinarily large for WS\({}_{2}\) and MoS\({}_{2}\) at \(0^{\circ}\), since the valley-Zeeman SOC is giant compared to the Rashba one, pinning the spins to the out-of-plane direction. At \(30^{\circ}\), the anisotropy reduces to 1/2, i. e., the Rashba limit, since the valley-Zeeman SOC vanishes independently of the TMDC. In general, the twist angle is an experimental knob to tailor the spin relaxation anisotropy. Once a twist angle is fixed, the proximity SOC can be further tuned by a transverse electric field or by pressure engineering of the interlayer distance. Tuning the electric field from \(-2\) to 2 V/nm essentially decreases the Rashba SOC and consequently increases the anisotropy. A strong tunability can especially be observed in WSe\({}_{2}\) for \(0^{\circ}\) and in MoSe\({}_{2}\) for \(19.1^{\circ}\), where the anisotropies can be increased by a factor of 2-3. In contrast, when the interlayer distance is reduced, both valley-Zeeman and Rashba SOC increase, but at different rates, and the anisotropies decrease. A particularly strong anisotropy can be expected in TMDC-encapsulated graphene, as the Rashba SOC can be suppressed compared to the valley-Zeeman SOC, see Table 4. In particular, considering the WSe\({}_{2}\)-encapsulated case with both twist angles equal to \(0^{\circ}\), the calculated anisotropy would be gigantic, \(\xi\approx 3\times 10^{4}\).

## VIII Spin-charge conversion

Another experimentally verifiable fingerprint of proximity-induced SOC is the possibility to convert between charge and spin currents in proximitized graphene without the need for conventional ferromagnetic electrodes, which is highly desirable for all-2D spintronic devices [17; 100; 109; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140]. Recent theoretical calculations [89; 109] have already considered the twist angle dependence of the charge-to-spin conversion in graphene/TMDC heterostructures. 
Remarkably, not only the conventional spin-Hall effect (SHE) and Rashba-Edelstein effect (REE) occur, but also an unconventional REE (UREE) can arise. While for SHE and REE the current-induced non-equilibrium spin density has a polarization perpendicular to the charge current [48], for the UREE the spin density polarization is collinear to the applied electric current. A similar unconventional charge-to-spin conversion has already been experimentally detected in the semimetals WTe\({}_{2}\)[137] and MoTe\({}_{2}\)[138; 139; 50], and can be attributed to reduced symmetries [140]. Recent experiments on graphene/NbSe\({}_{2}\)[57], graphene/WTe\({}_{2}\)[136], and graphene/MoTe\({}_{2}\)[138; 139] heterostructures have demonstrated the spin-to-charge conversion of spins oriented in all three directions.

Figure 9: (a) DFT-calculated band structure of the graphene/MoSe\({}_{2}\) heterostructure along the high-symmetry path M-K-\(\Gamma\) for a twist angle of \(0^{\circ}\). The color code shows the contribution of the individual monolayers to the bands, i. e., the bands appear dark-reddish (dark-blueish) when only MoSe\({}_{2}\) (graphene) orbitals contribute. (b) The backfolding of the graphene Dirac point at \(K\) for different twist angles. The black (green) hexagon represents the graphene (TMDC) Brillouin zone. (c) DFT-calculated band structure of monolayer MoSe\({}_{2}\) with a lattice constant of \(a=3.28\) Å along the high-symmetry path \(\Gamma\)-K-M-\(\Gamma\). The vertical dashed lines indicate the \(k\)-points to which the Dirac states couple, according to the backfolding in (b). The black dots are the locations of the Dirac point for the different twist angles from Table S3. (d) The spin splittings \(\Delta_{s}=E_{\uparrow}-E_{\downarrow}\) of the MoSe\({}_{2}\) bands VB\({}_{1}\), VB\({}_{2}\), and CB\({}_{1}\), extracted from the band structure in (c). (e) and (f) are the same as (c) and (d), but for the WSe\({}_{2}\) monolayer. 
However, in these structures NbSe\({}_{2}\), WTe\({}_{2}\), and MoTe\({}_{2}\) are metallic, contributing directly to the conversion process, along with the proximitized graphene. The figure of merit for charge-to-spin conversion for comparing 3D and 2D systems is given by \(\alpha\lambda_{\rm SF}\), where \(\alpha\) is the conversion efficiency and \(\lambda_{\rm SF}\) is the spin diffusion length [18; 141; 20]. Especially \(\lambda_{\rm SF}\) can be giant in proximitized graphene (\(\sim~{}\mu\)m) [20; 49; 92], much larger than in conventional 3D bulk heavy metals such as Pt or W (\(\sim\) nm) [142; 143]. Therefore, 2D material heterostructures can outperform 3D systems, even though the conversion efficiencies of, e. g., Pt (7%) [144] or W (20%) [143] are sizable. The reason for the UREE in graphene/semiconductor-TMDC heterostructures [109; 89] is the Rashba phase angle \(\varphi\) of the proximitized Dirac bands. When \(\varphi=0\), no radial in-plane spin-orbit field components arise. In other words, the in-plane spins are always perpendicular to momentum, see for example Fig. 4(f), and consequently the generated spin density polarization will also be perpendicular to the applied current direction. However, when \(\varphi\neq 0\), also radial spin-orbit field components arise, see for example Fig. S11, meaning that a current-induced spin density can have a polarization component parallel to the current. Consequently, the UREE will be maximized when \(\varphi=90^{\circ}\). In Fig. 11, we summarize the twist-angle dependence of the Rashba phase angle for our investigated graphene/TMDC structures. For our exemplary case of MoSe\({}_{2}\), we therefore expect that UREE will be maximized for a twist angle of \(\vartheta\approx 23^{\circ}\), where the Rashba phase angle has a maximum of \(\varphi\approx 30^{\circ}\). In Fig. 12, we schematically sketch the different conversion processes in an experimental setup. 
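The role of the Rashba phase angle can be illustrated with a minimal in-plane spin-orbit field model: rotating the conventional vortex-like field by \(\varphi\) generates a radial (momentum-parallel) component whose ratio to the tangential component is \(\tan\varphi\) (up to a sign convention), mirroring the relation between UREE and REE. This is a schematic sketch with unit field magnitudes, not the transport calculation used in the text.

```python
import math

def rashba_field(theta_k, phi):
    """In-plane Rashba spin-orbit field for momentum direction angle theta_k.

    phi = 0 gives the conventional vortex-like field, perpendicular to k;
    phi != 0 rotates the field, producing a radial (k-parallel) component.
    Schematic model: field magnitude is set to 1.
    """
    bx = -math.sin(theta_k + phi)
    by = math.cos(theta_k + phi)
    return bx, by

def radial_and_tangential(theta_k, phi):
    """Project the field onto the unit momentum and tangential directions."""
    bx, by = rashba_field(theta_k, phi)
    radial = bx * math.cos(theta_k) + by * math.sin(theta_k)
    tangential = -bx * math.sin(theta_k) + by * math.cos(theta_k)
    return radial, tangential

# phi = 0: purely tangential field, no radial component (no UREE).
# phi != 0: radial/tangential = -tan(phi), mirroring alpha_UREE/alpha_REE.
r0, _ = radial_and_tangential(0.3, 0.0)
r1, t1 = radial_and_tangential(0.3, math.radians(30))
print(round(r0, 12), round(r1 / t1, 6))   # -> 0.0 -0.57735
```

The radial projection equals \(-\sin\varphi\) and the tangential one \(\cos\varphi\), independent of the momentum angle, which is why a single phase angle controls the UREE/REE ratio on the whole Fermi contour.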
A charge current along the \(x\) direction generates a spin current along \(y\) with spins polarized along \(z\) due to SHE. Similarly, a non-equilibrium spin density \(\delta s\) is generated, which is in-plane polarized, due to combined REE and UREE. In order to get the conversion efficiencies, we have performed real-space quantum transport calculations [145; 146; 147], employing the honeycomb tight-binding version [14] of the Hamiltonian \(\mathcal{H}\), Eq. (1). The conversion efficiencies \(\Theta_{\rm SHE}\), \(\alpha_{\rm REE}\), and \(\alpha_{\rm UREE}\) are evaluated as \[\Theta_{\rm SHE} = (2/\hbar)\,J_{y}^{z}/J_{x} \tag{7}\] \[\alpha_{\rm REE} = (2ev_{F}/\hbar)\,\delta s_{y}/J_{x} \tag{8}\] \[\alpha_{\rm UREE} = (2ev_{F}/\hbar)\,\delta s_{x}/J_{x} \tag{9}\] where \(J_{x}\) is the charge current along the direction of the applied bias voltage \(V_{b}\) and \(\delta s_{x}\) (\(\delta s_{y}\)) is the current-induced nonequilibrium spin density along the \(x\) (\(y\)) axis. Analogously, \(J_{y}^{z}=(e/2)\{s_{z},v_{y}\}\) is the Hermitian operator [146; 148] of the spin current along the \(y\)-axis, which carries spins oriented along the \(z\)-axis.

Figure 10: Calculated spin relaxation anisotropy \(\xi\), employing Eq. (6). Left: Anisotropy as function of the twist angle for the different graphene/TMDC heterostructures, employing the parameters from Table 1. Middle: Anisotropy as function of the transverse electric field for MoSe\({}_{2}\) and WSe\({}_{2}\) structures for selected twist angles, employing the parameters from Table S6 and Table S7. Right: Anisotropy as function of the interlayer distance for MoSe\({}_{2}\) and WSe\({}_{2}\) structures for selected twist angles, employing the parameters from Table S4 and Table S5.

Figure 11: Calculated twist-angle dependence of the Rashba phase angle \(\varphi\). The data are summarized in Table 1. 
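The efficiency definitions of Eqs. (7)-(9) amount to simple ratios of NEGF outputs; the sketch below encodes them in natural units (\(e=\hbar=v_{F}=1\) by default), with made-up input values for illustration only, and is not the NEGF pipeline itself.

```python
def conversion_efficiencies(J_x, J_y_z, ds_x, ds_y, v_F=1.0, e=1.0, hbar=1.0):
    """Charge-to-spin conversion efficiencies of Eqs. (7)-(9).

    J_x    : charge current along the bias direction
    J_y_z  : spin current along y carrying z-polarized spins
    ds_x/y : current-induced nonequilibrium spin density components
    """
    theta_she  = (2.0 / hbar) * J_y_z / J_x          # Eq. (7), SHE
    alpha_ree  = (2.0 * e * v_F / hbar) * ds_y / J_x  # Eq. (8), REE
    alpha_uree = (2.0 * e * v_F / hbar) * ds_x / J_x  # Eq. (9), UREE
    return theta_she, alpha_ree, alpha_uree

# Illustrative (made-up) transport outputs in natural units:
print(conversion_efficiencies(J_x=1.0, J_y_z=0.02, ds_x=0.005, ds_y=0.04))
```

Since all three quantities are normalized by the same driving current \(J_x\), the efficiencies can be compared directly across twist angles and Fermi levels, as done in Figs. 13 and 14.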
The local spin and charge currents [146; 148], as well as the nonequilibrium spin density [14; 147], were calculated using the nonequilibrium Green's function formalism (NEGF) [149] applied to a Landauer geometry [145; 148], where the central region of finite length is an armchair nanoribbon attached to two semi-infinite leads terminating into macroscopic source (S) and drain (D) reservoirs at infinity. The difference of their electrochemical potentials defines the bias voltage, \(\mu_{S}-\mu_{D}=eV_{b}\). Such a clean (i.e., without any impurities) system is then periodically repeated in the transverse direction, which requires careful checking of the convergence in the \(k_{y}\)-point sampling [150]. Note that this procedure effectively models an infinite plane, while guaranteeing a continuous energy spectrum of the system Hamiltonian, which is essential [151] for properly introducing dissipation effects when calculating nonequilibrium expectation values in quantum statistical mechanics. The NEGF formalism provides the nonequilibrium density matrix for steady-state transport, \(\rho(k_{y})\), from which the expectation value of the relevant operator \(\hat{O}\) is obtained via \(O(k_{y})=\langle\hat{O}\rangle={\rm Tr}\,[\rho(k_{y})\hat{O}]\) at a single value of \(k_{y}\), while its total is an integral over the first Brillouin zone (BZ), \(O=\frac{W}{2\pi}\int dk_{y}\,O(k_{y})\), where \(W\) is the width of the nanoribbon. In Fig. 13, we show the calculated SHE, REE, and UREE efficiencies, \(\Theta_{\rm SHE}\), \(\alpha_{\rm REE}\), and \(\alpha_{\rm UREE}\), as a function of the twist angle and Fermi level for the different graphene/TMDC heterostructures, employing the model Hamiltonian parameters from Table 1. We find that graphene/WSe\({}_{2}\) has in general both the largest range and the highest values of spin conversion efficiencies, due to the highest values and variations of proximity SOC upon twisting. 
In addition, the large tunability of the Rashba phase angle is responsible for a pronounced UREE for WSe\({}_{2}\), which changes sign at a twist angle of around \(20^{\circ}\). In all cases, the UREE follows the REE according to \(\alpha_{\rm UREE}=\alpha_{\rm REE}\tan(\varphi)\), i. e., a modulation by the Rashba phase angle. Fig. 14 shows the REE and UREE efficiencies for a set of twist angles, as a function of the Fermi energy, for graphene/WSe\({}_{2}\). The overall behaviour of these curves can simply be understood via the band structure of the corresponding twisted heterostructure. Below the band gap, no states contribute to transport, but as the Fermi energy increases, different cases need to be considered. In the first case, there is no Mexican hat in the band structure and only Rashba-type SOC is present, see for example Fig. 4(c) for a twist angle of \(30^{\circ}\). Once the Fermi energy crosses the first spin-split subband, which is characterized by spin-momentum locking, a plateau in REE emerges [110]. The plateau is maintained within the Rashba pseudo-gap, followed by an algebraic decay once the second subband is reached, which contributes with opposite spin-momentum locking. In the second case, when a valley-Zeeman SOC is additionally present, as is the case for example in Fig. 4(a) for a twist angle of \(0^{\circ}\), the REE and UREE efficiencies spike before reaching the plateau. In the third case, a Mexican hat develops, see for example Fig. 4(b), due to proximity SOC that is larger than the pseudospin-asymmetry gap (inverted band structure) [11]. Instead of directly reaching the plateau or a spike as the Fermi energy increases, the REE and UREE efficiencies now ramp up slowly but still reach a plateau once the Mexican hat is overcome. The analysis from this point is identical to before.

Figure 12: Sketch of the charge-to-spin conversion processes in an experimental setup. Left: A charge current, \(J_{x}\), along the \(x\) direction results in a spin current flowing along the \(y\) direction with spins polarized along \(z\) due to SHE at the graphene/TMDC region. Right: The charge current shifts the Fermi contour, i.e., the proximitized Dirac bands, and generates a non-equilibrium spin density \(\delta s\) at the graphene/TMDC interface. The spin density has components perpendicular (REE) and parallel (UREE) to the charge current, due to the Rashba phase angle \(\varphi\neq 0\).

## IX Conclusions

In conclusion, we have performed extensive first-principles calculations to reveal the twist-angle and gate dependence of proximity-induced SOC in graphene/TMDC heterostructures. By employing a symmetry-based Hamiltonian, we have extracted orbital and spin-orbit parameters that capture the proximitized low energy Dirac bands. Our results show that the magnitude and the interplay of valley-Zeeman and Rashba SOC can be tuned via twisting, gating, encapsulation, and the interlayer distance. In particular, when twisting from \(0^{\circ}\) to \(30^{\circ}\), the induced valley-Zeeman SOC decreases almost linearly to zero for W-based TMDCs, while for Mo-based TMDCs it exhibits a maximum at around 15-20\({}^{\circ}\) before going to zero. The induced Rashba SOC stays rather constant upon twisting, and acquires a phase angle \(\varphi\neq 0\), due to symmetry breaking, for twist angles different from \(0^{\circ}\) and \(30^{\circ}\). Within our investigated electric field limits of \(\pm 2\) V/nm, mainly the Rashba SOC can be tuned, by about 50%. The interlayer distance provides a giant tunability, since the proximity-induced SOC can be increased by a factor of 2-3 when reducing the distance by only about 10%. In TMDC-encapsulated graphene, both twist angles are important to control the interference of the individual proximity-induced SOCs, allowing one to precisely tailor the valley-Zeeman SOC, while the Rashba SOC becomes suppressed. 
Based on our effective Hamiltonian with fitted parameters, we made specific predictions for experimentally measurable quantities such as the spin lifetime anisotropy and charge-to-spin conversion efficiencies. The spin lifetime anisotropy, as well as the charge-to-spin conversion efficiencies, are highly tunable by our investigated control knobs and serve as guidance for experimental measurements. Our results highlight the important impact of the twist angle, gating, interlayer distance, and encapsulation when employing van der Waals heterostructures in experiments.

Figure 13: NEGF-computed conversion efficiencies, \(\Theta_{\text{SHE}}\), \(\alpha_{\text{REE}}\) and \(\alpha_{\text{UREE}}\), as function of the twist angle and Fermi level for the different graphene/TMDC heterostructures.

###### Acknowledgements.

K. Z. and J. F. were supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1277 (Project No. 314695032), SPP 2244 (Project No. 443416183), the European Union Horizon 2020 Research and Innovation Program under contract number 881603 (Graphene Flagship) and FLAGERA project 2DSOTECH. B. K. N. was supported by the US National Science Foundation through the University of Delaware Materials Research Science and Engineering Center, DMR-2011824. The authors thank T. Naimer, E. Icking, and A. Ferreira for fruitful discussions.

## References

* Sierra _et al._ [2021]Juan F Sierra, Jaroslav Fabian, Roland K Kawakami, Stephan Roche, and Sergio O Valenzuela, "Van der waals heterostructures for spintronics and opto-spintronics," Nature Nanotechnology **16**, 856-868 (2021). * Zutic _et al._ [2019]Igor Zutic, Alex Matos-Abiague, Benedikt Scharf, Hanan Dery, and Kirill Belashchenko, "Proximitized materials," Mater. Today **22**, 85 (2019). * Gibertini _et al._ [2019]M. Gibertini, M. Koperski, A. F. Morpurgo, and K. S. Novoselov, "Magnetic 2D materials and heterostructures," Nat. Nanotechnol. **14**, 408 (2019). 
* Briggs _et al._ [2019]Natalie Briggs, Shruti Subramanian, Zhong Lin, Xufan Li, Xiaotian Zhang, Kehao Zhang, Kai Xiao, David Geohegan, Robert Wallace, Long-Qing Chen, Mauricio Terrones, Aida Ebrahimi, Saptarshi Das, Joan Redwing, Christopher Hinkle, Kasra Momeni, Adri van Duin, Vin Crespi, Swastik Kar, and Joshua A Robinson, "A roadmap for electronic grade 2D materials," 2D Mater. **6**, 022001 (2019). * Novoselov _et al._ [2016]K. S. Novoselov, A. Mishchenko, A. Carvalho, and A. H. Castro Neto, "2D materials and van der Waals heterostructures," Science **353**, aac9439 (2016). * Burch _et al._ [2018]Kenneth S. Burch, David Mandrus, and Je-Geun Park, "Magnetism in two-dimensional van der Waals materials," Nature **563**, 47 (2018). * Duong _et al._ [2017]Dinh Loc Duong, Seok Joon Yun, and Young Hee Lee, "Van der Waals Layered Materials: Opportunities and Challenges," ACS Nano **11**, 11803 (2017). * Bora and Deb [2021]M Bora and P Deb, "Magnetic proximity effect in two-dimensional van der waals heterostructure," Journal of Physics: Materials **4**, 034014 (2021). * Geim and Grigorieva [2013]A. K. Geim and I. V. Grigorieva, "Van der Waals heterostructures," Nature **499**, 419 (2013). * Gmitra and Fabian [2015]Martin Gmitra and Jaroslav Fabian, "Graphene on transition-metal dichalcogenides: A platform for proximity spin-orbit physics and optospintronics," Phys. Rev. B **92**, 155403 (2015). * Gmitra _et al._ [2016]Martin Gmitra, Denis Kochan, Petra Hogl, and Jaroslav Fabian, "Trivial and inverted dirac bands and the emergence of quantum spin hall states in graphene on transition-metal dichalcogenides," Phys. Rev. B **93**, 155104 (2016). * Szalowski _et al._ [2023]Karol Szalowski, Marko Milivojevic, Denis Kochan, and martin gmitra, "Spin-orbit and exchange proximity couplings in graphene/1t-tas2 heterostructure triggered by a charge density wave," 2D Materials (2023). 
* Naimer _et al._ [2021]Thomas Naimer, Klaus Zollner, Martin Gmitra, and Jaroslav Fabian, "Twist-angle dependent proximity induced spin-orbit coupling in graphene/transition metal dichalcogenide heterostructures," Phys. Rev. B **104**, 195156 (2021). * Zollner _et al._ [2021]Klaus Zollner, Marko D. Petrovic, Kapildeb Dolui, Petr Plechac, Branislav K. Nikolic, and Jaroslav Fabian, "Scattering-induced and highly tunable by gate damping-like spin-orbit torque in graphene doubly proximitized by two-dimensional magnet cr\({}_{2}\)ge\({}_{2}\)te\({}_{6}\) and monolayer ws\({}_{2}\)," Phys. Rev. Research **2**, 043057 (2020).

Figure 14: NEGF-computed conversion efficiencies, \(\Theta_{\rm SHE}\), \(\alpha_{\rm REE}\) and \(\alpha_{\rm UREE}\), as function of the Fermi level for selected twist angles for the graphene/WSe\({}_{2}\) heterostructure.

* Zollner and Fabian (2021)Klaus Zollner and Jaroslav Fabian, "Heterostructures of graphene and topological insulators bi2se3, bi2te3, and sb2te3," physica status solidi (b) **258**, 2000081 (2021). * Zollner _et al._ (2019)Klaus Zollner, Martin Gmitra, and Jaroslav Fabian, "Heterostructures of graphene and hBN: Electronic, spin-orbit, and spin relaxation properties from first principles," Phys. Rev. B **99**, 125151 (2019). * Herling _et al._ (2020)Franz Herling, C. K. Safeer, Josep Ingla-Aynes, Nerea Ontoso, Luis E. Hueso, and Felix Casanova, "Gate tunability of highly efficient spin-to-charge conversion by spin hall effect in graphene proximitized with wse2," APL Materials **8**, 071103 (2020). * Safeer _et al._ (2019)CK Safeer, Josep Ingla-Aynes, Franz Herling, Jose H Garcia, Marc Vila, Nerea Ontoso, M Reyes Calvo, Stephan Roche, Luis E Hueso, and Felix Casanova, "Room-temperature spin Hall effect in graphene/MoS\({}_{2}\) van der Waals heterostructures," Nano Lett. **19**, 1074 (2019). 
* Fulop _et al._ (2021)Balint Fulop, Albin Marffy, Simon Zihlmann, Martin Gmitra, Endre Tovari, Balint Szentpeteri, Mate Kedves, Kenji Watanabe, Takashi Taniguchi, Jaroslav Fabian, _et al._, "Boosting proximity spin-orbit coupling in graphene/wse2 heterostructures via hydrostatic pressure," npj 2D Materials and Applications **5**, 82 (2021). * Khokhirakov _et al._ (2020)Dmitrii Khokhirakov, Anamul Md. Hoque, Bogdan Karpiak, and Saroj P. Dash, "Gate-tunable spin-galvanic effect in graphene-topological insulator van der waals heterostructures at room temperature," Nature Communications **11**, 3657 (2020). * Zihlmann _et al._ (2018)Simon Zihlmann, Aron W. Cummings, Jose H. Garcia, Mate Kedves, Kenji Watanabe, Takashi Taniguchi, Christian Schonenberger, and Peter Makk, "Large spin relaxation anisotropy and valley-zeeman spin-orbit coupling in wse2/graphene/\(h\)-bn heterostructures," Phys. Rev. B **97**, 075434 (2018). * Song _et al._ (2018)Kenan Song, David Soriano, Aron W. Cummings, Roberto Robles, Pablo Ordejon, and Stephan Roche, "Spin Proximity Effects in Graphene/Topological Insulator Heterostructures," Nano Lett. **18**, 2033 (2018). * Garcia _et al._ (2018)Jose H. Garcia, Marc Vila, Aron W. Cummings, and Stephan Roche, "Spin transport in graphene/transition metal dichalcogenide heterostructures," Chem. Soc. Rev. **47**, 3359 (2018). * Khoo _et al._ (2017)Jun Yong Khoo, Alberto F. Morpurgo, and Leonid Levitov, "On-Demand Spin-Orbit Interaction from Which-Layer Tunability in Bilayer Graphene," Nano Lett. **17**, 7003 (2017). * Omar and van Wees (2017)S. Omar and B. J. van Wees, "Graphene-ws\({}_{2}\) heterostructures for tunable spin injection and spin transport," Phys. Rev. B **95**, 081404 (2017). * Omar and van Wees (2018)S. Omar and B. J. van Wees, "Spin transport in high-mobility graphene on ws\({}_{2}\) substrate with electric-field tunable proximity spin-orbit interaction," Phys. Rev. B **97**, 045414 (2018). 
* Tiwari _et al._ (2022)Priya Tiwari, Mohit Kumar Jat, Adithi Udupa, Deepa S Narang, Kenji Watanabe, Takashi Taniguchi, Diptiman Sen, and Aveek Bid, "Experimental observation of spin- split energy dispersion in high-mobility single-layer graphene/wse2 heterostructures," npj 2D Materials and Applications **6**, 68 (2022). * Zollner _et al._ (2018)Klaus Zollner, Martin Gmitra, and Jaroslav Fabian, "Electrically tunable exchange splitting in bilayer graphene on monolayer Cr 2 X 2 Te 6 with X = Ge, Si, and Sn," New J. Phys. **20**, 073007 (2018). * Zollner and Fabian (2022)Klaus Zollner and Jaroslav Fabian, "Engineering proximity exchange by twisting: Reversal of ferromagnetic and emergence of antiferromagnetic dirac bands in Graphene/crg\({}_{2}\)e\({}_{2}\)e\({}_{6}\)," Phys. Rev. Lett. **128**, 106401 (2022). * Dyrdal and Barnas (2017)A Dyrdal and J Barnas, "Anomalous, spin, and valley hall effects in graphene deposited on ferromagnetic substrates," 2D Materials **4**, 034003 (2017). * Hallal _et al._ (2017)Ali Hallal, Fatima Ibrahim, Hongxin Yang, Stephan Roche, and Maiurebek Chshiev, "Tailoring magnetic insulator proximity effects in graphene: first-principles calculations," 2D Mater. **4**, 025074 (2017). * Cardoso _et al._ (2018)C. Cardoso, D. Soriano, N. A. Garcia-Martinez, and J. Fernandez-Rossier, "Van der waals spin valves," Phys. Rev. Lett. **121**, 067701 (2018). * Karpiak _et al._ (2019)Bogdan Karpiak, Aron W. Cummings, Klaus Zollner, Marc Vila, Dmitrii Khokhirakov, Anamul Md Hoque, Andre Dankert, Peter Svedlindh, Jaroslav Fabian, Stephan Roche, and Saroj P. Dash, "Magnetic proximity in a van der Waals heterostructure of magnetic insulator and graphene," 2D Mater. **7**, 015026 (2019). * Zollner _et al._ (2016)Klaus Zollner, Martin Gmitra, Tobias Frank, and Jaroslav Fabian, "Theory of proximity-induced exchange coupling in graphene on hbn/(co, ni)," Phys. Rev. B **94**, 155441 (2016). 
* Zhang _et al._ (2015)Jiayong Zhang, Bao Zhao, Yugui Yao, and Zhongqin Yang, "Robust quantum anomalous hall effect in graphene-based van der waals heterostructures," Phys. Rev. B **92**, 165418 (2015). * Zhang _et al._ (2018)Jiayong Zhang, Bao Zhao, Tong Zhou, Yang Xue, Chunlan Ma, and Zhongqin Yang, "Strong magnetization and Chern insulators in compressed graphene/CrI\({}_{3}\) van der Waals heterostructures," Phys. Rev. B **97**, 085401 (2018). * Yang _et al._ (2013)H. X. Yang, A. Hallal, D. Terrade, X. Waintal, S. Roche, and M. Chshiev, "Proximity effects induced in graphene by magnetic insulators: First-principles calculations on spin filtering and exchange-splitting gaps," Phys. Rev. Lett. **110**, 046603 (2013). * Song (2018)Yu Song, "Electric-field-induced extremely large change in resistance in graphene ferromagnets," J. Phys. D: Appl. Phys. **51**, 025002 (2018). * Haugen _et al._ (2008)Havard Haugen, Daniel Huertas-Hernando, and Arne Brataas, "Spin transport in proximity-induced ferromagnetic graphene," Phys. Rev. B **77**, 115406 (2008). * Zhang _et al._ (2015)Jiayong Zhang, Bao Zhao, Yugui Yao, and Zhongqin Yang, "Quantum Anomalous Hall Effect in Graphene-based Heterostructure," Sci. Rep. **5**, 10629 (2015). * Su _et al._ (2017)Shanshan Su, Yafis Barlas, Junxue Li, Jing Shi, and Roger K. Lake, "Effect of intervalley interaction on band topology of commensurate graphene/euo heterostructures," Phys. Rev. B **95**, 075418 (2017). * Singh _et al._ (2017)Simarnejet Singh, Jyoti Katoch, Tiancong Zhu, Keng Yuan Meng, Tianyu Liu, Jack T. Brangham, Fengyuan Yang, Michael E. Flatte, and Roland K. Kawakami, "Strong Modulation of Spin Currents in Bilayer Graphene by Static and Fluctuating Proximity Exchange Fields," Phys. Rev. Lett. **118**, 187201 (2017). * Swartz _et al._ (2012)Adrian G. Swartz, Patrick M. Odenthal, Yufeng Hao, Rodney S. Ruoff, and Roland K. Kawakami, "Integration of the ferromagnetic insulator EuO onto graphene," ACS Nano **6**, 10063 (2012). 
* Moriya _et al._ [2020]Rai Moriya, Naoto Yabuki, and Tomoki Machida, "Superconducting proximity effect in a NbSe\({}_{2}\)/graphene van der waals junction," Phys. Rev. B **101**, 054503 (2020). * Zutic _et al._ [2004]Igor Zutic, Jaroslav Fabian, and S. Das Sarma, "Spintronics: Fundamentals and applications," Rev. Mod. Phys. **76**, 323 (2004). * Wang _et al._ [2015]Zhe Wang, Dong-Keun Ki, Hua Chen, Helmuth Berger, Allan H MacDonald, and Alberto F Morpurgo, "Strong interface-induced spin-orbit interaction in graphene on WS2," Nat. Commun. **6**, 8339 (2015). * Island _et al._ [2019]J. O. Island, X. Cui, C. Lewandowski, J. Y. Khoo, E. M. Spanton, H. Zhou, D. Rhodes, J. C. Hone, T. Taniguchi, K. Watanabe, L. S. Levitov, M. P. Zaletel, and A. F. Young, "Spin-orbit-driven band inversion in bilayer graphene by the van der Waals proximity effect," Nature **571**, 85 (2019). * Ghiasi _et al._ [2019]Talieh S. Ghiasi, Alexey A. Kaverzin, Patrick J. Blah, and Bart J. van Wees, "Charge-to-spin conversion by the rashba-edelstein effect in two-dimensional van der waals heterostructures up to room temperature," Nano Letters **19**, 5959-5966 (2019). * Benitez _et al._ [2020]L Antonio Benitez, Williams Savero Torres, Juan F Sierra, Matias Timmermans, Jose H Garcia, Stephan Roche, Marius V Costache, and Sergio O Valenzuela, "Tunable room-temperature spin galvanic and spin Hall effects in van der Waals heterostructures," Nat. Mater. **19**, 170 (2020). * Hoque _et al._ [2021]Anamul Md Hoque, Dmitrii Khokhriakov, Klaus Zollner, Bing Zhao, Bogdan Karpiak, Jaroslav Fabian, and Saroj P Dash, "All-electrical creation and control of spin-galvanic signal in graphene and molybdenum ditelluride heterostructures at room temperature," Communications Physics **4**, 124 (2021). 
* Amann _et al._ [2021]Julia Amann, Tobias Volkl, Tobias Rockinger, Denis Kochan, Kenji Watanabe, Takashi Taniguchi, Jaroslav Fabian, Dieter Weiss, and Jonathan Eroms, "Counterintuitive gate dependence of weak antilocalization in bilayer graphene/wse2 heterostructures," Phys. Rev. B **105**, 115425 (2022). * Khatibi and Power [2022]Zahra Khatibi and Stephen R. Power, "Proximity spin-orbit coupling in graphene on alloyed transition metal dichalcogenides," Phys. Rev. B **106**, 125417 (2022). * Nugera _et al._ [2022]Florence A. Nugera, Prasana K. Sahoo, Yan Xin, Sharad Ambardar, Dmitri V. Voronine, Un Jeong Kim, Yoojong Han, Hyungbin Son, and Humberto R. Gutierrez, "Bandgap engineering in 2d lateral heterostructures of transition metal dichalcogenides via controlled alloying," Small **18**, 2106600 (2022). * Fulop _et al._ [2021]Balint Fulop, Albin Marffy, Endre Tovari, Mate Kedves, Simon Zihlmann, David Indolese, Zoltan Kovacs-Krausz, Kenji Watanabe, Takashi Taniguchi, Christian Schonenberger, Istvan Kezsmarki, Peter Makk, and Szabolcs Csonka, "New method of transport measurements on van der Waals heterostructures under pressure," Journal of Applied Physics **130**, 064303 (2021). * Avsar _et al._ [2017]Ahmet Avsar, Dmitrii Unuchek, Jiawei Liu, Oriol Lopez Sanchez, Kenji Watanabe, Takashi Taniguchi, Barbaros Ozyilmaz, and Andras Kis, "Optospintronics in Graphene via Proximity Coupling," ACS Nano **11**, 11678 (2017). * Kelly Luo _et al._ [2017]Yunqi Kelly Luo, Jinsong Xu, Tiancong Zhu, Guanzhong Wu, Elizabeth J. McCormick, Wenbo Zhan, Mahesh R. Neupane, and Roland K. Kawakami, "Opto-valleytronic spin injection in monolayer MoS2/few-layer graphene hybrid spin valves," Nano Lett. **17**, 3877 (2017). 
* Ingla-Aynes _et al._ [2022]Josep Ingla-Aynes, Inge Groen, Franz Herling, Nerea Ontoso, C K Safeer, Fernando de Juan, Luis E Hueso, Marco Gobbi, and Felix Casanova, "Omnidirectional spin-to-charge conversion in graphene/nbse\({}_{2}\) van der waals heterostructures," 2D Materials **9**, 045001 (2022). * Cummings _et al._ [2017]Aron W. Cummings, Jose H. Garcia, Jaroslav Fabian, and Stephan Roche, "Giant spin lifetime anisotropy in graphene induced by proximity effects," Phys. Rev. Lett. **119**, 206601 (2017). * Ghiasi _et al._ [2017]Talieh S. Ghiasi, Josep Ingla-Aynes, Alexey A. Kaverzin, and Bart J. Van Wees, "Large Proximity-Induced Spin Lifetime Anisotropy in Transition-Metal Dichalcogenide/Graphene Heterostructures," Nano Lett. **17**, 7528 (2017). * Offidani and Ferreira [2018]Manuel Offidani and Aires Ferreira, "Microscopic theory of spin relaxation anisotropy in graphene with proximity-induced spin-orbit coupling," Phys. Rev. B **98**, 245408 (2018). * Leutenantsmeyer _et al._ [2018]Johannes Christian Leutenantsmeyer, Josep Ingla-Aynes, Jaroslav Fabian, and Bart J. van Wees, "Observation of Spin-Valley-Coupling-Induced Large Spin-Lifetime Anisotropy in Bilayer Graphene," Phys. Rev. Lett. **121**, 127702 (2018). * Omar _et al._ [2019]S. Omar, B. N. Madhushankar, and B. J. van Wees, "Large spin-relaxation anisotropy in bilayer-graphene/wsa heterostructures," Phys. Rev. B **100**, 155415 (2019). * Ingla-Aynes _et al._ [2021]Josep Ingla-Aynes, Franz Herling, Jaroslav Fabian, Luis E. Hueso, and Felix Casanova, "Electrical control of valley-zeeman spin-orbit-coupling-induced spin precession at room temperature," Phys. Rev. Lett. **127**, 047202 (2021). * Carr _et al._ [2017]Stephen Carr, Daniel Massatt, Shiang Fang, Paul Cazeaux, Mitchell Luskin, and Efthimios Kaxiras, "Twistronics: Manipulating the electronic properties of two-dimensional layered structures through their twist angle," Phys. Rev. B **95**, 075420 (2017). 
* Hennighausen and Kar [2021]Zachariah Hennighausen and Swastik Kar, "Twistronics: a turning point in 2d quantum materials," Electronic Structure **3**, 014004 (2021). * Ribeiro-Palau _et al._ [2018]Rebecca Ribeiro-Palau, Changjian Zhang, Kenji Watanabe, Takashi Taniguchi, James Hone, and Cory R. Dean, "Twistable electronics with dynamically rotatable heterostructures," Science **361**, 690-693 (2018). * Carr _et al._ [2020]Stephen Carr, Shiang Fang, and Efthimios Kaxiras, "Electronic-structure methods for twisted moire layers," Nature Reviews Materials **5**, 748-763 (2020). * Cao _et al._ [2018]Yuan Cao, Valla Fatemi, Shiang Fang, Kenji Watanabe, Takashi Taniguchi, Efthimios Kaxiras, and Pablo Jarillo-Herrero, "Unconventional superconductivity in magic-angle graphene superlattices," Nature **556**, 43 (2018). * Cao _et al._ [2018]Yuan Cao, Valla Fatemi, Ahmet Demir, Shiang Fang, Spencer L. Tomarken, Jason Y. Luo, Javier D. Sanchez-Yamagishi, Kenji Watanabe, Takashi Taniguchi, Efthimios Kaxiras, Ray C. Ashoori, and Pablo Jarillo-Herrero, "Correlated insulator behaviour at half-filling in magic-angle graphene superlattices," Nature **556**, 80 (2018). * Arora _et al._ [2018]Harpreet Singh Arora, Robert Polski, Yiran Zhang, Alex Thomson, Youngjoon Choi, Hyunjin Kim, Zhong Lin, Ilham Zaky Wilson, Xiaodong Xu, Jiun-Haw Chu, _et al._, "Superconductivity in metallic twisted bilayer graphene stabilized by wse2," Nature **583**, 379-384 (2020). * Stepanov _et al._ (2020)Petr Stepanov, Ipsita Das, Xiaobo Lu, Ali Fahimniya, Kenji Watanabe, Takashi Taniguchi, Frank HL Koppens, Johannes Lischner, Leonid Levitov, and Dmitri K Efetov, "Untying the insulating and superconducting orders in magic-angle graphene," Nature **583**, 375-378 (2020). 
* Lu _et al._ (2019)Xiaobo Lu, Petr Stepanov, Wei Yang, Ming Xie, Mohammed Ali Aamir, Ipsita Das, Carles Urgell, Kenji Watanabe, Takashi Taniguchi, Guangyu Zhang, _et al._, "Superconductors, orbital magnets and correlated states in magic-angle bilayer graphene," Nature **574**, 653-657 (2019). * Sharpe _et al._ (2019)Aaron L. Sharpe, Eli J. Fox, Arthur W. Barnard, Joe Finney, Kenji Watanabe, Takashi Taniguchi, M. A. Kastner, and David Goldhaber-Gordon, "Emergent ferromagnetism near three-quarters filling in twisted bilayer graphene," Science **365**, 605-608 (2019). * Saito _et al._ (2021)Yu Saito, Fangyuan Yang, Jingyuan Ge, Xiaoxue Liu, Takashi Taniguchi, Kenji Watanabe, JIA Li, Erez Berg, and Andrea F Young, "Isospin pomeranchuk effect in twisted bilayer graphene," Nature **592**, 220-224 (2021). * Serlin _et al._ (2020)M. Serlin, C. L. Tschirhart, H. Polshyn, Y. Zhang, J. Zhu, K. Watanabe, T. Taniguchi, L. Balents, and A. F. Young, "Intrinsic quantized anomalous hall effect in a moire heterostructure," Science **367**, 900-903 (2020). * Nimbalkar and Kim (2020)Amol Nimbalkar and Hyunmin Kim, "Opportunities and challenges in twisted bilayer graphene: a review," Nano-Micro Letters **12**, 126 (2020). * Bultinck _et al._ (2020)Nick Bultinck, Shubhayu Chatterjee, and Michael P. Zaletel, "Mechanism for anomalous hall ferromagnetism in twisted bilayer graphene," Phys. Rev. Lett. **124**, 166601 (2020). * Repellin _et al._ (2020)Cecile Repellin, Zhihuan Dong, Ya-Hui Zhang, and T. Senthil, "Ferromagnetism in narrow bands of moire superlattices," Phys. Rev. Lett. **124**, 187601 (2020). * Choi _et al._ (2019)Youngjoon Choi, Jeannette Kemmer, Yang Peng, Alex Thomson, Harpreet Arora, Robert Polski, Yiran Zhang, Hechen Ren, Jason Alicea, Gil Refael, _et al._, "Electronic correlations in twisted bilayer graphene near the magic angle," Nature Physics **15**, 1174-1180 (2019). 
* Lisi _et al._ (2021)Simone Lisi, Xiaobo Lu, Tjerk Benschop, Tobias A de Jong, Petr Stepanov, Jose R Duran, Florian Margot, Irene Cucchi, Edoardo Cappelli, Andrew Hunter, _et al._, "Observation of flat bands in twisted bilayer graphene," Nature Physics **17**, 189-193 (2021). * Balents _et al._ (2020)Leon Balents, Cory R Dean, Dmitri K Efetov, and Andrea F Young, "Superconductivity and strong correlations in moire flat bands," Nature Physics **16**, 725-733 (2020). * Wolf _et al._ (2019)T. M. R. Wolf, J. L. Lado, G. Blatter, and O. Zilberberg, "Electrically tunable flat bands and magnetism in twisted bilayer graphene," Phys. Rev. Lett. **123**, 096802 (2019). * Gobato _et al._ (2022)Y. Galvão Gobato, C. Serati de Brito, A. Chaves, M. A. Prosnikov, T. Wozniak, Shi Guo, Ingrid D. Barcelos, M. V. Milosevic, F. Withers, and P. C. M. Christianen, "Distinctive g-factor of moire-confined excitons in van der waals heterostructures," Nano Letters **22**, 8641-8646 (2022). * Lin _et al._ (2023)Bo-Han Lin, Yung-Chun Chao, I-Ta Hsieh, Chih-Piao Chuu, Chien-Ju Lee, Fu-Hsien Chu, Li-Syuan Lu, Wei-Ting Hsu, Chun-Wei Pao, Chih-Kang Shih, Jung-Jung Su, and Wen-Hao Chang, "Remarkably deep moire potential for intralayer excitons in mose2/mos2 twisted heterobilayers," Nano Letters **23**, 1306-1312 (2023). * Zollner _et al._ (2023)Klaus Zollner, Paulo E. Faria Junior, and Jaroslav Fabian, "Strong manipulation of the valley splitting upon twisting and gating in \(\text{mose}_{2}/\text{cri}_{3}\) and \(\text{wse}_{2}/\text{cri}_{3}\) van der waals heterostructures," Phys. Rev. B **107**, 035112 (2023). * Pezo _et al._ (2021)Armando Pezo, Zeila Zanolli, Nils Wittemeier, Pablo Ordejon, Adalberto Fazzio, Stephan Roche, and Jose H Garcia, "Manipulation of spin transport in graphene/transition metal dichalcogenide heterobilayers upon twisting," 2D Materials **9**, 015008 (2021).
* David _et al._ (2019)Alessandro David, Peter Rakyta, Andor Kormanyos, and Guido Burkard, "Induced spin-orbit coupling in twisted graphene-transition metal dichalcogenide heterobilayers: Twistronics meets spintronics," Phys. Rev. B **100**, 085412 (2019). * Li and Koshino (2019)Yang Li and Mikito Koshino, "Twist-angle dependence of the proximity spin-orbit coupling in graphene on transition-metal dichalcogenides," Phys. Rev. B **99**, 075438 (2019). * Lee _et al._ (2022)Seungjun Lee, D. J. P. de Sousa, Young-Kyun Kwon, Fernando de Juan, Zhendong Chi, Felix Casanova, and Tony Low, "Charge-to-spin conversion in twisted graphene/wse\({}_{2}\) heterostructures," Phys. Rev. B **106**, 165420 (2022). * Wang _et al._ (2022)Jicui Wang, Mei Ge, Rongrong Ma, Yun Sun, Liyuan Cheng, Rui Wang, Miaomiao Guo, and Junfeng Zhang, "Twist angle dependent electronic properties in 2D graphene/MoS2 vdW heterostructures," Journal of Applied Physics **131**, 034301 (2022). * Rockinger and Eroms (2022)T. Rockinger and J. Eroms, private communication (2022). * Benitez _et al._ (2018)L. Antonio Benitez, Juan F. Sierra, Williams Savero Torres, Alois Arrighi, Frederic Bonell, Marius V. Costache, and Sergio O. Valenzuela, "Strongly anisotropic spin relaxation in graphene-transition metal dichalcogenide heterostructures at room temperature," Nat. Phys. **14**, 303 (2018). * Bahn and Jacobsen (2002)S. R. Bahn and K. W. Jacobsen, "An object-oriented scripting interface to a legacy electronic structure code," Comput. Sci. Eng. **4**, 56 (2002). * 334 (2015). * Koda _et al._ (2016)Daniel S Koda, Friedhelm Bechstedt, Marcelo Marques, and Lara K Teles, "Coincidence lattices of 2d crystals: heterostructure predictions and applications," The Journal of Physical Chemistry C **120**, 10895-10908 (2016). * Baskin and Meyer (1955)Y. Baskin and L. Meyer, "Lattice constants of graphite at low temperatures," Phys. Rev. **100**, 544 (1955). * Wakabayashi _et al._ (1975)N. Wakabayashi, H. G. Smith, and R. M. 
Nicklow, "Lattice dynamics of hexagonal MoS2 studied by neutron scattering," Phys. Rev. B **12**, 659 (1975). * Schutte _et al._ (1987)W. J. Schutte, J. L. De Boer, and F. Jellinek, "Crystal structures of tungsten disulfide and diselenide," Journal of Solid State Chemistry **70**, 207 (1987). * James and Lavik (1963)P. B. James and M. T. Lavik, "The crystal structure of MoSe2," Acta Crystallographica **16**, 1183 (1963). * Zollner _et al._ (2019)Klaus Zollner, Paulo E. Faria Junior, and Jaroslav Fabian, "Strain-tunable orbital, spin-orbit, and optical properties of monolayer transition-metal dichalcogenides," Phys. Rev. B **100**, 195126 (2019). * Hohenberg and Kohn (1964)P. Hohenberg and W. Kohn, "Inhomogeneous electron gas," Phys. Rev. **136**, B864 (1964). * Giannozzi _et al._ (2020)Paolo Giannozzi, Stefano Baroni, Nicola Bonini, Matteo Calandra, Roberto Car, Carlo Cavazzoni, Davide Ceresoli, Guido L Chiarotti, Matteo Cococcioni, Ismaila Dabo, Andrea Dal Corso, Stefano de Gironcoli, Stefano Fabris, Guido Fratesi, Ralph Gebauer, Uwe Gerstmann, Christos Gougoussis, Anton Kokalj, Michele Lazzeri, Layla Martin-Samos, Nicola Marzari, Francesco Mauri, Riccardo Mazzarello, Stefano Paolini, Alfredo Pasquarello, Lorenzo Paulatto, Carlo Sbraccia, Sandro Scandolo, Gabriele Sclauzero, Ari P Seitsonen, Alexander Smogunov, Paolo Umari, and Renata M Wentzcovitch, "Quantum espresso: a modular and open-source software project for quantum simulations of materials," Journal of Physics: Condensed Matter **21**, 395502 (2009). * Kresse and Joubert (1999)G. Kresse and D. Joubert, "From ultrasoft pseudopotentials to the projector augmented-wave method," Phys. Rev. B **59**, 1758 (1999). * Perdew _et al._ (1996)John P. Perdew, Kieron Burke, and Matthias Ernzerhof, "Generalized gradient approximation made simple," Phys. Rev. Lett. **77**, 3865 (1996). * Grimme (2006)Stefan Grimme, "Semiempirical gga-type density functional constructed with a long-range dispersion correction," J. Comput. 
Chem. **27**, 1787 (2006). * Grimme _et al._ (2010)Stefan Grimme, Jens Antony, Stephan Ehrlich, and Helge Krieg, "A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu," J. Chem. Phys. **132**, 154104 (2010). * Barone _et al._ (2009)Vincenzo Barone, Maurizio Casarin, Daniel Forrer, Michele Pavone, Mauro Sambi, and Andrea Vittadini, "Role and effective treatment of dispersive forces in materials: Polyethylene and graphite crystals as test cases," J. Comput. Chem. **30**, 934 (2009). * Bengtsson (1999)Lennart Bengtsson, "Dipole correction for surface supercell calculations," Phys. Rev. B **59**, 12301 (1999). * Veneri _et al._ (2022)Alessandro Veneri, David T. S. Perkins, Csaba G. Peterfalvi, and Aires Ferreira, "Twist angle controlled collinear edelstein effect in van der waals heterostructures," Phys. Rev. B **106**, L081406 (2022). * Offidani _et al._ (2017)Manuel Offidani, Mirco Milletari, Roberto Raimondi, and Aires Ferreira, "Optimal charge-to-spin conversion in graphene on transition-metal dichalcogenides," Phys. Rev. Lett. **119**, 196801 (2017). * Hogl _et al._ (2020)Petra Hogl, Tobias Frank, Klaus Zollner, Denis Kochan, Martin Gmitra, and Jaroslav Fabian, "Quantum anomalous hall effects in graphene from proximity-induced uniform and staggered spin-orbit and exchange coupling," Phys. Rev. Lett. **124**, 136403 (2020). * Frank _et al._ (2018)Tobias Frank, Petra Hogl, Martin Gmitra, Denis Kochan, and Jaroslav Fabian, "Protected pseudohelical edge states in \(\mathbb{Z}_{2}\)-trivial proximitized graphene," Phys. Rev. Lett. **120**, 156402 (2018). * Zollner _et al._ (2021)Klaus Zollner, Aron W. Cummings, Stephan Roche, and Jaroslav Fabian, "Graphene on two-dimensional hexagonal bn, aln, and gan: Electronic, spin-orbit, and spin relaxation properties," Phys. Rev. B **103**, 075129 (2021).
* Zollner _et al._ (2020)Klaus Zollner, Martin Gmitra, and Jaroslav Fabian, "Swapping exchange and spin-orbit coupling in 2d van der waals heterostructures," Phys. Rev. Lett. **125**, 196402 (2020). * Zollner and Fabian (2021)Klaus Zollner and Jaroslav Fabian, "Bilayer graphene encapsulated within monolayers of ws\({}_{2}\) or cr\({}_{2}\)ge\({}_{2}\)te\({}_{6}\): Tunable proximity spin-orbit or exchange coupling," Phys. Rev. B **104**, 075126 (2021). * Zollner _et al._ (2022)Klaus Zollner, Martin Gmitra, and Jaroslav Fabian, "Proximity spin-orbit and exchange coupling in aba and abc trilayer graphene van der waals heterostructures," Phys. Rev. B **105**, 115126 (2022). * Kochan _et al._ (2017)Denis Kochan, Susanne Irmer, and Jaroslav Fabian, "Model spin-orbit coupling hamiltonians for graphene systems," Phys. Rev. B **95**, 165415 (2017). * Kane and Mele (2005)C. L. Kane and E. J. Mele, "Quantum spin Hall effect in graphene," Phys. Rev. Lett. **95**, 226801 (2005). * Newville _et al._ (2014)Matthew Newville, Till Stensitzki, Daniel B. Allen, and Antonino Ingargiola, "LMFIT: Non-Linear Least-Square Minimization and Curve-Fitting for Python," (2014). * Gmitra _et al._ (2009)M. Gmitra, S. Konschuh, C. Ertler, C. Ambrosch-Draxl, and J. Fabian, "Band-structure topologies of graphene: Spin-orbit coupling effects from first principles," Phys. Rev. B **80**, 235431 (2009). * Sichau _et al._ (2019)J. Sichau, M. Prada, T. Anlauf, T. J. Lyon, B. Bosnjak, L. Tiemann, and R. H. Blick, "Resonance microwave measurements of an intrinsic spin-orbit coupling gap in graphene: A possible indication of a topological state," Phys. Rev. Lett. **122**, 046403 (2019). * Si _et al._ (2016)Chen Si, Zhimei Sun, and Feng Liu, "Strain engineering of graphene: a review," Nanoscale **8**, 3207-3217 (2016). * Choi _et al._ (2010)Seon-Myeong Choi, Seung-Hoon Jhi, and Young-Woo Son, "Effects of strain on electronic properties of graphene," Phys. Rev. B **81**, 081407 (2010).
* Pierucci _et al._ (2016)Debora Pierucci, Hugo Henck, Jose Avila, Adrian Balan, Carl H. Naylor, Gilles Patriarche, Yannick J. Dappe, Mathieu G. Silly, Fausto Sirotti, A. T. Charlie Johnson, Maria C. Asensio, and Abdelkarim Ouerghi, "Band alignment and minigaps in monolayer mos2-graphene van der waals heterostructures," Nano Letters **16**, 4054-4061 (2016). * Kim _et al._ (2015)Kyounghwan Kim, Stefano Larentis, Babak Fallahazad, Kayoung Lee, Jiamin Xue, David C. Dillen, Chris M. Corbet, and Emanuel Tutuc, "Band alignment in wse2-graphene heterostructures," ACS Nano **9**, 4527-4532 (2015). * Grassano _et al._ (2020)D. Grassano, M. D'Alessandro, O. Pulci, S. G. Sharapov, V. P. Gusynin, and A. A. Varlamov, "Work function, deformation potential, and collapse of landau levels in strained graphene and silicene," Phys. Rev. B **101**, 245115 (2020). * (127)See Supplemental Material, including Refs. [13, 87, 117, 152-163] where we summarize structural information and fit results in tabular form for the investigated heterostructures. We also analyze the origin of proximity SOC and give details on the real space transport calculations. * Peterfalvi _et al._ (2022)Csaba G. Peterfalvi, Alessandro David, Peter Rakyta, Guido Burkard, and Andor Kormanyos, "Quantum interference tuning of spin-orbit coupling in twisted van der waals trilayers," Phys. Rev. Res. **4**, L022049 (2022). * Fang _et al._ (2015)Shiang Fang, Rodrick Kuate Defo, Sharmin N. Shirodkar, Simon Lieu, Georgios A. Tritsaris, and Efthimios Kaxiras, "Ab initio tight-binding hamiltonian for transition metal dichalcogenides," Phys. Rev. B **92**, 205108 (2015). * Safeer _et al._ [2021]C K Safeer, Franz Herling, Won Young Choi, Nerea Ontoso, Josep Ingla-Aynes, Luis E Hueso, and Felix Casanova, "Reliability of spin-to-charge conversion measurements in graphene-based lateral spin valves," 2D Materials **9**, 015024 (2021).
* Monaco _et al._ [2021]Carmen Monaco, Aires Ferreira, and Roberto Raimondi, "Spin hall and inverse spin galvanic effects in graphene with strong interfacial spin-orbit coupling: A quasi-classical green's function approach," Phys. Rev. Research **3**, 033137 (2021). * Ferreira [2021]Aires Ferreira, "Theory of spin-charge-coupled transport in proximitized graphene: an SO(5) algebraic approach," Journal of Physics: Materials **4**, 045006 (2021). * Milletari _et al._ [2017]Mirco Milletari, Manuel Offidani, Aires Ferreira, and Roberto Raimondi, "Covariant conservation laws and the spin hall effect in dirac-rashba systems," Phys. Rev. Lett. **119**, 246801 (2017). * Dyrdal _et al._ [2014]A. Dyrdal, J. Barnas, and V. K. Dugaev, "Current-induced spin polarization in graphene due to rashba spin-orbit interaction," Phys. Rev. B **89**, 075422 (2014). * Garcia _et al._ [2017]Jose H. Garcia, Aron W. Cummings, and Stephan Roche, "Spin hall effect and weak antilocalization in graphene/transition metal dichalcogenide heterostructures," Nano Letters **17**, 5078-5083 (2017). * Camosi _et al._ [2022]Lorenzo Camosi, Josef Svetlik, Marius V Costache, Williams Savero Torres, Ivan Fernandez Aguirre, Vera Marinova, Dimitre Dimitrov, Marin Gospodinov, Juan F Sierra, and Sergio O Valenzuela, "Resolving spin currents and spin densities generated by charge-spin interconversion in systems with reduced crystal symmetry," 2D Materials **9**, 035014 (2022). * Zhao _et al._ [2020]Bing Zhao, Bogdan Karpiak, Dmitrii Khokhriakov, Annika Johansson, Anamul Md. Hoque, Xiaoguang Xu, Yong Jiang, Ingrid Mertig, and Saroj P. Dash, "Unconventional charge-spin conversion in weyl-semimetal wte2," Advanced Materials **32**, 2000818 (2020). * Safeer _et al._ [2019]C. K. Safeer, Nerea Ontoso, Josep Ingla-Aynes, Franz Herling, Van Tuong Pham, Annika Kurzmann, Klaus Ensslin, Andrey Chuvilin, Inigo Robredo, Maia G. Vergniory, Fernando de Juan, Luis E. Hueso, M.
Reyes Calvo, and Felix Casanova, "Large multidirectional spin-to-charge conversion in low-symmetry semimetal mote2 at room temperature," Nano Letters **19**, 8758-8766 (2019). * Ontoso _et al._ [2023]Nerea Ontoso, C. K. Safeer, Franz Herling, Josep Ingla-Aynes, Haozhe Yang, Zhendong Chi, Beatriz Martin-Garcia, Inigo Robredo, Maia G. Vergniory, Fernando de Juan, M. Reyes Calvo, Luis E. Hueso, and Felix Casanova, "Unconventional charge-to-spin conversion in graphene/mote\({}_{2}\) van der waals heterostructures," Phys. Rev. Appl. **19**, 014053 (2023). * Culcer and Winkler [2007]Dimitrie Culcer and R. Winkler, "Generation of spin currents and spin densities in systems with reduced symmetry," Phys. Rev. Lett. **99**, 226601 (2007). * Rojas-Sanchez and Fert [2019]J.-C. Rojas-Sanchez and A. Fert, "Compared efficiencies of conversions between charge and spin current by spin-orbit interactions in two- and three-dimensional systems," Phys. Rev. Applied **11**, 054049 (2019). * Rojas-Sanchez _et al._ [2014]J.-C. Rojas-Sanchez, N. Reyren, P. Laczkowski, W. Savero, J.-P. Attane, C. Deranlot, M. Jamet, J.-M. George, L. Vila, and H. Jaffres, "Spin pumping and inverse spin hall effect in platinum: The essential role of spin-memory loss at metallic interfaces," Phys. Rev. Lett. **112**, 106602 (2014). * Kim _et al._ [2016]Junyeon Kim, Peng Sheng, Saburo Takahashi, Seiji Mitani, and Masamitsu Hayashi, "Spin hall magnetoresistance in metallic bilayers," Phys. Rev. Lett. **116**, 097201 (2016). * Wang _et al._ [2014]Yi Wang, Praveen Deorani, Xuepeng Qiu, Jae Hyun Kwon, and Hyunsoo Yang, "Determination of intrinsic spin hall angle in pt," Applied Physics Letters **105**, 152412 (2014). * Nikolic _et al._ [2018]Branislav K. Nikolic, Kapildeb Dolui, Marko D.
Petrovic, Petr Plechac, Troels Markussen, and Kurt Stokbro, "First-principles quantum transport modeling of spin-transfer and spin-orbit torques in magnetic multilayers," in _Handbook of Materials Modeling: Applications: Current and Emerging Materials_, edited by Wanda Andreoni and Sidney Yip (Springer International Publishing, Cham, 2018) pp. 1-35. * Nikolic _et al._ [2006]Branislav K. Nikolic, Liviu P. Zarbo, and Satofumi Souma, "Imaging mesoscopic spin hall flow: Spatial distribution of local spin currents and spin densities in and out of multiterminal spin-orbit coupled semiconductor nanostructures," Phys. Rev. B **73**, 075303 (2006). * Nikolic _et al._ [2005]Branislav K. Nikolic, Satofumi Souma, Liviu P. Zarbo, and Jairo Sinova, "Nonequilibrium spin hall accumulation in ballistic semiconductor nanostructures," Physical Review Letters **95**, 046601 (2005). * Wang _et al._ [2016]Lei Wang, R. J. H. Wesselink, Yi Liu, Zhe Yuan, Ke Xia, and Paul J. Kelly, "Giant room temperature interface spin hall and inverse spin hall effects," Phys. Rev. Lett. **116**, 196602 (2016). * Stefanucci and van Leeuwen [2013]Gianluca Stefanucci and Robert van Leeuwen, _Nonequilibrium Many-Body Theory of Quantum Systems: A Modern Introduction_ (Cambridge University Press, 2013). * Liu and Richter [2012]Ming-Hao Liu and Klaus Richter, "Efficient quantum transport simulation for bulk graphene heterojunctions," Phys. Rev. B **86**, 115455 (2012). * Giuliani and Vignale [2005]Gabriele Giuliani and Giovanni Vignale, _Quantum Theory of the Electron Liquid_ (Cambridge University Press, 2005). * Zawadzki _et al._ [2017]Krissia Zawadzki, Irene D'Amico, and Luiz N. Oliveira, "Symmetries and Boundary Conditions with a Twist," Brazilian Journal of Physics **47**, 488-511 (2017). * Keldysh _et al._ [1965]Leonid V Keldysh _et al._, "Diagram technique for nonequilibrium processes," Sov. Phys. JETP **20**, 1018-1026 (1965). * Mahfouzi and Nikolic [2013]Farzad Mahfouzi and Branislav K.
Nikolic, "How to construct the proper gauge-invariant density matrix in steady-state nonequilibrium: Applications to spin-transfer and spin-orbit torques," SPIN **03**, 1330002 (2013). * Ozaki [2007]Taisuke Ozaki, "Continued fraction representation of the fermi-dirac function for large-scale electronic structure calculations," Phys. Rev. B **75**, 035123 (2007). * Sancho _et al._ [1985]M P Lopez Sancho, J M Lopez Sancho, J M L Sancho, and J Rubio, "Highly convergent schemes for the calculation of bulk and surface green functions," Journal of Physics F: Metal Physics **15**, 851 (1985). * MacKinnon and Kramer [1981]A. MacKinnon and B. Kramer, "One-Parameter Scaling of Localization Length and Conductance in Disordered Systems," Physical Review Letters **47**, 1546-1549 (1981). * Lewenkopf and Mucciolo [2013]Caio H. Lewenkopf and Eduardo R. Mucciolo, "The recursive Green's function method for graphene," Journal of Computational Electronics **12**, 203-231 (2013). * Groth _et al._ [2014]Christoph W. Groth, Michael Wimmer, Anton R. Akhmerov, and Xavier Waintal, "Kwant: A software package for quantum transport," New Journal of Physics **16**, 063065 (2014). * Gaury _et al._ [2014]Benoit Gaury, Joseph Weston, Matthieu Santin, Manuel Houzet, Christoph Groth, and Xavier Waintal, "Numerical simulations of time-resolved quantum electronics," Physics Reports **534**, 1-37 (2014). * Weisse _et al._ [2006]Alexander Weisse, Gerhard Wellein, Andreas Alvermann, and Holger Fehske, "The kernel polynomial method," Reviews of modern physics **78**, 275 (2006). * Santos Pires _et al._ [2020]J. P. Santos Pires, B. Amorim, and J. M. Viana Parente Lopes, "Landauer transport as a quasisteady state on finite chains under unitary quantum dynamics," Physical Review B **101**, 104203 (2020).
* Joao _et al._ [2020]Simao M Joao, Misa Andelkovic, Lucian Covaci, Tatiana G Rappoport, Joao MVP Lopes, and Aires Ferreira, "Kite: high-performance accurate modelling of electronic structure and response functions of large molecules, disordered crystals and heterostructures," Royal Society Open Science **7**, 191809 (2020).

**Supplemental Material: Twist- and gate-tunable proximity spin-orbit coupling, spin relaxation anisotropy, and charge-to-spin conversion in heterostructures of graphene and transition-metal dichalcogenides**

Klaus Zollner

[1] Institute for Theoretical Physics, University of Regensburg, 93040 Regensburg, Germany
[2] Department of Materials, Imperial College London, South Kensington Campus, London SW7 2AZ, United Kingdom
[3] Department of Physics and Astronomy, University of Delaware, Newark, DE 19716, USA
## Band Offsets

## Vertical and lateral shifts -- model parameters

## Transverse electric field -- model parameters

## Origin of Proximity SOC

Let us first investigate the MoSe\({}_{2}\) monolayer band structures, as an example, to find out about the magnitude of proximity SOC. According to Ref. [1], the interlayer coupling can be effectively described by tunneling matrix elements from graphene orbitals to TMDC bands at particular \(k\) points for different twist angles. From this analysis, one finds that the proximity-induced valley-Zeeman SOC can be estimated as: \[\lambda_{\rm{VZ}}\propto\sum_{b}\frac{|t_{b}|^{2}\Delta_{s,b}}{(\Delta E_{b})^{2}-(\Delta_{s,b})^{2}},\] (S1) where \(\Delta E_{b}\) is the energy difference between TMDC band \(b\) and the Dirac point without taking into account SOC, \(t_{b}\) is the band tunneling strength, and \(\Delta_{s,b}=E_{\uparrow,b}-E_{\downarrow,b}\) is the spin splitting of the TMDC band. It is then obvious that \(\lambda_{\rm{VZ}}\) depends on the particular \(k\) point within the TMDC Brillouin zone to which the Dirac states couple, since all these quantities depend on \(k\). It is straightforward to calculate the valley-Zeeman SOC from the monolayer TMDC band dispersions, given knowledge of \(t_{b}\). In Fig. S1, we summarize our MoSe\({}_{2}\) band structure analysis. Depending on the twist angle, the Dirac states fold back to different locations within the TMDC Brillouin zone, as indicated by the dashed lines and black dots for three selected twist angles in Fig. S1(a). We also show the atomic character of the TMDC bands, see Fig. S1(b), which certainly influences the band tunneling strength. Remember that graphene resides above the TMDC and the interlayer coupling depends also on the distance between C atoms and metal/chalcogen atoms. In Fig.
S1(c), we show the spin splittings, \(\Delta_{s,b}\), of three individual TMDC bands, that are probably most relevant for the coupling to graphene, since they are energetically closest to the Dirac states. Note that the spin splitting can be positive or negative, depending on the energetic order of the spin-split TMDC bands. In Fig. S1(d), we show the valley-Zeeman SOC as calculated from perturbation theory, assuming \(t_{b}=1\). The maximum valley-Zeeman SOC is expected at the TMDC \(K\) point, where the first valence band has a giant spin splitting. However, due to lattice mismatch a coupling of the graphene Dirac states directly to the TMDC valleys is not possible. As we can see for \(30^{\circ}\), the Dirac states are folded at a \(k\)-point along the \(\Gamma\)-M high-symmetry line of the TMDC. Along this line, the TMDC bands are not spin split. Even though graphene couples to the TMDC across the vdW gap, the absent splitting within the TMDC bands prohibits a finite proximity-induced valley-Zeeman SOC in graphene, according to Eq. (S1). This is in agreement with the actual DFT calculation results for \(30^{\circ}\). However, comparing \(0^{\circ}\) and \(19.1^{\circ}\) in Fig. S1(d), the predicted valley-Zeeman SOC for \(0^{\circ}\) would be much larger compared to \(19.1^{\circ}\). This contradicts the DFT results in Fig. 5 of the main paper, where valley-Zeeman SOC shows a maximum at \(19.1^{\circ}\) for MoSe\({}_{2}\). Moreover, for \(19.1^{\circ}\) the predicted valley-Zeeman SOC is negative. Comparing to the extracted parameters in Table 1, the DFT predicts a positive valley-Zeeman SOC, but a negative one for other twist angles. Therefore it is likely that a sign change could appear upon twisting. Certainly, the band tunneling strength is not the same for all bands and \(k\) points. Therefore, here we also employ the following approach for calculating \(t_{b}\) from the monolayer TMDC dispersion. From the band structure in Fig. 
S1(b), we have knowledge about the atomic projections. In addition, from the relaxed graphene/MoSe\({}_{2}\) heterostructures, we know about the interlayer distances of C atoms to the individual atomic layers within the TMDC. Hence, we assume: \[t_{b}=\sum_{\alpha}100\cdot P_{\alpha}\cdot e^{-d_{\alpha}},\] (S2) where \(P_{\alpha}\in[0;1]\) is the projection onto atom \(\alpha=\){Mo, Se\({}_{1}\), Se\({}_{2}\)}, for given band \(b\) and \(k\)-point, which is weighted by an exponential function taking into account the interlayer distances \(d_{\alpha}\) between graphene and the TMDC atomic layers. In Ref. [1], \(t_{b}\) is actually calculated from knowledge about orbital amplitudes of the TMDC band structure, assuming constant interlayer hopping amplitudes, and taking only the closest chalcogen layer into account. We believe that our approach is somewhat similar and also well justified. Our calculated band tunneling strengths along the high-symmetry paths are summarized in Fig. S1(e). Since the first Se layer in MoSe\({}_{2}\) is closest to the graphene, the tunneling strength is large for \(k\)-points where the Se content is large within a particular band. In Fig. S1(f), we show the valley-Zeeman SOC as calculated from perturbation theory and taking into account our calculated band tunneling strengths. Indeed, this leads to an enhanced valley-Zeeman SOC for \(19.1^{\circ}\), compared to the case of \(t_{b}=1\). Still, the predicted valley-Zeeman SOC for \(0^{\circ}\) would be larger compared to \(19.1^{\circ}\) in contradiction to our DFT results. From our monolayer TMDC band structure analysis, one can certainly extract some information about the coupling mechanism, but the predicted valley-Zeeman SOC does not reflect the actual DFT results, most likely due to the limited set of bands that we include in our analysis. We performed the same analysis also for the WSe\({}_{2}\) monolayer, see Fig. S2, but the results are similar to MoSe\({}_{2}\). 
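The perturbative estimate of Eq. (S1), with band tunneling strengths weighted as in Eq. (S2), can be evaluated directly once band energies, spin splittings, atomic projections, and interlayer distances at the backfolding \(k\)-point are known. The following Python sketch is purely illustrative: all numerical inputs (projections, distances, band energies) are hypothetical placeholders, not values from our DFT calculations.

```python
import math

def tunneling_strength(projections, distances):
    """Eq. (S2): t_b = sum_alpha 100 * P_alpha * exp(-d_alpha).

    projections: dict atom -> P_alpha in [0, 1] for the band at this k-point
    distances:   dict atom -> graphene-to-atomic-layer distance d_alpha (angstrom)
    """
    return sum(100.0 * p * math.exp(-distances[a]) for a, p in projections.items())

def valley_zeeman_soc(bands):
    """Eq. (S1): lambda_VZ ~ sum_b |t_b|^2 Delta_sb / ((Delta E_b)^2 - Delta_sb^2).

    bands: list of (t_b, Delta_E_b, Delta_sb) tuples, energies in eV.
    """
    return sum(abs(t) ** 2 * ds / (dE ** 2 - ds ** 2) for t, dE, ds in bands)

# Hypothetical two-band example (placeholder numbers only).
dist = {"Mo": 5.0, "Se1": 3.4, "Se2": 6.7}   # Se1 is the layer closest to graphene
t_vb1 = tunneling_strength({"Mo": 0.6, "Se1": 0.3, "Se2": 0.1}, dist)
t_vb2 = tunneling_strength({"Mo": 0.05, "Se1": 0.55, "Se2": 0.4}, dist)

lam = valley_zeeman_soc([
    (t_vb1, -1.2, 0.10),   # (t_b, Delta E_b, Delta_sb) for VB1
    (t_vb2, -1.8, -0.05),  # VB2 with opposite spin ordering of the bands
])
print(lam)  # sign and magnitude depend entirely on the assumed inputs
```

Note that bands with vanishing spin splitting, as along the \(\Gamma\)-M line relevant at \(30^{\circ}\), contribute exactly zero to the sum in Eq. (S1), regardless of the tunneling strength.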
However, one can already find pronounced differences. For example, comparing the predicted valley-Zeeman SOC for \(0^{\circ}\) [Fig. S1(f) and Fig. S2(f)], WSe\({}_{2}\) should give a much larger value compared to MoSe\({}_{2}\), which is consistent with the DFT results. For \(19.1^{\circ}\) the predicted valley-Zeeman SOC for WSe\({}_{2}\) is smaller than for \(0^{\circ}\), which is also consistent with our DFT data. Another way to find out about the origin of proximity-induced SOC is by carefully analyzing the heterostructure dispersion. In Fig. S3, we do that for the case of graphene/MoSe\({}_{2}\) and a twist angle of \(0^{\circ}\). Especially from the projected band structure, see Fig. S3(b), we find that the graphene Dirac states couple to TMDC high-energy conduction and valence bands, as indicated by the anticrossings (greenish and yellowish colors). The lowest TMDC conduction bands do not seem to contribute to the interlayer coupling. Looking at the full density of states (DOS), see Fig. S3(e), anticrossings appear whenever the Se content is large, for example at about 1.5 (\(-\)1.8) eV above (below) the Fermi level. This is reasonable, since the interlayer coupling happens predominantly at the interface between C and Se atoms. Analyzing the low energy Dirac bands, we find that only 0.3% of Mo and 0.4% of Se content contribute there, leading to a sizable spin splitting of the bands, see Fig. S3(c,d). From the integrated local DOS in real space, see Fig. S4, we find that the whole TMDC is contributing to the Dirac bands, but predominantly interfacial Se \(p\) and Mo \(d_{xz}+d_{yz}\) orbitals. In the case of \(19.1^{\circ}\), the situation is similar and Dirac states also couple predominantly at higher energies, see Fig. S6. However, the low energy bands have a much larger Mo and Se content compared to the \(0^{\circ}\) case. In fact, the contribution has doubled, which explains the much larger proximity SOC for \(19.1^{\circ}\). 
The origin may be the coupling to the second highest TMDC valence band (VB\({}_{2}\) in Fig. S1), which is almost exclusively formed by Se atoms and has a giant spin splitting at the \(19.1^{\circ}\) backfolding \(k\)-point. From the integrated local DOS in real space, see Fig. S5, we find that also Mo \(d_{z^{2}}\) orbitals contribute, which is a significant difference compared to the \(0^{\circ}\) case. In Fig. S7, we analyze the \(30^{\circ}\) geometry. The formerly dominant \(s_{z}\) spin polarization of nearly the whole band structure has vanished, which originates from the arising mirror plane symmetry. Still, the Dirac states couple predominantly to high energy TMDC states. The low energy Dirac bands have nearly the same Mo and Se contribution as for the \(19.1^{\circ}\) case, but due to symmetry considerations a valley-Zeeman SOC is prohibited. Figure S3: (a) DFT-calculated band structure of the graphene/MoSe\({}_{2}\) heterostructure along the high-symmetry path M-K-\(\Gamma\) for a twist angle of 0\({}^{\circ}\). The color of the lines corresponds to the \(s_{z}\) spin expectation value. The inset shows the backfolding of the graphene Dirac point at \(K\). The black (green) hexagon represents the graphene (TMDC) Brillouin zone. (b) Same as (a), but the color code shows the contribution of the individual monolayers to the bands, i. e., the bands appear dark-reddish (dark-blueish) when only TMDC (graphene) orbitals contribute. (c) Zoom to the calculated low-energy Dirac bands near the Fermi level around the \(K\) point, corresponding to the band structure in (a). We also show the corresponding atom resolved density of states (DOS). The contribution of Mo and Se is multiplied by a factor of 100 for better visualization. (d) Top view of the heterostructure geometry (black = C, blue = Mo, yellow = Se), where the dashed lines indicate the unit cell. The Table lists the atomic decomposition (in percent) of the DOS shown in (c) at the given energies. 
(e) The atom resolved DOS. Figure S4: DFT-calculated integrated local density of states of the \(0^{\circ}\) graphene/MoSe\({}_{2}\) heterostructure. The figure is a side view of the cut along the longer diagonal of the unit cell in Fig. S3(d). We take into account only states in an energy window of \(\pm 5\) meV around the Dirac point from the low energy dispersion in Fig. S3(c). The colors correspond to isovalues between \(1\times 10^{-4}\) (blue) and \(5\times 10^{-7}\) (red) e/Å\({}^{3}\). Figure S6: Same as Fig. S3, but for the graphene/MoSe\({}_{2}\) heterostructure with a twist angle of \(19.1^{\circ}\). Figure S7: Same as Fig. S3, but for the graphene/MoSe\({}_{2}\) heterostructure with a twist angle of \(30^{\circ}\). Figure S8: Same as Fig. S3, but for the graphene/WSe\({}_{2}\) heterostructure with a twist angle of \(0^{\circ}\). Figure S9: Same as Fig. S3, but for the graphene/WSe\({}_{2}\) heterostructure with a twist angle of \(19.1^{\circ}\). Figure S10: Same as Fig. S3, but for the graphene/WSe\({}_{2}\) heterostructure with a twist angle of \(30^{\circ}\). Figure S11: The calculated spin-orbit fields of the low energy Dirac bands of the graphene/WSe\({}_{2}\) heterostructure with a twist angle of \(19.1^{\circ}\), corresponding to the dispersion in Fig. S9(c). The color represents the \(s_{z}\) spin expectation value, while the arrows represent \(s_{x}\) and \(s_{y}\) spin expectation values. The dashed white lines represent the edges of the hexagonal Brillouin zone, with the \(K\) point at the center. Especially looking at in-plane spins (arrows) along the \(k_{y}=0\) line emphasizes the presence of the Rashba phase angle \(\varphi\approx-19.6^{\circ}\). This is in contrast to the conventional Rashba field in Fig. 4(f) of the main text, where in-plane spins are always perpendicular to the momentum.

## VI Atomic contributions to proximity SOC

From Fig.
S4, we can see that the C \(p_{z}\) orbitals predominantly couple to the Se \(p\) orbitals, which mediate the coupling to Mo \(d_{xz}+d_{yz}\) orbitals. However, it is not clear which atomic type gives the dominant contribution for proximity SOC. Therefore, we investigated the impact of artificially turning off SOC on different atoms. In Table S8, Table S9, Table S10, and Table S11 we summarize the fit results for different twist angles and different TMDCs. Turning off SOC on the TMDC, the spin splitting of the TMDC bands vanishes, along with the proximity SOC and we recover the pristine graphene dispersion. Since the TMDC band splittings are reduced, it is not surprising that also the band offsets, of the Dirac states with respect to the TMDC band edges, are modified. For all considered TMDCs and twist angles the valley-Zeeman and Rashba SOC contributions of different atoms nearly perfectly add up, i. e., summing the second and third row fit parameters of a certain structure gives the fit results of the first row. Surprisingly, for the MoSe\({}_{2}\) and WSe\({}_{2}\)\(0^{\circ}\) structures we find that the transition-metal and the chalcogen atoms provide the opposite sign for the valley-Zeeman SOC. Moreover, the contribution of the transition-metal atoms dominate over the chalcogen ones. Similarly for the \(5.2^{\circ}\) structures. For the \(19.1^{\circ}\) structures, both TMDC atoms provide the same sign for valley-Zeeman SOC, but now the chalcogen atom contribution dominates over the transition-metal one. Furthermore, the Rashba phase angles are opposite in sign. For the \(30^{\circ}\) structures, again the transition-metal gives the dominant contribution. In the case of MoS\({}_{2}\) and WS\({}_{2}\), the transition-metal contribution is always dominant compared to the chalcogen atom contribution. 
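This near-additivity can be checked directly on the fit parameters; the short sketch below uses the graphene/WS\({}_{2}\) values for \(\vartheta=14.4649^{\circ}\) from Table S11:

```python
# Fit parameters (lambda_I^A, lambda_I^B, lambda_R in meV) for graphene/WS2,
# theta = 14.4649 deg, taken from Table S11.
full   = {"lIA": 0.5635, "lIB": -0.6826, "lR": 0.3678}  # SOC on C, W, S
w_only = {"lIA": 0.5315, "lIB": -0.6532, "lR": 0.3231}  # SOC on C, W
s_only = {"lIA": 0.0209, "lIB": -0.0223, "lR": 0.0438}  # SOC on C, S

# The individual-atom contributions should nearly add up to the full result.
for key in full:
    summed = w_only[key] + s_only[key]
    print(f"{key}: {summed:+.4f} vs full {full[key]:+.4f}")
    assert abs(summed - full[key]) < 0.02  # additivity holds to ~0.01 meV here
```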
\begin{table}
\begin{tabular}{c c c c c c c c c c c}
\(\vartheta\) [\({}^{\circ}\)] & SOC on & \(\Delta\) [meV] & \(v_{\rm F}/10^{5}[\frac{m}{s}]\) & \(\lambda_{\rm I}^{\rm A}\) [meV] & \(\lambda_{\rm I}^{\rm B}\) [meV] & \(\lambda_{\rm R}\) [meV] & \(\varphi\) [\({}^{\circ}\)] & \(E_{\rm D}\) [meV] & \(E_{D}-E_{V}\) [eV] & \(E_{D}-E_{C}\) [eV] \\ \hline
6.5868 & C, W, S & 0.6485 & 8.0248 & 0.7849 & -0.8638 & 0.2337 & 16.8965 & 1.6459 & 0.8035 & -0.8946 \\
6.5868 & C, W & 0.6327 & 8.0261 & 0.8424 & -0.8997 & 0.2079 & 21.8240 & 3.3981 & 0.8141 & -0.8903 \\
6.5868 & C, S & 0.6720 & 8.0120 & -0.0070 & 0.0094 & 0.0428 & -5.3514 & -1.3558 & 1.0089 & -0.9381 \\ \hline
14.4649 & C, W, S & 0.4676 & 8.1248 & 0.5635 & -0.6826 & 0.3678 & -1.3235 & 0.3962 & 0.9249 & -0.7755 \\
14.4649 & C, W & 0.4759 & 8.1623 & 0.5315 & -0.6532 & 0.3231 & -0.2807 & 1.0492 & 0.9342 & -0.7716 \\
14.4649 & C, S & 0.4707 & 8.1924 & 0.0209 & -0.0223 & 0.0438 & -1.0199 & 0.8989 & 1.1341 & -0.8146 \\ \hline
27.6385 & C, W, S & 0.0025 & 8.2009 & 0.0059 & -0.0113 & 0.2410 & 18.7310 & 1.8203 & 0.9394 & -0.7623 \\
27.6385 & C, W & 0.0030 & 8.1990 & 0.0284 & -0.0334 & 0.2179 & 18.3814 & -2.0955 & 0.9440 & -0.7631 \\
27.6385 & C, S & 0.0027 & 8.2060 & -0.0211 & 0.0184 & 0.0253 & 24.5200 & 0.3798 & 1.1455 & -0.8050 \\
\end{tabular}
\end{table} Table S11: Fit parameters of the model Hamiltonian for selected graphene/WS\({}_{2}\) heterostructures, where we artificially turned off SOC on W and S.

## VII Correcting strain-related band offset with electric field?

One particular effect that we would like to address is the fact that the band offset, i. e., the position of the Dirac point within the TMDC band gap, is a linear function of the strain applied to graphene (see main text Fig. 3). In Ref. [2] this strain-related band offset was compensated by a transverse electric field in order to extract the zero-strain-like results. How justified is this assumption?
For that purpose, we have considered the \(0^{\circ}\) twist angle structures of graphene on MoSe\({}_{2}\) and WSe\({}_{2}\), where no strain was necessary to build the supercells. These structures serve as reference results. In addition, we consider another \(0^{\circ}\) structure, where graphene is strained by about \(-4.8\%\). We compensate the strain-related band offset by a transverse electric field and compare to the reference results. The electric field that is necessary for the correction can be extracted from Table S6 and Table S7 and is about \(-2.6\) V/nm. If we relax the structures, we find that the large amount of strain leads to a sizable rippling of the graphene layer. In fact, the rippling has increased by a factor of 10. This is also the reason why, in Ref. [2], atomic relaxation was neglected and all the twisted structures were kept at fixed \(z\). To rule out that the rippling introduces unwanted side effects, we have additionally considered the strained structure with graphene flattened (no rippling) to the average interlayer distance of the fully relaxed strained structure, which is different from the distance obtained for the unstrained sample. Also there, we apply the electric field correction and compare to the reference results. Finally, we consider the flattened graphene samples and change the interlayer distance to that of the reference cases, as proximity effects are rather sensitive to the interlayer distance. The comparison of the mentioned cases is summarized in Table S12, Table S13, and Table S14. Comparing the band offsets, the compressive strain pushes the Dirac point closer to the TMDC conduction band edge. The correction of the band offset with the estimated transverse electric field of \(-2.6\) V/nm works quite well in the case of the fully relaxed structure. Once we flatten the graphene layer, a larger electric field of about \(-3.3\) V/nm is necessary to correct the strain-related band offset.
Most important is how the proximitized Dirac states are affected by the strain and the electric field correction. We find that the large amount of rippling leads to a strongly enhanced sublattice asymmetry, reflected in the parameter \(\Delta\), and the opening of a pronounced band gap. Also, the intrinsic SOC parameters are strongly modified by the strain and the rippling. In particular the changes in the WSe\({}_{2}\) case are giant and in the range of 0.5-0.8 meV. In contrast, the Rashba SOC, which in the first place originates from the structural \(z\)-mirror asymmetry and the distortion of graphene \(p_{z}\) orbitals is less affected by the strain. The electric field correction does not help to adjust the results to the reference values. Once we flatten graphene the sublattice asymmetry nearly vanishes and the Fermi velocity strongly renormalizes, while the intrinsic and Rashba SOC parameters almost match the reference values. With the electric field correction, the Rashba SOC parameters can be brought even closer to the reference value, while intrinsic SOC parameters become less reliable. In particular, the intrinsic SOC values deviate by about 20% (50%) in the case of WSe\({}_{2}\) (MoSe\({}_{2}\)) with flattened graphene and the field correction applied. However, the deviation is also due to the different interlayer distance for the strained and the reference structure. As a final check, for the flattened case, we tune the interlayer distance to match the one of the reference structure, since the distance strongly affects proximity effects. Without electric field correction, intrinsic SOCs again match the reference values, while the Rashba SOC is underestimated. Similar to before, the field correction makes the intrinsic (Rashba) SOC parameters less (more) reliable. 
Based on these findings, we believe that the electric field correction can be partially justified when comparing twisted structures with fixed interlayer distance and no structural relaxation which could lead to rippling, in particular at large strain. However, the presented results for the \(0^{\circ}\) structure may not be representative for the other twist angles. \begin{table} \begin{tabular}{c c c c c c c c c} \hline TMDC & \(\vartheta\) [\({}^{\circ}\)] & E-field [V/nm] & \(\Delta\) [meV] & \(v_{F}/10^{5}[\frac{\mathrm{m}}{\mathrm{s}}]\) & \(\lambda_{\mathrm{I}}^{\mathrm{A}}\) [meV] & \(\lambda_{\mathrm{I}}^{\mathrm{B}}\) [meV] & \(\lambda_{\mathrm{R}}\) [meV] & \(\varphi\) [\({}^{\circ}\)] & \(E_{\mathrm{D}}\) [meV] \\ \hline MoSe\({}_{2}\) & 0.0000 & 0 & 0.4917 & 8.2538 & 0.2422 & -0.2258 & 0.2550 & 0 & 1.8970 \\ & 0.0000 & 0 & 2.4419 & 8.2928 & -0.0353 & -0.3081 & 0.1935 & 0 & 0.2123 \\ & 0.0000 & -2.57 & 1.9394 & 8.3885 & 0.0853 & -0.1075 & 0.2088 & 0 & -0.0015 \\ & 0.0000 & 0 & 0.0422 & 8.8716 & 0.2767 & -0.2634 & 0.1886 & 0 & -0.3451 \\ & 0.0000 & -3.34 & 0.0487 & 8.9140 & 0.3347 & -0.3226 & 0.2500 & 0 & -0.2220 \\ & 0.0000 & 0 & 0.0340 & 8.8793 & 0.2424 & -0.2307 & 0.1647 & 0 & 0.4358 \\ & 0.0000 & -3.34 & 0.0399 & 8.9070 & 0.2952 & -0.2847 & 0.2178 & 0 & -0.0377 \\ \hline WSe\({}_{2}\) & 0.0000 & 0 & 0.5878 & 8.2500 & 1.1722 & -1.1572 & 0.5303 & 0 & 1.2931 \\ & 0.0000 & 0 & 2.3658 & 8.2914 & 0.3939 & -0.5162 & 0.4881 & 0 & -0.0413 \\ & 0.0000 & -2.57 & 1.6984 & 8.2439 & 0.5374 & -0.0649 & 0.4747 & 0 & -0.5142 \\ & 0.0000 & 0 & 0.0269 & 8.8884 & 1.3056 & -1.2737 & 0.4416 & 0 & 0.8119 \\ & 0.0000 & -3.34 & 0.0326 & 8.9017 & 1.4471 & -1.4127 & 0.5486 & 0 & 1.0723 \\ & 0.0000 & 0 & 0.0239 & 8.8938 & 1.1849 & -1.1559 & 0.3986 & 0 & -0.7790 \\ & 0.0000 & -3.34 & 0.0295 & 8.9121 & 1.3126 & -1.2815 & 0.4950 & 0 & -1.1163 \\ \hline \end{tabular} \end{table} Table 13: The calculated position of the Dirac point with respect to the TMDC valence (conduction) band 
edge, \(E_{D}-E_{V}\) (\(E_{D}-E_{C}\)), as defined in Fig. 2(a), for the graphene/MoSe\({}_{2}\) (graphene/WSe\({}_{2}\)) heterostructures. Red indicates the structure with more strain. Blue is the same as red, but with flat graphene. Green is the same as blue, but the interlayer distance is the same as for the reference structure.

## VIII Real space transport calculations

The Rashba-Edelstein effect (REE) and the unconventional REE (UREE) were evaluated using a real-space equivalent of our effective low energy Hamiltonian \(\mathcal{H}\) within the Keldysh formalism on a graphene nanoribbon. The expectation values were evaluated using the Recursive Green's function method (RGFM).

### Hamiltonian

The Hamiltonian used for the transport calculations is the real space tight-binding equivalent of \(\mathcal{H}=\mathcal{H}_{0}+\mathcal{H}_{\Delta}+\mathcal{H}_{I}+\mathcal{H}_{R}\) [3]. The first term \[\mathcal{H}_{0}=-t\sum_{\langle ij\rangle}\sum_{\sigma}c_{i\sigma}^{\dagger}c_{j\sigma}\] is the graphene Hamiltonian, where \(\langle\cdots\rangle\) denotes a sum over nearest neighbors and \(c_{i\sigma}\) (\(c_{i\sigma}^{\dagger}\)) annihilates (creates) an electron at site \(i\) with spin \(\sigma\). The next term \[\mathcal{H}_{\Delta}=\Delta\sum_{i\in A}\sum_{\sigma}c_{i\sigma}^{\dagger}c_{i\sigma}-\Delta\sum_{i\in B}\sum_{\sigma}c_{i\sigma}^{\dagger}c_{i\sigma}\] arises from sublattice asymmetry. The first (second) sum is over all sites belonging to sublattice A (B). The third term \[\mathcal{H}_{I}=\sum_{S=A,B}\frac{i\lambda_{\rm I}^{S}}{3\sqrt{3}}\sum_{\langle\langle i,j\rangle\rangle}\sum_{\sigma}v_{ij}\left[s_{z}\right]_{\sigma\sigma}c_{i\sigma}^{\dagger}c_{j\sigma}\] contains the sublattice-resolved intrinsic spin-orbit coupling, where \(\langle\langle\cdots\rangle\rangle\) denotes a sum over next-nearest neighbors and the first sum separates the contributions coming from either sublattice.
\(v_{ij}=+1\) (\(-1\)) if the electron takes a left (right) turn along the lattice to get to the next-nearest neighbor. The last term \[\mathcal{H}_{R}=\frac{2i\lambda_{R}}{3}e^{-i\frac{\varphi}{2}s_{z}}\sum_{ \langle ij\rangle}\sum_{\sigma\sigma^{\prime}}c_{i\sigma}^{\dagger}c_{j\sigma} \left(\mathbf{s}_{\sigma\sigma^{\prime}}\times\mathbf{d}_{ij}\right)\cdot \mathbf{z}\,e^{i\frac{\varphi}{2}s_{z}}\] is the Rashba SOC with an additional phase angle \(\varphi\). \(\mathbf{d}_{ij}\) is the unit vector connecting site \(i\) to \(j\) and \(\mathbf{s}_{\sigma\sigma^{\prime}}=\left(\left[s_{x}\right]_{\sigma\sigma^{ \prime}},\left[s_{y}\right]_{\sigma\sigma^{\prime}},\left[s_{z}\right]_{\sigma \sigma^{\prime}}\right)\) is a vector of Pauli matrices' components. ### Geometry and symmetries The honeycomb lattice is set to an armchair nanoribbon geometry, consisting of a central sample (of width \(W\) unit cells and length \(L\) unit cells) attached to two infinite leads made of the same material with the same width. In these simulations, we used \(W=6\) and \(L=10\). Twisted boundary conditions [4] are imposed to allow for \(k\)-point sampling along the transverse nanoribbon direction. An additional phase \(0<\phi<2\pi\) is added to the hoppings crossing the periodic boundary conditions and the final expectation value is averaged over \(\phi\). Then, even if \(W\) is small, the infinite-width limit can be retrieved with enough sampling over \(\phi\). Effectively, what this does is to repeat the system along the transverse direction. Therefore, despite \(W\) being a rather small number, sampling over \(k\) yields the same result as an infinitely wide lattice. Transport properties are calculated with respect to operators \(\mathcal{A}\) defined in a single slice of the lattice. The nanoribbon can be organized by slices across its cross-section. 
The Hamiltonian of the nanoribbon is described in terms of the intra-slice Hamiltonian \(h\) and the inter-slice Hamiltonian \(u\) connecting slice \(n\) to \(n+1\). Both \(u\) and \(h\) can be slice-dependent, as long as they are uniform in the leads. A sketch of the sample geometry is shown in Fig. S12. Due to translation invariance along the longitudinal direction, the operator \(\mathcal{A}\) only needs to be nonzero in one of the slices. This simplifies the process of calculating \(\left\langle\mathcal{A}\right\rangle\) with the RGFM, since only the matrix elements of the Green's functions that connect this slice to the beginning of the leads need to be computed. Symmetries also play a big role in our numerical results. In the absence of a twist (\(\varphi=0\)) in our graphene/TMDC structures, the \(\left\langle s_{x}\right\rangle_{\varphi=0}\) response is forbidden and \(\left\langle s_{y}\right\rangle_{\varphi=0}\) comes entirely from the Fermi surface. When a twist is introduced, these two mix: \(\left\langle s_{y}\right\rangle_{\varphi}=\left\langle s_{y}\right\rangle_{ \varphi=0}\cos\left(\varphi\right)\) and \(\left\langle s_{x}\right\rangle_{\varphi}=\left\langle s_{y}\right\rangle_{ \varphi=0}\sin\left(\varphi\right)\), but they are still determined by \(\left\langle s_{y}\right\rangle_{\varphi=0}\) and thus rely only on a Fermi surface calculation, considerably simplifying the numerical procedure. The \(\left\langle s_{z}\right\rangle\) response is also forbidden. ### Recursive Green's function method The REE and UREE were calculated using the Recursive Green's function method (RGFM) which is now briefly explained. At \(t<0\), the leads are disconnected from the sample and lie in thermal equilibrium with the corresponding reservoirs at energy \(-\Delta V/2\) and \(\Delta V/2\). At \(t=0\) the leads are connected to the sample, a transient regime ensues and eventually an equilibrium state is reached, at which point the desired observables are measured. 
The expectation value \(\left\langle\mathcal{A}\right\rangle\) of observable \(\mathcal{A}\) is obtained via the Keldysh formalism [5; 6] as the sum of two terms, stemming from the Fermi surface and the Fermi sea: \[\left\langle\mathcal{A}\right\rangle_{\text{surf}}=\frac{i}{2\hbar}\int_{-\infty}^{\infty}\text{d}\varepsilon\left(f_{R}-f_{L}\right)\text{Tr}\left[AG^{r}\left(\Gamma^{L}-\Gamma^{R}\right)G^{a}\right]\] \[\left\langle\mathcal{A}\right\rangle_{\text{sea}}=-\frac{1}{2}\int_{-\infty}^{\infty}\text{d}\varepsilon\left(f_{R}+f_{L}\right)\text{Tr}\left[A\left(G^{r}-G^{a}\right)\right]\] where \(f_{R\left(L\right)}\left(\varepsilon\right)\) is the Fermi function of the right (left) lead defined through the Fermi-Dirac distribution \(f_{R/L}\left(\varepsilon\right)=f\left(\varepsilon\pm\Delta V/2\right)\), \(G^{r\left(a\right)}\left(\varepsilon\right)=\left(\varepsilon-H\pm i0^{+}\right)^{-1}\) is the retarded (advanced) Green's function of the whole system, and \(\Gamma^{L\left(R\right)}\left(\varepsilon\right)=i\left[\Sigma_{L\left(R\right)}^{r}\left(\varepsilon\right)-\Sigma_{L\left(R\right)}^{a}\left(\varepsilon\right)\right]\) is the level-width function defined through the left (right) self-energies of the leads. Finally, the self-energies are defined in terms of the surface Green's function at the left and right leads, respectively: \(\Sigma_{L}^{r}\left(\varepsilon\right)=ug_{L}^{r}\left(\varepsilon\right)u^{\dagger}\) and \(\Sigma_{R}^{r}\left(\varepsilon\right)=u^{\dagger}g_{R}^{r}\left(\varepsilon\right)u\). Within this formalism, the full expectation value is \[\left\langle\mathcal{A}\right\rangle=\left\langle\mathcal{A}\right\rangle_{\text{surf}}+\left\langle\mathcal{A}\right\rangle_{\text{sea}}-\left\langle\mathcal{A}\right\rangle_{0}\] where \(\left\langle\mathcal{A}\right\rangle_{0}\) is the expectation value of \(\left\langle\mathcal{A}\right\rangle\) at zero bias. The Fermi sea term can be calculated efficiently via Ozaki contour integration [7], while the Fermi surface term simplifies at low bias because \(f_{R}-f_{L}\) is only nonzero in a very narrow window of energy.
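As a minimal illustration of the surface Green's function and the level-width function entering these expressions, the sketch below treats a semi-infinite single-orbital chain lead (our own toy example, not the nanoribbon leads of the actual calculations), solving \(g^{r}=(\varepsilon+i\eta-t^{2}g^{r})^{-1}\) by damped fixed-point iteration:

```python
import numpy as np

def surface_gf(energy, t=1.0, eta=1e-2, n_iter=500, mix=0.5):
    """Retarded surface Green's function of a semi-infinite 1D chain,
    g^r = 1/(E + i*eta - t^2 * g^r), solved by damped fixed-point iteration."""
    z = energy + 1j * eta
    g = 0.0 + 0.0j
    for _ in range(n_iter):
        g = (1 - mix) * g + mix / (z - t**2 * g)
    return g

t = 1.0
g = surface_gf(0.0, t=t)                   # band center of the lead
sigma_r = t**2 * g                         # lead self-energy Sigma^r = u g^r u^+
gamma = 1j * (sigma_r - np.conj(sigma_r))  # level-width Gamma = i(Sigma^r - Sigma^a)

# Analytic check: at E = 0, g^r -> -i/t and Gamma -> 2t, up to O(eta) corrections.
```

In the actual RGFM the surface Green's functions \(g_{L/R}^{r}\) are matrices over one slice, and the same self-consistency is solved by matrix inversion to obtain the self-energies and level-width functions used in the Fermi-surface term.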
The recursive Green's function method for nanoribbons requires matrix inversions for each slice, so its numerical complexity scales as \(W^{3}L\), making it very difficult to deal with wide nanoribbons [8; 9; 10]. In the case of simple lattices, this complexity can be reduced to \(W^{2}L\) at the expense of a high memory cost [11; 12], and more recently a real-space method based on the Kernel Polynomial Method (KPM) [13] has been proposed for general systems to reduce this complexity to \(WL\) at the expense of introducing stochasticity to the formalism and finite leads [14; 15]. An alternative way of effectively increasing \(W\) is through the use of twisted boundary conditions, or \(k\)-point sampling, in the transverse direction. When the system has translation invariance along the transverse direction, as is the case here, this approach exactly reproduces the infinite-width case.
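The equivalence underlying the twisted boundary conditions can be made exact in a toy setting: sampling the boundary phase \(\phi_{j}=2\pi j/N_{\phi}\) on a \(W\)-site tight-binding ring reproduces, over all \(j\), exactly the spectrum of a periodic ring of \(WN_{\phi}\) sites. A minimal sketch (our own example, not the transport code):

```python
import numpy as np

def ring_hamiltonian(n, t=1.0, phase=0.0):
    """n-site tight-binding ring; the boundary bond carries the twist phase."""
    h = np.zeros((n, n), dtype=complex)
    for i in range(n - 1):
        h[i, i + 1] = -t
        h[i + 1, i] = -t
    h[n - 1, 0] = -t * np.exp(1j * phase)   # twisted boundary condition
    h[0, n - 1] = -t * np.exp(-1j * phase)
    return h

W, n_phi = 4, 5
small = np.sort(np.concatenate(
    [np.linalg.eigvalsh(ring_hamiltonian(W, phase=2 * np.pi * j / n_phi))
     for j in range(n_phi)]))
large = np.sort(np.linalg.eigvalsh(ring_hamiltonian(W * n_phi)))

# The phase-sampled W-site ring reproduces the W*n_phi-site spectrum exactly.
```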
2307.15513
Scaling in local to global condensation of wealth on sparse networks
The prevalence of wealth inequality propels us to characterize its origin and progression, via empirical and theoretical studies. The Yard-Sale (YS) model, in which a portion of the smaller wealth is transferred between two individuals, culminates in the concentration of almost all wealth to a single individual, while distributing the rest of the wealth with a power-law of exponent one. By incorporating redistribution to the model, in which the transferred wealth is proportional to the sender's wealth, we show that such extreme inequality is suppressed if the frequency ratio of redistribution to the YS-type exchange exceeds the inverse of the population size. Studying our model on a sparsely-connected population, we find that the wealth inequality ceases to grow for a period, when local rich nodes can no longer acquire wealth from their broke nearest neighbors. Subsequently, inequality resumes growth due to the redistribution effect by allowing locally amassed wealth to move and coalesce. Analyzing the Langevin equations and the coalescing random walk on complex networks, we elucidate the scaling behaviors of wealth inequality in those multiple phases. These findings reveal the influence of network structure on wealth distribution, offering a novel perspective on wealth inequality.
Hyun Gyu Lee, Deok-Sun Lee
2023-07-28T12:15:41Z
http://arxiv.org/abs/2307.15513v2
# Scaling in local to global condensation of wealth on sparse networks

###### Abstract

The prevalence of wealth inequality propels us to characterize its origin and progression, via empirical and theoretical studies. The Yard-Sale (YS) model, in which a portion of the smaller wealth is transferred between two individuals, culminates in the concentration of wealth to a single individual, while distributing the rest of the wealth with a power-law of exponent one. By incorporating redistribution to the model, in which the transferred wealth is proportional to the sender's wealth, we show that such extreme inequality is suppressed if the redistribution occurs more frequently than the inverse of the population size. Studying our model on a sparsely-connected population, we find that the wealth inequality ceases to grow for a period, when local rich nodes can no longer acquire wealth from their broke nearest neighbors. Subsequently, inequality resumes growth due to the redistribution effect by allowing locally amassed wealth to move and coalesce. Analyzing the Langevin equations and the coalescing random walk on complex networks, we elucidate the scaling behaviors of wealth inequality in those multiple phases. These findings reveal the influence of network structure on wealth distribution, offering a novel perspective on wealth inequality.

Wealth inequality may be attributed to a myriad of socioeconomic factors and their orchestration. Yet the universal power-law wealth and income distributions [1; 2; 3] imply a common mechanism at play, and provide the possibility of theoretical understanding on how wealth inequality has arisen and how long it will persist [4; 5; 6].
Grounded on the idea that individuals participating in a trade can undergo wealth transfer due to imperfect pricing, various computational models of wealth exchange have been studied extensively, and their steady-state solutions, often available analytically, serve as plausible explanations for various wealth distributions [7; 8; 9; 10; 11; 12; 13]. The Yard-Sale(YS) model [14; 15] is remarkable as it generates an extreme wealth inequality from a seemingly fair (and thus realistic) exchange rule: A fraction of the sender's and receiver's smaller wealth is transferred in each trade, and ultimately, almost total wealth is concentrated in a single individual while the remaining wealth being distributed by a power-law with exponent one across the rest of the population [16]. The model's simplicity and yet the emergence of such stark inequality have attracted much attention [16; 17; 18; 19; 20; 21; 22]. However, such global wealth condensation does not (yet) happen in the real world, undermining the reality of the long-time-limit solution. The degree of real-world wealth inequality has been changing with time [4; 5; 6]. In this light, the non-stationary state, rather than the stationary state, of a model may offer a better explanation of the reality. Also, by relaxing constraints like the fully-connected population often assumed in many studies and by considering multiple modes of wealth exchange, the YS model can become more realistic and reveal a richer set of insights. For example, wealth transfer equal to a fraction of the sender's wealth, occurring in donation, investment, or taxation, effectively redistributes wealth and suppresses wealth inequality [7; 8; 9; 18]. 
Investigating the non-stationary state of the generalized YS model [16; 22], which allows such redistribution (RD) mode of wealth transfer, as well as the YS-mode, between connected pairs in a structured population [21], we identify new relevant factors influencing wealth inequality and provide a novel theoretical framework. We show that the extreme inequality of the original YS model in the long-time limit can be suppressed by increasing the ratio of the RD-mode transfers beyond the inverse of the population size. In the sparsely-connected population, inequality evolves with time through multiple phases, and we elucidate the underlying mechanisms. Initially, the inequality grows primarily driven by the YS-mode wealth transfer before saturating over a period of time due to depleted wealth of the nearest neighbors of locally rich nodes. We call this stage the _local condensate_ phase. As time passes, the RD-mode transfer effectively thaws this frozen state, enabling further elevation of inequality via processes akin to random walk and coalescence of locally-concentrated wealth. In these stages, scaling behaviors of wealth inequality emerge, and we show that they originate from intimate relationships between the dynamics of wealth and the network structure. This demonstrates the critical role of the structure of networks in understanding the wealth distribution. Finally, comparing with the empirical data, we discuss the implications of our findings. _Model_ - We consider a network of \(N\) nodes (individuals) connected by \(L\) undirected links (trade partnership) with the adjacency matrix \(A_{ij}=0,1\). Each node \(i\) has wealth \(\omega_{i}(t)\) with \(\omega_{i}(t=0)=1\) initially. For every pair of connected nodes with rate \(N/L\), the sender ('s') and the receiver ('r') are determined randomly and the sender sends an amount \(\Delta\omega\) of wealth to the receiver [Fig.
1 (a)], where \(\Delta\omega\) is a fraction \(\varepsilon\) of either the smaller wealth or the sender's wealth as \[\Delta\omega=\left\{\begin{aligned} &\varepsilon\,\min\{\omega_{\rm s}, \omega_{\rm r}\}&&\text{ with prob. }1-p\text{ (YS mode)},\\ &\varepsilon\,\omega_{\rm s}&&\text{ with prob. }p\text{ (RD mode)}.\end{aligned}\right. \tag{1}\] Consequently, their wealth changes as \((\omega_{\rm s},\omega_{\rm r})\to(\omega_{\rm s}-\Delta\omega,\omega_{\rm r}+\Delta\omega)\), but their sum is preserved. The mean wealth is fixed, i.e., \(\overline{\omega}=N^{-1}\sum_{i}\omega_{i}(t)=1\). The probability \(p\) is the relative ratio of the RD-mode transfers. This is a network version of the model introduced in Refs. [16; 22]. For the underlying networks, we use the complete graphs (\(A_{ij}=1\) for all \(i\neq j\)) and the giant connected components of sparse scale-free (SF) networks [23] constructed by the static model [24; 25], which display power-law degree distributions \(P_{\rm deg}(k)\equiv N^{-1}\sum_{i}\delta_{k_{i},k}\sim k^{-\gamma}\) for large \(k\) with the degree \(k_{i}=\sum_{j}A_{ij}\) meaning the number of the nearest neighbors and \(\gamma\) called the degree exponent, and have the mean degree \(\overline{k}=2L/N\) finite. For a measure of wealth inequality we use the wealth variance \[\sigma^{2}(t)\equiv\frac{1}{N}\sum_{i=1}^{N}(\omega_{i}(t)-\overline{\omega})^{2},\] which is the second cumulant of the wealth distribution \(P(\omega,t)\). A single run result of the model simulation with small \(p\) readily reveals multiple phases in the time-evolution of wealth inequality. See Figs. 1(b) and 1(c). i) For \(t\lesssim 10^{4}\), individuals' wealth is made increasingly different from one another, resulting in the growth of wealth inequality. ii) Then a frozen period follows (\(10^{4}\lesssim t\lesssim 10^{7}\)), when \(\omega_{i}\)'s and \(\sigma^{2}\) hardly change with time.
_Rich_ nodes, possessing wealth larger than the average (\(\omega_{i}\geq\overline{\omega}=1\)), are surrounded by _poor_ nearest neighbors (\(\omega_{j}<1\)). With \(p=0\), this local condensate phase becomes the equilibrium, as shown in Fig. S1 in Supplemental Material (SM) [26] and also reported in Ref. [21]. iii) In the late-time regime (\(10^{7}\lesssim t\lesssim 10^{8}\)), the wealth variance resumes growing and the locally-concentrated wealth switches its host node to one of its nearest neighbors repeatedly, appearing to perform a random walk, until it encounters another local wealth and they coalesce [27]. iv) Very late (\(t\gtrsim 10^{8}\)), global condensation occurs; almost all wealth is concentrated onto a single node. Yet its host changes with time and \(\sigma^{2}\) fluctuates, though weakly.

_Langevin equation_-- To understand these observations quantitatively and proceed, we consider the Langevin equation of the model \[d\omega_{i}= -\frac{\varepsilon\,p}{\overline{k}}\sum_{j}L_{ij}\omega_{j}dt+ \varepsilon\sqrt{2p\,D^{\rm(RD)}(\omega_{i})}\,dX_{i}\] \[+\varepsilon\sqrt{2(1-p)\,D^{\rm(YS)}(\omega_{i})}\,dY_{i}, \tag{2}\] where \(L_{ij}\equiv k_{i}\delta_{ij}-A_{ij}\) is the Laplacian, \(dX_{i}\) and \(dY_{i}\) are the Wiener processes with mean \(0\) and variance \(dt\) representing the stochasticity of whether to send or receive wealth, and the coefficients \(D^{\rm(RD)}(\omega_{i})=\langle\overline{\{(\omega_{i}+\omega)/2\}^{2}}\rangle\) and \(D^{\rm(YS)}(\omega_{i})=\langle\overline{\min\{\omega_{i},\omega\}^{2}}\rangle\) are the mean square of the transferred wealth for node \(i\) under the RD and YS modes, respectively, with \(\langle\cdots\rangle\) denoting the ensemble average. The first and second terms represent the deterministic and stochastic changes by the RD-mode transfers, and the third one the stochastic change by the YS-mode transfers. The derivation of Eq. (2) is in SM.
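In the mean-field limit \(\sum_{j}L_{ij}\omega_{j}\simeq\overline{k}(\omega_{i}-1)\) used below, and with the early-time approximation \(D^{\rm(RD)}\simeq D^{\rm(YS)}\simeq 1\), Eq. (2) decouples into identical scalar SDEs. A hedged Euler-Maruyama sketch (the step size and function names are ours, not from the paper):

```python
import random

def euler_maruyama_sweep(w, eps, p, dt, rng):
    """One Euler-Maruyama step of the mean-field Eq. (2) with D_RD = D_YS = 1."""
    out = []
    for wi in w:
        drift = -eps * p * (wi - 1.0) * dt          # RD-mode restoring term
        noise = (eps * (2 * p * dt) ** 0.5 * rng.gauss(0.0, 1.0)       # RD noise
                 + eps * (2 * (1 - p) * dt) ** 0.5 * rng.gauss(0.0, 1.0))  # YS noise
        out.append(wi + drift + noise)
    return out
```

Since the two noise variances add up to \(2\varepsilon^{2}dt\) per step, the wealth variance should grow as \(2\varepsilon^{2}t\) at early times, which is Eq. (3).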
In the early-time regime, \(\omega_{i}\)'s remain close to the initial value, and thus \(D^{\rm(YS)}(\omega_{i})\simeq D^{\rm(RD)}(\omega_{i})\simeq 1\), which leads to the approximation \(d\omega_{i}\simeq\sqrt{2}\varepsilon\,dY_{i}\) and \(d\omega_{i}\simeq\sqrt{2}\varepsilon\,dX_{i}\) for small and large \(p\), respectively. Therefore, as shown in Figs. 2(a), 3(a), and S2, we find \[\sigma^{2}(t)\simeq 2\varepsilon^{2}t. \tag{3}\] This linear growth cannot continue indefinitely for finite \(N\), but \(\sigma^{2}\) eventually saturates. The equilibrium value varies with the ratio \(p\) of the RD mode as [Fig. 2(b)] \[\sigma^{2}_{\rm eq}\equiv\lim_{t\to\infty}\sigma^{2}(t)\sim\left\{\begin{aligned} & N& \text{ for }& p\ll p_{*}\equiv\frac{\varepsilon}{N},\\ &\frac{\varepsilon}{p}&\text{ for }& p\gg p_{*}.\end{aligned}\right. \tag{4}\]

Figure 1: Model and its Monte-Carlo simulation results. (a) Two modes of wealth transfers in Eq. (1). (b) Time-evolution of individual wealth in a single run of the Monte-Carlo simulation with \(p=10^{-6}\) and \(\varepsilon=0.05\) on a SF network of \(N=97\) nodes, \(L=200\) links and the degree exponent \(\gamma=2.5\). Inset: The same plots for \(1\leq t\leq 10^{6}\). (c) Time-evolution of wealth variance from the same simulation. Three insets represent a part of the network at \(t=10^{2}\), \(10^{5}\), and \(10^{10}\), respectively, with rich (poor) nodes colored black (white).

The critical ratio \(p_{*}\) and Eq. (4) can be obtained as follows. For \(p=0\) or small \(p\), the model is similar to the original YS model and almost all wealth is concentrated in a single node in the long-time limit [16], yielding \(\sigma_{\rm eq}^{2}\simeq N\). For \(p\) relatively large, let us use the approximation \(\sum_{j}L_{ij}\omega_{j}\simeq\overline{k}(\omega_{i}-1)\), which is the mean-field approach that becomes exact in the complete graph.
In equilibrium, the fluctuation driven by the YS-mode transfers, \((\delta\omega)_{\rm YS}\sim\varepsilon\sqrt{2t_{\rm eq}D^{\rm(YS)}}\), is balanced by the redistribution \((\delta\omega)_{\rm RD}\sim\varepsilon p\,(\delta\omega)_{\rm YS}\,t_{\rm eq}\) in a time interval \(t_{\rm eq}\), which allows us to estimate \(t_{\rm eq}\sim\frac{1}{\varepsilon p}\) and \(\sigma_{\rm eq}^{2}\sim(\delta\omega)_{\rm YS}^{2}\sim(\delta\omega)_{\rm RD} ^{2}\sim\frac{\varepsilon}{p}\). It is the fluid phase of wealth. The two values of \(\sigma_{\rm eq}^{2}\) become comparable at the threshold \(p_{*}\equiv\frac{\varepsilon}{N}\). Simulations on the complete graphs support Eq. (4) [Fig. 2(b)]. The equilibration time \(t_{\rm eq}\), when \(\sigma^{2}(t)\) following Eq. (3) reaches \(\sigma_{\rm eq}^{2}\), is given by \(t_{\rm eq}\sim\frac{N}{\varepsilon^{2}}\) for \(p\ll\frac{\varepsilon}{N}\), and \(t_{\rm eq}\sim\frac{1}{\varepsilon p}\) for \(p\gg\frac{\varepsilon}{N}\) [Fig. S2].

The wealth distribution \(P(\omega,t)\) is available, though partially, in the mean-field approach. With a low ratio of the RD-mode transfers (\(p\ll p_{*}\)), the Fokker-Planck (FP) equation is approximated as \(\partial P/\partial t\simeq\varepsilon^{2}(\partial^{2}/\partial\omega^{2}) \{D^{\rm(YS)}(\omega)P\}\). Recalling \(D^{\rm(YS)}(\omega)\simeq\omega^{2}\) for \(\omega\ll 1\) and \(D^{\rm(YS)}(\omega)\simeq 1\) for \(\omega\gg 1\) in the early-time regime, we obtain [26] \[P(\omega,t)\simeq\left\{\begin{array}{ll}\frac{e^{-\frac{\varepsilon^{2}t}{4}}}{\sqrt{4\pi\varepsilon^{2}t}\,\omega^{3/2}}\exp\left[-\frac{(\log \omega)^{2}}{4\varepsilon^{2}t}\right]&\mbox{for $\omega\ll 1$,}\\ \frac{1}{\sqrt{4\pi\varepsilon^{2}t}}e^{-\frac{(\omega-1)^{2}}{4 \varepsilon^{2}t}}&\mbox{for $\omega\gg 1$,}\end{array}\right. \tag{5}\] in agreement with the simulation results [Fig. 2(b)].
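The small-\(\omega\) branch of Eq. (5) is the exact solution of \(\partial P/\partial t=\varepsilon^{2}\partial_{\omega}^{2}(\omega^{2}P)\) started from \(\delta(\omega-1)\), and in particular it is normalized to one on its own. A numerical sketch of this check (reading the prefactor as \(e^{-\varepsilon^{2}t/4}\) and substituting \(x=\log\omega\); the quadrature details are ours):

```python
import math

def p_small_omega(omega, eps, t):
    """Small-omega branch of Eq. (5): lognormal-type solution of the FP
    equation with D_YS(omega) = omega^2, started from delta(omega - 1)."""
    D = eps * eps
    return (math.exp(-D * t / 4.0)
            / (math.sqrt(4 * math.pi * D * t) * omega ** 1.5)
            * math.exp(-math.log(omega) ** 2 / (4 * D * t)))

def normalization(eps, t, n=20001):
    """Trapezoidal integral of P over omega, on the grid x = log(omega)."""
    D = eps * eps
    width = 20.0 * math.sqrt(2 * D * t)          # +-20 standard deviations in x
    xs = [-width + 2 * width * k / (n - 1) for k in range(n)]
    dx = xs[1] - xs[0]
    # P(omega) d(omega) = P(e^x) e^x dx
    ys = [p_small_omega(math.exp(x), eps, t) * math.exp(x) for x in xs]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
```

In the variable \(x\) the integrand completes the square to a Gaussian centered at \(x=-\varepsilon^{2}t\), so the integral equals one exactly.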
Note that the width of these distributions is commonly given by \(\langle\omega^{2}\rangle-1\simeq 2\varepsilon^{2}t\) for \(\varepsilon^{2}t\ll 1\), in agreement with Eq. (3). In the long-time limit, a node occupies almost all wealth and the rest shows a power-law distribution \(P(\omega,t)\sim\frac{1}{\varepsilon^{2}t\,\omega}\) [Fig. S3], as obtained by solving the Boltzmann equation [16; 26]. With \(p\gg p_{*}\), one can approximate Eq. (2) as \(d\omega\simeq-\varepsilon p(\omega-1)dt+\varepsilon\sqrt{2(1-p)}\,\omega\,dY\) for small \(\omega\). This Langevin equation has been studied as a model for power-law wealth distributions [8] and also for the stationary state of the YS model with redistribution [16; 22]; the stationary-state solution to the corresponding FP equation is the inverse gamma distribution [26] \[P_{\rm eq}(\omega)\simeq\frac{\mu^{\mu+1}}{\Gamma(\mu+1)}\omega^{-2-\mu}e^{- \frac{\mu}{\omega}}, \tag{7}\] where \(\mu\equiv\frac{p}{\varepsilon(1-p)}\simeq\frac{p}{\varepsilon}\) and the width is \(\langle\omega^{2}\rangle-1=\frac{1}{\mu-1}\simeq\frac{\varepsilon}{p}\) for large \(\mu\). Note that it is a power-law \(P_{\rm eq}(\omega)\sim\omega^{-2-\mu}\) for large \(\omega\) [8], while Eq. (7) here describes only the small-\(\omega\) behavior of our model.

_Local condensate phase_-- For sparse networks, the early- and stationary-state behaviors remain similar to those on the complete graphs given in Eqs. (3) and (4). See Fig. 3(a). However, in the intermediate-time regime \(t_{\rm lc}\lesssim t\lesssim t_{\rm rel}\), the wealth variance ceases to grow but remains fixed. Setting the ratio of RD transfers to be small, we here investigate the novel phases emerging on sparse networks. The fixed value of wealth variance in this local condensate phase is related to the sparse connection of the underlying network.
Each rich node \(i\) is found to have taken almost all the wealth of its nearest neighbors, possessing \(\omega_{i;\rm rich}\simeq k_{i}+1\) including its own as well [Fig. 3(b)]. Hub nodes thus possess more as long as they are rich. Yet the probability of a node \(i\) to be rich decreases with its degree as \(\rho_{i;\rm rich}\simeq\frac{1}{k_{i}+1}\) [Fig. 3(c)], for a node and its neighbor(s) are equally likely to be rich under the YS-mode transfer. Introducing the wealth \(\omega_{\rm rich}(k)\) of a rich node of degree \(k\) and the probability \(\rho_{\rm rich}(k)\) of a node of degree \(k\) to be rich, and approximating the wealth of a poor node to be zero, one can represent the wealth variance as \[\sigma^{2}\simeq\sum_{k}P_{\rm deg}(k)\left[\rho_{\rm rich}(k)\left\{\omega_{ \rm rich}(k)-1\right\}^{2}+1-\rho_{\rm rich}(k)\right]. \tag{8}\] Using \(\rho_{\rm rich}(k)\simeq 1/(k+1)\) and \(\omega_{\rm rich}\simeq k+1\), we obtain \[\sigma^{2}\simeq\overline{k}, \tag{9}\] which is supported by Figs. 3(e) and (f). Recalling the initial growth of the wealth variance in Eq. (3), we find local condensation begins at \(t_{\rm lc}\sim\frac{\overline{k}}{\varepsilon^{2}}\), which can be rationalized by considering that it takes time \(\varepsilon^{-2}\) for a node to take the wealth of a neighbor by the YS-mode transfers, and it has \(\overline{k}\) such neighbors on average. Local condensation is terminated at \(t_{\rm rel}\), when the RD-mode transfers begin to redistribute significantly the wealth of the local rich nodes to their poor neighbors.

Figure 2: Wealth variance and distribution on the complete graphs. (a) Wealth variance for \(\varepsilon=0.1\) and different \(p\)'s on the complete graph with \(N=10^{5}\), averaged over 20 realizations. (b) Data collapse of the rescaled wealth variance in the equilibrium state for different \(p\)'s, \(\varepsilon\)'s and \(N\)'s. The dashed line has slope \(-1\). Inset: Wealth distributions for \(\varepsilon=0.00625\), \(N=10^{5}\), and different \(p\)'s at time \(t=10^{3}\). Lines represent the analytic predictions. 'Gaussian' denotes \(P(\omega,t)=\frac{1}{\sqrt{2\pi\sigma^{2}(t)}}e^{-\frac{(\omega-1)^{2}}{2 \sigma^{2}(t)}}\).

_Relaxation phase--_ On time scales longer than \(t_{\rm rel}\), the RD transfers occur frequently enough to redistribute the locally-concentrated wealth to a neighboring node. Also, one of the two local wealths on neighboring nodes can absorb the other [27]. Such coalescence drives wealth to a single or a few nodes until the stationary state is reached, and we call this period the _relaxation_ phase (\(t_{\rm rel}\lesssim t\lesssim t_{\rm eq}\)). The wealth variance in this phase exhibits interesting scaling behaviors [Fig. 3]. The dynamics of wealth can be understood by studying the coalescing random walk (CRW) on complex networks [28; 29], which allows us to evaluate \(\rho_{\rm rich}(k)\) and \(\omega_{\rm rich}(k)\), varying with time, and use them in Eq. (8) to obtain the wealth variance. In the CRW suited for our model, the following occurs for every link with rate \(\lambda\): i) if the link is occupied by a walker (local wealth) at one end node and empty at the other end, the walker moves to the latter, ii) if both end nodes are occupied by walkers, they coalesce, leaving one walker at either end node, and iii) if both ends are empty, nothing happens over the link. One can show that the time-decrease of the walker density is proportional to the square of the density, and obtain the solution \(\rho\simeq\frac{1}{\lambda\overline{k}t}\) for large \(t\). The details are in SM. The jump rate \(\lambda\) is governed by the rate of RD transfers and thus given by \(\lambda_{\rm gYS}\sim\varepsilon p\). Therefore the fraction of rich nodes \(\rho_{\rm rich}\) in our model is \[\rho_{\rm rich}\simeq\frac{1}{\varepsilon p\overline{k}t} \tag{10}\] for large \(t\). It is confirmed by simulations [Fig. 3(d)].
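Two quick numerical checks of this section's claims (the test values are arbitrary): first, inserting \(\rho_{\rm rich}(k)=1/(k+1)\) and \(\omega_{\rm rich}(k)=k+1\) into Eq. (8) gives \(k^{2}/(k+1)+k/(k+1)=k\) per degree class, so \(\sigma^{2}=\overline{k}\) holds exactly for any degree sequence, which is Eq. (9); second, the mean-field CRW rate equation \(\dot{\rho}=-\lambda\overline{k}\rho^{2}\) has the solution \(\rho(t)=\rho_{0}/(1+\rho_{0}\lambda\overline{k}t)\to 1/(\lambda\overline{k}t)\), which underlies Eq. (10).

```python
def sigma2_local_condensate(degrees):
    """Eq. (8) evaluated with rho_rich(k) = 1/(k+1), omega_rich(k) = k+1
    on an explicit degree sequence; reduces to the mean degree, Eq. (9)."""
    total = 0.0
    for k in degrees:
        rho = 1.0 / (k + 1)
        total += rho * k ** 2 + (1.0 - rho)   # per-node term of Eq. (8)
    return total / len(degrees)

def crw_density(rho0, lam, kbar, t_max, dt=1e-3):
    """Explicit Euler integration of d(rho)/dt = -lam * kbar * rho^2
    (mean-field coalescing random walk); returns rho(t_max)."""
    rho = rho0
    for _ in range(int(round(t_max / dt))):
        rho -= lam * kbar * rho * rho * dt
    return rho
```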
As random movement and coalescence of local wealth proceed, the probability of a node to be rich, \(\rho_{\rm rich}(k)\), loses its dependence on the degree \(k\), in contrast to the local condensate phase [Fig. 3(c)]. The wealth of a node remains proportional to its degree [Fig. 3(b)], with the proportionality coefficient increasing as the number of rich nodes decreases. Assuming \(\omega_{\rm rich}(k)\simeq c\,k\) with \(c\) a coefficient and \(\rho_{\rm rich}(k)\simeq\rho_{\rm rich}\) in Eq. (10), one can use the unit mean wealth condition \(\overline{\omega}\simeq\sum_{k}P_{\rm deg}(k)\rho_{\rm rich}(k)\,c\,k\simeq \rho_{\rm rich}\,c\,\overline{k}=1\) to obtain \(c\simeq\frac{1}{\rho_{\rm rich}\overline{k}}\). Using these results in Eq. (8), we find \[\sigma^{2}(t)\simeq\rho_{\rm rich}\,c^{2}\,\overline{k^{2}}\sim\left\{\begin{array} []{ll}\varepsilon p\overline{k}N^{\frac{3-\gamma}{\gamma-1}}t&\mbox{for $2<\gamma<3$},\\ \varepsilon p\overline{k}t&\mbox{for $\gamma>3$},\end{array}\right. \tag{11}\] where we used \(\frac{\overline{k^{n}}}{\overline{k}^{n}}\sim\max\{1,N^{\frac{n-\gamma+1}{\gamma-1}}\}\) for the static-model SF networks [25]. The data collapses of the scaled plots for different \(\varepsilon,p,\overline{k}\) and \(N\) in Figs. 3(e) and 3(f) confirm these distinct scaling behaviors between \(2<\gamma<3\) and \(\gamma>3\). The influence of the network structure is remarkable: a large and heterogeneous network (small \(\gamma\)) facilitates wealth condensation by large wealth inequality and small \(t_{\rm rel}\) and \(t_{\rm eq}\) [26]. Moreover, the correlation of wealth and node degree leads the wealth distribution to share similar asymptotic behaviors with the degree distribution, \(P(\omega)\sim\omega^{-\gamma}\) for large \(\omega\), while it behaves as \(P(\omega)\sim\omega^{-1}\) for small \(\omega\) [26]. A large ratio \(p\) of the RD transfers accelerates the growth of wealth inequality in the relaxation phase while decreasing the stationary-state wealth variance.
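The degree-moment ratio \(\overline{k^{2}}/\overline{k}^{2}\) entering Eq. (11) can be illustrated with a truncated power-law degree distribution \(P_{\rm deg}(k)\propto k^{-\gamma}\) for \(1\le k\le k_{\max}\), with the natural cutoff \(k_{\max}\sim N^{1/(\gamma-1)}\) (this cutoff choice is our assumption for the static model): the ratio grows like \(N^{(3-\gamma)/(\gamma-1)}\) for \(2<\gamma<3\) and stays \(O(1)\) for \(\gamma>3\).

```python
def moment_ratio(gamma, N):
    """Second moment over squared mean of P(k) ~ k^-gamma for k = 1..kmax,
    with a natural cutoff kmax = N^{1/(gamma-1)}."""
    kmax = max(2, int(round(N ** (1.0 / (gamma - 1)))))
    ks = range(1, kmax + 1)
    weights = [k ** (-gamma) for k in ks]
    Z = sum(weights)
    k1 = sum(k * wt for k, wt in zip(ks, weights)) / Z
    k2 = sum(k * k * wt for k, wt in zip(ks, weights)) / Z
    return k2 / (k1 * k1)
```

For \(\gamma=2.5\) the predicted growth between \(N=10^{4}\) and \(N=10^{6}\) is \((10^{2})^{(3-\gamma)/(\gamma-1)}=10^{2/3}\approx 4.6\).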
Figure 3: Scaling of wealth variance on SF networks. (a) Wealth variance for \(\varepsilon=0.1\) and different \(p\)'s on SF networks with \(N=887\pm 13\), \(\overline{k}=4.5\), and \(\gamma=2.5\), averaged over 100 realizations. Inset: data collapse of the rescaled wealth variance in the equilibrium state. (b) Wealth of rich nodes versus degree plus one at different times for \(p=2.5\times 10^{-6}\) and \(\varepsilon=0.05\) on SF networks with \(N=27989\pm 27\), \(\overline{k}=4.5\), and \(\gamma=2.5\). (c) Plots of the probability \(\rho_{\rm rich}(k)\) of a node of degree \(k\) to be rich. (d) Time-decay of the rich-node density \(\rho_{\rm rich}\) in the generalized YS model compared with the random-walker density \(\rho\) in the CRW for \((p_{1},\varepsilon_{1})=(2.0\times 10^{-6},0.1)\) and \((p_{2},\varepsilon_{2})=(2.5\times 10^{-6},0.05)\) on SF networks with different \(N\)'s and \(\overline{k}\)'s [26]. The dashed line is Eq. (10). (e) Data collapse of the rescaled wealth variance as predicted in Eq. (11) for \(\{(p_{1},\varepsilon_{1}),(p_{2},\varepsilon_{2})\}\) as in (d) and different \(N\)'s, \(\overline{k}\)'s and \(\gamma=2.5\). Inset: Wealth variance versus \(N\) at fixed time \(t=1.25/\varepsilon p\) with \(p=2.5\times 10^{-6}\) and \(\varepsilon=0.05\) for two different \(\gamma\)'s. The dashed line has slope \(1/3\). (f) The same plots as in (e) for \(\gamma=10\). The dashed lines in (e) and (f) have slope \(1\).

The increase of the fraction \(\varepsilon\) not only speeds up the whole process, as shown in Eq. (2), but also increases stochasticity and thereby enhances the stationary-state wealth variance, as shown in Eq. (4).

_Discussions--_ In this study, we have characterized the scaling properties of wealth inequality, represented by the wealth variance, in both its dynamics and its steady state. In the process, we have identified the critical value of the redistribution ratio, which is inversely proportional to the population size.
Furthermore, the evolution of wealth inequality in a sparsely-connected population undergoes multiple phases, revealing the effects of the network structure: if heterogeneously connected, a large population develops inequality at a greater rate than a small one, while if homogeneously connected, the population size does not affect the speed much. This is a direct consequence of the correlation between wealth and node degree. Our study thus demonstrates that the relevance of the network structure to the wealth distribution in the non-stationary state should be considered in analyzing real-world wealth inequality. The real-world income distributions, which we obtain by using the empirical data in Refs. [5; 6], decay slowly for small income and then fast for large income, with the latter characterized by a power law with the exponent between 2 and 3 for the past 200 years [26]. Similar features are seen also in the relaxation phase of our model: \(P(\omega)\sim\omega^{-1}\) for small \(\omega\) crosses over to \(\sim\omega^{-\gamma}\) for large \(\omega\), demanding further investigation with more data. Also the influence of the dimensionality of the underlying network on the speed of wealth condensation deserves further investigation.

###### Acknowledgements.

We thank Su-Chan Park for helpful discussion. This work was funded by KIAS Individual Grants (CG079901 (D.-S.L) and CG084501 (H.G.L)) at Korea Institute for Advanced Study. We are grateful to the Center for Advanced Computation in KIAS for providing computing resources.
2308.05185
On the Pauli group on 2-qubits in dynamical systems with pseudofermions
The group of matrices $P_1$ of Pauli is a finite 2-group of order 16 and plays a fundamental role in quantum information theory, since it is related to the quantum information on the 1-qubit. Here we show that both $P_1$ and the Pauli 2-group $P_2$ of order 64 on 2-qubits, other than in quantum computing, can also appear in dynamical systems which are described by non self-adjoint Hamiltonians. This will allow us to represent $P_1$ and $P_2$ in terms of pseudofermionic operators.
Fabio Bagarello, Yanga Bavuma, Francesco G. Russo
2023-08-09T18:42:20Z
http://arxiv.org/abs/2308.05185v1
# On the Pauli group on 2-Qubits

###### Abstract.

The group of matrices \(P_{1}\) of Pauli is a finite 2-group of order 16 and plays a fundamental role in quantum information theory, since it is related to the quantum information on the 1-qubit. Here we show that both \(P_{1}\) and the Pauli 2-group \(P_{2}\) of order 64 on 2-qubits, other than in quantum computing, can also appear in dynamical systems which are described by non self-adjoint Hamiltonians. This will allow us to represent \(P_{1}\) and \(P_{2}\) in terms of pseudofermionic operators.

Key words and phrases: Pauli group; PT symmetries; LC circuits; fermionic operators; Hilbert spaces

_Mathematics Subject Classification 2020:_ Primary 81R05, 22E10; Secondary 22E70, 81R30, 81Q12

## 1. Introduction and statement of the results

Noether's Theorem [21, Equations (13.148) and (13.158)] is a powerful tool to investigate the governing equations in dynamical systems possessing symmetries. The role of groups of symmetries has been largely investigated in connection with Noether's Theorem, and its applications in mathematical physics are numerous. In quantum information theory, one is often interested in studying quantum codes, and in quantum computing one is interested in preventing or correcting errors that may affect dynamical systems involving qubits. This motivates several authors to work with the Pauli group \(P_{n}\) on \(n\) qubits. In particular \(P_{1}\) denotes the Pauli group on 1 qubit, generated by the matrices \[I=\left(\begin{array}{cc}1&0\\ 0&1\end{array}\right),\ \ X=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\ \ Y=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\ \ Z=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right), \tag{1.1}\] which satisfy the well known identities \[X=iZY,\ \ Y=iXZ,\ \ Z=iYX,\ \ X^{2}=Y^{2}=Z^{2}=I.
\tag{1.2}\] These are the well known Pauli matrices [29], which form the nonabelian group of order 16 \[P_{1}=\{\pm I,\pm iI,\pm X,\pm iX,\pm Y,\pm iY,\pm Z,\pm iZ\}. \tag{1.3}\] When we deal with quantum information systems based on more than 1 qubit, one generalizes the notion of Pauli group, considering larger 2-groups \(P_{n}\) of order \(4^{n+1}\) for \(n\)-qubits with \(n\geq 1\) big enough. These groups, known as large Pauli groups, are prominent in the recent literature on dynamical systems [23, 28]. Here we focus on the case \(n=2\), that is, on the Pauli group on 2-qubits, defined by \[P_{2}=\{M\otimes N\mid M,N\in P_{1}\}, \tag{1.4}\] where the symbol \(\otimes\) denotes the Kronecker product among matrices. We will realize (1.4) via a new family of examples which come from the area of electromagnetism, and specifically from LC quantum circuits. In this spirit we are going to illustrate the notion of pseudofermionic ladder operators, according to [6]; in fact we may reconstruct Hamiltonians from ladder operators as originally indicated by Dirac and Dieudonne [18]. Among other things, we mention some recent dynamical systems whose Hamiltonian is obtained with a construction of the same type, see [5, 10]. We shall mention that [19] presents the construction of a pair of coupled \(LC\) electronic circuits, one with amplification and the other with an equivalent attenuation. The interest behind these circuits, besides the fact that they produce an interesting set of matrices in the analysis of (1.4), is that they can be used to construct an experimental setting which is connected with a PT symmetric system, i.e. with a quantum system whose Hamiltonian satisfies certain transformation rules with respect to the parity (\(P\)) and the time reversal (\(T\)) operators, [11]. These systems are increasingly studied, both by physicists and by mathematicians, for their very peculiar properties, [11, 12].
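The identities (1.2), the order \(|P_{1}|=16\) in (1.3), and \(|P_{2}|=64\) from (1.4) are all easy to verify by machine. A pure-Python sketch (matrices encoded as nested tuples; the helper names are ours):

```python
def matmul(A, B):
    """Matrix product of two nested-tuple matrices."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(len(B)))
                       for j in range(len(B[0]))) for i in range(len(A)))

def scal(c, A):
    """Scalar multiple c * A."""
    return tuple(tuple(c * x for x in row) for row in A)

def kron(A, B):
    """Kronecker product, as in the definition (1.4) of P_2."""
    return tuple(tuple(A[i][j] * B[k][l]
                       for j in range(len(A[0])) for l in range(len(B[0])))
                 for i in range(len(A)) for k in range(len(B)))

I = ((1, 0), (0, 1))
X = ((0, 1), (1, 0))
Y = ((0, -1j), (1j, 0))
Z = ((1, 0), (0, -1))

# P_1 = phases {+-1, +-i} times {I, X, Y, Z}; P_2 via Kronecker products.
P1 = {scal(c, M) for c in (1, -1, 1j, -1j) for M in (I, X, Y, Z)}
P2 = {kron(M, N) for M in P1 for N in P1}
```

Note that the 256 pairs \((M,N)\) collapse to 64 products because \(M\otimes N=(cM)\otimes(c^{-1}N)\) for each of the four phases \(c\).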
In fact there exists a dynamical system \(\mathcal{S}\) in [19], realized by coupled LC electronic circuits, possessing PT symmetries, and the dynamics of \(\mathcal{S}\) is governed by the differential equation \[i\frac{\mathrm{d}\Psi(t)}{\mathrm{d}t}=H_{\mathcal{S}}\Psi(t), \tag{1.5}\] where \(H_{\mathcal{S}}=iL_{\mathcal{S}}\) is the (formal) Hamiltonian of \(\mathcal{S}\), the symbols \(\alpha,\mu,\gamma\) are parameters defining the circuits, \[L_{\mathcal{S}}=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&1\\ -\alpha&\mu\alpha&\gamma&0\\ \mu\alpha&-\alpha&0&-\gamma\end{array}\right)\qquad\text{ and }\qquad\Psi(t)=\left( \begin{array}{c}Q_{1}(t)\\ Q_{2}(t)\\ \dot{Q}_{1}(t)\\ \dot{Q}_{2}(t)\end{array}\right) \tag{1.6}\] is the vector of charges \(Q_{1}(t)\) and \(Q_{2}(t)\) on the capacitors of the two circuits with corresponding derivatives \(\dot{Q}_{1}(t)\) and \(\dot{Q}_{2}(t)\). It might be useful to stress that (1.5) looks like a Schrodinger equation, but this is just a formal identification. In fact, the four components of \(\Psi(t)\) are not all independent, as they should be. Our main result is listed below.

**Theorem 1.1**.: _The dynamical system \(\mathcal{S}\) above, realized by coupled LC electronic circuits, satisfies the following properties:_

1. \(H_{\mathcal{S}}\) _can be decomposed in the linear combination of 12 bounded operators_ \(X_{k}\) _for_ \(k=1,2,\ldots,12\)_. Moreover, the operators which commute with all the_ \(X_{k}\)_'s are only those of diagonal type;_

2. _The operators_ \(X_{k}\) _can be identified with matrices in_ \(\mathrm{SL}_{4}(\mathbb{C})\) _and it is possible to select six operators among the_ \(X_{k}\) _which are generators of the Pauli group_ \(P_{2}\) _on 2-qubits and are simultaneously realized by pseudofermionic operators;_

3.
_There exist two subgroups_ \(U\) _and_ \(V\) _of_ \(P_{2}\)_, generated by sets of pseudofermionic operators_ \(\Gamma_{\mu}\) _and_ \(\Gamma_{\nu}\) _respectively, such that_ \(P_{2}=UV\) _and the derived subgroup_ \([U,V]\) _is trivial._

While Section 2 discusses some general parts of the theory of pseudohermitian operators in dynamical systems and their relevance in Quantum Mechanics, Section 3 focuses specifically on the theory of pseudofermionic operators and on their recent developments. The main proofs can be found in Section 4, while Section 5 describes further investigations which might be possible for larger Pauli groups.

## 2. Some folklore on pseudohermitian physical systems

Self-adjoint and non-self-adjoint operators are adequate to describe many properties that are useful for physical systems in quantum mechanics, see [21]. Note also that the eigenvalues of self-adjoint operators \(\mathbf{a}\), which are significant in many quantum mathematical models, turn out to be real; \(\langle\psi,\mathbf{a}\psi\rangle\) is the expectation value for the measurements of the observable \(\mathbf{a}\) in the quantum state \(\psi\), and its eigenvalue \(\lambda\) represents one of the possible values for this measurement. Wave functions can evolve in time. In fact one of the main assumptions in quantum mechanics is that there exists an operator \(\mathbf{H}\) on the Hilbert space \(\mathcal{H}\), called the _Hamiltonian operator_ for the system, producing the well known Schrodinger Equation:

**Proposition 2.1** (See [22], Axiom 5, Claim 3.17).: _The time-evolution of the wave function \(\psi\) in a quantum system is governed by the Schrodinger Equation,_ \[\frac{d\psi}{dt}=\frac{1}{i\hbar}\boldsymbol{H}\psi, \tag{2.1}\] _where \(\boldsymbol{H}\) is the Hamiltonian and \(\hbar\) is the constant of Planck. In particular, if \(\boldsymbol{H}\) is time independent, then (2.1) can be formally solved by setting_ \[\psi(t)=e^{-it\boldsymbol{H}/\hbar}\psi_{0}.
\tag{2.2}\] _where \(\psi_{0}\) is the initial condition._

Heisenberg had a different view of the dynamics of a quantum system. He thought of the operators (quantum observables) as evolving in time instead of the quantum states (vectors in the Hilbert space). In his interpretation each operator \(\mathbf{a}\) evolves in time according to the operator-valued differential equation \[\frac{d\mathbf{a}(t)}{dt}=\frac{1}{i\hbar}\left[\mathbf{a}(t),\mathbf{H}\right], \tag{2.3}\] where \(\mathbf{H}\) is the Hamiltonian operator of the system (time-independent, here, as in the previous proposition), and where \[\left[\mathbf{a},\mathbf{b}\right]=\mathbf{a}\mathbf{b}-\mathbf{b}\mathbf{a} \tag{2.4}\] is _the commutator between the operators_ \(\mathbf{a}\) and \(\mathbf{b}\) according to [22, Definition 3.20]. Note that since \(\mathbf{H}\) commutes with itself, the operator \(\mathbf{H}\) remains constant in time, even in this interpretation. This turns out to have the same physical meaning as in Schrodinger's interpretation.

An increasing number of physicists and mathematicians have recently become interested in situations in which the Hamiltonian of the mathematical model under consideration is not necessarily self-adjoint. Examples of non-self-adjoint Hamiltonian operators have been studied by Bender [11, 13, 14, 15], Mostafazadeh [26, 27] and other authors [2, 3, 4], introducing, for instance, generalized versions of the harmonic oscillator.

## 3. Elementary theory of pseudofermionic and pseudobosonic operators

In particle physics, elementary particles and composite particles are essentially broken up into two (non-overlapping) classes: _bosons_ and _fermions_. Roughly speaking, fermions are particles that are associated with matter, and _pseudofermions_ are generalizations of fermions. The bosons are usually used to represent the interactions (in terms of fields), while _pseudobosons_ generalize bosons. In this paper, we will only focus on pseudofermions.
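As a concrete instance of Heisenberg's equation (2.3) (with \(\hbar=1\)), take \(\mathbf{H}=Z\) and \(\mathbf{a}(0)=X\) from (1.1): the evolved operator \(\mathbf{a}(t)=e^{i\mathbf{H}t}\mathbf{a}e^{-i\mathbf{H}t}\) has off-diagonal entries \(e^{\pm 2it}\), and a finite-difference derivative can be compared against \(\frac{1}{i}[\mathbf{a}(t),\mathbf{H}]\). The example system is our own choice, used only as a sanity check:

```python
import cmath

def a_t(t):
    """Heisenberg-picture evolution of X under H = Z: e^{iZt} X e^{-iZt}.
    Since Z is diagonal, the exponentials are diag(e^{it}, e^{-it})."""
    return ((0, cmath.exp(2j * t)), (cmath.exp(-2j * t), 0))

def rhs(t):
    """(1/i)[a(t), Z], computed entrywise: [a(t), Z] = ((0, -2u), (2v, 0))
    with u = e^{2it}, v = e^{-2it}."""
    u, v = cmath.exp(2j * t), cmath.exp(-2j * t)
    return ((0, 2j * u), (-2j * v, 0))
```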
The reader can find more information on the theory of pseudobosons and pseudofermions in [6, 9]. Typically, on a two dimensional Hilbert space \(\mathcal{H}=\mathbb{C}^{2}\) we can define _lowering_ and _raising operators_, which lower or raise the eigenvalues (respectively) associated with an eigenstate by acting on the state itself. These come as a pair, say \(\mathbf{c}\) and \(\mathbf{c}^{*}\) respectively. The lowering operator \(\mathbf{c}\) lowers the eigenvalue of a given state by acting on it, and the raising operator \(\mathbf{c}^{*}\), which is the adjoint of the lowering operator, raises the eigenvalue of a given state by acting on it.

_Remark 3.1_.: The _fermionic operators_ satisfy the _canonical anticommutation relations_ (CAR) \[\left\{\mathbf{c},\mathbf{c}^{*}\right\}:=\mathbf{c}\mathbf{c}^{*}+\mathbf{c} ^{*}\mathbf{c}=\mathbb{I},\ \ \left\{\mathbf{c},\mathbf{c}\right\}=\left\{\mathbf{c}^{*},\mathbf{c}^{*}\right\} =0. \tag{3.1}\] However _pseudofermionic operators_ satisfy more general anticommutation rules: \[\{\mathbf{a},\mathbf{b}\}=\mathbb{I},\ \ \{\mathbf{a},\mathbf{a}\}=\{\mathbf{b}, \mathbf{b}\}=0, \tag{3.2}\] where \(\mathbf{b}\neq\mathbf{a}^{*}\) a priori. Note that (3.1) is motivated by Pauli's Principle. Note that fermionic operators are bounded.

A nonzero vector \(\varphi_{0}\in\mathcal{H}\) such that \(\mathbf{a}\varphi_{0}=0\) surely exists, as well as a nonzero vector \(\Psi_{0}\in\mathcal{H}\) such that \(\mathbf{b}^{*}\Psi_{0}=0\). This is because \(\mathbf{a}\) and \(\mathbf{b}^{*}\) have nontrivial kernels. We are going to illustrate this aspect better, providing a _minimal_ framework of functional analysis in the remaining part of the present section.
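The CAR (3.1) can be checked at once for the standard fermionic ladder operator on \(\mathbb{C}^{2}\), \(\mathbf{c}=\left(\begin{smallmatrix}0&1\\ 0&0\end{smallmatrix}\right)\) (a textbook choice, used here only as a sanity check):

```python
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def add(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def adj(A):
    """Adjoint (conjugate transpose) of a 2x2 nested-tuple matrix."""
    return tuple(tuple(A[j][i].conjugate() for j in range(2)) for i in range(2))

c = ((0, 1), (0, 0))    # lowering operator on C^2
c_star = adj(c)         # raising operator, the adjoint of c
ident = ((1, 0), (0, 1))
zero = ((0, 0), (0, 0))
```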
**Definition 3.2**.: _A set \(\mathcal{E}=\{e_{n}\in\mathcal{H},n\geq 0\}\) is a Schauder basis for \(\mathcal{H}\) if any vector \(f\in\mathcal{H}\) can be written uniquely as_ \[f=\sum_{n=0}^{\infty}c_{n}(f)e_{n},\] _that is, as a (possibly infinite) linear combination of the \(e_{n}\), with \(c_{n}(f)\in\mathbb{C}\) depending only on \(f\)._

A particular type of bases is given by the _orthonormal bases_, that is, Schauder bases such that \(\langle e_{n},e_{m}\rangle=\delta_{n,m}\), where \(\delta_{n,m}\) denotes the Kronecker delta and \(n,m\) are positive integers. In this particular case, we recover \(c_{n}(f)=\langle e_{n},f\rangle\) via the scalar product. Note that the scalar product is linear in its second variable. A notion more general than that of orthonormal bases is given by the _Riesz bases_:

**Definition 3.3**.: _In \(\mathcal{H}\) the set \(\mathcal{F}=\{f_{n}\in\mathcal{H},n\geq 0\}\) is a Riesz basis if there exists a bounded operator \(T\) on \(\mathcal{H}\) with bounded inverse \(T^{-1}\) and an orthonormal basis \(\mathcal{E}=\{e_{n}\in\mathcal{H},n\geq 0\}\) such that \(f_{n}=Te_{n}\), for all \(n\geq 0\)._

A detailed analysis of operators defined by Riesz bases can be found in [8, 17], but it is clear that in general \(\langle f_{n},f_{m}\rangle\neq\delta_{n,m}\) in the context of Definition 3.3. In this case, however, one can introduce a second set \(\mathcal{G}\), which is another Riesz basis, biorthonormal to \(\mathcal{F}\), whose vectors are simply \(g_{n}=(T^{-1})^{*}e_{n}\). We now illustrate some results that are crucial in the present framework. First we define the following vectors \[\varphi_{1}:=\mathbf{b}\varphi_{0},\ \ \Psi_{1}:=\mathbf{a}^{*}\Psi_{0}, \tag{3.3}\] as well as the following non-self-adjoint operators \[\mathbf{N}:=\mathbf{b}\mathbf{a},\ \ \mathbf{N}^{*}=\mathbf{a}^{*}\mathbf{b}^{*}. \tag{3.4}\] Note that for \(n\geq 2\) the vectors \(\mathbf{b}^{n}\varphi_{0}\) and \(\mathbf{a}^{*n}\Psi_{0}\) are automatically equal to zero.
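The biorthonormal pair after Definition 3.3 can be made concrete in \(\mathcal{H}=\mathbb{C}^{2}\): with \(f_{n}=Te_{n}\) and \(g_{n}=(T^{-1})^{*}e_{n}\) one always has \(\langle f_{n},g_{m}\rangle=\langle e_{n},e_{m}\rangle=\delta_{n,m}\). A sketch with an arbitrary invertible \(T\) (our example, not from the text):

```python
def inner(u, v):
    """Hermitian inner product, linear in the second argument (as in the text)."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) for i in range(2))

def adj(M):
    return tuple(tuple(M[j][i].conjugate() for j in range(2)) for i in range(2))

T = ((2, 1), (0, 1))             # bounded and invertible on C^2
T_inv = ((0.5, -0.5), (0, 1))    # its inverse
e = [(1, 0), (0, 1)]             # orthonormal basis

f = [apply(T, v) for v in e]            # Riesz basis f_n = T e_n
g = [apply(adj(T_inv), v) for v in e]   # biorthonormal partner g_n = (T^{-1})* e_n
```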
Then the following equations are satisfied: \[\mathbf{a}\varphi_{1}=\varphi_{0},\ \ \mathbf{b}^{*}\Psi_{1}=\Psi_{0} \tag{3.5}\] \[\mathbf{N}\varphi_{n}=n\varphi_{n},\ \ \mathbf{N}^{*}\Psi_{n}=n\Psi_{n},\ \text{for}\ n=0,1. \tag{3.6}\] If the normalizations of \(\varphi_{0}\) and \(\Psi_{0}\) are chosen such that \(\langle\varphi_{0},\Psi_{0}\rangle=1\), then we have also \[\langle\varphi_{k},\Psi_{n}\rangle=\delta_{k,n}\ \text{for}\ k,n=0,1. \tag{3.7}\] Now we introduce the self-adjoint operators \(\mathbf{S}_{\varphi}\) and \(\mathbf{S}_{\Psi}\) via their action on a generic \(f\in\mathcal{H}\): \[\mathbf{S}_{\varphi}f=\sum_{n=0}^{1}\langle\varphi_{n},f\rangle\varphi_{n},\ \ \mathbf{S}_{\Psi}f=\sum_{n=0}^{1}\langle\Psi_{n},f\rangle\Psi_{n}. \tag{3.8}\] Note also that the operators \(\mathbf{S}_{\varphi}\) and \(\mathbf{S}_{\Psi}\) are bounded, strictly positive, self-adjoint, and invertible. They satisfy \[\|\mathbf{S}_{\varphi}\|\leq\|\varphi_{0}\|^{2}+\|\varphi_{1}\|^{2},\ \ \|\mathbf{S}_{\Psi}\|\leq\|\Psi_{0}\|^{2}+\|\Psi_{1}\|^{2}, \tag{3.9}\] \[\mathbf{S}_{\varphi}\Psi_{n}=\varphi_{n},\ \ \mathbf{S}_{\Psi}\varphi_{n}=\Psi_{n}, \tag{3.10}\] for \(n=0,1\), as well as \(\mathbf{S}_{\varphi}=\mathbf{S}_{\Psi}^{-1}\) and the following intertwining relations \[\mathbf{S}_{\Psi}\mathbf{N}=\mathbf{N}^{*}\mathbf{S}_{\Psi},\ \ \mathbf{S}_{\varphi} \mathbf{N}^{*}=\mathbf{N}\mathbf{S}_{\varphi}. \tag{3.11}\] The vectors of \(\mathcal{F}_{\varphi}=\{\varphi_{0},\varphi_{1}\}\) and \(\mathcal{F}_{\Psi}=\{\Psi_{0},\Psi_{1}\}\) are biorthogonal, linearly independent in a two dimensional complex Hilbert space so that \(\mathcal{F}_{\varphi}\) is a Schauder basis for \(\mathcal{H}\). The same argument applies to \(\mathcal{F}_{\Psi}\) and in fact it turns out that both these sets are Riesz bases for \(\mathcal{H}\). 
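With an explicit biorthonormal pair \(\{\varphi_{n}\}\), \(\{\Psi_{n}\}\) in \(\mathbb{C}^{2}\), the operators in (3.8) and the relations \(\mathbf{S}_{\varphi}\Psi_{n}=\varphi_{n}\) and \(\mathbf{S}_{\varphi}=\mathbf{S}_{\Psi}^{-1}\) can be verified directly. A sketch, with a pair built from the invertible matrix \(T=\left(\begin{smallmatrix}2&1\\ 0&1\end{smallmatrix}\right)\) (our example values):

```python
def inner(u, v):
    """Inner product, linear in the second argument."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

phi = [(2, 0), (1, 1)]         # phi_n = T e_n
psi = [(0.5, -0.5), (0, 1)]    # psi_n = (T^{-1})* e_n, biorthonormal to phi

def S(basis, fvec):
    """S_basis f = sum_n <basis_n, f> basis_n, as in (3.8)."""
    out = (0.0, 0.0)
    for bn in basis:
        coeff = inner(bn, fvec)
        out = tuple(o + coeff * b for o, b in zip(out, bn))
    return out
```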
A connection between pseudofermions and fermions is reported below: **Proposition 3.4** (See [6], Theorem 3.5.1).: _Let \(\mathbf{c}\) and \(\mathbf{T}=\mathbf{T}^{*}\) be two operators on \(\mathcal{H}\) such that (3.1) are satisfied and in addition \(\mathbf{T}\) is positive. Then the operators \(\mathbf{a}=\mathbf{T}\mathbf{c}\mathbf{T}^{-1}\) and \(\mathbf{b}=\mathbf{T}\mathbf{c}^{*}\mathbf{T}^{-1}\) satisfy (3.2). Conversely, given two operators \(\mathbf{a}\) and \(\mathbf{b}\) acting on \(\mathcal{H}\) and satisfying (3.2), it is possible to construct two operators \(\mathbf{c}\) and \(\mathbf{T}\) with the above properties._ Examples of pseudofermions (and generalizations) can be found in [1, 2, 3, 4, 6, 7]. In particular, [6, Chapter 3.5.1] shows an effective non-self-adjoint Hamiltonian of a mathematical model of a two-level atom which interacts with an electromagnetic field. Let us recall a few facts from [6, Chapter 3.5.1] in more detail; they will be useful in the proof of Theorem 1.1. Maamache and coauthors [16] described an effective non-self-adjoint Hamiltonian involved in a two-level atom interacting with an electromagnetic field. This followed an intuition of Mostafazadeh in [26, 27]. Writing (2.1), one gets \[i\dot{\Phi}(t)=H_{eff}\Phi(t),\qquad H_{eff}=\frac{1}{2}\left(\begin{array}{ cc}-i\delta&\overline{\omega}\\ \omega&i\delta\end{array}\right), \tag{3.12}\] where \(\delta\) is a real quantity, related to the decay rates for the two levels, while the complex parameter \(\omega\) describes the interaction due to radiations of the atom. In particular we may write \[\omega=|\omega|e^{i\theta}. \tag{3.13}\] Of course, the effective Hamiltonian cannot be self-adjoint in the present situation, but let us examine some details in connection with the functional-analytic framework of (3.3)-(3.11).
Now introduce the operators \[\mathbf{a}=\frac{1}{2\Omega}\left(\begin{array}{cc}-|\omega|&-e^{-i\theta}( \Omega+i\delta)\\ e^{i\theta}(\Omega-i\delta)&|\omega|\end{array}\right),\quad\mathbf{b}=\frac{ 1}{2\Omega}\left(\begin{array}{cc}-|\omega|&e^{-i\theta}(\Omega-i\delta)\\ -e^{i\theta}(\Omega+i\delta)&|\omega|\end{array}\right), \tag{3.14}\] where \(\Omega=\sqrt{|\omega|^{2}-\delta^{2}}\) can be assumed to be real and strictly positive. One can easily check that the operators in (3.14) are pseudofermionic, that is, they satisfy (3.2), and that \[H_{eff}=\Omega\left(\mathbf{ba}-\frac{1}{2}\mathbb{I}\right). \tag{3.15}\] In order to visualize the notions in (3.3)-(3.11), we may consider \[\varphi_{0}=k\left(\begin{array}{c}1\\ -\frac{e^{i\theta}(\Omega-i\delta)}{|\omega|}\end{array}\right),\qquad\Psi_{0 }=k^{\prime}\left(\begin{array}{c}1\\ -\frac{e^{i\theta}(\Omega+i\delta)}{|\omega|}\end{array}\right), \tag{3.16}\] where \(k\) and \(k^{\prime}\) are normalization constants such that the normalization condition below is satisfied \[\langle\varphi_{0},\Psi_{0}\rangle=\overline{k}\,k^{\prime}\left(1+\frac{1}{| \omega|^{2}}(\Omega+i\delta)^{2}\right)=1. \tag{3.17}\] Then we introduce \[\varphi_{1}=\mathbf{b}\varphi_{0}=k\left(\begin{array}{c}\frac{i\delta-\Omega}{| \omega|}\\ -e^{i\theta}\end{array}\right),\qquad\Psi_{1}=\mathbf{a}^{\dagger}\Psi_{0}=k^{ \prime}\left(\begin{array}{c}\frac{-i\delta-\Omega}{|\omega|}\\ -e^{i\theta}\end{array}\right). \tag{3.18}\] In particular, we find that \(\mathcal{F}_{\varphi}\) and \(\mathcal{F}_{\Psi}\) are biorthonormal bases of \(\mathcal{H}\), and that \[H_{eff}\varphi_{0}=-\,\frac{\Omega}{2}\,\varphi_{0},\quad H_{eff}\varphi_{1}= \frac{\Omega}{2}\,\varphi_{1},\quad H_{eff}^{\dagger}\Psi_{0}=-\,\frac{\Omega }{2}\,\Psi_{0},\quad H_{eff}^{\dagger}\Psi_{1}=\frac{\Omega}{2}\,\Psi_{1}. \tag{3.19}\] It is evident that \(H_{eff}\) and \(H_{eff}^{\dagger}\) are not self-adjoint.
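These claims can be checked numerically. The sketch below is our illustration, assuming that (3.2) — not reproduced in this excerpt — denotes the standard pseudofermionic rules \(\mathbf{a}^{2}=\mathbf{b}^{2}=0\) and \(\mathbf{ab}+\mathbf{ba}=\mathbb{I}\); it verifies these rules, the factorization (3.15), and the eigenvector relations (3.19) for \(\varphi_{0}\) and \(\varphi_{1}=\mathbf{b}\varphi_{0}\) (taking \(k=1\), since the normalization does not affect them).

```python
import cmath
import math

# Hypothetical sample parameters: any theta, delta with |omega| > |delta| will do.
theta, delta, w = 0.7, 0.3, 1.5                  # w = |omega|
Omega = math.sqrt(w * w - delta * delta)
omega = w * cmath.exp(1j * theta)
e = cmath.exp

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def scal(c, A):
    return [[c * x for x in r] for r in A]

def matvec(A, v):
    return [A[i][0] * v[0] + A[i][1] * v[1] for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(x - y) < tol for r, s in zip(A, B) for x, y in zip(r, s))

I2 = [[1, 0], [0, 1]]
O2 = [[0, 0], [0, 0]]
# The operators a and b of (3.14)
a = scal(1 / (2 * Omega), [[-w, -e(-1j * theta) * (Omega + 1j * delta)],
                           [e(1j * theta) * (Omega - 1j * delta), w]])
b = scal(1 / (2 * Omega), [[-w, e(-1j * theta) * (Omega - 1j * delta)],
                           [-e(1j * theta) * (Omega + 1j * delta), w]])

# Assumed pseudofermionic rules: a^2 = b^2 = 0 and ab + ba = I
assert close(mul(a, a), O2) and close(mul(b, b), O2)
assert close(add(mul(a, b), mul(b, a)), I2)

# Factorization (3.15): H_eff = Omega (ba - I/2), with H_eff as in (3.12)
H = [[-0.5j * delta, 0.5 * omega.conjugate()], [0.5 * omega, 0.5j * delta]]
assert close(H, scal(Omega, add(mul(b, a), scal(-0.5, I2))))

# Eigenvector relations (3.19) for phi_0 and phi_1 = b phi_0 (with k = 1)
phi0 = [1, -e(1j * theta) * (Omega - 1j * delta) / w]
phi1 = matvec(b, phi0)
assert all(abs(u + Omega / 2 * v) < 1e-12 for u, v in zip(matvec(H, phi0), phi0))
assert all(abs(u - Omega / 2 * v) < 1e-12 for u, v in zip(matvec(H, phi1), phi1))
```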
In order to check (3.10) and (3.11) explicitly, we now have \[\mathbf{S}_{\varphi}=2|k|^{2}\left(\begin{array}{cc}1&\frac{-i\delta}{| \omega|}\,e^{-i\theta}\\ \frac{i\delta}{|\omega|}\,e^{i\theta}&1\end{array}\right),\quad\mathbf{S}_{ \Psi}=\frac{|\omega|^{2}}{2|k|^{2}\Omega^{2}}\left(\begin{array}{cc}1&\frac{ i\delta}{|\omega|}\,e^{-i\theta}\\ \frac{-i\delta}{|\omega|}\,e^{i\theta}&1\end{array}\right) \tag{3.20}\] and one is the inverse of the other. In the present situation Proposition 3.4 may be applied with \(\mathbf{T}=\mathbf{S}_{\varphi}^{1/2}\), the positive square root of \(\mathbf{S}_{\varphi}\), to define two _standard_ fermionic operators \(\mathbf{c}\) and \(\mathbf{c}^{\dagger}\) such that \[\mathbf{N}_{0}=\mathbf{c}^{\dagger}\mathbf{c}\ \ \text{and}\ \ H_{eff}=\mathbf{S}_{ \varphi}^{1/2}\ \left(\Omega\left(\mathbf{N}_{0}-\frac{1}{2}\mathbb{I}\right)\right)\ \ \mathbf{S}_{\varphi}^{-1/2}, \tag{3.21}\] so that we have produced another Hamiltonian, \(\Omega\left(\mathbf{N}_{0}-\frac{1}{2}\mathbb{I}\right)\), which is self-adjoint and similar to \(H_{eff}\). More recently, [5, Theorem 1.2] describes the Pauli group (1.1) in terms of (3.14). In fact, it is possible to introduce the matrices \[\mu_{1}=\frac{1}{\Omega}\left(\begin{array}{cc}-|\omega|&-i\delta e^{-i \theta}\\ -i\delta e^{i\theta}&|\omega|\end{array}\right),\quad\mu_{2}=i\left(\begin{array} []{cc}0&e^{-i\theta}\\ e^{i\theta}&0\end{array}\right),\quad\mu_{3}=\frac{1}{\Omega}\left(\begin{array} []{cc}i\delta&-|\omega|e^{-i\theta}\\ -|\omega|e^{i\theta}&-i\delta\end{array}\right), \tag{3.22}\] where \[\mu_{1}=\mu_{1}(\theta,\delta)=\mathbf{b}+\mathbf{a},\qquad\mu_{2}=\mu_{2}( \theta,\delta)=i(\mathbf{b}-\mathbf{a}),\qquad\mu_{3}=\mu_{3}(\theta,\delta) =[\mathbf{a},\mathbf{b}]=\mathbf{a}\mathbf{b}-\mathbf{b}\mathbf{a}.
\tag{3.23}\] Of course, \(\mu_{1}=\mu_{1}(\theta,\delta)\), that is, \(\mu_{1}\) depends on \(\theta\) and \(\delta\); the same happens for \(\mu_{2}\) and \(\mu_{3}\), but also for \(\mathbf{a}=\mathbf{a}(\theta,\delta)\) and \(\mathbf{b}=\mathbf{b}(\theta,\delta)\) and their linear combinations. As shown in [5, Theorem 1.2], independently of the choice of \(\theta\) and \(\delta\) we may consider the set \[P_{\mu}=\left\{\mu_{1},\mu_{2},\mu_{3}\right\} \tag{3.24}\] which is a concrete realization of the generators of the first Pauli group \(P_{1}\) via pseudofermionic operators. On the other hand, specific choices of \(\theta\) and \(\delta\) give more precise proportionality relations between (3.22) and (1.2). For instance, the choice \(\theta=\pi/2\) and \(\delta=0\) allows us to consider \[\mu_{1}\left(\frac{\pi}{2},0\right)=-Z,\quad\mu_{2}\left(\frac{\pi}{2},0\right) =-iY,\quad\mu_{3}\left(\frac{\pi}{2},0\right)=-Y, \tag{3.25}\] and using \(X=iZY\) in (1.2) we find that \[X=\mu_{1}\left(\frac{\pi}{2},0\right)\ \mu_{2}\left(\frac{\pi}{2},0\right),\quad Y =-\mu_{3}\left(\frac{\pi}{2},0\right),\quad Z=-\mu_{1}\left(\frac{\pi}{2},0 \right). \tag{3.26}\] Of course, we may use (3.14) and (3.23), obtaining, in correspondence with \(\theta=\pi/2\) and \(\delta=0\), \[X=i(\mathbf{a}\mathbf{b}-\mathbf{b}\mathbf{a})\left(\frac{\pi}{2},0\right), \quad Y=(\mathbf{b}\mathbf{a}-\mathbf{a}\mathbf{b})\left(\frac{\pi}{2},0 \right),\quad Z=-(\mathbf{b}+\mathbf{a})\left(\frac{\pi}{2},0\right). \tag{3.27}\] The above computations play a fundamental role in the proof of Theorem 1.1. ## 4. Proof of the main theorem Before we prove our main result, we recall separately some terminology from [30]: **Definition 4.1** (See [30], p.145).: _A group \(G\) is (the internal) central product of its subgroups \(H\) and \(K\), if \(G=HK\) and the commutator subgroup \([H,K]\) of \(H\) and \(K\) is trivial, that is,_ \[[H,K]=\{h^{-1}k^{-1}hk\mid h\in H,k\in K\}=1.
\tag{4.1}\] For groups as in Definition 4.1, one has that both the subgroups realizing the central product are normal in the group, that is, \(H\) and \(K\) are normal in \(G\). Incidentally, (4.1) is notationally similar to (2.4). While in the first case one uses the algebraic structure of group (in multiplicative notation), in order to produce a new object which still has the structure of group, in the second case one uses two functional operators, in order to produce a new functional operator. Proof of Theorem 1.1.: (i). Let us consider the set of matrices \(\{X_{j},j=1,2,\ldots,12\}\), where \[X_{1}=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\\ \end{array}\right),X_{2}=\left(\begin{array}{cccc}0&0&-i&0\\ 0&0&0&-i\\ i&0&0&0\\ 0&i&0&0\\ \end{array}\right),X_{3}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1\\ \end{array}\right), \tag{4.2}\] \[X_{4}=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&1&0\\ 0&0&0&-1\\ \end{array}\right),X_{5}=\left(\begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\\ \end{array}\right),X_{6}=\left(\begin{array}{cccc}0&1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&-1&0\\ \end{array}\right), \tag{4.3}\] \[X_{7}=\left(\begin{array}{cccc}0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\\ 1&0&0&0\\ \end{array}\right),X_{8}=\left(\begin{array}{cccc}0&0&-i&0\\ 0&0&0&i\\ i&0&0&0\\ 0&-i&0&0\\ \end{array}\right),X_{9}=\left(\begin{array}{cccc}0&0&0&-i\\ 0&0&-i&0\\ 0&i&0&0\\ i&0&0&0\\ \end{array}\right), \tag{4.4}\] \[X_{10}=\left(\begin{array}{cccc}1&0&0&0\\ 0&-1&0&0\\ 0&0&-1&0\\ 0&0&0&1\\ \end{array}\right),X_{11}=\left(\begin{array}{cccc}0&1&0&0\\ 1&0&0&0\\ 0&0&0&-1\\ 0&0&-1&0\\ \end{array}\right),X_{12}=\left(\begin{array}{cccc}0&0&1&0\\ 0&0&0&-1\\ 1&0&0&0\\ 0&-1&0&0\\ \end{array}\right). \tag{4.5}\] Now observe that \[[X_{j},X_{k}]=X_{j}X_{k}-X_{k}X_{j}\quad\text{ and }\quad\{X_{j},X_{k}\}=X_{j}X_{k}+X_ {k}X_{j}. \tag{4.6}\] The matrices \(X_{j}\) satisfy a mixed set of rules. 
For instance we have \[\{X_{1},X_{2}\}=\{X_{1},X_{3}\}=0\ \ \text{and}\ \ [X_{2},X_{4}]=[X_{2},X_{5}]=0. \tag{4.7}\] This aspect of the matrices is relevant when deducing explicitly the equations of motion; see [24, 25]. To calculate the matrices that commute with all the \(X_{i}\)'s we consider a general matrix with coefficients \(f,g,h,\ldots,u\in\mathbb{C}\), \[M=\begin{pmatrix}f&g&h&d\\ j&k&l&m\\ n&o&p&q\\ r&s&t&u\\ \end{pmatrix} \tag{4.8}\] and calculate the commutator of the matrix \(M\) with each of the \(X_{i}\)'s. From a detailed (and long) analysis of the various \([M,X_{j}]\) it follows that the only matrices which commute with all the \(X_{j}\)'s are those proportional to the identity matrix. In fact we have to solve a linear system of 192 (\(12\times 16\)) equations in \(16\) complex variables. Of course, all the operators \(X_{i}\) turn out to be bounded, since they are expressed in terms of finite-dimensional complex matrices. It is now an easy computation to check that \(L_{\mathcal{S}}\) in (1.6) can be written as a linear combination of the \(X_{j}\)'s above. In particular, we can check that \[L_{\mathcal{S}}=\sum_{k=1}^{12}\alpha_{k}X_{k}, \tag{4.9}\] where the only nonzero \(\alpha_{k}\) are the following: \[\alpha_{1}=\frac{1-\alpha}{2},\quad\alpha_{2}=\frac{i(1+\alpha)}{2},\quad \alpha_{4}=\frac{\gamma}{2},\quad\alpha_{7}=\frac{\alpha\mu}{2},\quad\alpha_{ 9}=-\frac{\alpha\mu}{2i},\quad\alpha_{10}=-\frac{\gamma}{2}. \tag{4.10}\] Note that the solution of the differential equation for \(\Psi(t)\) in (1.5) turns out to be \[\Psi(t)=e^{L_{\mathcal{S}}\ t}\Psi(0), \tag{4.11}\] which of course can be rewritten in terms of the \(X_{j}\)'s. Therefore, recalling that \(H_{\mathcal{S}}=iL_{\mathcal{S}}\), (i) is proved completely. (ii). First of all, one can check from (4.2)-(4.5) that all the matrices \(X_{k}\) belong to \(\mathrm{SL}_{4}(\mathbb{C})\).
The idea is to begin by constructing two pseudofermionic operators \(\mathbf{a}\) and \(\mathbf{b}\) on \(\mathbb{C}^{2}\) satisfying (3.2) and realizing (1.3). The various matrices \(X_{j}\) can be written in terms of pseudofermionic operators \(\mathbf{A}\), \(\mathbf{B}\), \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{B}}\), which are obtained as tensor products of matrices in \(\mathrm{SL}_{2}(\mathbb{C})\). Interestingly enough, \(\mathbf{A}\), \(\mathbf{B}\), \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{B}}\) can be deduced from matrices which appear in the analysis of the two-level atom interacting with an electromagnetic field as in Section 3. Here we may use \(\mathbf{a}\) and \(\mathbf{b}\) as in (3.14) and introduce \[\mathbf{A}:=\mathbf{a}\otimes\mathbb{I},\qquad\mathbf{B}:=\mathbf{b}\otimes \mathbb{I}. \tag{4.12}\] It is easy to check that \(\mathbf{A}\) and \(\mathbf{B}\) also give a concrete realization for \(P_{\mu}\), but now \(\mathbf{A}\) and \(\mathbf{B}\) are expressed by matrices in \(\mathrm{SL}_{4}(\mathbb{C})\) and no longer by matrices in \(\mathrm{SL}_{2}(\mathbb{C})\) as was the case for \(\mathbf{a}\) and \(\mathbf{b}\). We have created in this way two pseudofermionic operators \(\mathbf{A}\) and \(\mathbf{B}\) on the Hilbert space \(\mathbb{C}^{4}\), via the Kronecker product applied to (3.14). We can find another two pseudofermionic operators \[\widetilde{\mathbf{A}}:=\mathbb{I}\otimes\mathbf{a},\qquad\widetilde{\mathbf{ B}}:=\mathbb{I}\otimes\mathbf{b}, \tag{4.13}\] acting on the same Hilbert space \(\mathbb{C}^{4}\) on which \(\mathbf{A}\) and \(\mathbf{B}\) were defined. In fact, one can check that the operators in (4.13) also satisfy (3.2).
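This check is straightforward to carry out numerically. The sketch below is our illustration, again assuming that (3.2) denotes the standard pseudofermionic rules \(\mathbf{a}^{2}=\mathbf{b}^{2}=0\), \(\mathbf{ab}+\mathbf{ba}=\mathbb{I}\); it builds the tensor-product operators of (4.12) and (4.13) from the matrices in (3.14) and confirms that both pairs satisfy the same rules on \(\mathbb{C}^{4}\).

```python
import cmath
import math

theta, delta, w = 0.7, 0.3, 1.5       # hypothetical parameters with |omega| > |delta|
Omega = math.sqrt(w * w - delta * delta)
e = cmath.exp
# The 2x2 operators a and b of (3.14)
a2 = [[-w / (2 * Omega), -e(-1j * theta) * (Omega + 1j * delta) / (2 * Omega)],
      [e(1j * theta) * (Omega - 1j * delta) / (2 * Omega), w / (2 * Omega)]]
b2 = [[-w / (2 * Omega), e(-1j * theta) * (Omega - 1j * delta) / (2 * Omega)],
      [-e(1j * theta) * (Omega + 1j * delta) / (2 * Omega), w / (2 * Omega)]]
I2 = [[1, 0], [0, 1]]

def kron(A, B):  # Kronecker product of square matrices
    m, n = len(B), len(A) * len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def close(A, B, tol=1e-12):
    return all(abs(x - y) < tol for r, s in zip(A, B) for x, y in zip(r, s))

I4 = kron(I2, I2)
O4 = [[0] * 4 for _ in range(4)]
pairs = [(kron(a2, I2), kron(b2, I2)),   # A, B of (4.12)
         (kron(I2, a2), kron(I2, b2))]   # A~, B~ of (4.13)
for A, B in pairs:
    assert close(mul(A, A), O4) and close(mul(B, B), O4)     # A^2 = B^2 = 0
    anticomm = [[mul(A, B)[i][j] + mul(B, A)[i][j] for j in range(4)] for i in range(4)]
    assert close(anticomm, I4)                               # {A, B} = I
```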
We may conclude that the set \[\Gamma_{\mu}=\{\mu_{1}\otimes\mathbb{I},\mu_{2}\otimes\mathbb{I},\mu_{3} \otimes\mathbb{I}\} \tag{4.14}\] represents a concrete realization for \(P_{1}\) via the operators \(\mathbf{A}\) and \(\mathbf{B}\) in \(\mathbb{C}^{4}\); moreover, in the same Hilbert space we also find the set \[\Gamma_{\nu}=\{\mathbb{I}\otimes\mu_{1},\mathbb{I}\otimes\mu_{2},\mathbb{I} \otimes\mu_{3}\} \tag{4.15}\] which represents another concrete realization for \(P_{1}\), but via the operators \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{B}}\). Note that the Pauli matrices \(X\), \(Y\) and \(Z\) in (1.1) may be expressed by (3.26) and (3.27). Consequently, \(\mathbf{A}=\mathbf{A}(\theta,\delta)\), \(\mathbf{B}=\mathbf{B}(\theta,\delta)\), \(\widetilde{\mathbf{A}}=\widetilde{\mathbf{A}}(\theta,\delta)\) and \(\widetilde{\mathbf{B}}=\widetilde{\mathbf{B}}(\theta,\delta)\), where \(\theta\) and \(\delta\) are introduced in (3.13). Therefore we get \[\mu_{1}\otimes\mathbb{I}=\mathbf{B}+\mathbf{A},\ \mu_{2}\otimes\mathbb{I}=i(\mathbf{B}-\mathbf{A}),\ \mu_{3}\otimes\mathbb{I}=\mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}, \tag{4.16}\] \[\mathbb{I}\otimes\mu_{1}=\widetilde{\mathbf{B}}+\widetilde{\mathbf{A}},\ \mathbb{I}\otimes\mu_{2}=i(\widetilde{\mathbf{B}}-\widetilde{\mathbf{A}}),\ \mathbb{I}\otimes\mu_{3}=\widetilde{\mathbf{A}}\widetilde{\mathbf{B}}- \widetilde{\mathbf{B}}\widetilde{\mathbf{A}}.\] We are going to report some computations for the matrices \(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6}\).
In fact we may write, in correspondence with \(\theta=\pi/2\) and \(\delta=0\), \[X_{1}=X\otimes\mathbb{I}=\left(\mu_{1}\left(\frac{\pi}{2},0\right)\mu_{2} \left(\frac{\pi}{2},0\right)\right)\otimes\mathbb{I}=\left(i(\mathbf{a} \mathbf{b}-\mathbf{b}\mathbf{a})\otimes\mathbb{I}\right)\left(\frac{\pi}{2},0\right), \tag{4.17}\] \[X_{2}=Y\otimes\mathbb{I}=-\mu_{3}\left(\frac{\pi}{2},0\right)\otimes\mathbb{I }=\left(\mathbf{B}\mathbf{A}-\mathbf{A}\mathbf{B}\right)\left(\frac{\pi}{2},0 \right), \tag{4.18}\] \[X_{3}=Z\otimes\mathbb{I}=-\mu_{1}\left(\frac{\pi}{2},0\right)\otimes\mathbb{I }=-(\mathbf{B}+\mathbf{A})\left(\frac{\pi}{2},0\right), \tag{4.19}\] \[X_{4}=\mathbb{I}\otimes Z=\mathbb{I}\otimes\left(-\mu_{1}\left(\frac{\pi}{2},0 \right)\right)=-(\widetilde{\mathbf{B}}+\widetilde{\mathbf{A}})\left(\frac{ \pi}{2},0\right), \tag{4.20}\] \[X_{5}=\mathbb{I}\otimes X=\mathbb{I}\otimes\left(\mu_{1}\left(\frac{\pi}{2},0 \right)\mu_{2}\left(\frac{\pi}{2},0\right)\right)=\mathbb{I}\otimes\left(i( \mathbf{a}\mathbf{b}-\mathbf{b}\mathbf{a})\right)\left(\frac{\pi}{2},0\right) \tag{4.21}\] \[=i((\mathbb{I}\otimes\mathbf{a})(\mathbb{I}\otimes\mathbf{b})-(\mathbb{I} \otimes\mathbf{b})(\mathbb{I}\otimes\mathbf{a}))\left(\frac{\pi}{2},0\right)=i (\widetilde{\mathbf{A}}\widetilde{\mathbf{B}}-\widetilde{\mathbf{B}} \widetilde{\mathbf{A}})\left(\frac{\pi}{2},0\right),\] \[X_{6}=i(\mathbb{I}\otimes Y)=i\left(\mathbb{I}\otimes\left(-\mu_{3}\left(\frac{ \pi}{2},0\right)\right)\right)=i(\widetilde{\mathbf{B}}\widetilde{\mathbf{A}} -\widetilde{\mathbf{A}}\widetilde{\mathbf{B}})\left(\frac{\pi}{2},0\right). \tag{4.22}\] This means that \(\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6}\}\) is a set realized by pseudofermionic operators. Now we claim that the same set is also a set of generators for \(P_{2}\). Consider that \[U\text{ is the Pauli group generated by }\Gamma_{\mu}\ \text{ and }\ V\text{ is the Pauli group generated by }\Gamma_{\nu}, \tag{4.23}\] referring to (4.14) and (4.15).
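The tensor-product identifications above, and the commutations between the generators of \(U\) and those of \(V\) that are needed later for the central product, can be verified mechanically. The sketch below is our illustration; it assumes the standard Pauli matrices for \(X\), \(Y\), \(Z\) of (1.1), compares the explicit matrices (4.2)-(4.3) with their tensor-product expressions, and checks the commutations.

```python
# Our illustration: standard Pauli matrices assumed for X, Y, Z of (1.1).
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def kron(A, B):
    m, n = len(B), len(A) * len(B)
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(n)] for i in range(n)]

def scal(c, A):
    return [[c * x for x in r] for r in A]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def eq(A, B):
    return all(abs(x - y) < 1e-12 for r, s in zip(A, B) for x, y in zip(r, s))

# Explicit matrices X_1, ..., X_6 copied from (4.2)-(4.3)
X1 = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]]
X2 = [[0, 0, -1j, 0], [0, 0, 0, -1j], [1j, 0, 0, 0], [0, 1j, 0, 0]]
X3 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
X4 = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]
X5 = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
X6 = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]

# Tensor-product identifications used in (4.17)-(4.22)
assert eq(X1, kron(X, I2)) and eq(X2, kron(Y, I2)) and eq(X3, kron(Z, I2))
assert eq(X4, kron(I2, Z)) and eq(X5, kron(I2, X)) and eq(X6, scal(1j, kron(I2, Y)))

# Each generator of U commutes with each generator of V
for P in (X1, X2, X3):
    for Q in (X4, X5, X6):
        assert eq(mul(P, Q), mul(Q, P))
```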
Of course, at the level of isomorphism of groups we have \[U\simeq V\simeq P_{1} \tag{4.24}\] and both \(U\) and \(V\) are subgroups of \(\mathrm{SL}_{4}(\mathbb{C})\). Having in mind the usual matrix product in \(\mathrm{SL}_{4}(\mathbb{C})\) and the rules in (1.2) in terms of (4.16), we have, for (4.16) with \(\theta=\pi/2\) and \(\delta=0\), \[P_{2}=UV=\langle\mu_{1}\otimes\mathbb{I},\mu_{2}\otimes\mathbb{I},\mu_{3} \otimes\mathbb{I}\rangle\ \langle\mathbb{I}\otimes\mu_{1},\mathbb{I}\otimes\mu_{2},\mathbb{I}\otimes \mu_{3}\rangle \tag{4.25}\] \[=\langle\mu_{1}\otimes\mathbb{I},\mu_{2}\otimes\mathbb{I},\mu_{3}\otimes \mathbb{I},\mathbb{I}\otimes\mu_{1},\mathbb{I}\otimes\mu_{2},\mathbb{I} \otimes\mu_{3}\rangle=\langle X_{1},X_{2},X_{3},X_{4},X_{5},X_{6}\rangle. \tag{4.26}\] Note that (4.25) reflects at the level of operators the fact that each element of \(P_{2}\) may be written as \(M\otimes N=(M\otimes\mathbb{I})(\mathbb{I}\otimes N)\) with \(M\) and \(N\) Pauli matrices, see (1.4). Consequently, each element of \(P_{2}\) can be written as a product of elements of \(\Gamma_{\mu}\) and \(\Gamma_{\nu}\), that is, \(\Gamma_{\mu}\cup\Gamma_{\nu}\) generates \(P_{2}\). This means that the claim is true, in particular, by (3.25), (3.26) and (3.27); hence \(\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6}\}\) is a set of generators for \(P_{2}\) in terms of pseudofermionic operators. (iii). From what we have seen in (ii) above, \(P_{2}=UV\), where \(U\) and \(V\) are as in (4.23), so we have exactly the conditions of Definition 4.1. In fact, the commutator subgroup of \(U\) and \(V\) (according to (4.1)) is given by \[[U,V]=[\langle\mu_{1}\otimes\mathbb{I},\mu_{2}\otimes\mathbb{I},\mu_{3} \otimes\mathbb{I}\rangle,\langle\mathbb{I}\otimes\mu_{1},\mathbb{I}\otimes \mu_{2},\mathbb{I}\otimes\mu_{3}\rangle] \tag{4.27}\] and it is enough to check that the generators of \(U\) and \(V\) commute, in order to conclude that \([U,V]\) is trivial.
Now if \(i,j\in\{1,2,3\}\), then a generic element of (4.27) may be written as \[(\mu_{i}\otimes\mathbb{I})^{-1}\ (\mathbb{I}\otimes\mu_{j})^{-1}\ (\mu_{i} \otimes\mathbb{I})\ (\mathbb{I}\otimes\mu_{j})=(\mu_{i}^{-1}\otimes\mathbb{I}^{-1})( \mathbb{I}^{-1}\otimes\mu_{j}^{-1})(\mu_{i}\otimes\mathbb{I})(\mathbb{I} \otimes\mu_{j}) \tag{4.28}\] \[=(\mu_{i}\otimes\mathbb{I})(\mathbb{I}\otimes\mu_{j})(\mu_{i}\otimes\mathbb{I })(\mathbb{I}\otimes\mu_{j})=(\mu_{i}\otimes\mu_{j})(\mu_{i}\otimes\mu_{j})=( \mu_{i}\mu_{i})\otimes(\mu_{j}\mu_{j})=\mathbb{I}\otimes\mathbb{I}=\mathbb{I}\] and so \([U,V]\) is trivial as claimed. We may conclude that \(P_{2}\) is the central product of \(U\) and \(V\). We should make some comments on the ideas in the previous arguments. _Remark 4.2_.: It is worth noting that the specific values of \(\theta\) and \(\delta\) in (3.25) are not the only values that realize \(\mu_{1}\), \(\mu_{2}\) and \(\mu_{3}\) as proportional to the Pauli matrices. We note the following examples, which can be produced with a different choice of \(\theta\) and \(\delta\): \[\mu_{1}(0,0)=-Z,\quad\mu_{2}(0,0)=iX,\quad\mu_{3}(0,0)=-X. \tag{4.29}\] In fact \(\mu_{1},\mu_{2},\mu_{3}\) are proportional to \(X,Y,Z\) if \(2\theta=n\pi\) for \(n\in\mathbb{Z}\) and either \(\omega=0\) or \(\delta=0\). Furthermore, we note explicitly that the choice of the pseudofermionic operators \(\mathbf{a}\) and \(\mathbf{b}\) in the proof of Theorem 1.1 has been made in order to show analogies with the models in [6, 16, 26, 27]. Of course, different choices of pseudofermionic operators \(\mathbf{a}\) and \(\mathbf{b}\) can be used in the proof of Theorem 1.1, or we can keep \(\mathbf{a}\) and \(\mathbf{b}\) and get different expressions of (4.17)-(4.21) in terms of \(\mathbf{A}\), \(\mathbf{B}\), \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{B}}\).
For instance, this last case happens when we use (4.29) instead of (3.25); we would get a different formulation of (4.17)-(4.21), again in terms of \(\mathbf{A}\), \(\mathbf{B}\), \(\widetilde{\mathbf{A}}\) and \(\widetilde{\mathbf{B}}\). Let us note another relevant fact: **Corollary 4.3**.: _The statement of Theorem 1.1 (ii) is true for six operators \(X_{k}\) which are fermionic, up to similarity in \(\mathrm{GL}_{4}(\mathbb{C})\)._ Proof.: Repeat the argument of the proof of Theorem 1.1, using Proposition 3.4 along with (3.20) and (3.21), in order to replace \(\mathbf{a}\) and \(\mathbf{b}\) with fermionic operators \(\mathbf{c}\) and \(\mathbf{c}^{*}\). Another important aspect, which we isolate in the form of a corollary, is due to the presence of the structure of central product and to its description via pseudofermionic operators: **Corollary 4.4**.: _There exist two subgroups \(U\) and \(V\) of \(P_{2}\) such that \(P_{2}\) is the central product of \(U\) and \(V\). Moreover \(U\) and \(V\) are generated by pseudofermionic operators._ Proof.: Application of Theorem 1.1 (iii). ## 5. Conclusions and analogies with other dynamical systems The dynamical system \(\mathcal{S}\) of Theorem 1.1 is only one of the systems whose dynamics can be described in terms of the matrices \(X_{1},X_{2},\ldots,X_{12}\) in (4.2)-(4.5). Another dynamical system \(\mathcal{T}\) is discussed in [20], involving the same matrices for the description of its Hamiltonian. Again, the physical system is an electronic circuit, and again the main interest is in the possibility of having a simple experimental device connected with PT symmetry in quantum mechanics. In this case the dynamical system is described in analogy with \(\mathcal{S}\) of Theorem 1.1, but with some minor differences; see [20] for more details.
What is interesting is the fact that the operator \(L_{\mathcal{T}}\) is now replaced by the following symmetric, but not self-adjoint, matrix: \[H_{\mathcal{T}}=\left(\begin{array}{cccc}0&b+ir&d+ir&0\\ b+ir&0&0&d-ir\\ d+ir&0&0&d-ir\\ 0&d-ir&d-ir&0\end{array}\right). \tag{5.1}\] Interestingly enough, \(\{X_{k}\ \mid\ k=1,2,\ldots,12\}\) allows us to decompose also \(H_{\mathcal{T}}\) as follows: \[H_{\mathcal{T}}=\sum_{k=1}^{12}\beta_{k}X_{k}, \tag{5.2}\] where the only nonzero \(\beta_{k}\) are the following: \[\beta_{1}=d,\qquad\beta_{5}=\frac{b+d}{2},\qquad\beta_{6}=ir,\qquad\beta_{11}= \frac{b-d}{2}+ir. \tag{5.3}\] This means that Theorem 1.1 can be proved also beginning from the dynamical system \(\mathcal{T}\) instead of \(\mathcal{S}\). In fact, the argument of the proof of Theorem 1.1 applies to the \(X_{k}\) appearing in (5.2). More examples can be constructed from the circuits considered in [1, 7]. Again we will find a description for the matrices (4.2)-(4.5) in terms of pseudofermionic operators, for instance, arising from the model described in (3.14)-(3.21). This suggests the presence of PT symmetries both in \(\mathcal{S}\) and \(\mathcal{T}\), but the formalization of the PT symmetry can be difficult to write explicitly; this is done, for instance, in [16], but not in [1]. Another aspect, which deserves interest, is due to the presence of the structure of central product in \(P_{2}\); this peculiar structure has already been investigated in [5, 10]. Generally, larger Pauli groups, i.e., \(P_{n}\) for \(n\geq 2\), may be decomposed into central products (see [31]), so it may be interesting to generalize Theorem 1.1 to dynamical systems involving larger Pauli groups.
2304.13432
Design and analysis of bent functions using $\mathcal{M}$-subspaces
In this article, we provide the first systematic analysis of bent functions $f$ on $\mathbb{F}_2^{n}$ in the Maiorana-McFarland class $\mathcal{MM}$ regarding the origin and cardinality of their $\mathcal{M}$-subspaces, i.e., vector subspaces on which the second-order derivatives of $f$ vanish. By imposing restrictions on permutations $\pi$ of $\mathbb{F}_2^{n/2}$, we specify the conditions, such that Maiorana-McFarland bent functions $f(x,y)=x\cdot \pi(y) + h(y)$ admit a unique $\mathcal{M}$-subspace of dimension $n/2$. On the other hand, we show that permutations $\pi$ with linear structures give rise to Maiorana-McFarland bent functions that do not have this property. In this way, we contribute to the classification of Maiorana-McFarland bent functions, since the number of $\mathcal{M}$-subspaces is invariant under equivalence. Additionally, we give several generic methods of specifying permutations $\pi$ so that $f\in\mathcal{MM}$ admits a unique $\mathcal{M}$-subspace. Most notably, using the knowledge about $\mathcal{M}$-subspaces, we show that using the bent 4-concatenation of four suitably chosen Maiorana-McFarland bent functions, one can in a generic manner generate bent functions on $\mathbb{F}_2^{n}$ outside the completed Maiorana-McFarland class $\mathcal{MM}^\#$ for any even $n\geq 8$. Remarkably, with our construction methods it is possible to obtain inequivalent bent functions on $\mathbb{F}_2^8$ not stemming from two primary classes, the partial spread class $\mathcal{PS}$ and $\mathcal{MM}$. In this way, we contribute to a better understanding of the origin of bent functions in eight variables, since only a small fraction, of which size is about $2^{76}$, stems from $\mathcal{PS}$ and $\mathcal{MM}$, whereas the total number of bent functions on $\mathbb{F}_2^8$ is approximately $2^{106}$.
Enes Pasalic, Alexandr Polujan, Sadmir Kudin, Fengrong Zhang
2023-04-26T10:39:28Z
http://arxiv.org/abs/2304.13432v1
# Design and analysis of bent functions using \(\mathcal{M}\)-subspaces ###### Abstract In this article, we provide the first systematic analysis of bent functions \(f\) on \(\mathbb{F}_{2}^{n}\) in the Maiorana-McFarland class \(\mathcal{MM}\) regarding the origin and cardinality of their \(\mathcal{M}\)_-subspaces_, i.e., vector subspaces on which the second-order derivatives of \(f\) vanish. By imposing restrictions on permutations \(\pi\) of \(\mathbb{F}_{2}^{n/2}\), we specify the conditions, such that Maiorana-McFarland bent functions \(f(x,y)=x\cdot\pi(y)+h(y)\) admit a unique \(\mathcal{M}\)-subspace of dimension \(n/2\). On the other hand, we show that permutations \(\pi\) with linear structures give rise to Maiorana-McFarland bent functions that do not have this property. In this way, we contribute to the classification of Maiorana-McFarland bent functions, since the number of \(\mathcal{M}\)-subspaces is invariant under equivalence. Additionally, we give several generic methods of specifying permutations \(\pi\) so that \(f\in\mathcal{MM}\) admits a unique \(\mathcal{M}\)-subspace. Most notably, using the knowledge about \(\mathcal{M}\)-subspaces, we show that using the bent 4-concatenation of four suitably chosen Maiorana-McFarland bent functions, one can in a generic manner generate bent functions on \(\mathbb{F}_{2}^{n}\) outside the completed Maiorana-McFarland class \(\mathcal{MM}^{\#}\) for any even \(n\geq 8\). Remarkably, with our construction methods it is possible to obtain inequivalent bent functions on \(\mathbb{F}_{2}^{8}\) not stemming from two primary classes, the partial spread class \(\mathcal{PS}\) and \(\mathcal{MM}\). In this way, we contribute to a better understanding of the origin of bent functions in eight variables, since only a small fraction, of which size is about \(2^{76}\), stems from \(\mathcal{PS}\) and \(\mathcal{MM}\), whereas the total number of bent functions on \(\mathbb{F}_{2}^{8}\) is approximately \(2^{106}\). 
**Keywords.** Bent function, Maiorana-McFarland class, Partial spread class, Equivalence, Linear structure, Permutation, Bent 4-concatenation. ## 1 Introduction Bent functions are famous combinatorial objects introduced by Rothaus [21] in the mid-1960s that give rise to various discrete structures. Two known primary classes of bent functions are the Maiorana-McFarland class \(\mathcal{MM}\) and the Partial Spread class \(\mathcal{PS}\), which were introduced in the 1970s in [15] and [8], respectively. On the other hand, the so-called secondary constructions (the reader is referred to [17]) use the known bent functions for the purpose of constructing new ones. However, only a few sporadic works on bent functions analyze the class inclusion properly, being more focused on specifying explicit univariate/bivariate trace forms or construction methods without being precise about whether these functions might belong, for instance, to the \(\mathcal{MM}\) class. This eventually leads to a lack of understanding related to the classification and enumeration of bent functions. For instance, bent functions on \(\mathbb{F}_{2}^{8}\) that belong to the two main primary classes are only a small fraction (of size about \(2^{76}\)) of all \(\approx 2^{106}\) bent functions in eight variables [13]. A pioneering work to provide bent functions that provably do not belong to \(\mathcal{MM}\) or to \(\mathcal{PS}\), up to equivalence, is due to Carlet [5], who introduced two new classes of bent functions, the so-called \(\mathcal{C}\) and \(\mathcal{D}\) classes. In a recent series of articles [1, 2, 11, 12, 22, 23], the authors specified explicit families of bent functions outside the completed \(\mathcal{MM}\) class that belong to \(\mathcal{C}\) and \(\mathcal{D}\). Nevertheless, apart from the class \(\mathcal{D}_{0}\) of Carlet, these functions are defined for \(n\geq 10\) variables.
Thus, the origin of bent functions outside \(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#}\) on \(\mathbb{F}_{2}^{8}\) is still unclear. Moreover, most of the known secondary methods for constructing bent functions commonly employ bent functions on a smaller variable space. For example, in a recent article [18], the authors provided several methods of generating infinite families of bent functions outside \(\mathcal{MM}^{\#}\) using the so-called \(4\)-concatenation \(f=f_{1}||f_{2}||f_{3}||f_{4}\) of bent functions \(f_{1},f_{2},f_{3},f_{4}\) in \(n\) variables, introduced in [4] and later restated in [9]. Due to the design approach, employing bent functions outside \(\mathcal{MM}^{\#}\) on a smaller space, these results are significant only for \(n\geq 10\) and do not settle the existence of bent functions outside the known primary classes when \(n=8\). Such an approach makes it impossible to construct bent functions on \(\mathbb{F}_{2}^{8}\), since all bent functions in fewer than 8 variables are in \(\mathcal{MM}^{\#}\). Dillon in his thesis [8] proved that a given bent function \(f\) on \(\mathbb{F}_{2}^{n}\) belongs to the \(\mathcal{MM}^{\#}\) class if and only if \(D_{a}D_{b}f=0\) for all \(a,b\in V\), where \(V\) is a vector subspace of \(\mathbb{F}_{2}^{n}\) of dimension \(n/2\) (see also Lemma 1.2 for details); these vector spaces were called \(\mathcal{M}\)_-subspaces_ in [20]. Despite being introduced decades ago, the algebraic properties of \(\mathcal{M}\)-subspaces attracted attention only recently in a few works, e.g., in [10, 19, 20]. The main aim of this article is to provide the first systematic investigation of \(\mathcal{M}\)-subspaces of Boolean bent functions, and using this knowledge, provide generic construction methods of Boolean bent functions in \(n\) variables outside the \(\mathcal{MM}^{\#}\) class for all even \(n\geq 8\).
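Dillon's criterion is easy to test by brute force in small dimension. The sketch below is our illustration (not from the paper): it takes the simplest Maiorana-McFarland bent function on \(\mathbb{F}_{2}^{6}\), namely \(f(x,y)=x\cdot\pi(y)\) with \(m=3\), \(\pi=\mathrm{id}\) and \(h=0\), confirms bentness via a flat Walsh spectrum, and verifies that all second-order derivatives vanish on the canonical \(\mathcal{M}\)-subspace \(V=\mathbb{F}_{2}^{3}\times\{0_{3}\}\).

```python
def dot(a, b):
    # inner product of two 3-bit vectors over F_2, encoded as integers
    return bin(a & b).count("1") & 1

def f(x, y):
    # Maiorana-McFarland function f(x, y) = x . pi(y) with pi = identity, h = 0
    return dot(x, y)

# Walsh-Hadamard spectrum: f on F_2^6 is bent iff |W_f(u, v)| = 2^(6/2) = 8 everywhere
spectrum = [sum((-1) ** (f(x, y) ^ dot(u, x) ^ dot(v, y))
                for x in range(8) for y in range(8))
            for u in range(8) for v in range(8)]
is_bent = all(abs(W) == 8 for W in spectrum)
assert is_bent

# Dillon's criterion on V = F_2^3 x {0_3}: all second-order derivatives vanish on V
derivatives_vanish = all(
    f(x, y) ^ f(x ^ a, y) ^ f(x ^ b, y) ^ f(x ^ a ^ b, y) == 0
    for a in range(8) for b in range(8) for x in range(8) for y in range(8))
assert derivatives_vanish
```

The same brute-force routine works for any permutation \(\pi\) of \(\mathbb{F}_{2}^{3}\) in place of the identity.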
Notably, we give a characterization of bent functions on \(\mathbb{F}_{2}^{n}\) in the \(\mathcal{MM}\) class that have a unique \(\mathcal{M}\)-subspace \(V=\mathbb{F}_{2}^{n/2}\times\{0_{n/2}\}\). We show that the property of a Maiorana-McFarland bent function \(f(x,y)=x\cdot\pi(y)+h(y)\) to have a _unique \(\mathcal{M}\)-subspace_ is, in many cases, completely determined by the choice of the permutation \(\pi\). In the other direction, if a permutation \(\pi\) admits linear structures (implying that its components also do), then \(f\in\mathcal{MM}\) has at least two \(\mathcal{M}\)-subspaces. This characterization not only contributes to the classification of Maiorana-McFarland bent functions but also partially explains why the condition that the components of \(\pi\) do not admit linear structures has been efficiently used in, e.g., [1, 2, 11, 22, 23] to specify functions in \(\mathcal{C}\) and \(\mathcal{D}\) that are outside \(\mathcal{MM}^{\#}\). More precisely, a modification of a bent function \(f\in\mathcal{MM}\) is more easily performed if only one vanishing subspace needs to be deprived of this property through the addition of an indicator function. Using the obtained knowledge about \(\mathcal{M}\)-subspaces of Maiorana-McFarland bent functions, we provide several design methods of specifying bent functions \(f_{1},f_{2},f_{3},f_{4}\) on \(\mathbb{F}_{2}^{n}\) such that the concatenation \(f=f_{1}||f_{2}||f_{3}||f_{4}\) is bent on \(\mathbb{F}_{2}^{n+2}\) and outside \(\mathcal{MM}^{\#}\) for all \(n\geq 6\). Additionally, we indicate that the bent functions on \(\mathbb{F}_{2}^{8}\) obtained with our approach are outside the \(\mathcal{PS}^{\#}\) class as well; thus, we contribute to a better understanding of the origin of all bent functions in \(n=8\) variables. The rest of the paper is organized in the following way.
In Subsection 1.1 we recall basic definitions related to Boolean functions, and in Subsection 1.2 we summarize the necessary algebraic properties of bent \(4\)-concatenation. In Section 2, we investigate which classes of permutations \(\pi\) on \(\mathbb{F}_{2}^{m}\) are suitable for the construction of Maiorana-McFarland bent functions of the form \((x,y)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\mapsto x\cdot\pi(y)\) with several \(\mathcal{M}\)-subspaces. Particularly, in Subsections 2.1 and 2.2, respectively, we show that permutations with linear structures, as well as quadratic permutations that admit many \(\mathcal{M}\)-subspaces, lead to Maiorana-McFarland bent functions with several \(\mathcal{M}\)-subspaces. In Section 3, we study the opposite question, namely, we investigate which classes of permutations \(\pi\) on \(\mathbb{F}_{2}^{m}\) are suitable for the construction of Maiorana-McFarland bent functions of the form \((x,y)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\mapsto x\cdot\pi(y)+h(y)\) with the unique canonical \(\mathcal{M}\)-subspace. In Subsection 3.1, we introduce permutations with the \((P_{1})\) property as those permutations \(\pi\) on \(\mathbb{F}_{2}^{m}\) for which \(D_{v}D_{w}\pi\neq 0_{m}\) for all linearly independent \(v,w\in\mathbb{F}_{2}^{m}\). Remarkably, we show that permutations \(\pi\) with this property guarantee that Maiorana-McFarland bent functions of the form \((x,y)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\mapsto x\cdot\pi(y)+h(y)\) have the unique canonical \(\mathcal{M}\)-subspace independently of the choice of a Boolean function \(h\) on \(\mathbb{F}_{2}^{m}\); the latter provides a variety of different Maiorana-McFarland bent functions with the unique \(\mathcal{M}\)-subspace even from a single permutation \(\pi\) with this property. In Subsection 3.2, we consider permutations \(\pi\) on \(\mathbb{F}_{2}^{m}\) for which \(D_{u}D_{v}\pi=0_{m}\) for any \(u,v\in S\), where \(S\) is a subspace with \(\dim(S)\geq 1\).
Remarkably, we completely characterize such permutations \(\pi\) on \(\mathbb{F}_{2}^{m}\) giving rise to bent functions \((x,y)\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\mapsto x\cdot\pi(y)\) with the unique canonical \(\mathcal{M}\)-subspace and refer to them as permutations with the \((P_{2})\) property in the sequel. In Section 4 we give several explicit constructions of permutations with the \((P_{1})\) and \((P_{2})\) properties. In Section 5, we provide several generic construction methods of bent functions outside the \(\mathcal{MM}^{\#}\) class using the bent \(4\)-concatenation. First, in Subsection 5.1, we completely describe possible \(\mathcal{M}\)-subspaces of the bent \(4\)-concatenation of four Maiorana-McFarland bent functions. Additionally, we explain how to check the membership in the \(\mathcal{PS}^{\#}\) class computationally. Consequently, we consider two different scenarios of the concatenation of Maiorana-McFarland bent functions which both lead to bent functions outside \(\mathcal{MM}^{\#}\). In Subsection 5.2, we show that if Maiorana-McFarland bent functions do not share a common \(\mathcal{M}\)-subspace, then their concatenation is outside \(\mathcal{MM}^{\#}\). In Subsection 5.3, we show that even if Maiorana-McFarland bent functions share a common \(\mathcal{M}\)-subspace, then under certain technical conditions it is still possible that their concatenation is outside \(\mathcal{MM}^{\#}\). Moreover, we indicate that with our approaches it is possible to construct inequivalent bent functions on \(\mathbb{F}_{2}^{8}\) outside \(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#}\). In Section 6, we conclude the paper and give a list of open problems. ### Preliminaries The vector space \(\mathbb{F}_{2}^{n}\) is the space of all \(n\)-tuples \(x=(x_{1},\ldots,x_{n})\), where \(x_{i}\in\mathbb{F}_{2}\).
For \(x=(x_{1},\ldots,x_{n})\) and \(y=(y_{1},\ldots,y_{n})\) in \(\mathbb{F}_{2}^{n}\), the usual scalar (or dot) product over \(\mathbb{F}_{2}\) is defined as \(x\cdot y=x_{1}y_{1}+\cdots+x_{n}y_{n}.\) The Hamming weight of \(x=(x_{1},\ldots,x_{n})\in\mathbb{F}_{2}^{n}\) is denoted and computed as \(wt(x)=\sum_{i=1}^{n}x_{i}.\) Throughout the paper, we denote by \(0_{n}=(0,0,\ldots,0)\in\mathbb{F}_{2}^{n}\) the all-zero vector with \(n\) coordinates, and by \(\operatorname{\mathbbm{e}}_{k}\in\mathbb{F}_{2}^{n}\) the \(k\)-th canonical basis vector. In certain cases, we endow \(\mathbb{F}_{2}^{n}\) with the structure of the finite field \((\mathbb{F}_{2^{n}},\cdot)\). An element \(\alpha\in\mathbb{F}_{2^{n}}\) is said to be a _primitive element_, if it is a generator of the multiplicative group \(\mathbb{F}_{2^{n}}^{*}\). The _absolute trace_\(Tr\colon\mathbb{F}_{2^{n}}\to\mathbb{F}_{2}\) is given by \(Tr(x)=\sum_{i=0}^{n-1}x^{2^{i}}\). The set of all Boolean functions in \(n\) variables, which is the set of mappings from \(\mathbb{F}_{2}^{n}\) to \(\mathbb{F}_{2}\), is denoted by \(\mathcal{B}_{n}\). It is well-known that any Boolean function \(f\in\mathcal{B}_{n}\) can be uniquely represented by the _algebraic normal form (ANF)_, which is given by \(f(x_{1},\ldots,x_{n})=\sum_{u\in\mathbb{F}_{2}^{n}}\lambda_{u}(\prod_{i=1}^{n }{x_{i}}^{u_{i}})\), where \(x_{i},\lambda_{u}\in\mathbb{F}_{2}\) and \(u=(u_{1},\ldots,u_{n})\in\mathbb{F}_{2}^{n}\). The _algebraic degree_ of \(f\), denoted by \(\deg(f)\), is the maximum Hamming weight of \(u\in\mathbb{F}_{2}^{n}\) for which \(\lambda_{u}\neq 0\) in its ANF. The _first-order derivative_ of a function \(f\in\mathcal{B}_{n}\) in the direction \(a\in\mathbb{F}_{2}^{n}\) is the mapping \(D_{a}f(x)=f(x+a)+f(x)\).
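These notions are easy to compute on truth tables. The following sketch (in Python, with bit \(i-1\) of the integer index holding \(x_{i}\); the example function \(f=x_{1}x_{2}+x_{3}\) is an assumption chosen purely for illustration) recovers the ANF coefficients \(\lambda_{u}\) via the fast Möbius transform, reads off the algebraic degree, and evaluates a first-order derivative:

```python
# ANF coefficients via the fast Moebius transform, the algebraic degree,
# and a first-order derivative; bit i-1 of the index holds x_i.
def moebius(tt):
    a = list(tt)
    n = len(a).bit_length() - 1
    for i in range(n):
        for x in range(len(a)):
            if (x >> i) & 1:
                a[x] ^= a[x ^ (1 << i)]
    return a                    # a[u] = lambda_u in the ANF

def degree(tt):
    return max((bin(u).count("1") for u, c in enumerate(moebius(tt)) if c),
               default=0)

# example (an assumption, not from the text): f = x1*x2 + x3 on F_2^3
tt = [((x & 1) & ((x >> 1) & 1)) ^ ((x >> 2) & 1) for x in range(8)]
print(degree(tt))                               # 2
print([tt[x] ^ tt[x ^ 4] for x in range(8)])    # D_{e_3}f = 1: e_3 is a linear structure
```

Note that the constant derivative \(D_{\operatorname{\mathbbm{e}}_{3}}f=1\) exhibits \(\operatorname{\mathbbm{e}}_{3}\) as a linear structure in the sense defined below.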
Derivatives of higher orders are defined recursively, i.e., the \(k\)_-th order derivative_ of a function \(f\in\mathcal{B}_{n}\) is defined by \(D_{V}f(x)=D_{a_{k}}D_{a_{k-1}}\ldots D_{a_{1}}f(x)=D_{a_{k}}(D_{a_{k-1}}\ldots D _{a_{1}}f)(x)\), where \(V=\langle a_{1},\ldots,a_{k}\rangle\) is a vector subspace of \(\mathbb{F}_{2}^{n}\) spanned by elements \(a_{1},\ldots,a_{k}\in\mathbb{F}_{2}^{n}\). An element \(a\in\mathbb{F}_{2}^{n}\) is called a _linear structure_ of \(f\in\mathcal{B}_{n}\), if \(f(x+a)+f(x)=const\) for all \(x\in\mathbb{F}_{2}^{n}\). We say that \(f\in\mathcal{B}_{n}\)_has no linear structures_, if \(0_{n}\) is the only linear structure of \(f\). The _Walsh-Hadamard transform_ (WHT) of \(f\in\mathcal{B}_{n}\), and its inverse WHT, at any point \(a\in\mathbb{F}_{2}^{n}\) are defined, respectively, by \[W_{f}(a)=\sum_{x\in\mathbb{F}_{2}^{n}}(-1)^{f(x)+a\cdot x}\quad\text{and} \quad(-1)^{f(x)}=2^{-n}\sum_{a\in\mathbb{F}_{2}^{n}}W_{f}(a)(-1)^{a\cdot x}.\] For even \(n\), a function \(f\in\mathcal{B}_{n}\) is called _bent_ if \(W_{f}(u)=\pm 2^{\frac{n}{2}}\) for all \(u\in\mathbb{F}_{2}^{n}\). For a bent function \(f\in\mathcal{B}_{n}\), a Boolean function \(f^{*}\in\mathcal{B}_{n}\) defined by \(W_{f}(u)=2^{\frac{n}{2}}(-1)^{f^{*}(u)}\) for all \(u\in\mathbb{F}_{2}^{n}\) is a bent function, called the _dual_ of \(f\). Two Boolean functions \(f,f^{\prime}\in\mathcal{B}_{n}\) are called _extended-affine equivalent_, if there exists an affine permutation \(A\) of \(\mathbb{F}_{2}^{n}\) and an affine function \(l\in\mathcal{B}_{n}\), such that \(f\circ A+l=f^{\prime}\). It is well known that extended-affine equivalence preserves the bent property. In the sequel, while saying two Boolean functions are (in)equivalent, we always mean extended-affine equivalence, since this is the only type of equivalence we deal with in this article.
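The WHT and the bentness test can be sketched directly from these definitions; the example \(f=x_{1}x_{2}+x_{3}x_{4}\) below is an assumed illustration (a standard quadratic bent function on \(\mathbb{F}_{2}^{4}\)), not taken from the text:

```python
# Walsh-Hadamard transform of a truth table and the bentness test
# |W_f(u)| = 2^(n/2); index i encodes x in F_2^n via its binary digits.
def walsh_hadamard(tt):
    n = len(tt).bit_length() - 1
    return [sum((-1) ** (tt[x] ^ (bin(a & x).count("1") % 2))
                for x in range(2 ** n))
            for a in range(2 ** n)]

def is_bent(tt):
    n = len(tt).bit_length() - 1
    return n % 2 == 0 and all(abs(w) == 2 ** (n // 2)
                              for w in walsh_hadamard(tt))

# example (an assumption): f = x1*x2 + x3*x4, bit i-1 of the index = x_i
f = [((x & 1) & ((x >> 1) & 1)) ^ (((x >> 2) & 1) & ((x >> 3) & 1))
     for x in range(16)]
print(is_bent(f), walsh_hadamard(f)[0])   # True 4
```

This quadratic-time transform suffices for the small instances considered here; a butterfly (fast WHT) implementation would be preferable for larger \(n\).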
The _Maiorana-McFarland class_\(\mathcal{MM}\) is the set of \(n\)-variable (\(n=2m\)) Boolean bent functions of the form \[f(x,y)=x\cdot\pi(y)+h(y),\text{ for all }x,y\in\mathbb{F}_{2}^{m},\] where \(\pi\) is a permutation on \(\mathbb{F}_{2}^{m}\), and \(h\) is an arbitrary Boolean function on \(\mathbb{F}_{2}^{m}\). **Definition 1.1**.: _A class of bent functions \(\text{B}_{n}\subset\mathcal{B}_{n}\) is complete if it is globally invariant under extended-affine equivalence. The completed class, denoted by \(\mathcal{MM}^{\#}\) in the case of the Maiorana-McFarland class \(\mathcal{MM}\), is the smallest possible complete class that contains the class under consideration._ With the following criterion of Dillon, one can show that a given Boolean bent function \(f\in\mathcal{B}_{n}\) is (not) a member of the completed Maiorana-McFarland class. **Lemma 1.2**.: _[_8_, p. 102]_ _Let \(n=2m\). A Boolean bent function \(f\in\mathcal{B}_{n}\) belongs to \(\mathcal{MM}^{\#}\) if and only if there exists an \(m\)-dimensional linear subspace \(V\) of \(\mathbb{F}_{2}^{n}\) such that the second-order derivatives \(D_{a}D_{b}f(x)=f(x)+f(x+a)+f(x+b)+f(x+a+b)\) vanish for any \(a,b\in V\)._ Following the terminology in [20], we introduce the \(\mathcal{M}\)-subspaces of Boolean (not necessarily bent) functions in the following way. **Definition 1.3**.: _Let \(f\in\mathcal{B}_{n}\) be a Boolean function. We call a vector subspace \(V\) of \(\mathbb{F}_{2}^{n}\) an \(\mathcal{M}\)-subspace of \(f\), if for any \(a,b\in V\) we have that \(D_{a}D_{b}f=0\). We denote by \(\mathcal{MS}_{r}(f)\) the collection of all \(r\)-dimensional \(\mathcal{M}\)-subspaces of the function \(f\)._ It is well known [6] that for a bent function \(f\in\mathcal{B}_{n}\) the maximum dimension of an \(\mathcal{M}\)-subspace is \(n/2\); bent functions achieving this bound with equality are exactly the bent functions in \(\mathcal{MM}^{\#}\) by Lemma 1.2.
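Dillon's criterion is readily checked on a small instance. A sketch, with an assumed permutation \(\pi\) of \(\mathbb{F}_{2}^{2}\) and \(h=0\): the second-order derivatives of \(f(x,y)=x\cdot\pi(y)\) vanish on the canonical subspace \(V=\mathbb{F}_{2}^{m}\times\{0_{m}\}\).

```python
# A Maiorana-McFarland bent function f(x,y) = x . pi(y) on F_2^2 x F_2^2,
# and Dillon's criterion (Lemma 1.2) checked on V = F_2^m x {0_m}.
# The point (x,y) is packed as the integer x*4 + y.
m = 2
pi = [0, 2, 3, 1]                      # an assumed permutation of F_2^2

def dot(a, b):                         # scalar product on F_2^m
    return bin(a & b).count("1") % 2

f = [dot(x, pi[y]) for x in range(4) for y in range(4)]   # index = x*4 + y

def second_derivative_vanishes(tt, a, b, n):
    return all(tt[x] ^ tt[x ^ a] ^ tt[x ^ b] ^ tt[x ^ a ^ b] == 0
               for x in range(2 ** n))

V = [x * 4 for x in range(4)]          # the canonical M-subspace
print(all(second_derivative_vanishes(f, a, b, 4) for a in V for b in V))  # True
```

The vanishing follows from linearity of \(x\mapsto x\cdot\pi(y)\) in \(x\), exactly as Dillon observed.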
For every Maiorana-McFarland bent function \(f(x,y)=x\cdot\pi(y)+h(y)\) on \(\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\), the vector space \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\) is an \(\mathcal{M}\)-subspace of \(f\), as observed by Dillon [8]. However, in general, this vector space \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\), which we refer to as _the canonical \(\mathcal{M}\)-subspace_, is not necessarily unique. Indeed, for a Maiorana-McFarland bent function \(f\) on \(\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\), the number of its \(\mathcal{M}\)-subspaces is at most \(\prod_{i=1}^{m}\left(2^{i}+1\right)\). Moreover, the equality is attained if and only if \(f\in\mathcal{B}_{2m}\) is quadratic, as it was deduced in [19] from [10, Theorem 2]. Finally, we note that in [20, Proposition 4.4] it was shown that the number of \(\mathcal{M}\)-subspaces of a Boolean function \(f\in\mathcal{B}_{n}\) is invariant under equivalence; consequently, two bent functions with a different number of \(\mathcal{M}\)-subspaces are inequivalent. One can determine all \(\mathcal{M}\)-subspaces of a Boolean function \(f\in\mathcal{B}_{n}\) as described in [20, Algorithm 1]. We note that for vectorial functions, i.e., the mappings \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\), one can essentially extend the definitions related to differential properties (e.g., derivatives, linear structures and \(\mathcal{M}\)-subspaces) by simply replacing \(f\in\mathcal{B}_{n}\) by \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\) in the corresponding definitions. For \(b\in\mathbb{F}_{2}^{m}\), the _component function_\(F_{b}\in\mathcal{B}_{n}\) of \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\) is defined by \(F_{b}(x)=b\cdot F(x)\) for all \(x\in\mathbb{F}_{2}^{n}\).
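The bound \(\prod_{i=1}^{m}(2^{i}+1)\) and its equality case can be verified exhaustively in the smallest interesting case \(m=2\). The sketch below uses the assumed quadratic example \(f(x,y)=x\cdot y\) (so \(\pi=\mathrm{id}\)) and brute-forces all \(2\)-dimensional \(\mathcal{M}\)-subspaces of \(\mathbb{F}_{2}^{4}\), expecting \(3\cdot 5=15\) of them:

```python
# Count all 2-dimensional M-subspaces of the quadratic MM bent function
# f(x,y) = x . y on F_2^2 x F_2^2; the bound prod_{i=1}^{m}(2^i + 1) is
# attained with equality for quadratic functions (here 3 * 5 = 15).
from itertools import combinations

def dot(a, b):
    return bin(a & b).count("1") % 2

f = [dot(x, y) for x in range(4) for y in range(4)]   # index = x*4 + y

def vanishes(a, b):
    return all(f[x] ^ f[x ^ a] ^ f[x ^ b] ^ f[x ^ a ^ b] == 0
               for x in range(16))

subspaces = set()
for a, b in combinations(range(1, 16), 2):
    span = {0, a, b, a ^ b}            # the subspace spanned by a and b
    if all(vanishes(u, v) for u in span for v in span):
        subspaces.add(frozenset(span))
print(len(subspaces))                  # 15
```

For a quadratic \(f\) the second-order derivatives form a bilinear form, so this count is exactly the number of Lagrangian subspaces of a symplectic form on \(\mathbb{F}_{2}^{4}\).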
Finally, every vectorial function \(F\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}^{m}\) can be uniquely represented in the form \(F(x)=(f_{1}(x),\ldots,f_{m}(x))^{T}\), where the Boolean functions \(f_{i}\in\mathcal{B}_{n}\) are called the _coordinate functions_ of \(F\); thus the algebraic normal form and the algebraic degree of \(F\) are defined coordinate-wise. ### Bent 4-concatenation and its algebraic properties In the following, we will be mainly interested in the design of bent functions \(f\in\mathcal{B}_{n+2}\) from four bent functions \(f_{1},f_{2},f_{3},f_{4}\in\mathcal{B}_{n}\) using the _bent 4-concatenation_\(f=f_{1}||f_{2}||f_{3}||f_{4}\), whose ANF is given by \[f(x,y_{1},y_{2})=f_{1}(x)+y_{1}(f_{1}+f_{3})(x)+y_{2}(f_{1}+f_{2})(x)+y_{1}y_{2 }(f_{1}+f_{2}+f_{3}+f_{4})(x). \tag{1.1}\] From this expression, it is not difficult to see that \(f_{1}(x)=f(x,0,0),f_{2}(x)=f(x,0,1),f_{3}(x)=f(x,1,0)\) and \(f_{4}(x)=f(x,1,1)\). Note that if \(f_{i}\in\mathcal{B}_{n}\) are all bent, then the necessary and sufficient condition for \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) to be bent as well is that the _dual bent condition_ is satisfied [9], i.e., \(f_{1}^{*}+f_{2}^{*}+f_{3}^{*}+f_{4}^{*}=1\).
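The dual bent condition is easy to exercise in the smallest case \(n=2\). A sketch with the assumed choice \(f_{1}=f_{2}=f_{3}=x_{1}x_{2}\) and \(f_{4}=x_{1}x_{2}+1\): since the dual of \(x_{1}x_{2}\) is \(x_{1}x_{2}\) and complementing a bent function complements its dual, the duals sum to \(1\), so the concatenation (which by (1.1) equals \(x_{1}x_{2}+y_{1}y_{2}\)) is bent on \(\mathbb{F}_{2}^{4}\):

```python
# Bent 4-concatenation as truth-table concatenation, with the blocks
# ordered by (y1, y2) = 00, 01, 10, 11; the bentness of the result is
# verified via the Walsh-Hadamard transform.
def walsh(tt):
    n = len(tt).bit_length() - 1
    return [sum((-1) ** (tt[x] ^ (bin(a & x).count("1") % 2))
                for x in range(2 ** n)) for a in range(2 ** n)]

def is_bent(tt):
    n = len(tt).bit_length() - 1
    return n % 2 == 0 and all(abs(w) == 2 ** (n // 2) for w in walsh(tt))

f1 = [(x & 1) & (x >> 1) for x in range(4)]   # x1*x2 on F_2^2 (bent)
f4 = [v ^ 1 for v in f1]                      # its complement
f = f1 + f1 + f1 + f4                         # f1 || f2 || f3 || f4
print(is_bent(f))                             # True
```

Replacing \(f_{4}\) by \(f_{1}\) breaks the dual bent condition (the duals then sum to \(0\)), and the concatenation is no longer bent.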
For the further analysis of the bent 4-concatenation \(f=f_{1}||f_{2}||f_{3}||f_{4}\) in terms of the second-order derivatives, we derive the expression for \(D_{a}D_{b}f(x,y_{1},y_{2})\) where \(a=(a^{\prime},a_{1},a_{2})\) and \(b=(b^{\prime},b_{1},b_{2})\) and \(a^{\prime},b^{\prime}\in\mathbb{F}_{2}^{n}\) and \(a_{i},b_{i}\in\mathbb{F}_{2}\) as follows: \[\begin{split} D_{a}D_{b}f(x,y_{1},y_{2})&=D_{a^{ \prime}}D_{b^{\prime}}f_{1}(x)+y_{1}D_{a^{\prime}}D_{b^{\prime}}f_{13}(x)+y_{2}D _{a^{\prime}}D_{b^{\prime}}f_{12}(x)+y_{1}y_{2}D_{a^{\prime}}D_{b^{\prime}}f_{1 234}(x)\\ &+a_{1}D_{b^{\prime}}f_{13}(x+a^{\prime})+b_{1}D_{a^{\prime}}f_{1 3}(x+b^{\prime})+a_{2}D_{b^{\prime}}f_{12}(x+a^{\prime})+b_{2}D_{a^{\prime}}f_ {12}(x+b^{\prime})\\ &+(a_{1}y_{2}+a_{2}y_{1}+a_{1}a_{2})D_{b^{\prime}}f_{1234}(x+a^{ \prime})+(b_{1}y_{2}+b_{2}y_{1}+b_{1}b_{2})D_{a^{\prime}}f_{1234}(x+b^{\prime}) \\ &+(a_{1}b_{2}+b_{1}a_{2})f_{1234}(x+a^{\prime}+b^{\prime}),\end{split} \tag{1.2}\] where the Boolean function \(f_{i_{1}\ldots i_{k}}\in\mathcal{B}_{n}\) is defined by \(f_{i_{1}\ldots i_{k}}:=f_{i_{1}}+\cdots+f_{i_{k}}\). In this context, the main design goal is to specify suitable \(f_{i}\in\mathcal{B}_{n}\) so that \(f\in\mathcal{B}_{n+2}\) is a bent function, and to ensure that \(f\) does not satisfy the \(\mathcal{MM}^{\#}\) class membership criterion of Dillon. ## 2 Bent functions with more than one \(\mathcal{M}\)-subspace In this section, we derive sufficient conditions that \(f(x,y)=x\cdot\pi(y)+h(y)\) admits more than one \(\mathcal{M}\)-subspace. This feature is disadvantageous from the perspective of constructing bent functions \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{2m+2}\) outside \(\mathcal{MM}^{\#}\) from Maiorana-McFarland bent functions \(f_{i}\in\mathcal{B}_{2m}\), since in this case, it is more difficult to ensure that the second-order derivatives of \(f\) do not vanish on any \((m+1)\)-dimensional subspace of \(\mathbb{F}_{2}^{2m+2}\). 
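The expansion (1.2) is purely algebraic, so it can be sanity-checked numerically with four arbitrary (not necessarily bent) functions; a sketch with assumed tables on \(\mathbb{F}_{2}^{2}\), comparing both sides at every point and for every pair of directions:

```python
# Numerical check of the second-order derivative formula (1.2) for the
# 4-concatenation, over all a = (a', a1, a2), b = (b', b1, b2), and all
# points (x, y1, y2); the f_i are arbitrary assumed tables on F_2^2.
from itertools import product

f1, f2, f3, f4 = [0, 1, 1, 0], [0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 0, 1]
f13 = [a ^ b for a, b in zip(f1, f3)]
f12 = [a ^ b for a, b in zip(f1, f2)]
f1234 = [a ^ b ^ c ^ d for a, b, c, d in zip(f1, f2, f3, f4)]

def f(x, y1, y2):                      # the ANF (1.1)
    return f1[x] ^ (y1 & f13[x]) ^ (y2 & f12[x]) ^ (y1 & y2 & f1234[x])

def D(tt, a):                          # first-order derivative of a table
    return [tt[x] ^ tt[x ^ a] for x in range(len(tt))]

bits = (0, 1)
ok = all(
    (f(x, y1, y2) ^ f(x ^ ap, y1 ^ a1, y2 ^ a2) ^ f(x ^ bp, y1 ^ b1, y2 ^ b2)
     ^ f(x ^ ap ^ bp, y1 ^ a1 ^ b1, y2 ^ a2 ^ b2))            # D_a D_b f
    ==
    (D(D(f1, ap), bp)[x]                                      # formula (1.2)
     ^ (y1 & D(D(f13, ap), bp)[x]) ^ (y2 & D(D(f12, ap), bp)[x])
     ^ (y1 & y2 & D(D(f1234, ap), bp)[x])
     ^ (a1 & D(f13, bp)[x ^ ap]) ^ (b1 & D(f13, ap)[x ^ bp])
     ^ (a2 & D(f12, bp)[x ^ ap]) ^ (b2 & D(f12, ap)[x ^ bp])
     ^ (((a1 & y2) ^ (a2 & y1) ^ (a1 & a2)) & D(f1234, bp)[x ^ ap])
     ^ (((b1 & y2) ^ (b2 & y1) ^ (b1 & b2)) & D(f1234, ap)[x ^ bp])
     ^ (((a1 & b2) ^ (b1 & a2)) & f1234[x ^ ap ^ bp]))
    for ap, bp, x in product(range(4), repeat=3)
    for a1, a2, b1, b2, y1, y2 in product(bits, repeat=6))
print(ok)   # True
```

Such an exhaustive check over a small domain is a cheap guard against sign or index slips when the formula is used in larger constructions.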
Essentially, this property is closely related to the choice of a permutation \(\pi\) on \(\mathbb{F}_{2}^{m}\), which is then characterized by the presence of non-zero linear structures or by being quadratic. ### Permutations with linear structures First, we show that permutations with linear structures give rise to Maiorana-McFarland bent functions with more than one \(\mathcal{M}\)-subspace. **Proposition 2.1**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\) with a non-zero linear structure \(s\in\mathbb{F}_{2}^{m}\), i.e.,_ \[D_{s}\pi(x)=\pi(x)+\pi(x+s)=v\in\mathbb{F}_{2}^{m}\] _holds for all \(x\in\mathbb{F}_{2}^{m}\), and let \(h:\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) be an arbitrary Boolean function. Then, the function \(g\colon\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) defined by_ \[g(x,y)=x\cdot\pi(y)+h(y),\quad\text{for all $x,y\in\mathbb{F}_{2}^{m}$},\] _has at least two \(\mathcal{M}\)-subspaces._ Proof.: Clearly, the canonical \(\mathcal{M}\)-subspace \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\) is the first one. We will now construct another one. Let \(v=D_{s}\pi\in\mathbb{F}_{2}^{m}\) (note that \(v\neq 0_{m}\) since \(\pi\) is a permutation) and \(W=\langle v\rangle^{\perp}\subset\mathbb{F}_{2}^{m}\). Set \(V=\langle W\times\{0_{m}\},(0_{m},s)\rangle\), which is an \(m\)-dimensional subspace of \(\mathbb{F}_{2}^{2m}\). For two different non-zero vectors \(a=(a_{1},a_{2})\) and \(b=(b_{1},b_{2})\) in \(V\) we compute \[D_{a}D_{b}g(x,y)=x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))+a_{1}\cdot D_{b_{2}}\pi(y+a_{2} )+b_{1}\cdot D_{a_{2}}\pi(y+b_{2})+D_{a_{2}}D_{b_{2}}h(y).\] If \(a_{2}=b_{2}=0_{m}\), i.e., if \(a\) and \(b\) are in \(W\times\{0_{m}\}\), we deduce that \(D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}g(x,y)=0\). If \(b=(0_{m},s)\) and \(a\in W\times\{0_{m}\}\), then \(a_{2}=0_{m}\), and we have \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}g(x,y)=a_{1}\cdot D_{s}\pi(y)=a_{1}\cdot v=0,\] since \(a_{1}\in W=\langle v\rangle^{\perp}\). Finally, if \(a_{2}=b_{2}=s\), then \(a_{1},b_{1}\in W\), and since \(D_{s}D_{s}\pi=0_{m}\) and \(D_{s}D_{s}h=0\), we get \(D_{a}D_{b}g(x,y)=a_{1}\cdot v+b_{1}\cdot v=0\). From this, we conclude that the second-order derivatives of \(g\) vanish on \(V\) as well.
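The construction in the proof can be replayed computationally. A sketch with an assumed small example: the permutation \(\pi(y)=(y_{1},y_{2},y_{3}+y_{1}y_{2})\) of \(\mathbb{F}_{2}^{3}\) has the linear structure \(s=\operatorname{\mathbbm{e}}_{3}\) with \(D_{s}\pi=\operatorname{\mathbbm{e}}_{3}\), and the resulting subspace \(V=\langle W\times\{0_{m}\},(0_{m},s)\rangle\) is indeed a second vanishing subspace of \(g(x,y)=x\cdot\pi(y)\):

```python
# Linear structures of a permutation of F_2^m, and the second vanishing
# subspace V = <W x {0}, (0,s)> from the proof of Proposition 2.1.
# Assumed example: pi(y) = (y1, y2, y3 + y1*y2), bit i-1 of y holds y_i.
m = 3
pi = [(y & 3) | ((((y >> 2) ^ (y & (y >> 1))) & 1) << 2) for y in range(8)]

def dot(a, b):
    return bin(a & b).count("1") % 2

def linear_structures(pi):
    out = []
    for s in range(1, len(pi)):
        d = {pi[y] ^ pi[y ^ s] for y in range(len(pi))}
        if len(d) == 1:                     # D_s pi is constant
            out.append((s, d.pop()))
    return out

print(linear_structures(pi))                # [(4, 4)]  i.e. s = e3, v = e3

g = [dot(x, pi[y]) for x in range(8) for y in range(8)]   # point = x*8 + y
s, v = linear_structures(pi)[0]
W = [a for a in range(8) if dot(a, v) == 0]               # <v>-orthogonal
V = {w1 * 8 ^ (w2 * s) for w1 in W for w2 in (0, 1)}      # <W x {0}, (0,s)>
ok = all(all(g[x] ^ g[x ^ a] ^ g[x ^ b] ^ g[x ^ a ^ b] == 0
             for x in range(64)) for a in V for b in V)
print(len(V), ok)                           # 8 True
```

Since \(4\in V\) (a vector with non-zero \(y\)-part), \(V\) differs from the canonical \(\mathcal{M}\)-subspace, confirming the proposition for this instance.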
However, the condition that permutation \(\pi\) of \(\mathbb{F}_{2}^{m}\) has no linear structures does not imply that the only vanishing \(\mathcal{M}\)-subspace is \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\), as the following example shows. **Example 2.2**.: _Let \(m=5\) and \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\) defined by its algebraic normal form in the following way:_ \[\pi(y)=\begin{bmatrix}y_{1}\\ y_{2}\\ y_{3}+y_{1}y_{3}+y_{1}y_{5}\\ y_{1}y_{3}+y_{2}y_{3}+y_{4}\\ y_{1}y_{3}+y_{2}y_{4}+y_{5}+y_{1}y_{5}\end{bmatrix}. \tag{2.1}\] _It is not difficult to check that the only linear structure of \(\pi\) is \(s=0\). However, the function \(g(x,y)=x\cdot\pi(y)\) has exactly two \(\mathcal{M}\)-subspaces: the canonical \(\mathcal{M}\)-subspace \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\) as well as \(V\), which is given by:_ \[V=\left\langle\begin{array}{cccccccccc}1&0&0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&0&0&1\end{array}\right\rangle.\] _Note that for the permutation \(\pi\) defined in (2.1), there exist many Boolean functions \(h\) on \(\mathbb{F}_{2}^{5}\) such that by adding the Boolean function \(h(y)\) on \(\mathbb{F}_{2}^{5}\) to \(g(x,y)=x\cdot\pi(y)\), one gets a bent function \(f(x,y)=x\cdot\pi(y)+h(y)\) having the unique canonical \(\mathcal{M}\)-subspace. A concrete example of such a function is \(h(y_{1},\ldots,y_{5})=y_{3}y_{4}y_{5}\)._ ### Quadratic permutations inducing more than one \(\mathcal{M}\)-subspace for bent functions in \(\mathcal{MM}\) In this subsection, we provide instances of quadratic permutations for which the function defined by \(f(x,y)=x\cdot\pi(y)\) has more than one \(\mathcal{M}\)-subspace. We will use the following two results from [12]. **Lemma 2.3**.: _[_12_]_ _Let \(G:\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}^{t}\) be a vectorial Boolean function.
If there exists an \((m-k)\)-dimensional subspace \(H\) of \(\mathbb{F}_{2}^{m}\) such that \(D_{a}D_{b}G=0_{t}\) for all \(a,b\in H\), then the algebraic degree of \(G\) is at most \(k+1\)._ **Lemma 2.4**.: _[_12_]_ _Let \(\pi:\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}^{m}\) be a permutation such that there is a linear hyperplane \(V\) of \(\mathbb{F}_{2}^{m}\), on which \(\pi\) is affine. Let \(l(x)\) be the linear Boolean function that defines \(V\), that is, \(l(x)=0\) if and only if \(x\in V\). Then, \(l(x)\) or \(l(x)+1\) is a component function of \(\pi\)._ **Lemma 2.5**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\), such that there exists an \((m-1)\)-dimensional subspace \(S\subset\mathbb{F}_{2}^{m}\) for which \(D_{a}D_{b}\pi=0_{m}\), for all \(a,b\in S\). Let \(s:\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) be the linear Boolean function that defines \(S\), that is, \(s(y)=0\) if and only if \(y\in S\). Then, \(\pi\) is at most quadratic and \(s(y)\) or \(s(y)+1\) is a component function of \(\pi\)._ Proof.: The fact that \(\pi\) is at most quadratic follows directly from Lemma 2.3. Let \(a,b\) be two arbitrary vectors from \(S\). Since \(D_{a}D_{b}\pi(y)=0_{m}\) for all \(y\in\mathbb{F}_{2}^{m}\), setting \(y=0_{m}\) we get: \[\pi(a+b)+\pi(a)+\pi(b)+\pi(0_{m})=0_{m}.\] Since \(a,b\in S\) were arbitrary, we deduce that \(\pi\) is affine on the linear hyperplane \(S\), and from Lemma 2.4 it follows that \(s(y)\) or \(s(y)+1\) is a component function of \(\pi\). **Proposition 2.6**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\), such that there exists an \((m-1)\)-dimensional subspace \(S\subset\mathbb{F}_{2}^{m}\) for which \(D_{a}D_{b}\pi=0_{m}\), for all \(a,b\in S\). 
Let \(f\colon\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) be the function defined by:_ \[f(x,y)=x\cdot\pi(y).\] _Then, \(f\) has at least two \(\mathcal{M}\)-subspaces._ Proof.: It is obvious that \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\) is one \(\mathcal{M}\)-subspace for \(f\). Let \(s:\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) be the linear Boolean function that defines \(S\), that is, \(s(y)=0\) if and only if \(y\in S\). From Lemma 2.5 we deduce that \(s(y)\) or \(s(y)+1\) is a component function of \(\pi\). Let \(c\in\mathbb{F}_{2}^{m}\) be such that \(c\cdot\pi\) is equal to \(s\) or \(s+1\). Let \(S^{\prime}\) denote the subspace \(S^{\prime}=\{0_{m}\}\times S\), and let \(V\) be the \(m\)-dimensional subspace of \(\mathbb{F}_{2}^{2m}\) defined by \(V=\langle(c,0_{m}),S^{\prime}\rangle\). We will show that \(V\) is also an \(\mathcal{M}\)-subspace for \(f\). If \(v=(v_{1},v_{2})\) and \(w=(w_{1},w_{2})\) are from \(V\) such that \(v_{1}=w_{1}=0_{m}\), that is \(v,w\in S^{\prime}\), then \(v_{2},w_{2}\) are in \(S\), and \[D_{v}D_{w}f(x,y)=x\cdot D_{v_{2}}D_{w_{2}}\pi(y)=0.\] Assume now that \(v=(c,0_{m})\) and \(w\in S^{\prime}\). Then \[D_{v}D_{w}f(x,y)=D_{w}(c\cdot\pi(y))=s(y+w_{2})+s(y).\] Since \(w_{2}\) is in \(S\), then \(y+w_{2}\) is in \(S\) if and only if \(y\) is in \(S\), hence \(s(y+w_{2})=s(y)\), and consequently \[D_{v}D_{w}f(x,y)=s(y+w_{2})+s(y)=0.\] We conclude that \(D_{v}D_{w}f=0\) for all \(v,w\in V\), and hence that \(V\) is also an \(\mathcal{M}\)-subspace for \(f\). ## 3 Bent functions in \(\mathcal{MM}\) with the unique canonical \(\mathcal{M}\)-subspace In this section, we characterize more precisely permutations that give rise to the unique canonical \(\mathcal{M}\)-subspace for \(f(x,y)=x\cdot\pi(y)+h(y)\). This is achieved through two useful properties called \((P_{1})\) and \((P_{2})\) which classify permutations with respect to vanishing subspaces of their second-order derivatives \(D_{a}D_{b}\pi\).
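A first computational glimpse of the favorable situation: for the cube map \(y\mapsto y^{3}\) over \(\mathrm{GF}(2^{3})\) (an APN permutation, so \(D_{v}D_{w}\pi\neq 0_{m}\) for all linearly independent \(v,w\), since a vanishing second-order derivative would force some first-order derivative of \(\pi\) to take a value four times, contradicting APN-ness), an exhaustive search over all \(3\)-dimensional subspaces of \(\mathbb{F}_{2}^{6}\) finds that \(f(x,y)=x\cdot\pi(y)\) has only the canonical \(\mathcal{M}\)-subspace. A sketch:

```python
# pi(y) = y^3 over GF(2^3) (APN permutation): f(x,y) = x . pi(y) has the
# canonical F_2^3 x {0_3} as its only 3-dimensional M-subspace.
from itertools import combinations

def gf8_mul(a, b):                     # GF(2^3) with modulus x^3 + x + 1
    r = 0
    for i in range(3):
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):                   # reduce modulo the field polynomial
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

pi = [gf8_mul(gf8_mul(y, y), y) for y in range(8)]        # y -> y^3

def dot(a, b):
    return bin(a & b).count("1") % 2

f = [dot(x, pi[y]) for x in range(8) for y in range(8)]   # point = x*8 + y
van = {(a, b): all(f[x] ^ f[x ^ a] ^ f[x ^ b] ^ f[x ^ a ^ b] == 0
                   for x in range(64))
       for a in range(64) for b in range(64)}

m_subspaces = set()
for a, b, c in combinations(range(1, 64), 3):
    span = {i ^ j ^ k for i in (0, a) for j in (0, b) for k in (0, c)}
    if len(span) == 8 and all(van[u, v] for u in span for v in span):
        m_subspaces.add(frozenset(span))
print(len(m_subspaces))                # 1: only F_2^3 x {0_3}
```

This is exactly the behavior that Theorem 3.1 below guarantees in general for permutations with the \((P_{1})\) property.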
In Section 4, we will provide some generic methods of specifying permutations satisfying these properties, including a generic class of APN permutations that necessarily satisfy the \((P_{1})\) property. ### Bent functions from permutations having \((P_{1})\) property In the following statement, we provide a sufficient condition on permutations \(\pi\) of \(\mathbb{F}_{2}^{m}\), such that \(f(x,y)=x\cdot\pi(y)+h(y)\) has the unique \(\mathcal{M}\)-subspace \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\) independently of the choice of a function \(h\) on \(\mathbb{F}_{2}^{m}\). **Theorem 3.1**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\) which has the following property:_ \[D_{v}D_{w}\pi\neq 0_{m}\text{ for all linearly independent }v,w\in\mathbb{F}_{2}^{m}.\tag{$P_{1}$}\] _Define \(f\colon\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) by \(f(x,y)=x\cdot\pi(y)+h(y)\), for all \(x,y\in\mathbb{F}_{2}^{m}\), where \(h\colon\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) is an arbitrary Boolean function. Then, the following hold:_ 1. _Permutation_ \(\pi\) _has no linear structures._ 2. _The vector space_ \(V=\mathbb{F}_{2}^{m}\times\{0_{m}\}\) _is the only_ \(\mathcal{M}\)_-subspace of_ \(f\)_._ Proof.: _1._ Assume that \(\pi\) has a non-zero linear structure \(a\in\mathbb{F}_{2}^{m}\), i.e., for all \(x\in\mathbb{F}_{2}^{m}\) holds \(D_{a}\pi(x)=v\) for some \(v\in\mathbb{F}_{2}^{m}\). Then, taking \(b\in\mathbb{F}_{2}^{m}\setminus\{0_{m},a\}\), we get that \(D_{a}D_{b}\pi=0_{m}\), which contradicts the property (\(P_{1}\)). _2._ Let \(V\) be an \(m\)-dimensional subspace of \(\mathbb{F}_{2}^{2m}\) such that \(D_{a}D_{b}f=0\) for all \(a,b\in V\). Define the linear mapping \(L:V\to\mathbb{F}_{2}^{m}\) by \(L(x,y)=y\), for all \((x,y)\in V\).
In general, the second-order derivative of \(f\) is given by \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))+a_{1 }\cdot D_{b_{2}}\pi(y+a_{2})+b_{1}\cdot D_{a_{2}}\pi(y+b_{2})+D_{a_{2}}D_{b_{2 }}h(y). \tag{3.1}\] If \(a_{2},b_{2}\in\mathbb{F}_{2}^{m}\setminus\{0_{m}\}\) and \(a_{2}\neq b_{2}\), then \(D_{a_{2}}D_{b_{2}}\pi(y)\neq 0_{m}\), so \(D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f\neq 0\), because the term \(x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))\) depends on \(x\) and cannot be cancelled by the remaining terms. Since for all \(a,b\in V\) we have \(D_{a}D_{b}f=0\), we deduce that, for all \(a=(a_{1},a_{2}),b=(b_{1},b_{2})\) in \(V\), either \(L(a)=a_{2}=0_{m}\), or \(L(b)=b_{2}=0_{m}\), or \(L(a)=a_{2}=b_{2}=L(b)\). This means that \(\dim(Im(L))\leq 1\). From the rank-nullity theorem, we get that \(\dim(Ker(L))\geq m-1\). If \(\dim(Ker(L))=m\), then \(V=\mathbb{F}_{2}^{m}\times\{0_{m}\}\). Assume now that \(\dim(Ker(L))=m-1\), and let \(b=(b_{1},b_{2})\in V\) be the vector such that \(b_{2}\neq 0_{m}\). For all \(a=(a_{1},a_{2})\in Ker(L)\) we have \(a_{2}=0_{m}\), and hence \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=a_{1}\cdot D_{b_{2}}\pi(y)=0,\text{ for all }y\in\mathbb{F}_{2}^{m}. \tag{3.2}\] Denote by \(S_{b}\) the subspace of \(\mathbb{F}_{2}^{m}\) generated by \(\{D_{b_{2}}\pi(y)\colon y\in\mathbb{F}_{2}^{m}\}\). Note that, since \(\pi\) is a permutation and \(b_{2}\neq 0_{m}\), the vector \(D_{b_{2}}\pi(y)=\pi(y)+\pi(y+b_{2})\) is never equal to \(0_{m}\); this means that if \(\dim(S_{b})=1\), then \(D_{b_{2}}\pi(y)\) is constant (i.e., \(b_{2}\) is a linear structure for \(\pi\)), and consequently, for any \(c\in\mathbb{F}_{2}^{m}\setminus\{0_{m},b_{2}\}\), we have \(D_{c}D_{b_{2}}\pi=0_{m}\), which is in contradiction with the assumption \(D_{v}D_{w}\pi\neq 0_{m}\), for all nonzero different \(v,w\in\mathbb{F}_{2}^{m}\). This implies that \(\dim(S_{b})\geq 2\), and hence \(\dim(S_{b}^{\perp})\leq m-2\).
From the equation (3.2) we have that for every \(a=(a_{1},a_{2})\in Ker(L)\), the vector \(a_{1}\) is in \(S_{b}^{\perp}\), hence \(\{a_{1}\colon a=(a_{1},a_{2})\in Ker(L)\}\subseteq S_{b}^{\perp}\). However, \(\dim(\{a_{1}\colon a=(a_{1},a_{2})\in Ker(L)\})=\dim(Ker(L))=m-1\), and this is a contradiction, because \(\dim(S_{b}^{\perp})\leq m-2\). This means that the case \(\dim(Ker(L))=m-1\) is not possible, hence, the only \(m\)-dimensional subspace of \(\mathbb{F}_{2}^{2m}\) such that \(D_{a}D_{b}f=0\) for all \(a,b\in V\), is \(V=\mathbb{F}_{2}^{m}\times\{0_{m}\}\). Imposing an additional condition on the permutation \(\pi\), it is possible to further refine the structure of vanishing subspaces. **Corollary 3.2**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\) with the property (\(P_{1}\)) and such that \(\gamma\cdot\pi\) has no nonzero linear structures for \(\gamma\in\mathbb{F}_{2}^{m}\setminus\{0_{m}\}\). Let \(f\colon\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) be the function defined by \(f(x,y)=x\cdot\pi(y)+h(y)\), for all \(x,y\in\mathbb{F}_{2}^{m}\), where \(h:\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) is an arbitrary Boolean function. If \(S\) is a subspace of \(\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\) such that \(\dim(S)>1\) and \(D_{a}D_{b}f=0\), for all \(a,b\in S\), then \(S\) is a subspace of \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\)._ Proof.: Notice that since \(\pi\) has the (\(P_{1}\)) property, there exist no two distinct nonzero elements \(u,v\in\mathbb{F}_{2}^{m}\) such that \(D_{u}D_{v}\pi(y)=0_{m}\), for all \(y\in\mathbb{F}_{2}^{m}\). Consequently, \(\pi(y)+\pi(y+u)+\pi(y+v)+\pi(y+u+v)\neq 0_{m}\) for any distinct nonzero \(u,v\in\mathbb{F}_{2}^{m}\). 
Then, denoting \(a=(a_{1},a_{2})\), \(b=(b_{1},b_{2})\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\), we have \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))+a_{1 }\cdot D_{b_{2}}\pi(y+a_{2})+b_{1}\cdot D_{a_{2}}\pi(y+b_{2})+D_{a_{2}}D_{b_{ 2}}h(y).\] The term \(x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))\) cannot be cancelled unless \(a_{2}=0_{m}\) or \(b_{2}=0_{m}\), or, alternatively, \(a_{2}=b_{2}\neq 0_{m}\). Assuming that \(a_{2}=0_{m}\) and \(b_{2}\neq 0_{m}\) (the same reasoning applies if \(b_{2}=0_{m}\)) leads to \(D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=a_{1}\cdot D_{b_{2}}\pi(y)\), which implies that \(a_{1}=0_{m}\) (as otherwise \(b_{2}\) would be a nonzero linear structure of the component \(a_{1}\cdot\pi\)) and therefore \(a=(a_{1},a_{2})=(0_{m},0_{m})\), a contradiction. The case \(a_{2}=b_{2}\neq 0_{m}\), implying also that \(a_{1}\neq b_{1}\) since \(\dim(S)>1\), gives \(D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=(a_{1}+b_{1})\cdot D_{a_{2}}\pi(y+a_{ 2})\), which is not identically zero (since \(a_{1}+b_{1}\neq 0_{m}\), and otherwise \(a_{2}\) would be a nonzero linear structure of the component \((a_{1}+b_{1})\cdot\pi\)), and consequently \(D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)\neq 0\). The following result specifies both the necessary and sufficient condition for a permutation \(\pi\) on \(\mathbb{F}_{2}^{m}\), when the function \(h(y)=\delta_{0}(y)=\prod_{i=1}^{m}(y_{i}+1)\) is used to define \(f(x,y)=x\cdot\pi(y)+h(y)\), so that \(f\) admits only the canonical vanishing \(\mathcal{M}\)-subspace \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\). **Proposition 3.3**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\) with \(\deg(\pi)<m-1\), and let \(f\colon\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}\) be the function defined by_ \[f(x,y)=x\cdot\pi(y)+\delta_{0}(y),\text{ for all }x,y\in\mathbb{F}_{2}^{m}.\] _Then \(f\) has only one \(\mathcal{M}\)-subspace if and only if \(\pi\) has no nonzero linear structures._ Proof.: If \(\pi\) has linear structures, then the fact that \(f\) has at least two \(\mathcal{M}\)-subspaces follows from Proposition 2.1. Assume now that \(\pi\) has no nonzero linear structures.
Let \(V\) be an \(m\)-dimensional subspace of \(\mathbb{F}_{2}^{2m}\) such that \(D_{a}D_{b}f=0\) for all \(a,b\in V\). Define the linear mapping \(L:V\to\mathbb{F}_{2}^{m}\) by \(L(x,y)=y\), for all \((x,y)\in V\). In general, the second-order derivative of \(f\), for any \(a_{1},a_{2},b_{1},b_{2}\in\mathbb{F}_{2}^{m}\), is given by \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))+a_{1 }\cdot D_{b_{2}}\pi(y+a_{2})+b_{1}\cdot D_{a_{2}}\pi(y+b_{2})+D_{a_{2}}D_{b_{2} }\delta_{0}(y). \tag{3.3}\] Assume that \(\dim(Im(L))\geq 2\). Let \((c_{1},c_{2}),(d_{1},d_{2})\in V\) be such that \(c_{2}\) and \(d_{2}\) are two different nonzero elements in \(\mathbb{F}_{2}^{m}\). Since the algebraic degree of \(D_{c_{2}}D_{d_{2}}\delta_{0}(y)\) is \(m-2\), and since \(\deg(\pi)<m-1\), from (3.3) we deduce that the algebraic degree of \(D_{(c_{1},c_{2})}D_{(d_{1},d_{2})}f\) is \(m-2\), and that is a contradiction, since \((c_{1},c_{2}),(d_{1},d_{2})\in V\) and so \(D_{(c_{1},c_{2})}D_{(d_{1},d_{2})}f=0\). If \(\dim(Im(L))=1\), then \(\dim(Ker(L))=m-1\). Let \((a_{1},a_{2})\in V\) be such that \(a_{2}\neq 0_{m}\), and let \((b_{1},0_{m})\in V\) be an arbitrary element in \(Ker(L)\). From (3.3) we compute \[D_{(a_{1},a_{2})}D_{(b_{1},0_{m})}f(x,y)=b_{1}\cdot D_{a_{2}}\pi(y)=0,\text{ for all }x,y\in\mathbb{F}_{2}^{m}.\] This means that the subspace \(S_{a_{2}}\) generated by the set \(\{D_{a_{2}}\pi(y)\colon y\in\mathbb{F}_{2}^{m}\}\) is in the orthogonal complement of \(b_{1}\), for every \(b_{1}\) such that \((b_{1},0_{m})\in Ker(L)\). Since \(\dim(Ker(L))=m-1\), we deduce that \(\dim(S_{a_{2}})=1\). Also, \(\pi\) is a permutation and \(a_{2}\neq 0_{m}\), so \(D_{a_{2}}\pi(y)\neq 0_{m}\), for all \(y\in\mathbb{F}_{2}^{m}\), hence \(\{D_{a_{2}}\pi(y)\colon y\in\mathbb{F}_{2}^{m}\}=\{v\}\) for some nonzero \(v\in\mathbb{F}_{2}^{m}\), and this means that \(a_{2}\) is a nonzero linear structure of \(\pi\). 
However, this is a contradiction, since the assumption is that \(\pi\) has no nonzero linear structures. We conclude that it has to be the case that \(\dim(Im(L))=0\), and consequently that the only \(\mathcal{M}\)-subspace of \(f\) is \(V=\mathbb{F}_{2}^{m}\times\{0_{m}\}\). ### Bent functions from permutations having \((P_{2})\) property In the following statement, we show that even permutations on \(\mathbb{F}_{2}^{m}\), for which second-order derivatives vanish on a certain \((m-k)\)-dimensional subspace \(S\) (where \(2\leq k\leq m-1\)), can still be used for the construction of Maiorana-McFarland bent functions with a unique \(\mathcal{M}\)-subspace. **Proposition 3.4**.: _Let \(\pi\) be a nonlinear permutation over \(\mathbb{F}_{2}^{m}\) and \(f(x,y)=x\cdot\pi(y)\) a bent function in \(\mathcal{MM}\). Denote by \(S\) a vector subspace of \(\mathbb{F}_{2}^{m}\) for which \(D_{a}D_{b}\pi=0_{m}\) for any \(a,b\in S\), where \(\dim(S)\geq 1\). If \(\dim(S)=m-k\), then the necessary and sufficient condition for \(f\) to have the unique canonical \(\mathcal{M}\)-subspace is that there do not exist linearly independent \(u_{1},\ldots,u_{k}\in\mathbb{F}_{2}^{m}\) for which \(u_{i}\cdot D_{a}\pi(y)=0\) for all \(a\in S\) and \(y\in\mathbb{F}_{2}^{m}\); moreover, we necessarily have that \(2\leq k\leq m-1\)._ Proof.: It is clear that if \(\pi\) is linear/affine then \(\dim(S)=m\) and the number of \(\mathcal{M}\)-subspaces is \(\prod_{i=1}^{m}\left(2^{i}+1\right)\). Since \(\pi\) is assumed to be nonlinear, we have \(\dim(S)<m\), and we need to show that \(\dim(S)\) cannot be \(m-1\). Assuming that \(\dim(S)=m-1\), Lemma 2.3 and Lemma 2.5 imply that \(\pi\) is at most quadratic and affine on the hyperplane \(S\). Furthermore, by Lemma 2.5, there exists \(u_{1}\) such that \(u_{1}\cdot\pi\) equals \(s\) or \(s+1\), where \(s\) is the linear function defining \(S\), and hence \(u_{1}\cdot D_{a}\pi(y)=0\) for all \(a\in S\) and \(y\in\mathbb{F}_{2}^{m}\). Noticing that \(D_{a_{2}}D_{b_{2}}\pi(y)=0_{m}\) for any \(a_{2},b_{2}\in S\), let \(S^{\prime}=\{0_{m}\}\times S\) be a subspace of \(\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\) of dimension \(m-1\).
Then, for any \(a=(a_{1},a_{2}),b=(b_{1},b_{2})\in S^{\prime}\) \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,y)=x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))+a_{1}\cdot D_{b_{2}}\pi(y+a_{2})+b_{1}\cdot D_{a_{2}}\pi(y+b_{2})=0, \tag{3.4}\] since \(a_{1}=b_{1}=0\). Then, adjoining \((u_{1},0_{m})\) to \(S^{\prime}\) so that \(\widetilde{S}=\langle(u_{1},0_{m}),S^{\prime}\rangle\), we would have that \(\dim(\widetilde{S})=m\) and \(D_{(u_{1},0_{m})}D_{(b_{1},b_{2})}f(x,y)=0\) for any \((b_{1},b_{2})\in S^{\prime}\) (where \(b_{1}=0_{m}\)). Consequently, \(\widetilde{S}\) is a vanishing subspace for \(f\) and different from \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\). Thus, to have the unique vanishing subspace we necessarily have that \(\dim(S)\leq m-2\), that is, \(k\geq 2\). In general, when \(\dim(S)=m-k\), where \(2\leq k\leq m-1\), a similar reasoning applies. Extending \(S^{\prime}=\{0_{m}\}\times S\) to the full dimension \(m\), by adjoining \((u_{1},0_{m}),\ldots,(u_{k},0_{m})\) to \(S^{\prime}\), is impossible due to our assumption. This follows from the fact that taking, e.g., \((u_{1},0_{m})\) and \((b_{1},b_{2})\in S^{\prime}\) (where \(b_{1}=0_{m}\)), the equation (3.4) reduces to \(u_{1}\cdot D_{b_{2}}\pi(y)\), which is nonzero. On the other hand, we can also extend \(S^{\prime}\) by adjoining elements \((b_{1},b_{2})\in\mathbb{F}_{2}^{m}\times\mathbb{F}_{2}^{m}\) with \(b_{2}\in S\), which is necessary for ensuring that \(x\cdot(D_{a_{2}}D_{b_{2}}\pi(y))\) is cancelled if we consider \((a_{1},a_{2})\) and \((b_{1},b_{2})\), where \(a_{2}\neq b_{2}\in S\). However, adjoining \((b_{1},b_{2})\) to \(S^{\prime}\) implies that \((u_{i},0_{m})\in\langle(b_{1},b_{2}),S^{\prime}\rangle\) and the same reasoning as above applies. We state this property more formally in the following definition.
Then, \(\pi\) satisfies the property (\(P_{2}\)) with respect to \(S\) if:_ \[\dim(S)=m-k\ \mathrm{with}\ 2\leq k\leq m-1;\,\not\exists u_{1},\ldots,u_{k}\in \mathbb{F}_{2}^{m}:u_{i}\cdot D_{a}\pi(y)=0\ \mathrm{for\ all}\ a\in S.\ \ (P_{2})\] _If \(\pi\) satisfies this property with respect to any \(S\) of arbitrary dimension \(1\leq\dim(S)\leq m-2\), then we simply say that \(\pi\) (fully) satisfies (\(P_{2}\))._ **Remark 3.6**.: _For instance, the permutation \(\pi\) on \(\mathbb{F}_{2}^{5}\) from Example 2.2 does not satisfy the conditions in Proposition 3.4. Here \(\dim(S)=m-2=3\) and two vectors \(u_{1}=((1,0,0,0,0),0_{5})\) and \(u_{2}=((0,1,0,0,0),0_{5})\) can be adjoined to \(S^{\prime}=\{0_{5}\}\times S\), since they select the linear functions \(y_{1}\) and \(y_{2}\), whose first-order derivatives vanish for any choice of \(a_{2}\in S\)._ **Remark 3.7**.: _1. Note that the property (\(P_{1}\)) implies (\(P_{2}\)), but not vice versa. 2. As shown in [3], there exist 75 affine inequivalent quadratic permutations \(\pi\) of \(\mathbb{F}_{2}^{5}\). Among them, 34 permutations give rise to bent functions \((x,y)\mapsto x\cdot\pi(y)\) with the unique canonical \(\mathcal{M}\)-subspace. With respect to the properties (\(P_{1}\)), (\(P_{2}\)), they are distributed as follows:_ * _2 permutations have the property (\(P_{1}\)); note that these permutations are APN;_ * _32 permutations have the property (\(P_{2}\)) (but not (\(P_{1}\)))._ * _For 28 of them there exists a subspace \(S_{i}\) of \(\mathbb{F}_{2}^{m}\) of dimension \(m-3=2\), s.t. \(D_{a}D_{b}\pi_{i}=0\) for all \(a,b\in S_{i}\)._
An example of such a permutation_ \(\pi_{i}\) _and a subspace_ \(S_{i}\) _is given by:_ \[\pi_{1}(y)=\begin{bmatrix}y_{1}\\ y_{2}+y_{1}y_{2}+y_{1}y_{4}\\ y_{1}y_{2}+y_{3}+y_{2}y_{4}\\ y_{2}y_{3}+y_{4}+y_{1}y_{4}+y_{2}y_{4}+y_{1}y_{5}\\ y_{1}y_{2}+y_{3}y_{4}+y_{5}+y_{1}y_{5}\end{bmatrix}\quad\text{and}\quad S_{1}= \left\langle\begin{array}{cccc}0&0&0&1&0\\ 0&0&0&0&1\end{array}\right\rangle.\] * _For the remaining 4 permutations, the maximum dimension of_ \(S_{i}\) _s.t._ \(D_{a}D_{b}\pi_{i}=0\) _for all_ \(a,b\in S_{i}\) _is equal to_ \((m-2)=3\)_. An example of such a permutation_ \(\pi_{i}\) _and a subspace_ \(S_{i}\) _is given by:_ \[\pi_{2}(y)=\begin{bmatrix}y_{1}\\ y_{2}+y_{1}y_{2}+y_{1}y_{3}\\ y_{3}+y_{1}y_{3}+y_{1}y_{5}\\ y_{1}y_{2}+y_{4}+y_{1}y_{4}\\ y_{2}y_{3}+y_{1}y_{4}+y_{5}+y_{1}y_{5}\end{bmatrix}\quad\text{and}\quad S_{2}= \left\langle\begin{array}{cccc}0&0&1&0&0\\ 0&0&0&1&0\\ 0&0&0&0&1\end{array}\right\rangle.\] ## 4 Explicit constructions of permutations with \((P_{1})\) and \((P_{2})\) properties The main aim of this section is to specify certain classes of permutations on \(\mathbb{F}_{2}^{m}\) satisfying either \((P_{1})\) or \((P_{2})\) property, and thus to provide constructions of Maiorana-McFarland bent functions with the unique canonical \(\mathcal{M}\)-subspace \(\mathbb{F}_{2}^{m}\times\{0_{m}\}\). ### APN and APN-like permutations In the following remark, we indicate that APN permutations have the property \((P_{1})\), and, hence, can be used for the construction of Maiorana-McFarland bent functions with the unique canonical \(\mathcal{M}\)-subspace. **Remark 4.1**.: _Recall that a function \(F\colon\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}^{m}\) is called almost perfect nonlinear (APN) if, for all \(a\in\mathbb{F}_{2}^{m}\setminus\{0_{m}\},b\in\mathbb{F}_{2}^{m}\), the equation \(F(x+a)+F(x)=b\) has 0 or 2 solutions \(x\in\mathbb{F}_{2}^{m}\). 
Using the notation in [14, 16], for \(m\geq 2\), we define the set of all \(2\)-dimensional flats in \(\mathbb{F}_{2}^{m}\) as follows:_ \[\mathcal{F}_{m}=\{\{x_{1},x_{2},x_{3},x_{4}\}\mid x_{1}+x_{2}+x_{3}+x_{4}=0_{m}\text{ and }x_{1},x_{2},x_{3},x_{4}\in\mathbb{F}_{2}^{m}\text{ are distinct}\}.\] _It is well known that a function \(F\colon\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}^{m}\) is APN if and only if for each \(\{x_{1},x_{2},x_{3},x_{4}\}\in\mathcal{F}_{m}\) it holds that_ \[F(x_{1})+F(x_{2})+F(x_{3})+F(x_{4})\neq 0_{m}.\] _Namely, the summation of \(F\) over each \(2\)-dimensional flat is non-vanishing. For a function \(F\colon\mathbb{F}_{2}^{m}\to\mathbb{F}_{2}^{m}\), define the set of vanishing flats with respect to \(F\) as_ \[\mathcal{VF}_{m,F}=\{\{x_{1},x_{2},x_{3},x_{4}\}\in\mathcal{F}_{m}\mid F(x_{1})+F(x_{2})+F(x_{3})+F(x_{4})=0_{m}\}.\] _With this notation, \(F\) is APN on \(\mathbb{F}_{2}^{m}\) if and only if \(\mathcal{VF}_{m,F}=\varnothing\). Therefore, any permutation \(\pi\) of \(\mathbb{F}_{2}^{m}\) which is APN satisfies the condition (\(P_{1}\)). For instance, all power APN functions \(x\mapsto x^{d}\) are permutations of \(\mathbb{F}_{2}^{m}\) for \(m\) odd, as shown by Dobbertin; for the proof we refer to [6]._ Note that if a function \(\pi\) on \(\mathbb{F}_{2}^{m}\) is quadratic, then \(D_{a}D_{b}\pi(y)\) is constant for all \(a,b\in\mathbb{F}_{2}^{m}\). In this way, with the "vanishing flats" characterization of APN functions, we deduce the following characterization of quadratic permutations with the (\(P_{1}\)) property. **Corollary 4.2**.: _A quadratic permutation \(\pi\) of \(\mathbb{F}_{2}^{m}\) has the (\(P_{1}\)) property if and only if \(\pi\) is a quadratic APN permutation of \(\mathbb{F}_{2}^{m}\)._ **Example 4.3**.: _Every bent function in \(n=6\) variables with the unique \(\mathcal{M}\)-subspace is equivalent to a bent function of the form \(f(x,y)=Tr(xy^{3})\), for \(x,y\in\mathbb{F}_{2^{3}}\).
In this case, \(y\mapsto y^{3}\) is an APN permutation of \(\mathbb{F}_{2^{3}}\)._ Further, we show that the following family of quadratic _APN-like permutations_, i.e., non-APN permutations with a small number of vanishing flats (relative to the total number of vanishing flats), has the (\(P_{2}\)) property. In this way, they can be used for constructing bent functions with the unique \(\mathcal{M}\)-subspace. **Theorem 4.4**.: _[_14_]_ _Let \(\pi(x)=x^{2^{t}+1}\) be a function over \(\mathbb{F}_{2^{m}}\) with \((m,t)=s>1\). Then, \(\left|\mathcal{VF}_{m,\pi}\right|=2^{m-2}\left(2^{s-1}-1\right)\cdot\left(2^{m}-1\right)/3\)._ The following characterization of linear structures of the components of permutation monomials given in [7] (stated here only for the binary quadratic case) is useful for our purpose. **Theorem 4.5**.: _[_7_]_ _Let \(\delta\in\mathbb{F}_{2^{m}}\) and \(1\leq s\leq 2^{m}-2\) be such that \(f(x)=Tr(\delta x^{s})\) is not the zero function on \(\mathbb{F}_{2}^{m}\). Then, when \(wt_{H}(s)=2\), the function \(f\) has a linear structure if and only if the following is true: \(s=2^{j}(2^{i}+1)\), where \(0\leq i,j\leq m-1\), \(i\not\in\{0,m/2\}\). In this case, \(\alpha\in\mathbb{F}_{2^{m}}\) is a linear structure of \(f\) if and only if it satisfies \((\delta^{2^{m-j}}\alpha^{2^{i}+1})^{2^{i}-1}+1=0\). More exactly, the linear space \(\Lambda\) of \(f\) is as follows. Denote \(\sigma=\gcd(m,2i)\). Then, \(\Lambda=\{0\}\) if \(\delta\) is not a \((2^{i}+1)\)-th power in \(\mathbb{F}_{2^{m}}\). Otherwise, if \(\delta=\beta^{2^{j}(2^{i}+1)}\) for some \(\beta\in\mathbb{F}_{2^{m}}\), it holds that \(\Lambda=\beta^{-1}\mathbb{F}_{2^{\sigma}}\)._ **Proposition 4.6**.: _Let \(\pi(y)=y^{2^{t}+1}\) for \(y\in\mathbb{F}_{2^{m}}\), where \(s=\gcd(t,m)=2\), \(m=2r\) and \(r\geq 3\) is odd. Denote by \(S\) a vector subspace of \(\mathbb{F}_{2^{m}}\) for which \(D_{a}D_{b}\pi(y)=0_{m}\), for any \(a,b\in S\).
Then, \(\dim(S)\leq 2\) and the permutation \(\pi\) has the property (P\({}_{2}\))._ Proof.: We first notice that when \(\dim(S)=1\) we trivially have that \(D_{a}D_{b}\pi(y)=0_{m}\), since either \(a\) or \(b\) is zero. To prove that \(\pi\) has the property (P\({}_{2}\)), let \(S\) be a vector subspace of \(\mathbb{F}_{2^{m}}\) for which \(D_{a}D_{b}\pi(y)=0_{m}\), such that \(\dim(S)=2\). We will show that there do not exist linearly independent \(u_{1},\ldots,u_{m-2}\in\mathbb{F}_{2}^{m}\) such that \(Tr(u_{i}D_{a}\pi)=D_{a}(Tr(u_{i}\pi))=0\), for all \(a\in S\) and \(i=1,\ldots,m-2\). Let \(u_{1},\ldots,u_{m-2}\) be any \(m-2\) linearly independent elements in \(\mathbb{F}_{2}^{m}\). Set \(j=0\) and \(i=t\) in Theorem 4.5. Since \(m=2r\), \(r\) is odd and \(\gcd(t,m)=2\), we have that \(\gcd(2t,m)=2\), i.e., \(\sigma=2\) in Theorem 4.5. From Theorem 4.5, we deduce that the linear space of \(Tr(\delta y^{2^{t}+1})\) is \(\beta^{-1}\mathbb{F}_{2^{2}}\), where \(\beta\) is such that \(\delta=\beta^{2^{t}+1}\). This means that the linear space of \(Tr(u_{i}y^{2^{t}+1})\) is \(\beta_{i}^{-1}\mathbb{F}_{2^{2}}\), where \(u_{i}=\beta_{i}^{2^{t}+1}\), for \(i=1,\ldots,m-2\) (such \(\beta_{i}\) exist, since \(y\mapsto y^{2^{t}+1}\) is a permutation of \(\mathbb{F}_{2^{m}}\)). Since \(u_{1},\ldots,u_{4}\) are four linearly independent vectors, \(\beta_{1}^{-1}\), \(\beta_{2}^{-1}\), \(\beta_{3}^{-1}\), \(\beta_{4}^{-1}\) are four different nonzero elements; as a subspace \(\beta^{-1}\mathbb{F}_{2^{2}}\) contains only three nonzero elements, for at least two of them, w.l.o.g. \(u_{1}\) and \(u_{2}\), the subspaces \(\beta_{1}^{-1}\mathbb{F}_{2^{2}}\) and \(\beta_{2}^{-1}\mathbb{F}_{2^{2}}\) are different. The subspace \(S\) cannot coincide with both of them; w.l.o.g., assume that \(S\neq\beta_{1}^{-1}\mathbb{F}_{2^{2}}\). Let \(a\in S\setminus\{0\}\) be such that \(a\notin\beta_{1}^{-1}\mathbb{F}_{2^{2}}\), which exists since both \(S\) and \(\beta_{1}^{-1}\mathbb{F}_{2^{2}}\) have \(4\) elements and \(S\neq\beta_{1}^{-1}\mathbb{F}_{2^{2}}\).
Then, since \(\beta_{1}^{-1}\mathbb{F}_{2^{2}}\) is the linear space of \(Tr(u_{1}y^{2^{t}+1})\), we have that \(D_{a}(Tr(u_{1}y^{2^{t}+1}))\) is not constant. Since \(u_{1},\ldots,u_{m-2}\) were arbitrary linearly independent elements from \(\mathbb{F}_{2}^{m}\), we deduce that there do not exist linearly independent \(u_{1},\ldots,u_{m-2}\in\mathbb{F}_{2}^{m}\) for which \(Tr(u_{i}D_{a}\pi)=D_{a}(Tr(u_{i}\pi))=0\), for all \(a\in S\) and \(i=1,\ldots,m-2\). That is, \(\pi\) has the property (P\({}_{2}\)). Assume now that \(\dim(S)=\ell\), where \(3\leq\ell\leq m-1\), and assume that there exist \(u_{1},\ldots,u_{m-\ell}\in\mathbb{F}_{2}^{m}\) such that \(Tr(u_{i}D_{a}\pi)=D_{a}(Tr(u_{i}\pi))=0\), for all \(a\in S\) and \(i=1,\ldots,m-\ell\). From Theorem 4.5, we have that the linear space of \(Tr(u_{i}y^{2^{t}+1})\) is \(\beta_{i}^{-1}\mathbb{F}_{2^{2}}\), where \(u_{i}=\beta_{i}^{2^{t}+1}\), for \(i=1,\ldots,m-\ell\). Since \(\dim(S)\geq 3\), there is an element \(a\in S\) such that \(a\notin\beta_{1}^{-1}\mathbb{F}_{2^{2}}\). This means that \(a\in S\) is not in the linear space of \(Tr(u_{1}y^{2^{t}+1})\), hence \(D_{a}(Tr(u_{1}y^{2^{t}+1}))\) is not constant, which contradicts our assumption \(D_{a}(Tr(u_{1}\pi))=0\). ### Piecewise permutations having (P\({}_{1}\)) property Now, we provide a secondary construction of permutations with the (P\({}_{1}\)) property. In this way, we obtain infinite families of permutations with the (P\({}_{1}\)) property in all dimensions. We also indicate that permutations with the (P\({}_{1}\)) property are not necessarily APN. **Proposition 4.7**.: _Let \(\sigma_{1}\) and \(\sigma_{2}\) be two permutations of \(\mathbb{F}_{2}^{m}\) such that \(D_{V}\sigma_{1}\neq D_{V}\sigma_{2}\) for all two-dimensional subspaces \(V\) of \(\mathbb{F}_{2}^{m}\).
Define the function \(\pi\colon\mathbb{F}_{2}^{m+1}\to\mathbb{F}_{2}^{m+1}\) by_ \[\pi(y,y_{m+1})=\left(\sigma_{1}(y)+y_{m+1}(\sigma_{1}(y)+\sigma_{2}(y)),y_{m+1}\right),\,\text{for all }y\in\mathbb{F}_{2}^{m},y_{m+1}\in\mathbb{F}_{2}.\] _Then the function \(\pi\) is a permutation of \(\mathbb{F}_{2}^{m+1}\) such that \(D_{W}\pi\neq 0_{m+1}\) for all two-dimensional subspaces \(W\) of \(\mathbb{F}_{2}^{m+1}\)._ Proof.: Since \(\pi(y,0)=(\sigma_{1}(y),0)\) and \(\pi(y,1)=(\sigma_{2}(y),1)\) and since \(\sigma_{1}\) and \(\sigma_{2}\) are permutations, \(\pi\) is a permutation as well. Take two linearly independent vectors \((a,a_{m+1}),(b,b_{m+1})\in\mathbb{F}_{2}^{m+1}\), where \(a,b\in\mathbb{F}_{2}^{m}\) and \(a_{m+1},b_{m+1}\in\mathbb{F}_{2}\). Assume first that \(a_{m+1}=b_{m+1}=0\). Then \[D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})=(D_{a}D_{b}\sigma_{1}(y)+y_{m+1}(D_{a}D_{b}\sigma_{1}(y)+D_{a}D_{b}\sigma_{2}(y)),0).\] Since \((a,a_{m+1})\) and \((b,b_{m+1})\) are linearly independent and \(a_{m+1}=b_{m+1}=0\), the vectors \(a\) and \(b\) are linearly independent. If \(D_{a}D_{b}\sigma_{1}(y)\neq 0_{m}\), then \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,0)=(D_{a}D_{b}\sigma_{1}(y),0)\neq 0_{m+1}\), hence \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})\neq 0_{m+1}\). If \(D_{a}D_{b}\sigma_{1}(y)=0_{m}\), then, since from the assumption \(D_{a}D_{b}\sigma_{2}(y)\neq D_{a}D_{b}\sigma_{1}(y)=0_{m}\), we have that \[D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,1)=(D_{a}D_{b}\sigma_{2}(y),0)\neq 0_{m+1},\] hence \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})\neq 0_{m+1}\). We conclude that in any case, when \(a_{m+1}=b_{m+1}=0\), we have \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})\neq 0_{m+1}\). Now assume that \(a_{m+1}=1\) or \(b_{m+1}=1\). W.l.o.g., we assume that \(b_{m+1}=1\). Then, since \[D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})=D_{(a+b,a_{m+1}+b_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1}),\] we can assume that \(a_{m+1}=0\).
Computing the second-order derivative of \(\pi\), we get \[D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})=D_{(b,1)}(D_{a}\sigma_{1}(y)+y_{m+1}(D_{a}\sigma_{1}(y)+D_{a}\sigma_{2}(y)),0)\\ =(D_{a}D_{b}\sigma_{1}(y)+y_{m+1}(D_{a}D_{b}\sigma_{1}(y)+D_{a}D_{b}\sigma_{2}(y))+D_{a}\sigma_{1}(y+b)+D_{a}\sigma_{2}(y+b),0),\] for all \(y\in\mathbb{F}_{2}^{m},y_{m+1}\in\mathbb{F}_{2}\). Setting \(y_{m+1}=0\), we have \[D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,0)=(D_{a}D_{b}\sigma_{1}(y)+D_{a}\sigma_{1}(y+b)+D_{a}\sigma_{2}(y+b),0).\] If \(D_{a}D_{b}\sigma_{1}(y)+D_{a}\sigma_{1}(y+b)+D_{a}\sigma_{2}(y+b)\neq 0_{m}\), we deduce that \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,0)\neq 0_{m+1}\), hence \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})\neq 0_{m+1}\). If, however, \(D_{a}D_{b}\sigma_{1}(y)+D_{a}\sigma_{1}(y+b)+D_{a}\sigma_{2}(y+b)=0_{m}\), then we compute \[D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,1)=(D_{a}D_{b}\sigma_{1}(y)+D_{a}D_{b}\sigma_{2}(y),0).\] From the assumption \(D_{a}D_{b}\sigma_{2}(y)\neq D_{a}D_{b}\sigma_{1}(y)\), we have \(D_{a}D_{b}\sigma_{2}(y)+D_{a}D_{b}\sigma_{1}(y)\neq 0_{m}\), hence \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,1)\neq 0_{m+1}\), and consequently \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})\neq 0_{m+1}\). We deduce that \(D_{(a,a_{m+1})}D_{(b,b_{m+1})}\pi(y,y_{m+1})\neq 0_{m+1}\), which concludes the proof. **Corollary 4.8**.: _Let \(\sigma\) be a permutation of \(\mathbb{F}_{2}^{m}\) such that \(D_{V}\sigma\neq 0_{m}\) for all two-dimensional subspaces \(V\) of \(\mathbb{F}_{2}^{m}\). Define the function \(\pi\colon\mathbb{F}_{2}^{m+1}\to\mathbb{F}_{2}^{m+1}\) by_ \[\pi(y,y_{m+1})=\left(y+y_{m+1}(\sigma(y)+y),y_{m+1}\right),\text{ for all }y\in\mathbb{F}_{2}^{m},y_{m+1}\in\mathbb{F}_{2}.
\tag{4.1}\] _Then, \(\pi\) is a permutation of \(\mathbb{F}_{2}^{m+1}\) such that \(D_{W}\pi\neq 0_{m+1}\) for all two dimensional subspaces \(W\) of \(\mathbb{F}_{2}^{m+1}\), thus it satisfies the \((P_{1})\) property._ Proof.: Set \(\sigma_{1}(y)=y\) and \(\sigma_{2}(y)=\sigma(y)\) for all \(y\in\mathbb{F}_{2}^{m}\). Then \(D_{V}\sigma_{1}(y)=0_{m}\neq D_{V}\sigma_{2}(y)\) for all two dimensional subspaces \(V\) of \(\mathbb{F}_{2}^{m}\). The result then follows from Proposition 4.7. Note that, with the same assumptions as in Corollary 4.8, using Proposition 4.7 and setting \(\sigma_{1}(y)=\sigma(y)\) and \(\sigma_{2}(y)=y\), we can deduce in the same way that \[\pi^{\prime}(y,y_{m+1})=(\sigma(y)+y_{m+1}(\sigma(y)+y),y_{m+1})\] is also a permutation such that \(D_{W}\pi^{\prime}\neq 0_{m+1}\) for all two dimensional subspaces \(W\) of \(\mathbb{F}_{2}^{m+1}\). In the following remark, we indicate that APN-ness of permutations \(\pi\) on \(\mathbb{F}_{2}^{m}\) with the \((P_{1})\) property, plays a very important role in the vanishing behaviour of Maiorana-McFarland bent functions \(x\cdot\pi(y)\). **Remark 4.9**.: _Let \(\sigma\) be a permutation on \(\mathbb{F}_{2}^{m}\) such that \(D_{V}\sigma\neq 0_{m}\) for all two dimensional subspaces \(V\) of \(\mathbb{F}_{2}^{m}\). Define the permutation \(\pi\colon\mathbb{F}_{2}^{m+1}\to\mathbb{F}_{2}^{m+1}\) as in Corollary 4.8 by_ \[\pi(y,y_{m+1})=\left(y+y_{m+1}(\sigma(y)+y),y_{m+1}\right)\text{, for all }y\in\mathbb{F}_{2}^{m},y_{m+1}\in\mathbb{F}_{2}.\] _Clearly, the permutation \(\pi\) is not APN, since the last coordinate is linear. Define the function \(f\colon\mathbb{F}_{2}^{2m+2}\to\mathbb{F}_{2}\) by_ \[f(x,x_{m+1},y,y_{m+1})=(x,x_{m+1})\cdot\pi(y,y_{m+1}),\] _for all \(x,y\in\mathbb{F}_{2}^{m}\) and \(x_{m+1},y_{m+1}\in\mathbb{F}_{2}\). 
From Corollary 4.8 and Theorem 3.1 we deduce that \(\pi\) has the property \((P_{1})\), and \(\mathbb{F}_{2}^{m+1}\times\{0_{m+1}\}\) is the unique \(\mathcal{M}\)-subspace of \(f\)._ _Now, define \(a_{1}=\operatorname{e}_{m+1}\in\mathbb{F}_{2}^{m+1},a_{2}=0_{m+1}\in\mathbb{F}_{2}^{m+1}\) and \(b_{1}=0_{m+1}\in\mathbb{F}_{2}^{m+1},b_{2}=(b,0)\in\mathbb{F}_{2}^{m+1}\), where \(b\) is a nonzero vector in \(\mathbb{F}_{2}^{m}\). From (3.1), we have_ \[D_{(a_{1},a_{2})}D_{(b_{1},b_{2})}f(x,x_{m+1},y,y_{m+1}) =(x,x_{m+1})\cdot D_{a_{2}}D_{b_{2}}\pi(y,y_{m+1})\] \[+a_{1}\cdot D_{b_{2}}\pi((y,y_{m+1})+a_{2})+b_{1}\cdot D_{a_{2}}\pi((y,y_{m+1})+b_{2})\] \[=\operatorname{e}_{m+1}\cdot D_{(b,0)}\pi(y,y_{m+1})\] \[=\operatorname{e}_{m+1}\cdot(b+y_{m+1}(D_{b}\sigma(y)+b),0)\] \[=0.\] _However, \(\dim(\langle(a_{1},a_{2}),(b_{1},b_{2})\rangle)=2\), and since \(b_{2}=(b,0)\neq 0_{m+1}\), it is not a subspace of \(\mathbb{F}_{2}^{m+1}\times\{0_{m+1}\}\). This means that \(D_{a}D_{b}f\) vanishes not only on the two-dimensional subspaces \(\{a,b\}\) of \(\mathbb{F}_{2}^{m+1}\times\{0_{m+1}\}\), from which it follows that not every permutation \(\pi\) with the \((P_{1})\) property defines a bent function \((x,y)\mapsto x\cdot\pi(y)\) with the vanishing behavior as in Corollary 3.2._ The problem of preserving the \((P_{2})\) property for the class of permutations defined by (4.1) appears to be harder. One can eventually show that the \((P_{2})\) property for \(\pi\) is inherited from \(\sigma\) for some particular subspaces, whereas it remains an open problem to show that \(\pi\) fully satisfies the \((P_{2})\) property when \(\sigma\) does. **Open Problem 4.10**.: _Find more constructions of permutations with the \((P_{2})\) property._ ## 5 Generic construction methods of bent functions outside \(\mathcal{MM}^{\#}\) In this section, we provide a theoretical analysis of possible \(\mathcal{M}\)-subspaces of the bent 4-concatenation \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\).
Based on this analysis, we consequently provide two generic methods of constructing bent functions outside \(\mathcal{MM}^{\#}\) for even \(n\geq 8\). Our first approach is based on the concatenation of bent functions \(f_{1},f_{2},f_{3},f_{4}\in\mathcal{B}_{n}\) that _do not share any \(\mathcal{M}\)-subspace of dimension \(n/2-1\)_, i.e., \(\bigcap_{i=1}^{4}\mathcal{MS}_{n/2-1}(f_{i})=\varnothing\). Our second approach is based on the concatenation of bent functions \(f_{1},f_{2},f_{3},f_{4}\in\mathcal{B}_{n}\) that _share a unique \(\mathcal{M}\)-subspace of dimension \(n/2\)_, i.e., \(|\bigcap_{i=1}^{4}\mathcal{MS}_{n/2}(f_{i})|=1\). Finally, we provide an algorithm for checking the membership in the completed partial spread class \(\mathcal{PS}^{\#}\), and show that with our approaches it is possible to construct inequivalent bent functions in \(n=8\) outside \(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#}\). ### Possible \(\mathcal{M}\)-subspaces of the bent 4-concatenation The following result is crucial in understanding the structural properties of bent functions in \(\mathcal{MM}^{\#}\) in terms of 4-concatenation. Notice that when considering \(f=f_{1}||f_{2}||f_{3}||f_{4}\) we assume neither that \(f_{i}\) are bent nor that \(f_{i}\) share the same unique \(\mathcal{M}\)-subspace. **Proposition 5.1**.: _Let \(f_{1},\ldots,f_{4}\) be four Boolean functions in \(n\) variables, not necessarily bent, such that \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) is a bent function in \(\mathcal{MM}^{\#}\). Let \(W\) be an \(\mathcal{M}\)-subspace of \(f\) of dimension \((\frac{n}{2}+1)\). Then, there is an \((\frac{n}{2}-1)\)-dimensional subspace \(V\) of \(\mathbb{F}_{2}^{n}\) such that:_ 1. \(V\times\{(0,0)\}\) _is a subspace of_ \(W\)_,_ 2.
\(V\) _is an_ \(\mathcal{M}\)_-subspace of_ \(f_{i}\) _for all_ \(i=1,\ldots,4\)_._ Proof.: Let \(W\) be an \(\mathcal{M}\)-subspace of \(f\) of dimension \((\frac{n}{2}+1)\) (we know that it exists since \(f\) is in \(\mathcal{MM}^{\#}\)). We have \[\dim(W\cap(\mathbb{F}_{2}^{n}\times\{(0,0)\}))=\dim(W)+\dim(\mathbb{F}_{2}^{n }\times\{(0,0)\})-\dim(\langle W,\mathbb{F}_{2}^{n}\times\{(0,0)\}\rangle).\] Because \(\dim(W+(\mathbb{F}_{2}^{n}\times\{(0,0)\}))\leq n+2\), we have \[\dim(W\cap(\mathbb{F}_{2}^{n}\times\{(0,0)\}))\geq(\frac{n}{2}+1)+n-(n+2)= \frac{n}{2}-1.\] Hence, there is an \((\frac{n}{2}-1)\)-dimensional subspace \(V\) of \(\mathbb{F}_{2}^{n}\) such that \(V\times\{(0,0)\}\) is a subspace of \(W\). Let \(a\) and \(b\) be two arbitrary vectors from \(V\). Then \((a,0,0)\) and \((b,0,0)\) are in \(W\), so \(D_{(a,0,0)}D_{(b,0,0)}f=0\). Using (1.2), we compute: \[D_{(a,0,0)}D_{(b,0,0)}f(x,z_{1},z_{2}) = D_{a}D_{b}f_{1}(x)+z_{1}(D_{a}D_{b}(f_{1}+f_{2})(x))+z_{2}(D_{a} D_{b}(f_{1}+f_{3})(x)) \tag{5.1}\] \[+z_{1}z_{2}(D_{a}D_{b}(f_{1}+f_{2}+f_{3}+f_{4})(x))=0,\] for all \((x,z_{1},z_{2})\in\mathbb{F}_{2}^{n+2}\). From this, we deduce that \[D_{a}D_{b}f_{1}(x)=D_{a}D_{b}(f_{1}+f_{2})(x)=D_{a}D_{b}(f_{1}+f_{3})(x)=D_{a} D_{b}(f_{1}+f_{2}+f_{3}+f_{4})(x)=0, \tag{5.2}\] for all \(x\in\mathbb{F}_{2}^{n}\), and consequently, that \(D_{a}D_{b}f_{1}=D_{a}D_{b}f_{2}=D_{a}D_{b}f_{3}=D_{a}D_{b}f_{4}=0\). Since \(a\) and \(b\) were two arbitrary elements from \(V\) this completes the proof. As a special case of concatenating four bent functions \(f_{i}\in\mathcal{B}_{n}\) in \(\mathcal{MM}\), that share the same unique vanishing subspace \(V=\mathbb{F}_{2}^{m}\times\{0_{m}\}\), we have the following important result that describes the form of \(\mathcal{M}\)-subspaces for \(f=f_{1}||f_{2}||f_{3}||f_{4}\). 
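The 4-concatenation used throughout this section is easy to reproduce on toy instances. The following sketch is our own illustration, not part of the original text: the helper names (`walsh`, `is_bent`, `concat4`) and the truth-table ordering (argument \(x\) in the low bits, the selector bits \(z_{1},z_{2}\) in the two high bits) are assumptions. With that ordering, plain list concatenation of the four truth tables realizes exactly the expansion used in (1.2), and bentness can be verified via the Walsh spectrum.

```python
def walsh(tt, n):
    """Walsh-Hadamard spectrum W_f(u) = sum_x (-1)^(f(x) + u.x) of a truth table."""
    return [sum((-1) ** (tt[x] ^ (bin(u & x).count("1") & 1)) for x in range(2 ** n))
            for u in range(2 ** n)]

def is_bent(tt, n):
    """A function on F_2^n (n even) is bent iff |W_f(u)| = 2^(n/2) for all u."""
    return all(abs(w) == 2 ** (n // 2) for w in walsh(tt, n))

def concat4(f1, f2, f3, f4):
    """Bent 4-concatenation f = f1||f2||f3||f4: with (z1, z2) as the high bits,
    this realizes f(x,z1,z2) = f1 + z1(f1+f2) + z2(f1+f3) + z1 z2 (f1+f2+f3+f4)."""
    return f1 + f2 + f3 + f4

# Toy instance in n = 2: f1 = f2 = f3 = x1 x2 and f4 = x1 x2 + 1 satisfy the
# dual bent condition f1* + f2* + f3* + f4* = 1, so the concatenation is bent.
f1 = [(x & 1) & (x >> 1) for x in range(4)]   # truth table of x1 x2
f4 = [1 ^ b for b in f1]
f = concat4(f1, f1, f1, f4)                   # bent function in n + 2 = 4 variables
print(is_bent(f1, 2), is_bent(f, 4))          # both checks succeed
```

Here \(f\) equals \(x_{1}x_{2}+z_{1}z_{2}\), a direct sum of two bent functions, which is why the check succeeds.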
**Proposition 5.2**.: _Let \(f_{1},\ldots,f_{4}\in\mathcal{B}_{n}\), with \(n=2m\), all belong to the \(\mathcal{MM}\) class and additionally assume that the only \(n/2\)-dimensional subspace \(U\) of \(\mathbb{F}_{2}^{n}\) for which \(D_{a}D_{b}f_{i}=0\) for all \(a,b\in U\), is given by \(U=\mathbb{F}_{2}^{m}\times\{0_{m}\}\). Then, the only possible \((n/2+1)\)-dimensional \(\mathcal{M}\)-subspaces \(\{W\}\) for \(f=f_{1}||f_{2}||f_{3}||f_{4}\) are of the following form:_ 1. \(W=\langle U\times(0,0),(a,b,c_{1},c_{2})\rangle\)_, where_ \(c_{1},c_{2}\in\mathbb{F}_{2}\) _and_ \((c_{1},c_{2})\neq 0_{2}\)_; or_ \(W=\langle V\times(0,0),(a,b,c_{1},c_{2}),(e,f,d_{1},d_{2})\rangle\)_, where_ \(V\subset U\) _with_ \(\dim(V)=n/2-1\)_,_ \((c_{1},c_{2})\neq 0_{2},(d_{1},d_{2})\neq 0_{2},(c_{1},c_{2})\neq(d_{1},d_{2})\)_._ 2. \(W=\langle U^{\prime}\times(0,0),(a,b,c_{1},c_{2}),(e,f,d_{1},d_{2})\rangle\)_, where_ \(\dim(U^{\prime})=n/2-1\) _and_ \(U^{\prime}\not\subset U\)_,_ \((c_{1},c_{2})\neq 0_{2},(d_{1},d_{2})\neq 0_{2},(c_{1},c_{2})\neq(d_{1},d_{2})\)_._ Proof.: By Proposition 5.1, if \(f\in\mathcal{MM}^{\#}\) then any \((n/2+1)\)-dimensional \(\mathcal{M}\)-subspace \(W\) of \(f\) contains an \((n/2-1)\)-dimensional (shared) subspace \(V\) of \(\mathbb{F}_{2}^{n}\) such that \(D_{a}D_{b}f_{i}=0\), for all \(a,b\in V\) and \(i=1,\ldots,4\). By assumption, this \((n/2-1)\)-dimensional subspace \(V\) of \(\mathbb{F}_{2}^{n}\) such that \(D_{a}D_{b}f_{i}=0\), for all \(a,b\in V\) and \(i=1,\ldots,4\), is either a subspace of \(U=\mathbb{F}_{2}^{m}\times\{0_{m}\}\) or alternatively \(V\not\subset U\). Furthermore, by Proposition 5.1, if \(f\in\mathcal{MM}^{\#}\) then \(V\times\{(0,0)\}\) is a vanishing subspace of \(\mathbb{F}_{2}^{n+2}\) (of dimension \(n/2-1\)) for \(f\). 
Notice that since \(\dim(W)=n/2+1\) and \(V\times(0,0)\subset W\), then \[d=\dim(\{(a,b,0_{2})\in\mathbb{F}_{2}^{n/2}\times\mathbb{F}_{2}^{n/2}\times\mathbb{F}_{2}^{2}\colon(a,b,c_{1},c_{2})\in W\})\geq n/2-1.\] However, we also have that \(d\leq n/2\), since any bent function on \(\mathbb{F}_{2}^{n}\) cannot have an \(\mathcal{M}\)-subspace of dimension larger than \(n/2\), which can be deduced from [6, Proposition 8.33] and is explicitly stated in [20, Result 1.35]. Thus, there are two cases to consider. \(a)\) **The case \(V\subset U\):** This implies that we have two situations here. When \(d=n/2\), that is, \(V\) is extended to \(U\), we have that \(W^{(1)}=\langle U\times(0,0),(a,b,c_{1},c_{2})\rangle\) is an \(\mathcal{M}\)-subspace of \(f\). When \(d=n/2-1\), we have that \(W^{(2)}=\langle V\times(0,0),(a,b,c_{1},c_{2}),(e,f,d_{1},d_{2})\rangle\) is an \(\mathcal{M}\)-subspace of \(f\), where \(V\subset U\) with \(\dim(V)=n/2-1\). Assuming that \((c_{1},c_{2})=0_{2}\) or \((d_{1},d_{2})=0_{2}\) would contradict that \(d=n/2-1\) and lead to \(W^{(2)}=W^{(1)}\). Similarly, one can deduce \((c_{1},c_{2})\neq(d_{1},d_{2})\), as otherwise we would get \(d=n/2\). It is obvious that \(W^{(1)}\neq W^{(2)}\). \(b)\) **The case \(V\not\subset U\):** We have only the case \(d=n/2-1\), since by assumption \(f_{1},\ldots,f_{4}\in\mathcal{B}_{n}\) have only \((n/2-1)\)-dimensional subspaces \(V\) of \(\mathbb{F}_{2}^{n}\) for which \(D_{a}D_{b}f_{i}=0\) for all \(a,b\in V\). Hence, we have \(W^{(3)}=\langle V\times(0,0),(a,b,c_{1},c_{2}),(e,f,d_{1},d_{2})\rangle\), where \(\dim(V)=n/2-1\). If \((c_{1},c_{2})=0_{2}\), then \(d=n/2\), which contradicts that \(d=n/2-1\); hence \((c_{1},c_{2})\neq 0_{2}\). Similarly, we deduce that \((d_{1},d_{2})\neq 0_{2}\) and \((c_{1},c_{2})\neq(d_{1},d_{2})\). Hence, we have \((c_{1},c_{2})\neq 0_{2},(d_{1},d_{2})\neq 0_{2},(c_{1},c_{2})\neq(d_{1},d_{2})\). It is obvious that \(W^{(3)}\neq W^{(1)}\). Now we prove that \(W^{(3)}\neq W^{(2)}\).
Since \(V\not\subset U\), we have \[\{(a,b,0_{2})\colon(a,b,c_{1},c_{2})\in W^{(3)}\}\neq\{(a,b,0_{2})\colon(a,b,c_{1},c_{2})\in W^{(2)}\},\] which confirms the claim. **An algorithm for checking the membership in the \(\mathcal{PS}^{\#}\) class.** Recall that a partial spread of order \(s\) in \(\mathbb{F}_{2}^{n}\) with \(n=2m\) is a set of \(s\) vector subspaces \(U_{1},\ldots,U_{s}\) of \(\mathbb{F}_{2}^{n}\) of dimension \(m\) each, such that \(U_{i}\cap U_{j}=\{0_{n}\}\) for all \(i\neq j\). The partial spread of order \(s=2^{m}+1\) in \(\mathbb{F}_{2}^{n}\) with \(n=2m\) is called a spread. In the following, we denote by \(\mathbbm{1}_{U}\colon\mathbb{F}_{2}^{n}\to\mathbb{F}_{2}\) the _indicator function_ of \(U\subseteq\mathbb{F}_{2}^{n}\), i.e., \(\mathbbm{1}_{U}(x)=1\) if \(x\in U\), and \(0\) otherwise. The _partial spread class_ \(\mathcal{PS}\) of bent functions on \(\mathbb{F}_{2}^{n}\) is the union of the following two classes [8]: the \(\mathcal{PS}^{+}\) _class_ is the set of Boolean bent functions of the form \(f(x)=\sum_{i=1}^{2^{m-1}+1}\mathbbm{1}_{U_{i}}(x)\); the \(\mathcal{PS}^{-}\) _class_ is the set of Boolean bent functions of the form \(f(x)=\sum_{i=1}^{2^{m-1}}\mathbbm{1}_{U_{i}^{*}}(x)\), where \(U_{i}^{*}:=U_{i}\setminus\{0\}\). The _Desarguesian partial spread_ class \(\mathcal{PS}_{ap}\subset\mathcal{PS}^{-}\) is the set of Boolean bent functions \(f\) on \(\mathbb{F}_{2^{m}}\times\mathbb{F}_{2^{m}}\) of the form \(f\colon(x,y)\in\mathbb{F}_{2^{m}}\times\mathbb{F}_{2^{m}}\mapsto h\left(x/y\right)\), where \(\frac{x}{0}=0\), for all \(x\in\mathbb{F}_{2^{m}}\), and \(h\colon\mathbb{F}_{2^{m}}\to\mathbb{F}_{2}\) is a balanced Boolean function with \(h(0)=0\). The property of a bent function to be a member of the partial spread class is not invariant under equivalence.
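To make the \(\mathcal{PS}_{ap}\) definition concrete, the following sketch (our own illustration; the GF(4) encoding, the inverse table, and the helper names are assumptions, not notation from the text) builds the smallest Desarguesian partial spread function \(f(x,y)=h(x/y)\) on \(\mathbb{F}_{4}\times\mathbb{F}_{4}\), with \(h\) the indicator of \(\{1,2\}\) (balanced and \(h(0)=0\)), and verifies its bentness via the Walsh spectrum.

```python
def gf4_mul(a, b):
    """Multiplication in GF(4) = F_2[t]/(t^2 + t + 1), elements encoded as 0..3."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 4:          # reduce modulo t^2 + t + 1 (binary 0b111)
            a ^= 0b111
        b >>= 1
    return r

GF4_INV = {1: 1, 2: 3, 3: 2}   # multiplicative inverses in GF(4)

# h: GF(4) -> F_2, balanced with h(0) = 0; here h is the indicator of {1, 2}.
h = [0, 1, 1, 0]

# PS_ap function f(x, y) = h(x / y), with the convention x / 0 = 0.
tt = [0] * 16
for y in range(4):
    for x in range(4):
        q = gf4_mul(x, GF4_INV[y]) if y else 0
        tt[x + 4 * y] = h[q]

def is_bent(tt, n):
    """Bent iff the Walsh spectrum satisfies |W_f(u)| = 2^(n/2) for every u."""
    return all(abs(sum((-1) ** (tt[x] ^ (bin(u & x).count("1") & 1))
                       for x in range(2 ** n))) == 2 ** (n // 2)
               for u in range(2 ** n))

print(is_bent(tt, 4))   # the constructed PS_ap function is bent
```

The support of this function is \(U_{1}^{*}\cup U_{2}^{*}\) for the two trivially intersecting \(2\)-dimensional subspaces \(U_{1}=\{(y,y)\}\) and \(U_{2}=\{(2y,y)\}\), i.e., it is exactly a \(\mathcal{PS}^{-}\) function with \(2^{m-1}=2\) subspaces.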
If \(f\) is a partial spread function on \(\mathbb{F}_{2}^{n}\), i.e., \(f(x)=\sum_{i=1}^{s}\mathbbm{1}_{U_{i}}(x)\) for a partial spread \(\{U_{1},\ldots,U_{s}\}\) of order \(s\) in \(\mathbb{F}_{2}^{n}\), then for an invertible \(n\times n\)-matrix \(A\), the function \(g\colon x\in\mathbb{F}_{2}^{n}\mapsto f(xA)\) is a partial spread function as well, since \(g(x)=\sum_{i=1}^{s}\mathbbm{1}_{U_{i}A^{-1}}(x)\) for the partial spread \(\{U_{1}A^{-1},\ldots,U_{s}A^{-1}\}\). However, translations of the input \(x\mapsto x+b\) for \(b\in\mathbb{F}_{2}^{n}\) and additions of affine functions \(l\) on \(\mathbb{F}_{2}^{n}\) to the output of a partial spread function \(f\) on \(\mathbb{F}_{2}^{n}\) may lead to functions \(g\colon x\mapsto f(x+b)\) and \(h\colon x\mapsto f(x)+l(x)\) on \(\mathbb{F}_{2}^{n}\), respectively, which do not belong to the partial spread class \(\mathcal{PS}\). In Algorithm 5.1, we describe how to check computationally the membership of a given bent function \(f\) on \(\mathbb{F}_{2}^{n}\) in the \(\mathcal{PS}\) class.

```
Input: Bent function \(f\in\mathcal{B}_{n}\).
Output: True, if \(f\) is a partial spread function, and false otherwise.
1: if \(f(0)=1\) then
2:   Assign \(s:=2^{n/2-1}+1\) and \(V:=\operatorname{supp}(f)\) (the support of \(f\)).
3: else
4:   Assign \(s:=2^{n/2-1}\) and \(V:=\operatorname{supp}(f)\cup\{0_{n}\}\).
5: end if
6: Construct the graph \(G=(V,E)\), for which the relation between vertices in \(V\) and edges in \(E\) is determined by the incidence matrix \([f(x+y)]_{x,y\in V}\).
7: Find the set \(S\) of cliques of size \(2^{n/2}\) in \(G\).
8: Construct the set \(V^{\prime}\) of cliques in \(S\) whose elements form an \(n/2\)-dimensional vector space.
9: if \(|V^{\prime}|<s\) then
10:   Return false.
11: end if
12: Construct the graph \(G^{\prime}=(V^{\prime},E^{\prime})\), for which the relation between vertices in \(V^{\prime}\) and edges in \(E^{\prime}\) is determined by the incidence matrix \((a_{i,j})\), where \(a_{i,j}=1\) if for \(U_{i},U_{j}\in V^{\prime}\) holds \(U_{i}\cap U_{j}=\{0_{n}\}\), and \(0\) otherwise.
13: Return true, \(f\) is a partial spread function, if the graph \(G^{\prime}\) contains a clique of size \(s\), and false otherwise.
```
**Algorithm 5.1** Membership in the partial spread class \(\mathcal{PS}\)

**Remark 5.3**.: _Note that it is possible to establish with Algorithm 5.1 whether a bent function \(f\in\mathcal{B}_{n}\) belongs to the completed partial spread class \(\mathcal{PS}^{\#}\). If for some vector \(b\in\mathbb{F}_{2}^{n}\) and some affine function \(l\) on \(\mathbb{F}_{2}^{n}\) the function \(g\colon x\mapsto f(x+b)+l(x)\) on \(\mathbb{F}_{2}^{n}\) is a member of the \(\mathcal{PS}\) class, we have \(f\in\mathcal{PS}^{\#}\), otherwise \(f\notin\mathcal{PS}^{\#}\)._ ### Concatenating bent functions on \(\mathbb{F}_{2}^{n}\) that do not share any \(\mathcal{M}\)-subspace of dimension \(n/2-1\) With this result, we derive the following generic construction method of bent functions outside the \(\mathcal{MM}^{\#}\) class. **Theorem 5.4**.: _Let \(f_{1},\ldots,f_{4}\in\mathcal{B}_{n}\) be four Boolean functions, not necessarily bent, such that \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) is a bent function. Assume that there is no \((\frac{n}{2}-1)\)-dimensional subspace \(V\) of \(\mathbb{F}_{2}^{n}\) such that \(D_{a}D_{b}f_{i}=0\), for all \(a,b\in V\) and all \(i\in\{1,\ldots,4\}\).
Then, \(f\in\mathcal{B}_{n+2}\) is a bent function outside \(\mathcal{MM}^{\#}\)._

Proof.: The result is a direct consequence of Proposition 5.1.

**Example 5.5**.: _Let \(\pi\) be a quadratic APN permutation of \(\mathbb{F}_{2}^{3}\), which, in turn, has the \((P_{1})\) property:_

\[\pi(y_{1},y_{2},y_{3})=\begin{bmatrix}y_{2}y_{3}+y_{1}+y_{2}+y_{3}\\ y_{1}y_{2}+y_{1}y_{3}+y_{2}\\ y_{1}y_{2}+y_{3}\end{bmatrix}. \tag{5.3}\]

_Define four bent functions \(f_{1},\ldots,f_{4}\in\mathcal{B}_{6}\), which all belong to \(\mathcal{MM}^{\#}\), as follows:_

\[\begin{array}{ll}f_{1}(x,y)=x\cdot y+\delta_{0}(x),&f_{2}(x,y)=x\cdot\pi(y)+ \delta_{0}(x),\\ f_{3}(x,y)=x\cdot y,&f_{4}(x,y)=x\cdot\pi(y)+1.\end{array} \tag{5.4}\]

_One can check that the bent functions defined in (5.4) satisfy the dual bent condition. In this way, we have that \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{8}\) is bent. Its ANF is given by_

\[\begin{array}{ll}f(z)=&1+z_{1}+z_{2}+z_{1}z_{2}+z_{3}+z_{1}z_{3}+z_{2}z_{3}+ z_{1}z_{2}z_{3}+z_{3}z_{4}+z_{1}z_{5}+z_{2}z_{6}+z_{7}+\\ &z_{1}z_{7}+z_{2}z_{7}+z_{1}z_{2}z_{7}+z_{3}z_{7}+z_{1}z_{3}z_{7}+z_{2}z_{3}z_{ 7}+z_{1}z_{2}z_{3}z_{7}+z_{1}z_{4}z_{8}+z_{2}z_{4}z_{5}z_{8}+\\ &z_{1}z_{6}z_{8}+z_{1}z_{4}z_{6}z_{8}+z_{2}z_{5}z_{6}z_{8}+z_{3}z_{5}z_{6}z_{8}+ z_{7}z_{8}.\end{array} \tag{5.5}\]

_Finally, we confirm that the functions \(f_{1},f_{2},f_{3},f_{4}\) satisfy the conditions of Theorem 5.4. Due to the APN-ness of \(\pi\), we have that \(D_{a}D_{b}f_{4}=0\) if and only if the two-dimensional subspace \(\{a,b\}\) is a subspace of \(S=\mathbb{F}_{2}^{3}\times\{0_{3}\}\). On the other hand, \(D_{a}D_{b}f_{1}\neq 0\) for any two-dimensional subspace \(\{a,b\}\) of \(S=\mathbb{F}_{2}^{3}\times\{0_{3}\}\). In this way, we conclude that \(f\notin\mathcal{MM}^{\#}\). Using Algorithm 5.1, we also confirm that \(f\notin\mathcal{PS}^{\#}\).
In this way, we have that \(f\notin(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#})\)._

Now, we provide one generic method of specifying \(f=f_{1}||f_{2}||f_{3}||f_{4}\) outside \(\mathcal{MM}^{\#}\), where the \(f_{i}\) are bent functions within or outside \(\mathcal{MM}^{\#}\). The dual bent condition \(f_{1}^{*}+f_{2}^{*}+f_{3}^{*}+f_{4}^{*}=1\) can be satisfied if we simply select, e.g., \(f_{1}=f_{2}\) and \(f_{4}=1+f_{3}\), where \(f_{i}\in\mathcal{B}_{n}\) are bent. Then, according to Theorem 5.4, it is enough to ensure that \(f_{1}\) and \(f_{3}\) do not share any \(\mathcal{M}\)-subspace of dimension \(n/2-1\).

**Theorem 5.6**.: _Let \(\pi\) be a permutation of \(\mathbb{F}_{2}^{m}\) having the property (P\({}_{1}\)). Let \(\sigma\) be a permutation of \(\mathbb{F}_{2}^{m}\) such that there is no \((m-2)\)-dimensional subspace \(S\) of \(\mathbb{F}_{2}^{m}\) for which \(D_{a}D_{b}\sigma=0\) for all \(a,b\in S\). Let \(h_{1},h_{2}\in\mathcal{B}_{m}\) be arbitrary Boolean functions. Let \(f_{i}\in\mathcal{B}_{2m}\), \(i=1,\ldots,4\), be the functions defined by_

\[\begin{split} f_{1}(x,y)&=f_{2}(x,y)=x\cdot\pi(y)+h _{1}(y),\\ f_{3}(x,y)&=f_{4}(x,y)+1=y\cdot\sigma(x)+h_{2}(x) \end{split} \tag{5.6}\]

_for all \(x,y\in\mathbb{F}_{2}^{m}\). Then \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{2m+2}\) is a bent function outside the \(\mathcal{MM}^{\#}\) class._

Proof.: Assume that \(f\) is in the \(\mathcal{MM}^{\#}\) class. From Proposition 5.1, there exists an \((m-1)\)-dimensional subspace \(V\) of \(\mathbb{F}_{2}^{2m}\) such that \(D_{a}D_{b}f_{i}=0\), for all \(a,b\in V\); \(i=1,\ldots,4.\) Define the mapping \(L:V\to\mathbb{F}_{2}^{m}\) by \(L(x,y)=y\) for all \((x,y)\in V\). Since \(D_{a}D_{b}f_{1}=0\) for all \(a,b\in V\), from the proof of Theorem 3.1 we deduce that \(\dim(Im(L))\leq 1\). From the rank-nullity theorem, we have that \(\dim(Ker(L))\geq m-2\).
For \(a=(a_{1},a_{2})\), \(b=(b_{1},b_{2})\) in \(Ker(L)\) we have \(a_{2}=b_{2}=0_{m}\), and since \(Ker(L)\subseteq V\), we have \(D_{a}D_{b}f_{3}=0\), so that

\[y\cdot D_{a_{1}}D_{b_{1}}\sigma(x)+D_{a_{1}}D_{b_{1}}h_{2}(x)=0,\text{ for all }x,y\in\mathbb{F}_{2}^{m}.\]

Consequently, \(D_{a_{1}}D_{b_{1}}\sigma=0\). Since \(\dim(Ker(L))\geq m-2\), this means that there is a subspace \(S\) of \(\mathbb{F}_{2}^{m}\) of dimension \(m-2\) such that \(D_{a_{1}}D_{b_{1}}\sigma=0\) for all \(a_{1},b_{1}\in S\). However, this is in contradiction with the assumption about \(\sigma\). Hence \(f\) is outside the \(\mathcal{MM}^{\#}\) class.

With this result, we can now demonstrate how one can construct bent functions in \(8\) variables outside the \(\mathcal{MM}^{\#}\) class from four bent functions in \(6\) variables in \(\mathcal{MM}^{\#}\). We emphasize that this is the first attempt in the literature towards a better understanding of the origin of bent functions.

**Example 5.7**.: _Let \(\pi\) be the APN permutation defined in (5.3) and \(\sigma\) be another APN permutation of \(\mathbb{F}_{2}^{3}\), defined by the algebraic normal form in the following way:_

\[\sigma(x)=\begin{bmatrix}x_{1}+x_{2}+x_{3}+x_{2}x_{3}\\ x_{2}+x_{3}+x_{1}x_{3}\\ x_{2}+x_{1}x_{2}+x_{1}x_{3}\end{bmatrix}.\]

_Let \(h_{1},h_{2}\in\mathcal{B}_{3}\) be arbitrary Boolean functions. Define four bent functions \(f_{i}\in\mathcal{B}_{6}\) for \(i=1,2,3,4\) as in (5.6), which all belong to \(\mathcal{MM}^{\#}\). Then, the function \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{8}\) is a bent function outside the \(\mathcal{MM}^{\#}\) class by Theorem 5.6 (independently of the choice of \(h_{1}\) and \(h_{2}\)). Now, set \(h_{1}(y)=y_{1}y_{2}y_{3}+y_{1}y_{2}+y_{1}y_{3}+y_{2}y_{3}+y_{1}+y_{2}+y_{3}\) and \(h_{2}(x)=x_{1}x_{2}x_{3}+x_{1}x_{3}+x_{2}x_{3}+1\).
Then, the algebraic normal form of \(f=f_{1}||f_{2}||f_{3}||f_{4}\) is given as follows:_ \[\begin{split} f(z)&=z_{4}+z_{1}z_{4}+z_{5}+z_{1}z_{5}+z_ {2}z_{5}+z_{4}z_{5}+z_{2}z_{4}z_{5}+z_{3}z_{4}z_{5}+z_{6}+z_{1}z_{6}+z_{3}z_{6} \\ &+z_{4}z_{6}+z_{2}z_{4}z_{6}+z_{5}z_{6}+z_{1}z_{5}z_{6}+z_{4}z_{5} z_{6}+z_{1}z_{3}z_{7}+z_{2}z_{3}z_{7}+z_{1}z_{2}z_{3}z_{7}\\ &+z_{4}z_{7}+z_{2}z_{4}z_{7}+z_{3}z_{4}z_{7}+z_{2}z_{3}z_{4}z_{7}+z_ {5}z_{7}+z_{1}z_{5}z_{7}+z_{1}z_{2}z_{5}z_{7}+z_{1}z_{3}z_{5}z_{7}\\ &+z_{4}z_{5}z_{7}+z_{2}z_{4}z_{5}z_{7}+z_{3}z_{4}z_{5}z_{7}+z_{6}z_{7 }+z_{1}z_{6}z_{7}+z_{1}z_{2}z_{6}z_{7}+z_{4}z_{6}z_{7}+z_{2}z_{4}z_{6}z_{7}\\ &+z_{5}z_{6}z_{7}+z_{1}z_{5}z_{6}z_{7}+z_{4}z_{5}z_{6}z_{7}+z_{7}z_{8 }.\end{split} \tag{5.7}\] _Using Algorithm 5.1, we confirm that \(f\notin\mathcal{PS}^{\#}\), and, hence, \(f\notin(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#})\)._ **Remark 5.8**.: _It is important to notice that the condition that any \((\frac{n}{2}-1)\)-dimensional \(\mathcal{M}\)-subspace \(V\) is not shared between \(f_{i}\) in Theorem 5.4 is only sufficient, and there exist functions \(f_{i}\) that do share the unique canonical \(\mathcal{M}\)-subspace \(V=\mathbb{F}_{2}^{n/2}\times\{0_{n/2}\}\) even though \(f=f_{1}||f_{2}||f_{3}||f_{4}\) is outside \(\mathcal{M}\mathcal{M}^{\#}\), which is discussed in Section 5.3._ We notice that bent functions on \(\mathbb{F}_{2}^{n}\) outside \(\mathcal{M}\mathcal{M}^{\#}\) do not admit \(n/2\)-dimensional vanishing subspaces, and furthermore it was observed in [18] that many instances of bent functions in \(\mathcal{PS}\setminus\mathcal{M}\mathcal{M}^{\#}\) only have vanishing subspaces of dimension less than \(n/2-1\). 
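Small instances of these concatenation constructions are easy to verify by machine. The sketch below (with our own encoding conventions: vectors of \(\mathbb{F}_{2}^{m}\) stored as integers, least significant bit first) checks that \(\pi\) from (5.3) is an APN permutation and that the concatenation \(f=f_{1}||f_{2}||f_{3}||f_{4}\) of the functions (5.4) is bent, by confirming that every Walsh coefficient has absolute value \(2^{8/2}=16\).

```python
# Sanity check of Example 5.5; the bit encoding and the assignment of the
# four blocks to the cosets of (y1, y2) are our own conventions (bentness
# is invariant under permuting these cosets).

def bits(v, m):
    return [(v >> i) & 1 for i in range(m)]

def from_bits(b):
    return sum(x << i for i, x in enumerate(b))

def pi(y):
    """Quadratic permutation of F_2^3 from (5.3)."""
    y1, y2, y3 = bits(y, 3)
    return from_bits([(y2 & y3) ^ y1 ^ y2 ^ y3,
                      (y1 & y2) ^ (y1 & y3) ^ y2,
                      (y1 & y2) ^ y3])

def dot(x, y):
    """Inner product on F_2^m with integers viewed as bit vectors."""
    return bin(x & y).count("1") & 1

# pi is a permutation of F_2^3 ...
assert sorted(pi(y) for y in range(8)) == list(range(8))
# ... and APN: every nonzero derivative y -> pi(y) ^ pi(y ^ a) is 2-to-1.
for a in range(1, 8):
    images = [pi(y) ^ pi(y ^ a) for y in range(8)]
    assert all(images.count(d) == 2 for d in set(images))

# The bent functions (5.4) on F_2^6, input split as (x, y) in F_2^3 x F_2^3.
f1 = lambda x, y: dot(x, y) ^ (x == 0)
f2 = lambda x, y: dot(x, pi(y)) ^ (x == 0)
f3 = lambda x, y: dot(x, y)
f4 = lambda x, y: dot(x, pi(y)) ^ 1

# Truth table of f = f1||f2||f3||f4 in B_8: (y1, y2) selects the block.
tt = []
for z in range(256):
    x, y = z & 7, (z >> 3) & 7
    y1, y2 = (z >> 6) & 1, (z >> 7) & 1
    tt.append([f1, f2, f3, f4][2 * y1 + y2](x, y))

# f is bent iff every Walsh coefficient has absolute value 2^{8/2} = 16.
walsh = lambda a: sum((-1) ** (tt[z] ^ dot(a, z)) for z in range(256))
assert all(abs(walsh(a)) == 16 for a in range(256))
```

The same skeleton, with \(\pi\) replaced by \(\sigma\) and the functions (5.6), verifies the construction of Example 5.7.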
**Corollary 5.9**.: _Let \(f_{1}=f_{2}\) be two arbitrary bent functions on \(\mathbb{F}_{2}^{n}\) in \(\mathcal{M}\mathcal{M}^{\#}\) and define \(f_{4}=1+f_{3}\) on \(\mathbb{F}_{2}^{n}\), where \(f_{3}\not\in\mathcal{M}\mathcal{M}^{\#}\) and does not admit \(\mathcal{M}\)-subspaces of dimension larger than \(n/2-2\). Then, \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) is a bent function outside \(\mathcal{M}\mathcal{M}^{\#}\)._

**Open Problem 5.10**.: _The non-sharing property provides a theoretical framework for bent 4-concatenation; however, finding such \(f_{i}\) (also satisfying the dual bent condition) appears to be difficult. We leave as an open problem a specification of such quadruples in a generic manner._

### Concatenating bent functions that share a unique \(\mathcal{M}\)-subspace of dimension \(n/2\)

Proposition 5.2 provides the possibility to analyze the class exclusion from \(\mathcal{M}\mathcal{M}^{\#}\) by only considering the subspaces \(W\) of dimension \(n/2+1\) of the above form. In particular, this general case is not covered by Proposition 5.1, since the \(f_{i}\) share the unique \(\mathcal{M}\)-subspace \(U=\mathbb{F}_{2}^{m}\times\{0_{m}\}\). The analysis can be divided into two cases, namely the case where the only \((n/2-1)\)-dimensional vanishing subspace \(U^{\prime}\) for all \(f_{i}\) satisfies \(U^{\prime}\subset U\), and the case \(U^{\prime}\not\subset U\). The main problem in this analysis is the fact that \(f_{1}+f_{2}\), \(f_{1}+f_{3}\) or \(f_{1}+f_{2}+f_{3}+f_{4}\) are not in general bent functions, and therefore the analysis of second-order derivatives in (1.2) becomes harder.

**Theorem 5.11**.: _Let \(f_{1},\ldots,f_{4}\) be four bent functions on \(\mathbb{F}_{2}^{n}\), with \(n=2m\), satisfying the following conditions:_

1. \(f_{1},\ldots,f_{4}\) _belong to_ \(\mathcal{M}\mathcal{M}^{\#}\) _and share a unique_ \(\mathcal{M}\)_-subspace of dimension_ \(m\)_;_

2.
\(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) _is a bent function;_

_Let \(V\) be an \((\frac{n}{2}-1)\)-dimensional subspace of \(\mathbb{F}_{2}^{n}\) such that \(D_{a}D_{b}f_{i}=0\), for all \(a,b\in V\); \(i=1,\ldots,4.\) If for any \(v\in\mathbb{F}_{2}^{n}\) and any such \(V\subset\mathbb{F}_{2}^{n}\), there exist \(u^{(1)},u^{(2)},u^{(3)}\in V\) such that the following three conditions hold simultaneously_

1. \(D_{u^{(1)}}f_{1}(x)+D_{u^{(1)}}f_{2}(x+v)\neq 0,\text{ or }D_{u^{(1)}}f_{3}(x)+D_{u^{(1)}}f_{4}(x+v)\neq 0,\)

2. \(D_{u^{(2)}}f_{1}(x)+D_{u^{(2)}}f_{3}(x+v)\neq 0,\text{ or }D_{u^{(2)}}f_{2}(x)+D_{u^{(2)}}f_{4}(x+v)\neq 0,\)

3. \(D_{u^{(3)}}f_{2}(x)+D_{u^{(3)}}f_{3}(x+v)\neq 0,\text{ or }D_{u^{(3)}}f_{1}(x)+D_{u^{(3)}}f_{4}(x+v)\neq 0,\)

_then \(f\) is outside \(\mathcal{M}\mathcal{M}^{\#}\)._

Proof.: W.l.o.g., we assume that the unique \(\mathcal{M}\)-subspace shared between the \(f_{i}\) is \(U=\mathbb{F}_{2}^{m}\times\{0\}\). Let \(W\) be an \((n/2+1)\)-dimensional subspace of \(\mathbb{F}_{2}^{n+2}\). We prove that \(f\) does not belong to \(\mathcal{MM}^{\#}\) by using Lemma 1.2. We need to show that, for any \(W\), there exist two vectors \((u,c_{1},c_{2}),(v,d_{1},d_{2})\in W\) such that \(D_{(u,c_{1},c_{2})}D_{(v,d_{1},d_{2})}f\neq 0\). From Proposition 5.2, if \(W\) is an \((n/2+1)\)-dimensional vanishing subspace of \(f\), then \(W=\langle U\times(0,0),(a,b,c_{1},c_{2})\rangle\), where \(c_{1},c_{2}\in\mathbb{F}_{2},a,b\in\mathbb{F}_{2}^{n/2}\) and \((c_{1},c_{2})\neq 0_{2}\); or \(W=\langle V\times(0,0),(a,b,c_{1},c_{2}),(e,f,d_{1},d_{2})\rangle\), where \(\dim(V)=n/2-1\) and \(a,b,e,f\in\mathbb{F}_{2}^{n/2}\), \((c_{1},c_{2})\neq 0_{2},(d_{1},d_{2})\neq 0_{2},(c_{1},c_{2})\neq(d_{1},d_{2})\). In addition, we know

\[W=\langle U\times(0,0),(a,b,c_{1},c_{2})\rangle=\langle V\times(0,0),(a,b,c_{1 },c_{2}),(e,f,0,0)\rangle,\]

when \(V\subset U,(e,f)\in U\setminus V\) (where \(\dim(V)=n/2-1\)).
Hence, if we prove that for any \((v,d_{1},d_{2})\in W\) there always exists a vector \((u,0,0)\in W\) such that \(D_{(u,0,0)}D_{(v,d_{1},d_{2})}f\neq 0\) where \((d_{1},d_{2})\neq 0_{2}\), then \(f\) is outside \(\mathcal{MM}^{\#}\). In order to show this, consider the following three cases.

**Case 1**. Let \((d_{1},d_{2})=(0,1)\). From Equation (1.2), we have that

\[\begin{split} D_{(u,0,0)}D_{(v,d_{1},d_{2})}f(x,y_{1},y_{2})=& D_{u}f_{12}(x+v)+y_{1}D_{u}f_{1234}(x+v)\\ =&(y_{1}+1)(D_{u}f_{12}(x+v))+y_{1}D_{u}f_{34}(x+v) \\ =&(y_{1}+1)(D_{u}f_{1}(x)+D_{u}f_{2}(x+v))\\ +& y_{1}(D_{u}f_{3}(x)+D_{u}f_{4}(x+v)).\end{split} \tag{5.8}\]

Since for any \(v\in\mathbb{F}_{2}^{n}\) and any \(V\), there exists \(u^{(1)}\in V\) such that \(D_{u^{(1)}}f_{1}(x)+D_{u^{(1)}}f_{2}(x+v)\neq 0,\text{ \emph{or} }D_{u^{(1)}}f_{3}(x)+D_{u^{(1)}}f_{4}(x+v)\neq 0\), from (5.8), we have

\[D_{(u^{(1)},0,0)}D_{(v,d_{1},d_{2})}f(x,y_{1},y_{2})\neq 0.\]

**Case 2**. Let \((d_{1},d_{2})=(1,0)\). From Equation (1.2), we have that

\[\begin{split} D_{(u,0,0)}D_{(v,d_{1},d_{2})}f(x,y_{1},y_{2})=& D_{u}f_{13}(x+v)+y_{2}D_{u}f_{1234}(x+v)\\ =&(y_{2}+1)(D_{u}f_{13}(x+v))+y_{2}D_{u}f_{24}(x+v) \\ =&(y_{2}+1)(D_{u}f_{1}(x)+D_{u}f_{3}(x+v))\\ +& y_{2}(D_{u}f_{2}(x)+D_{u}f_{4}(x+v)).\end{split} \tag{5.9}\]

Since for any \(v\in\mathbb{F}_{2}^{n}\) and any \(V\), there exists \(u^{(2)}\in V\) such that \(D_{u^{(2)}}f_{1}(x)+D_{u^{(2)}}f_{3}(x+v)\neq 0,\text{ \emph{or} }D_{u^{(2)}}f_{2}(x)+D_{u^{(2)}}f_{4}(x+v)\neq 0\), from (5.9), we have

\[D_{(u^{(2)},0,0)}D_{(v,d_{1},d_{2})}f(x,y_{1},y_{2})\neq 0.\]

**Case 3**. Let \((d_{1},d_{2})=(1,1)\).
From Equation (1.2), we have that

\[\begin{split} D_{(u,0,0)}D_{(v,d_{1},d_{2})}f(x,y_{1},y_{2})=& D_{u}f_{23}(x+v)+(y_{1}+y_{2}+1)D_{u}f_{1234}(x+v)\\ =&(y_{1}+y_{2})(D_{u}f_{23}(x+v))+(y_{1}+y_{2}+1)D_ {u}f_{14}(x+v)\\ =&(y_{1}+y_{2})(D_{u}f_{2}(x)+D_{u}f_{3}(x+v))\\ +&(y_{1}+y_{2}+1)(D_{u}f_{1}(x)+D_{u}f_{4}(x+v)).\end{split} \tag{5.10}\]

Since for any \(v\in\mathbb{F}_{2}^{n}\) and any \(V\), there exists \(u^{(3)}\in V\) such that \(D_{u^{(3)}}f_{2}(x)+D_{u^{(3)}}f_{3}(x+v)\neq 0,\text{ or }D_{u^{(3)}}f_{1}(x)+D_{u^{(3)}}f_{4}(x+v)\neq 0\), from (5.10), we have

\[D_{(u^{(3)},0,0)}D_{(v,d_{1},d_{2})}f(x,y_{1},y_{2})\neq 0.\]

In this way, we conclude that \(f\notin\mathcal{MM}^{\#}\).

In the special case when \(f_{4}=f_{1}+f_{2}+f_{3}\), we have the following corollary.

**Corollary 5.12**.: _Let \(f_{1},\ldots,f_{4}\) be four bent functions on \(\mathbb{F}_{2}^{n}\), with \(n=2m\), satisfying the following conditions:_

1. \(f_{1},\ldots,f_{4}\) _belong to_ \(\mathcal{MM}^{\#}\) _and share a unique_ \(\mathcal{M}\)_-subspace_ \(U\)_;_

2. \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) _is a bent function._

_Let \(V\) be an \((\frac{n}{2}-1)\)-dimensional subspace of \(\mathbb{F}_{2}^{n}\) such that \(D_{a}D_{b}f_{i}=0\), for all \(a,b\in V\); \(i=1,\ldots,4\). If for any \(v\in\mathbb{F}_{2}^{n}\) and any such \(V\subset\mathbb{F}_{2}^{n}\), there exist \(u^{(1)},u^{(2)},u^{(3)}\in V\) such that the following three conditions hold simultaneously_

1. \(D_{u^{(1)}}f_{1}(x)+D_{u^{(1)}}f_{2}(x+v)\neq 0,\)

2. \(D_{u^{(2)}}f_{1}(x)+D_{u^{(2)}}f_{3}(x+v)\neq 0,\)

3. \(D_{u^{(3)}}f_{2}(x)+D_{u^{(3)}}f_{3}(x+v)\neq 0,\)

_then \(f\) is outside \(\mathcal{MM}^{\#}\)._

**Corollary 5.13**.: _With the same notation as in Theorem 5.11, we assume that \(f_{4}=f_{1}+f_{2}+f_{3}\) and \(V\subset U\) for any \(V\), where \(\dim(V)=n/2-1\) and \(U\) is the unique common \(\mathcal{M}\)-subspace of \(f_{1},f_{2},f_{3},f_{4}\).
Then, the following set of sufficient conditions ensures that \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) does not belong to \(\mathcal{MM}^{\#}\): there exists a subspace \(S\subset U\) with \(\dim(S)=2\) such that_

\[\begin{array}{l}D_{u}f_{1}(x)+D_{u}f_{2}(x+v)\neq 0;\\ D_{u}f_{1}(x)+D_{u}f_{3}(x+v)\neq 0;\\ D_{u}f_{2}(x)+D_{u}f_{3}(x+v)\neq 0,\end{array}\]

_for any \(u\in S\setminus\{0_{n}\},v\in\mathbb{F}_{2}^{n}\)._

Proof.: If we always have \(V\subset U\) for any \(V\), then \(\dim(V\cap S)\geq 1.\) This follows from the fact that \(\dim(S)=2\), \(\dim(V)=n/2-1\), and furthermore \(S\subset U\) and \(V\subset U\). Thus, for any \(V\), we can always find at least one nonzero vector \(u^{\prime}\in V\cap S\). Since

\[\begin{array}{l}D_{u}f_{1}(x)+D_{u}f_{2}(x+v)\neq 0;\\ D_{u}f_{1}(x)+D_{u}f_{3}(x+v)\neq 0;\\ D_{u}f_{2}(x)+D_{u}f_{3}(x+v)\neq 0,\end{array}\]

for any \(u\in S\setminus\{0_{n}\},v\in\mathbb{F}_{2}^{n}\), we have

\[\begin{split}& D_{u^{\prime}}f_{1}(x)+D_{u^{\prime}}f_{2}(x+v)\neq 0 ;\\ & D_{u^{\prime}}f_{1}(x)+D_{u^{\prime}}f_{3}(x+v)\neq 0;\\ & D_{u^{\prime}}f_{2}(x)+D_{u^{\prime}}f_{3}(x+v)\neq 0.\end{split}\]

From Theorem 5.11, we know that \(f\) is outside \(\mathcal{MM}^{\#}\).
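The vanishing-subspace condition that drives these results (all second-order derivatives \(D_{a}D_{b}f_{i}\) over a subspace \(V\) are identically zero) can be tested exhaustively for small \(n\). A minimal sketch follows, with truth tables as integer-indexed lists and subspaces given by a basis — both our own conventions, illustrated on the quadratic bent function \(x\cdot y\) rather than on any specific function from the paper.

```python
# Check whether a subspace V (given by a basis of integer-encoded vectors)
# is a common vanishing subspace of a family of Boolean functions.
from itertools import combinations

def second_derivative_vanishes(tt, a, b):
    """True iff D_a D_b f = 0, i.e. f(x)+f(x+a)+f(x+b)+f(x+a+b) = 0 for all x."""
    return all(tt[x] ^ tt[x ^ a] ^ tt[x ^ b] ^ tt[x ^ a ^ b] == 0
               for x in range(len(tt)))

def span(basis):
    """All elements of the subspace generated by the basis vectors."""
    vs = {0}
    for g in basis:
        vs |= {v ^ g for v in vs}
    return vs

def is_common_vanishing_subspace(tts, basis):
    """True iff D_a D_b f_i = 0 for all a, b in <basis> and every truth table."""
    V = span(basis)
    return all(second_derivative_vanishes(tt, a, b)
               for tt in tts for a, b in combinations(V, 2))

# Example: f(x, y) = x . y on F_2^4 (bits 0-1 hold x, bits 2-3 hold y).
dot = lambda u, v: bin(u & v).count("1") & 1
tt = [dot(z & 3, (z >> 2) & 3) for z in range(16)]

# The canonical M-subspace F_2^2 x {0} vanishes ...
assert is_common_vanishing_subspace([tt], [0b0001, 0b0010])
# ... while a subspace mixing x- and y-coordinates does not.
assert not is_common_vanishing_subspace([tt], [0b0001, 0b0100])
```

For \(n=6\), enumerating all two-dimensional subspaces in this way is what a direct verification of the hypotheses of Theorem 5.11 amounts to.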
**Example 5.14**.: _Consider the following Boolean bent functions \(f_{1},f_{2},f_{3},f_{4}\in\mathcal{B}_{6}\), which all belong to \(\mathcal{MM}^{\#}\) and are given by their algebraic normal forms as follows:_

\[\begin{split} f_{1}(x,y)=& x_{1}(y_{2}+y_{3}+y_{1} y_{3})+x_{2}(y_{1}+y_{1}y_{3}+y_{2}y_{3})+x_{3}(y_{1}y_{2}+y_{3})+y_{1}+y_{2}+y_{3},\\ f_{2}(x,y)=& x_{1}(y_{2}+y_{1}y_{2}+y_{1}y_{3})+x_{ 2}(y_{1}+y_{2}+y_{1}y_{2}+y_{2}y_{3})\\ +& x_{3}(y_{1}+y_{1}y_{2}+y_{3}+y_{1}y_{3}+y_{2}y_{3}) +y_{3}+1,\\ f_{3}(x,y)=& x_{1}(y_{1}+y_{2}+y_{1}y_{2}+y_{2}y_{3} )+x_{2}(y_{2}+y_{3}+y_{1}y_{3})+x_{3}(y_{1}+y_{2}+y_{3}+y_{2}y_{3})\\ +& y_{2}+y_{3}+1,\\ f_{4}(x,y)=& x_{1}(y_{1}+y_{2}+y_{3}+y_{2}y_{3})+x_{ 2}(y_{1}y_{2}+y_{3})+x_{3}(y_{2}+y_{3}+y_{1}y_{3})+y_{1}+1.\end{split} \tag{5.11}\]

_One can check that the bent functions defined in (5.11) satisfy the dual bent condition. In this way, we have that \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{8}\) is bent. Its ANF is given by_

\[\begin{split} f(z)&=z_{4}+z_{1}z_{4}+z_{5}+z_{1}z_{ 5}+z_{2}z_{5}+z_{4}z_{5}+z_{2}z_{4}z_{5}+z_{3}z_{4}z_{5}+z_{6}+z_{1}z_{6}+z_{3}z_{6} \\ &+z_{4}z_{6}+z_{2}z_{4}z_{6}+z_{5}z_{6}+z_{1}z_{5}z_{6}+z_{4}z_{5} z_{6}+z_{1}z_{3}z_{7}+z_{2}z_{3}z_{7}+z_{1}z_{2}z_{3}z_{7}\\ &+z_{4}z_{7}+z_{2}z_{4}z_{7}+z_{3}z_{4}z_{7}+z_{2}z_{3}z_{4}z_{7}+z_ {5}z_{7}+z_{1}z_{5}z_{7}+z_{1}z_{2}z_{5}z_{7}+z_{1}z_{3}z_{5}z_{7}\\ &+z_{4}z_{5}z_{7}+z_{2}z_{4}z_{5}z_{7}+z_{3}z_{4}z_{5}z_{7}+z_{6}z_{7 }+z_{1}z_{6}z_{7}+z_{1}z_{2}z_{6}z_{7}+z_{4}z_{6}z_{7}+z_{2}z_{4}z_{6}z_{7}\\ &+z_{5}z_{6}z_{7}+z_{1}z_{5}z_{6}z_{7}+z_{4}z_{5}z_{6}z_{7}+z_{7}z_{8 }.\end{split} \tag{5.12}\]

_Since every bent function \(f_{i}\) has the form \(f_{i}(x,y)=x\cdot\pi_{i}(y)+h_{i}(y)\), where \(\pi_{i}\) is a quadratic APN permutation, the functions \(f_{i}\) share the unique canonical \(\mathcal{M}\)-subspace \(U=\mathbb{F}_{2}^{3}\times\{0_{3}\}\). In this way, we cannot use Theorem 5.6.
One can check that for every two-dimensional subspace \(V\) of \(\mathbb{F}_{2}^{6}\) such that \(D_{a}D_{b}f_{i}=0\) for all \(a,b\in V\), where \(i=1,\ldots,4\), the conditions of Theorem 5.11 are satisfied, and hence the bent function \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{8}\) is outside \(\mathcal{MM}^{\#}\). Additionally, using Algorithm 5.1, we confirm that \(f\notin\mathcal{PS}^{\#}\), and, hence, \(f\notin(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#})\)._

**Remark 5.15**.: _The examples in this section indicate that concatenation \(f=f_{1}||f_{2}||f_{3}||f_{4}\) of four bent functions \(f_{i}\in\mathcal{MM}^{\#}\) can give a new bent function \(f\notin(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#})\). We would also like to note that all functions \(f\in\mathcal{B}_{8}\) obtained in Examples 5.5, 5.7 and 5.14 are pairwise inequivalent. The latter was checked with Magma using the design isomorphism, as described in [20]._

The examples in this section indicate that proper concatenations of bent functions satisfying the dual bent condition can give rise to many instances of (inequivalent) bent functions outside \(\mathcal{MM}^{\#}\). This observation motivates the following research problem.

**Open Problem 5.16**.: _Find bent functions \(f_{1},f_{2},f_{3},f_{4}\in\mathcal{B}_{n}\) satisfying the dual bent condition, i.e., \(f_{1}^{*}+f_{2}^{*}+f_{3}^{*}+f_{4}^{*}=1\), such that \(f=f_{1}||f_{2}||f_{3}||f_{4}\in\mathcal{B}_{n+2}\) is bent and outside \(\mathcal{MM}^{\#}\)._

## Conclusion and open problems

In this article we have analyzed the structure of bent functions in the Maiorana-McFarland class with respect to their inherent \(\mathcal{M}\)-subspaces, thus contributing to the analysis of inequivalent Maiorana-McFarland bent functions. Moreover, we provided generic construction methods of bent functions outside \(\mathcal{MM}^{\#}\) for any \(n\geq 8\) using the bent \(4\)-concatenation.
Most notably, our results indicate that it is possible to construct bent functions outside \(\mathcal{MM}^{\#}\cup\mathcal{PS}^{\#}\), thus we contribute to the better understanding of the origin of bent functions in \(n=8\) variables. To conclude, we believe that answering the following questions (in addition to the already mentioned open problems) will help to shed more light on the classification of bent functions as well as to develop new generic construction methods of these functions.

1. As we mentioned in the introduction, for a Maiorana-McFarland bent function \(f\in\mathcal{B}_{n}\), the number of its \(\mathcal{M}\)-subspaces is at most \(\prod_{i=1}^{n/2}\left(2^{i}+1\right)\) and the equality is attained if and only if \(f\) is quadratic. What is the maximum number of \(\mathcal{M}\)-subspaces for a bent function \(f\in\mathcal{B}_{n}\) in \(\mathcal{MM}\) of a fixed degree \(d>2\), and is it possible to characterize the functions achieving this bound? Our computational results indicate that bent functions of the form \((x,y)\mapsto x\cdot y+y_{i_{1}}y_{i_{2}}\cdots y_{i_{d}}\) have the maximum number of \(\mathcal{M}\)-subspaces among all Maiorana-McFarland bent functions of a fixed degree \(d>2\).

2. In this article, we analyzed which properties of permutations \(\pi\) guarantee that Maiorana-McFarland bent functions \(x\cdot\pi(y)+h(y)\) have either one or many \(\mathcal{M}\)-subspaces. For example, if \(\pi\) has the \((P_{1})\) property, we know that independently of the choice of the function \(h\), the bent function \(x\cdot\pi(y)+h(y)\) has the unique canonical \(\mathcal{M}\)-subspace. However, if the \((P_{1})\) property is relaxed, then the properties of the function \(h\) become crucial to guarantee the uniqueness of the \(\mathcal{M}\)-subspace. We think it is important to understand in general how the choice of a pair \((\pi,h)\) affects the number of \(\mathcal{M}\)-subspaces of the corresponding Maiorana-McFarland function.

3.
An efficient way to satisfy the dual bent condition (we have to ensure that \(f_{1}^{*}+f_{2}^{*}+f_{3}^{*}+f_{4}^{*}=1\) so that \(f=f_{1}||f_{2}||f_{3}||f_{4}\) is bent) is to use \(f_{1}=f_{2}\) and \(f_{3}=1+f_{4}\) which we employed in Theorem 5.6. However, there exist other possibilities to satisfy the dual bent condition which need to be examined further with regard to the class membership of the designed bent functions. We notice that Proposition 5.1 does not require that the functions \(f_{i}\) that define \(f=f_{1}||f_{2}||f_{3}||f_{4}\) are bent. Therefore, another interesting research problem is to apply a similar approach as taken in Theorem 5.6 to semi-bent and \(5\)-valued spectra functions. ## Acknowledgements Enes Pasalic is supported in part by the Slovenian Research Agency (research program P1-0404 and research projects J1-1694, N1-0159, J1-2451 and J1-4084). Sadmir Kudin is supported in part by the Slovenian Research Agency (research program P1-0404, research project J1-4084 and Young Researchers Grant). Fengrong Zhang is supported in part by the Natural Science Foundation of China (No. 61972400), the Fundamental Research Funds for the Central Universities (XJS221503), and the Youth Innovation Team of Shaanxi Universities.
2306.12155
Joint Dense-Point Representation for Contour-Aware Graph Segmentation
We present a novel methodology that combines graph and dense segmentation techniques by jointly learning both point and pixel contour representations, thereby leveraging the benefits of each approach. This addresses deficiencies in typical graph segmentation methods where misaligned objectives restrict the network from learning discriminative vertex and contour features. Our joint learning strategy allows for rich and diverse semantic features to be encoded, while alleviating common contour stability issues in dense-based approaches, where pixel-level objectives can lead to anatomically implausible topologies. In addition, we identify scenarios where correct predictions that fall on the contour boundary are penalised and address this with a novel hybrid contour distance loss. Our approach is validated on several Chest X-ray datasets, demonstrating clear improvements in segmentation stability and accuracy against a variety of dense- and point-based methods. Our source code is freely available at: www.github.com/kitbransby/Joint_Graph_Segmentation
Kit Mills Bransby, Greg Slabaugh, Christos Bourantas, Qianni Zhang
2023-06-21T10:07:17Z
http://arxiv.org/abs/2306.12155v1
# Joint Dense-Point Representation for Contour-Aware Graph Segmentation

###### Abstract

We present a novel methodology that combines graph and dense segmentation techniques by jointly learning both point and pixel contour representations, thereby leveraging the benefits of each approach. This addresses deficiencies in typical graph segmentation methods where misaligned objectives restrict the network from learning discriminative vertex and contour features. Our joint learning strategy allows for rich and diverse semantic features to be encoded, while alleviating common contour stability issues in dense-based approaches, where pixel-level objectives can lead to anatomically implausible topologies. In addition, we identify scenarios where correct predictions that fall on the contour boundary are penalised and address this with a novel hybrid contour distance loss. Our approach is validated on several Chest X-ray datasets, demonstrating clear improvements in segmentation stability and accuracy against a variety of dense- and point-based methods. Our source code is freely available at: www.github.com/kitbransby/Joint_Graph_Segmentation

Keywords: Semantic Segmentation, Graph Convolutional Networks

## 1 Introduction

Semantic segmentation is a fundamental task in medical imaging used to delineate regions of interest, and has been applied extensively in diagnostic radiology. Recently, deep learning methods that use a dense probability map to classify each pixel, such as UNet [2], R-CNN [3], and FCN [4], have advanced the state-of-the-art in this area. Despite overall excellent performance, dense-based approaches learn using a loss defined at the pixel level, which can lead to implausible segmentation boundaries such as unexpected interior holes or disconnected blobs [1]. This is a particular problem in medical image analysis where information-poor, occluded or artefact-affected areas are common and often limit a network's ability to predict reasonable boundaries.
Furthermore, minimising the largest error (Hausdorff distance (HD)) is often prioritised over general segmentation metrics such as Dice Similarity (DS) or Jaccard Coefficient (JC) in medical imaging, as stable and trustworthy predictions are more desirable. To address this problem in segmentation networks, Gaggion _et al_. proposed HybridGNet [1], which replaces the convolutional decoder in UNet with a graph convolutional network (GCN), where images are segmented using a polygon generated from learned points. Due to the relational inductive bias of graph networks, where features are shared between neighbouring nodes in the decoder, there is a natural smoothing effect in predictions, leading to stable segmentation and vastly reduced HD. In addition, this approach is robust to domain shift and can make reasonable predictions on unseen datasets sourced from different medical centres, whereas dense-based methods fail due to domain memorization [5]. In HybridGNet, improved stability and HD come at the cost of reduced contour detail, conveyed by sub-optimal DS and JC metrics when compared to dense-based approaches such as UNet. Many methods have addressed this problem by rasterizing polygon points predicted by a decoder to a dense mask and then training the network using typical pixel-level losses such as Dice or cross-entropy [7, 9, 10]. These approaches have merit but are often limited by their computational requirements. For example, in CurveGCN [7], the rasterization process uses OpenGL polygon triangulation, which is not differentiable, and the gradients need to be approximated using Taylor expansion, which is computationally expensive and can therefore only be applied at the fine-tuning stage [8]. In ACDRNet [10], rasterization is differentiable; however, the triangulation process is applicable only to convex polygons, which limits application to more complicated polygon shapes.
Rasterization is extended to non-convex polygons in BoundaryFormer [9] by bypassing the triangulation step and instead approximating the unsigned distance field. This method gives excellent results on the MS-COCO dataset [11]; however, it is computationally expensive (see Section 3.3). With this in mind, we return to HybridGNet, which efficiently optimises points directly, and theorise about the causes of the performance gap relative to dense segmentation models. We identify that describing segmentation contours using points is a sub-optimal approach because (1) points are an incomplete representation of the segmentation map; (2) the supervisory signal is usually weaker (\(n\) distances are calculated from \(n\) pairs of points, versus \(h\times w\) distances for pairs of dense probability maps); (3) the distance from the contour is more meaningful than the distance from the points representing the contour, hence minimising the point-wise distance can lead to predictions which fall on the contour being penalised.

_Contributions_: We propose a novel joint architecture and contour loss to address this problem that leverages the benefits of both point and dense approaches. First, we combine image features from an encoder trained using a point-wise distance with image features from a decoder trained using a pixel-level objective. Our motivation is that contrasting training strategies enable diverse image features to be encoded which are highly detailed, discriminative and semantically rich when combined. Our joint learning strategy benefits from the segmentation accuracy of dense-based approaches, but without the topological errors that regularly afflict models trained using a pixel-level loss. Second, we propose a novel hybrid contour distance (HCD) loss which biases the distance field towards predictions that fall on the contour boundary using a sampled unsigned distance function which is fully differentiable and computationally efficient.
To our knowledge, this is the first time unsigned distance fields have been applied to graph segmentation tasks in this way. Our approach is able to generate highly plausible and accurate contour predictions with lower HD and higher DS/JC scores than a variety of dense- and graph-based segmentation baselines.

## 2 Methods

### Network Design

We implement an architecture consisting of two networks, a Dense-Graph (DG) network and a Dense-Dense (DD) network, as shown in Fig 1. Each network takes the same image input \(X\) of height \(H\) and width \(W\), with skip connections passing information from the decoder of DD to the encoder of DG. For DG, we use a HybridGNet-style architecture containing a convolutional encoder to learn image features at multiple resolutions, and a graph convolutional decoder to regress the 2D coordinates of each point. In DG, node features are initialised in a variational autoencoder (VAE) bottleneck where the final convolutional output is flattened to a low-dimensional latent space vector \(z\). We sample \(z\) from a distribution \(Normal(\mu,\sigma)\) using the reparameterization trick [12], where \(\mu\) and \(\sigma\) are learnt parameters of the encoder. Image-to-Graph Skip Connections (IGSC) [1] are used to sample dense feature maps \(F_{I}\in\mathbb{R}^{H\times W\times C}\) from DG's encoder using node position predictions \(P\in\mathbb{R}^{N\times 2}\) from DG's graph decoder and concatenate these with previous node features \(F_{G}\in\mathbb{R}^{N\times f}\) to give new node features \(F^{\prime}_{G}\in\mathbb{R}^{N\times(f+C+2)}\). Here, \(N\) is the number of nodes in the graph and \(f\) is the dimension of the node embedding. We implement IGSC at every encoder-decoder level and pass node predictions as output, resulting in seven node predictions. For DD, we use a standard UNet using the same number of layers and dimensions as the DG encoder, with a dense segmentation prediction at the final decoder layer.
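The IGSC operation can be sketched as follows. This is a NumPy stand-in under our own assumptions (normalised \((row,col)\) node coordinates in \([0,1]\) and bilinear sampling), not the authors' implementation, but it reproduces the stated shapes: node features \(F_{G}\in\mathbb{R}^{N\times f}\) become \(F^{\prime}_{G}\in\mathbb{R}^{N\times(f+C+2)}\).

```python
# Sketch of Image-to-Graph Skip Connections (IGSC): bilinearly sample a dense
# feature map at predicted node positions, then concatenate the sampled
# features and the positions themselves onto the node features.
import numpy as np

def igsc(feat_map, node_feats, node_pos):
    """feat_map: (H, W, C); node_feats: (N, f); node_pos: (N, 2) in [0, 1].
    Returns enriched node features of shape (N, f + C + 2)."""
    H, W, C = feat_map.shape
    # Map normalised (row, col) positions to pixel coordinates.
    rows = node_pos[:, 0] * (H - 1)
    cols = node_pos[:, 1] * (W - 1)
    r0 = np.clip(np.floor(rows).astype(int), 0, H - 2)
    c0 = np.clip(np.floor(cols).astype(int), 0, W - 2)
    dr = (rows - r0)[:, None]
    dc = (cols - c0)[:, None]
    # Bilinear interpolation of the four surrounding feature vectors.
    sampled = ((1 - dr) * (1 - dc) * feat_map[r0, c0]
               + (1 - dr) * dc * feat_map[r0, c0 + 1]
               + dr * (1 - dc) * feat_map[r0 + 1, c0]
               + dr * dc * feat_map[r0 + 1, c0 + 1])
    return np.concatenate([node_feats, sampled, node_pos], axis=1)

# Toy example: a 4x4 map with C = 3 channels, 5 nodes with f = 8 features.
F_I = np.random.rand(4, 4, 3)
F_G = np.random.rand(5, 8)
P = np.random.rand(5, 2)
out = igsc(F_I, F_G, P)
assert out.shape == (5, 8 + 3 + 2)
```

A node landing exactly on a grid corner simply receives that pixel's feature vector, which is a quick way to check the interpolation weights.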
Figure 1: Network Architecture: a Dense-Dense network (top) enriches image features in a Dense-Graph network (bottom).

### Graph Convolutional Network

Our graph decoder passes features initialised from the VAE bottleneck through six Chebyshev spectral graph convolutional [13] (ChebConv) layers using \(K\)-order polynomial filters. Briefly, this is defined by \(X^{\prime}=\sigma(\sum_{k=1}^{K}Z^{(k)}\cdot\Theta^{(k)})\), where \(\Theta^{(k)}\in\mathbb{R}^{f_{in}\times f_{out}}\) are learnable weights and \(\sigma\) is a ReLU activation function. \(Z^{(k)}\) is computed recursively such that \(Z^{(1)}=X\), \(Z^{(2)}=\hat{L}\cdot Z^{(1)}\), \(Z^{(k)}=2\cdot\hat{L}\cdot Z^{(k-1)}-Z^{(k-2)}\), where \(X\in\mathbb{R}^{N\times f_{in}}\) are graph features, and \(\hat{L}\) represents the scaled and normalized graph Laplacian [14]. In practice, this allows node features to be aggregated within a \(K\)-hop neighbourhood, eventually regressing the 2D location of each node using additional ChebConv prediction layers (\(f_{out}=2\)). As in [1], our graph network also includes an unpooling layer after ChebConv block 3 to upsample the number of points by adding a new point in between existing ones.

### Joint Dense-Point Learning

As typical DG networks are trained with a point-wise distance loss and not a pixel-level loss, the image encoder is not directly optimised to learn clear and well-defined boundary features. This misalignment problem results in the DG encoder learning features pertinent to segmentation which are distinctively different from those learnt in DD encoders. This is characterised by activation peaks in different image regions such as the background and other non-boundary areas (see Fig 2). To leverage this observation, we enrich the DG encoder feature maps at multiple scales by fusing them with image features learnt by a DD decoder using a pixel-level loss.

Figure 2: Feature map activation comparison between UNet encoder, UNet decoder, HybridGNet encoder and our encoder, using two examples. Top four most activated channels are summed channel-wise for convolutional layers 1-5 in each encoder/decoder. L\(\rightarrow\)R: decreasing resolution, increasing channel depth. Note, activations in our encoder consistently highlight areas which are more pertinent to segmentation.

These diverse and highly discriminative features are concatenated before being passed through the convolutional block at each level. Current GCN feature learning paradigms aim at combining feature maps from neighbouring or adjacent levels so as to aggregate similar information. This results in a "coarse-to-fine" approach by first passing high-level features to early graph decoder blocks, followed by low-level features to late graph decoder blocks. Our joint learning approach is similar to this strategy but also supplements each DG encoder level with both semantically rich and highly detailed contour features learnt by the DD network.

### Hybrid Contour Distance

Mean squared error (MSE) is a spatially symmetric loss which is agnostic to true contour borders. We alleviate this pitfall by designing an additional contour-aware loss term that is sensitive to the border. To achieve this we precompute a 2D unsigned distance map \(S\) from the dense segmentation map for each class \(c\) (i.e. lungs, heart), where each position represents the normalised distance to the closest contour border of that class. Specifically, for a dense segmentation map \(M\), we use a Canny filter [15] to find the contour boundary \(\delta M\) and then determine the minimum distance between a point \(x\in c\) and any point \(p\) on the boundary \(\delta M_{c}\). This function is positive for both the interior and exterior regions, and zero on the boundary.
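As an illustration of this precomputation and the resulting loss, here is a brute-force NumPy sketch. It is our own minimal version under stated assumptions: a 4-neighbourhood edge test stands in for the Canny filter, distances are left unnormalised, and the field is sampled at the nearest pixel rather than bilinearly.

```python
import numpy as np

def unsigned_distance_field(mask):
    """Distance from every pixel to the nearest contour pixel of a
    binary mask (brute force; assumes the mask is not uniform)."""
    H, W = mask.shape
    padded = np.pad(mask, 1, mode='edge')
    # a pixel is on the boundary if it differs from any 4-neighbour
    boundary = np.zeros((H, W), dtype=bool)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        boundary |= mask != padded[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]
    by, bx = np.nonzero(boundary)
    ys, xs = np.mgrid[0:H, 0:W]
    # minimum Euclidean distance to any boundary pixel
    d = np.sqrt((ys[..., None] - by) ** 2 + (xs[..., None] - bx) ** 2)
    return d.min(axis=-1)

def hcd_loss(y, y_hat, S, beta):
    """MSE on point coordinates plus the distance field sampled at the
    predicted points (nearest-pixel sampling), averaged over points."""
    iy = np.clip(np.round(y_hat[:, 1]).astype(int), 0, S.shape[0] - 1)
    ix = np.clip(np.round(y_hat[:, 0]).astype(int), 0, S.shape[1] - 1)
    return np.mean((y - y_hat) ** 2) + beta * np.mean(S[iy, ix])
```

With `beta = 0` the loss reduces to plain MSE; a positive `beta` additionally penalises predictions in proportion to how far they fall from the true contour, regardless of where the ground-truth points lie on it.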
Our method is visualised in Fig 3 (first row) and formalised below:

\[S_{c}(\vec{x})=\min_{\vec{p}\in\delta M_{c}}\lVert\vec{x}-\vec{p}\rVert \tag{1}\]

Figure 3: Our Hybrid Contour Distance loss biases the distance field to contours rather than the points representing the contour. Top L\(\rightarrow\)R: Segmentation mask represented with edges, unsigned distance field for lungs, and heart. Bottom: Effect of \(\beta\) in HCD.

During training, we sample \(S_{c}\) as an additional supervisory signal using the predicted 2D point coordinates \(\hat{y}_{i}\in c\), and combine it with MSE with weight \(\beta\). The effect of \(\beta\) is illustrated in Fig 3 (second row) and the full HCD loss function is defined below, where \(N\) is the number of points and \(y_{i}\in c\) is the ground truth point coordinate.

\[\mathcal{L}_{HCD}=\frac{1}{N}\sum_{i=1}^{N}\left[(y_{i}-\hat{y}_{i})^{2}+\beta S_{c}(\hat{y}_{i})\right] \tag{2}\]

## 3 Experiments and Results

### Datasets

We obtain four publicly available Chest X-ray segmentation datasets (JSRT [16], Padchest [17], Montgomery [18], and Shenzen [19]), with 245, 137, 566 and 138 examples respectively. JSRT cases are from patients diagnosed with lung nodules, while Padchest contains patients with a cardiomegaly diagnosis and features 20 examples where a pacemaker occludes the lung border. These two datasets contain heart and lung contour ground truth labels and are combined in a single dataset of 382 examples. Montgomery and Shenzen contain lung contour ground truth labels only, and are combined into a second dataset of 704 cases where 394 examples are from patients with tuberculosis and 310 are from patients without. Each combined dataset is randomly split into 70% train, 15% validation and 15% test examples, each with a 1024px x 1024px resolution X-ray image and ground truth point coordinates for organ contours obtained from [5].

### Model Implementation & Training

We implement our model in PyTorch and use PyTorch-Geometric for the graph layer.
All models were trained for 2500 epochs using an NVIDIA A100 GPU from Queen Mary's Andrena HPC facility. For reliable performance estimates, all models and baselines were trained from scratch three times, with the mean scores used for quantitative analysis and the median model used for qualitative analysis. Hyperparameters for all experiments were unchanged from [1]. To impose a unit Gaussian prior on the VAE bottleneck we train the network with an additional KL-divergence loss term with weight \(10^{-5}\), and use \(\beta=2.5\times 10^{-2}\) for the HCD weight. For joint models we pretrain the first UNet model separately using the recipe from [1] and freeze its weights when training the full model. This is done to reduce complexity in our training procedure.

### Comparison to Existing Methods & Ablation Study

We compare our approach to a variety of different dense- and point-based segmentation methods. First we validate our joint DD-DG learning approach by comparing to a DD-only segmentation network (UNet [2]) and DG-only segmentation networks (HybridGNet [6], HybridGNet+IGSC [1]). Next, we explore five alternative configurations of our joint architecture to demonstrate that our design choices are superior. These are: (1) UNet Joint: a network that uses our joint learning strategy but with two DD (UNet) networks; (2) Hourglass: joint learning but with no sharing between the DD decoder and DG encoder, only the output of DD is passed to the input of DG, similar to the stacked hourglass network [21, 22]; (3) Hourglass Concat: as above, but the output of DD is concatenated with the input and both are passed to DG; (4) Multi-task: a single dense encoder is shared between a dense and a graph decoder, similar to [23]; (5) No Joint: our network with no joint learning strategy. To demonstrate the effectiveness of our HCD loss, we compare to our joint network trained with the contour term removed (MSE only).
Our HCD loss is similar to differentiable polygon rasterization in BoundaryFormer [9], as both use the distance field to represent points with respect to the true boundary. However, our method precomputes the distance field for each example and samples it during training, while BoundaryFormer approximates it on the fly. Hence we also compare to a single DG network (HybridGNet+IGSC) where each point output is rendered to a dense 1028px x 1028px segmentation map using rasterization and the full model is trained using a pixel-level loss.

Tables 1-2 demonstrate that our methodology outperforms all point- and dense-based segmentation baselines on both datasets. As seen in Fig 4, the performance increase from networks that combine image features from dense- and point-trained networks (columns 7, 9) is superior to when image features from two dense-trained networks are combined (column 5). Furthermore, concatenating features at each encoder-decoder level (Tables 1-2, row 11) instead of at the input-output level (rows 5-6) shows improved performance. The addition of HCD supervision to a DG model (Tables 1-2, row 8) gives similar improvements in segmentation when compared to using a differentiable rasterization pipeline (row 10), yet is far more computationally efficient (Table 2, column 7).

Figure 4: JSRT & Padchest: Qualitative Analysis. Note that our method does not suffer from the topological errors of dense-based methods but benefits from their segmentation accuracy. Specifically, improvements (white boxes) are most prevalent in areas of complexity such as where the heart and lungs intersect.

## 4 Conclusion

We proposed a novel segmentation architecture which leverages the benefits of both dense- and point-based algorithms to improve accuracy while reducing topological errors.
Extensive experiments support our hypothesis that networks that utilise joint dense-point representations can encode more discriminative features which are both semantically rich and highly detailed. Limitations in segmentation methods using a point-wise distance were identified, and remedied with a new contour-aware loss function that offers an efficient alternative to differentiable rasterization methods. Our methodology can be applied to any graph segmentation network with a convolutional encoder that is optimised using a point-wise loss, and our experiments across four datasets demonstrate that our approach is generalizable to new data.

\begin{table}
\begin{tabular}{l l l l l l l l l}
\hline \hline
 & Predict & Supervision & \multicolumn{3}{c}{Lungs} & \multicolumn{3}{c}{Heart} \\
 & & & DC\(\uparrow\) & HD\(\downarrow\) & JC\(\uparrow\) & DC\(\uparrow\) & HD\(\downarrow\) & JC\(\uparrow\) \\
\hline
HybridGNet & point & point & 0.9313 & 17.0445 & 0.8731 & 0.9065 & 15.3786 & 0.8319 \\
HybridGNet+IGSC & point & point & 0.9589 & 13.9955 & 0.9218 & 0.9295 & 13.2500 & 0.8702 \\
UNet & dense & dense & 0.9665 & 28.7316 & 0.9368 & 0.9358 & 29.6317 & 0.8811 \\
UNet Joint & dense & dense & 0.9681 & 26.3758 & 0.9395 & 0.9414 & 24.9409 & 0.8909 \\
Hourglass & point & both & 0.9669 & 13.4225 & 0.9374 & 0.9441 & 12.3434 & 0.8954 \\
Hourglass Concat & point & both & 0.9669 & 13.5275 & 0.9374 & 0.9438 & 12.1554 & 0.8948 \\
Multi-task & point & both & 0.9610 & 15.0490 & 0.9257 & 0.9284 & 13.1997 & 0.8679 \\
No Joint & point & point & 0.9655 & 13.2137 & 0.9341 & 0.9321 & 13.1826 & 0.8748 \\
MSE Only & point & both & 0.9686 & **12.4058** & 0.9402 & 0.9439 & 12.0872 & 0.8953 \\
Rasterize & point & dense & 0.9659 & 13.7267 & 0.9349 & 0.9344 & 12.9118 & 0.8785 \\
Ours & point & both & **0.9698** & 13.2087 & **0.9423** & **0.9451** & **11.7721** & **0.8975** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: JSRT & Padchest Dataset: Quantitative Analysis

\begin{table}
\begin{tabular}{l l l l l l l}
\hline \hline
 & Predict & Supervision & DC\(\uparrow\) & HD\(\downarrow\) & JC\(\uparrow\) & Inference (s) \\
\hline
HybridGNet & point & point & 0.9459 & 12.0294 & 0.8989 & 0.0433 \\
HybridGNet + IGSC & point & point & 0.9677 & 9.7591 & 0.9380 & 0.0448 \\
UNet & dense & dense & 0.9716 & 16.7093 & 0.9453 & 0.0047 \\
UNet Joint & dense & dense & 0.9713 & 16.5447 & 0.9447 & 0.0103 \\
Hourglass & point & both & 0.9701 & 10.9284 & 0.9434 & 0.1213 \\
Hourglass Concat & point & both & 0.9712 & 10.8193 & 0.9448 & 0.1218 \\
Multi-task & point & both & 0.9697 & 10.8615 & 0.9417 & 0.0535 \\
No Joint & point & point & 0.9701 & 9.8246 & 0.9424 & 0.0510 \\
MSE Only & point & both & 0.9729 & 9.6527 & 0.9474 & 0.1224 \\
Rasterize & point & dense & 0.9718 & **9.4485** & 0.9453 & 0.2421 \\
Ours & point & both & **0.9732** & 10.2166 & **0.9481** & 0.1226 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Montgomery & Shenzen Dataset: Quantitative Analysis + Inference Time

#### Acknowledgements

This research is part of AI-based Cardiac Image Computing (AICIC) funded by the Faculty of Science and Engineering at Queen Mary University of London.
2307.05700
SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing imagery using HRNet with Separable Convolution
The accurate mapping of crop production is crucial for ensuring food security, effective resource management, and sustainable agricultural practices. One way to achieve this is by analyzing high-resolution satellite imagery. Deep Learning has been successful in analyzing images, including remote sensing imagery. However, capturing intricate crop patterns is challenging due to their complexity and variability. In this paper, we propose a novel Deep learning approach that integrates HRNet with Separable Convolutional layers to capture spatial patterns and Self-attention to capture temporal patterns of the data. The HRNet model acts as a backbone and extracts high-resolution features from crop images. Spatially separable convolution in the shallow layers of the HRNet model captures intricate crop patterns more effectively while reducing the computational cost. The multi-head attention mechanism captures long-term temporal dependencies from the encoded vector representation of the images. Finally, a CNN decoder generates a crop map from the aggregated representation. Adaboost is used on top of this to further improve accuracy. The proposed algorithm achieves a high classification accuracy of 97.5\% and IoU of 55.2\% in generating crop maps. We evaluate the performance of our pipeline on the Zuericrop dataset and demonstrate that our results outperform state-of-the-art models such as U-Net++, ResNet50, VGG19, InceptionV3, DenseNet, and EfficientNet. This research showcases the potential of Deep Learning for Earth Observation Systems.
Priyanka Goyal, Sohan Patnaik, Adway Mitra, Manjira Sinha
2023-07-11T18:07:25Z
http://arxiv.org/abs/2307.05700v1
SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing imagery using HRNet with Separable Convolution

###### Abstract

The accurate mapping of crop production is crucial for ensuring food security, effective resource management, and sustainable agricultural practices. One way to achieve this is by analyzing high-resolution satellite imagery. Deep Learning has been successful in analyzing images, including remote sensing imagery. However, capturing intricate crop patterns is challenging due to their complexity and variability. In this paper, we propose a novel Deep learning approach that integrates HRNet with Separable Convolutional layers to capture spatial patterns and Self-attention to capture temporal patterns of the data. The HRNet model acts as a backbone and extracts high-resolution features from crop images. Spatially separable convolution in the shallow layers of the HRNet model captures intricate crop patterns more effectively while reducing the computational cost. The multi-head attention mechanism captures long-term temporal dependencies from the encoded vector representation of the images. Finally, a CNN decoder generates a crop map from the aggregated representation. Adaboost is used on top of this to further improve accuracy. The proposed algorithm achieves a high classification accuracy of 97.5% and IoU of 55.2% in generating crop maps. We evaluate the performance of our pipeline on the Zuericrop dataset and demonstrate that our results outperform state-of-the-art models such as U-Net++, ResNet50, VGG19, InceptionV3, DenseNet, and EfficientNet. This research showcases the potential of Deep Learning for Earth Observation Systems.

## 1 Introduction

Spatiotemporal crop mapping is a significant area of research in remote sensing and agriculture that uses satellite imagery to identify and monitor the cultivation of crops over time and space.
Accurate crop mapping is crucial for sustainable agriculture, as it helps optimize crop yields and increase food production. Additionally, crop mapping provides valuable insights into land-use changes, agricultural practices, and crop management. Furthermore, spatiotemporal crop mapping has applications beyond agriculture, including monitoring invasive species, urban growth, and changes in natural habitats, making it a valuable tool for environmental monitoring and management. Recent advancements in crop mapping have been driven by the increasing availability of high-resolution satellite imagery and advances in machine learning algorithms. Deep learning models, such as Convolutional Neural Networks (CNNs), have improved the accuracy and efficiency of crop mapping from spatial satellite data by representing complex structures associated with different types of croplands. Integration of multiple data sources, such as weather data, soil information, and topographic data, into crop mapping models, has further enhanced their accuracy and provided a more comprehensive understanding of crop growth and management, enabling more accurate predictions of crop yields and environmental impacts. There has also been a shift towards using spatio-temporal models that consider the dynamics of crop growth over time and space. These models provide insights into the effects of climate change, natural disasters, and land-use changes on crop production, informing strategies for adaptation. In this study, we explore the task of spatio-temporal crop mapping from remote sensing images using several recent developments in Deep Learning, such as separable convolutions. We propose a pipeline that takes sequence of remote sensing images as input, incorporates High Resolution Network (HRNet) to capture spatial relations and an LSTM-based block and a self-attention mechanism to capture the temporal dependencies, to obtain a segmented image where each segment indicates a particular crop growth. 
Promising results are obtained on the publicly available ZueriCrop dataset, and several metrics are used to validate the robustness of the proposed pipeline. The novelty of the proposed approach lies in our use of recent Deep Learning models and concepts for this task. We use the High Resolution Network (HRNet) and show its strong improvement in comparison to well-established image segmentation approaches such as U-Net. Further, we show that the use of separable convolution is far more effective for this task in comparison to traditional convolution. Further, we show that utilizing the sequential information is useful to create a more accurate representation of the crop map, and explore the use of sequential models like LSTM and Multi-Head Self-Attention. The contributions of this work can be summarized as follows:

1. We propose SepHRNet: an encoder-decoder based pipeline for generating high-resolution crop maps from remote sensing image sequences.
2. We compare many recent Deep Learning-based models at each step of the pipeline to choose the best one.
3. We show that the use of separable convolution instead of standard convolution, and multi-head self-attention instead of LSTM, improves the spatial and temporal representation respectively.
4. We show that Boosting can further improve the models.

To establish the veracity of our contributions, we carry out extensive experiments on the ZueriCrop dataset, which contains sequences of remote sensing imagery over farmlands with ground-truth labels of crop production. We test different aspects of our proposed pipelines against alternate approaches. We consider and compare different deep learning architectures for the spatial component, as well as different convolution techniques. Regarding the temporal component, we compare LSTM and self-attention. Finally, we show how the use of Boosting (Adaboost) can further improve the performance of the proposed pipeline.
We carry out a detailed ablation study to establish the importance of each part of the proposed pipeline. The following section includes a description of prior work in the domain of mapping crop types using remote-sensing images in a spatiotemporal setting. Section 3 provides details about the dataset used along with data processing. In Section 4, the methodology is presented, including baselines and the proposed architecture, with a detailed mathematical representation. Training details, along with mainstream experiments, simulation results, and a performance comparison of the proposed model with existing ones, are explained in Section 5. Section 6 presents the ablation study conducted. Finally, the last section presents the conclusions drawn from this work.

## 2 Related Work

Crop mapping is an important task for agricultural planning. In recent years, remote sensing has emerged as a useful source of information for such crop mapping. Various techniques have been employed, including deep learning, time-series analysis, and machine learning, resulting in high classification accuracy for different crop types.

### Crop Mapping

Mazzia et al. (2021) [Mazzia et al.(2020)] utilized multi-temporal Sentinel-2 imagery and crop information from the USDA National Agricultural Statistics Service (NASS) to train and evaluate their proposed spatiotemporal recurrent neural networks (STRNNs). Turkoglu et al. (2021) [Turkoglu et al.(2021)] introduced ms-convSTAR (multistage ConvRNN) and evaluated its performance on the Zuericrop dataset. They compared its performance with RF, LSTM, TCN, Transformer network, Unet, Unet + convLSTM, and Bi-convGRU. Konduri et al. (2020) [Konduri et al.(2020)] employed the Cluster-then-label approach using Multivariate Spatio-Temporal Clustering and Mapcurves on the MODIS NDVI and USDA CDL dataset. Russwurm et al.
(2019) [Russwurm et al.(2019)] proposed the Breizhcrop time series dataset for crop identification and evaluated different models including RF, TCN, MSResNet, InceptionTime, OmniscaleCNN, LSTM, StarRNN, and Transformer. Rustowicz et al. (2019) [M Rustowicz et al.(2019)] introduced the first small-holder farms' crop type dataset of Ghana and South Sudan. They compared the performance of 2D U-Net + CLSTM, 3D CNN with RF, and bidirectional sequential encoder. Khaleque et al. (2020) [Khaleque et al.(2020)] utilized Sentinel-2 time-series data and machine learning algorithms to classify crops, considering temporal variations. Chen et al. (2022) [Chen et al.(2022)] integrated convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) for spatiotemporal crop mapping. Zhang et al. (2021) [Zhang et al.(2021)] combined spectral-temporal features and multi-scale spatial information using a multi-task CNN and morphological profile (MP) technique. Zhu et al. (2021) [Zhu et al.(2021)] employed a multi-scale CNN with random forest (RF) classification for crop mapping from multi-temporal Landsat imagery. Temporal variability of crop reflectance was considered by Liu et al. (2020) [Liu et al.(2020)] using Sentinel-2 data and normalized difference vegetation index (NDVI) and enhanced vegetation index (EVI) as input features. Liu et al. (2021) [Liu et al.(2021)] studied the temporal consistency and variability of optical indices for crop mapping in Southwest China. Yang et al. (2021) [Yang et al.(2021)] proposed a multi-scale feature fusion approach for crop mapping using Sentinel-2 imagery. Hu et al. (2021) [Hu et al.(2021)] used CNNs and a feature pyramid network (FPN) for multi-scale feature extraction. Shao et al. (2020) [Shao et al.(2020)] combined spectral indices and image patches with a UNet architecture for crop classification. 
These advancements are crucial for accurate crop mapping, enabling effective crop management and decision-making in agriculture.

### Deep Learning architectures

VGG19 [Simonyan and Zisserman(2014)] is a deep architecture with 19 layers, enabling it to learn hierarchical representations and capture complex patterns in crop images. However, its large number of parameters makes it computationally expensive, memory-intensive, and slower compared to other models. ResNet50 [He et al.(2016)] introduces residual connections that facilitate training deeper networks and capture discriminative features for crop mapping. However, its larger model size can be challenging in terms of memory usage and computational resources. InceptionV3 [Szegedy et al.(2015)] incorporates multi-scale feature extraction through inception modules with parallel convolutional layers of different sizes. This reduces the number of parameters, allowing for faster training and inference. However, multiple parallel convolutional layers increase computational complexity and may lead to information loss, although auxiliary classifiers help mitigate this issue. DenseNet121's [Huang et al.(2017)] dense connectivity pattern allows for the direct flow of information between layers, enhancing gradient propagation and feature reuse. This improves parameter efficiency and captures fine-grained details and local features in crop images. However, direct connections increase memory usage during training and inference. EfficientNetV2 [Tan and Le(2019)] uses a compound scaling method to optimize resource allocation and achieve computational efficiency while maintaining accuracy. It incorporates Squeeze and Excitation (SE) blocks to capture important features and depthwise separable convolutions to reduce computational cost. However, the complex scaling coefficients may reduce model interpretability, and depthwise separable convolutions might impact the capture of complex relationships in crop images.
HRNet [Sun et al.(2019)] captures high-resolution details, multi-scale features, and contextual information. It maintains high-resolution representations throughout the network, captures fine-grained and coarse-grained features simultaneously, and integrates information from different levels of abstraction. However, it requires more computational resources, resulting in increased memory usage and longer training and inference times. UNet [Ronneberger et al.(2015)] is effective in capturing fine details and spatial relationships within an image. The UNet architecture consists of an encoder and a decoder, with skip connections between corresponding layers in the encoder and decoder. The encoder captures hierarchical information at different scales, while the decoder upsamples the feature maps. The skip connections help preserve spatial information during upsampling. UNet++ [Zhou et al.(2018)] builds upon the skip connections of the original UNet by introducing nested skip pathways, which allow for the integration of multi-scale contextual information. Each encoder block is connected not only to the corresponding decoder block but also to higher-resolution decoder blocks. This nested skip connection leverages multi-scale contextual information, enabling the network to capture both local and global contextual information more comprehensively. UNet++ offers an advanced and powerful architecture for crop mapping, allowing for more accurate and detailed segmentation of crops in satellite images or other remote sensing data. Each of these deep learning models has unique architectural characteristics that make it suitable for crop mapping. These models can be trained to classify different types of crops or identify crop boundaries within an image. The input to the network is an image patch or a satellite image, and the output is a pixel-wise segmentation map where each pixel is assigned a class label representing the crop type or boundary.
These models have demonstrated effectiveness in capturing spatial dependencies, contextual information, and fine-grained details, which are crucial for accurate crop mapping.

## 3 Dataset: ZueriCrop

ZueriCrop [Turkoglu et al.(2021)] is a large-scale, time-series dataset of satellite imagery of agricultural fields in Switzerland. It contains 116,000 field instances, each with ground-truth labels for 48 different crop types. The images were captured in 2019, under a variety of weather and lighting conditions, over a 50 km x 48 km area in the Swiss Cantons of Zurich and Thurgau. The dataset was made publicly available in 2021. The images were acquired by the Sentinel-2 satellite, which provides high-resolution (10-meter) multispectral imagery. The images were atmospherically corrected using the standard Sen2Cor software package. Several cropland images can be seen in Figure 1. It is the largest publicly available dataset of time-series satellite imagery of agricultural fields. It contains a variety of crop types, some of which are difficult to distinguish from each other using satellite imagery. Since it was collected over a small area of Switzerland, it may not be representative of agricultural practices in other parts of the world. Despite these challenges, ZueriCrop is a valuable resource for research in precision agriculture. Since the images have a \(24\times 24\) resolution and most deep learning architectures require higher-resolution inputs, we resize the images using interpolation as well as padding. Moreover, the same transformation is also applied to the crop map in order to align the input to the output. After this, we normalize all the images by calculating the mean and the standard deviation of pixels per channel in order to make the pixel distribution uniform across channels.
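The per-channel normalisation step can be sketched as follows. This is a minimal NumPy version; the array layout `(N, H, W, C)` and the small epsilon guard against zero variance are our own assumptions, not details given in the text.

```python
import numpy as np

def channel_normalise(images):
    """Normalise a stack of images (N, H, W, C) to zero mean and
    unit standard deviation per channel, computed over the whole stack."""
    mean = images.mean(axis=(0, 1, 2), keepdims=True)   # (1, 1, 1, C)
    std = images.std(axis=(0, 1, 2), keepdims=True)     # (1, 1, 1, C)
    return (images - mean) / (std + 1e-8)
```

In practice the mean and standard deviation would be computed on the training split only and reused for validation and test images, so that no test statistics leak into training.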
## 4 Methodology

The task of spatio-temporal crop mapping involves encoding the sequence of images, capturing temporal dependencies across the encodings, and finally obtaining the crop map as accurately as possible. We propose a deep learning-based solution to effectively capture temporal dependencies among the satellite images of land captured at different times of the year, to obtain the crop distribution over that region. The rest of this section explains the proposed pipeline and the various parts that we explored to finally come up with a design that achieved the best performance on the ZueriCrop dataset.

### Pipeline Design

**Encoder - Decoder Architecture** The standard encoder-decoder pipeline can be used with the motivation to treat images at different time frames independently. This is basically the image segmentation problem of Computer Vision; well-known models that fit this approach are UNet [Ronneberger et al.(2015)] and UNet++ [Zhou et al.(2018)]. These can produce crop segmentation maps for the images at different time-points independently. Subsequently, we can compute the mean of all the resulting crop maps to aggregate the information and obtain the final crop map representing the land cover. The overview of this pipeline can be seen in Figure 2(a).

Figure 1: Examples of Sequence of NDVI images with their Ground Truth (Column 5)

**Encoder - LSTM - Decoder Architecture** An alternative paradigm of architecture is where we utilize the sequential relation between the images directly. Here, we can use convolutional neural networks such as VGG19 [Simonyan and Zisserman(2014)], ResNet50 [He et al.(2016)], InceptionV3 [Szegedy et al.(2015)], DenseNet121 [Huang et al.(2017)], EfficientNetV2 [Tan and Le(2019)], or HRNet [Sun et al.(2019)] to encode and obtain vector representations of the ZueriCrop images.
VGG19, ResNet50, InceptionV3, DenseNet121, and EfficientNetV2 are pre-trained on ImageNet to learn generic features that can be fine-tuned for crop mapping. They capture spatial dependencies and contextual information, which are crucial for accurate crop mapping. Next, a sequential model like LSTM [Hochreiter and Schmidhuber(1997)] can be used in order to establish temporal relationships and obtain an aggregated representation of the land cover over the specified time frame. We found that a stacked LSTM layer with 3 blocks gives the best results. Finally, the last hidden state of the LSTM block is fed to a Transposed Convolution [Dumoulin and Visin(2016)] based decoder in order to obtain the segmented crop map. The overview of this architecture can be seen in Figure 2(b).

**Encoder - Self Attention - Decoder Paradigm** In this paradigm of architecture, we can use the same encoder and decoder types proposed in Section 4.1. However, instead of using a stacked LSTM block for temporal modeling, we use the Self Attention [Vaswani et al.(2017)] mechanism to aggregate the vector representations of the images from different time-points. The architecture overview of this pipeline can be seen in Figure 2(c).

Figure 2: Pipeline Design

### Proposed Pipeline Design

After conducting exhaustive experimentation and hyperparameter tuning, we have developed a pipeline that achieves the best performance among all the candidates previously discussed. The overview of our proposed pipeline is illustrated in Figure 3. We choose the Encoder-Self Attention-Decoder pipeline. In the encoder, we employ HRNet [Sun et al.(2019)] to create a high-resolution representation of the satellite images at each time-point. In the shallow layers of the network, we utilize spatially separable convolution to reduce the model's parameter count while maintaining performance.
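To illustrate the temporal aggregation step of the Encoder-Self Attention-Decoder design, here is a single-head scaled dot-product self-attention over \(T\) frame embeddings, pooled into one land-cover representation. This is a pure NumPy sketch with hypothetical weight matrices; the actual model uses learned multi-head attention.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_aggregate(E, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence
    of T frame embeddings E (T, d), followed by mean-pooling over time
    to obtain one aggregated representation for the whole sequence."""
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # (T, T) attention weights
    return (A @ V).mean(axis=0)                  # (d_v,) pooled vector
```

Each row of `A` sums to one, so every time-point attends to all others; the mean-pool at the end is one simple choice of how to collapse the attended sequence before the decoder.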
By obtaining vector representations of all image frames, we leverage multi-head attention [Vaswani et al.(2017)] to capture long-term temporal dependencies and generate a comprehensive representation of the land over the entire time-period. The aggregated representation is then fed into the decoder to produce the crop map. We call this combined model **SepHRNet**. We further improve performance using Boosting. SepHRNet serves as the base model for the AdaBoost [12] algorithm, with a modified rule for updating the sampling probability of data points. We combine multiple versions of SepHRNet trained on 80% of the data, using the weighted ensemble as specified by the AdaBoost algorithm. Further, the loss function for AdaBoost is designed as follows: each image is assigned an error value of 1 if the percentage of misclassified pixels is more than 20%, and \(-1\) otherwise. As a result, the aggregated model demonstrates strong performance across the entire dataset.

### Components of Proposed Architecture

**HRNet Encoder** Let \(x\) be the input to the HRNet network.

\[HRNet(x)=H_{n}(H_{n-1}(...H_{2}(H_{1}(x))...)) \tag{1}\]

where \(H_{i}\) denotes the \(i\)-th stage of the HRNet network. Each stage consists of parallel branches, denoted as \(B_{i}\), which operate on different resolutions of the input feature maps. The outputs of the branches in each stage are then combined to obtain the output of that stage. The HRNet network iteratively applies the stages \(H_{i}\) to the input \(x\), with the final output being the output of the last stage \(H_{n}\). This allows the network to capture and integrate features at multiple resolutions, enabling it to maintain high-resolution representations throughout the network.

**Spatially separable Convolution** Convolution is a well-known technique in image processing, which is widely used in Convolutional Neural Networks for image representation.
Here we have a rectangular kernel \(w\), which represents a spatial pattern, and we scan the image with it to see which parts of it contain that pattern.

\[y(i,j)=\sum_{m}\sum_{n}x(i-m,j-n)\cdot w(m,n) \tag{2}\]

A typical CNN has many layers for convolution, each of which uses many kernels for parallel channels. The parameters \(w\) are not specified but learnt from data while training the neural network.

Figure 3: Proposed Architecture

Spatially separable convolution is a convolutional technique that offers advantages over standard convolution, particularly in scenarios with high aspect ratio images or when applying filters to small image regions. By using different kernel sizes for the vertical and horizontal dimensions, it can reduce the number of parameters in a convolutional neural network (CNN) and improve generalization performance by avoiding overfitting.

\[z(i,j)=\sum_{m,n}x(i-m,j-n)\cdot w_{row}(m)\cdot w_{col}(n) \tag{3}\]

Equation 3 represents the spatially separable convolution operation where \(z\) is the output obtained by convolving the input \(x\) with the row-wise filter \(w_{row}\) and the column-wise filter \(w_{col}\). The summation is performed over the filter dimensions \(m\) and \(n\), and the element-wise multiplication of the input and filters is performed at each spatial location \((i,j)\). In SepHRNet, we replace the standard convolution in shallow layers with spatially separable convolution. This replacement involves using two sequential convolutional layers with kernel sizes of \(k\times 1\) and \(1\times k\), respectively, instead of a single \(k\times k\) kernel. This modification maintains the same receptive field, reduces parameter count, and promotes more comprehensive interactions among pixels. As a result, our segmentation performance improves, especially considering the non-uniform distribution of land cover.
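The factorization in Equation (3) can be illustrated with a minimal pure-Python sketch (correlation-style indexing and a "valid" output region are assumed; this is an illustration, not the SepHRNet implementation). When a \(k\times k\) kernel factors as the outer product of a column filter and a row filter, the two sequential \(k\times 1\) and \(1\times k\) passes give exactly the same output while using \(2k\) instead of \(k^{2}\) parameters:

```python
def conv2d(x, w):
    """Direct 2D convolution (cf. Equation 2), 'valid' region only."""
    kH, kW = len(w), len(w[0])
    return [[sum(x[i + m][j + n] * w[m][n]
                 for m in range(kH) for n in range(kW))
             for j in range(len(x[0]) - kW + 1)]
            for i in range(len(x) - kH + 1)]

def separable_conv2d(x, w_col, w_row):
    """Spatially separable convolution (cf. Equation 3): k x 1 pass, then 1 x k."""
    tmp = conv2d(x, [[c] for c in w_col])   # vertical (k x 1) pass
    return conv2d(tmp, [w_row])             # horizontal (1 x k) pass

# A separable 3x3 kernel: outer product of a column and a row filter
# (9 parameters for the full kernel vs. 3 + 3 for the separable pair).
w_col, w_row = [1.0, 2.0, 1.0], [1.0, 0.0, -1.0]
w_full = [[c * r for r in w_row] for c in w_col]

x = [[float(5 * i + j) for j in range(5)] for i in range(5)]
assert conv2d(x, w_full) == separable_conv2d(x, w_col, w_row)
```

Note that a general \(k\times k\) kernel need not factor this way; the design choice in SepHRNet is to constrain the shallow layers to such factorized kernels, trading a small loss of expressiveness for fewer parameters.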
**Self-Attention** Let \(q_{t}\in\mathbb{R}^{d_{k}}\), \(k_{t}\in\mathbb{R}^{d_{k}}\), and \(v_{t}\in\mathbb{R}^{d_{k}}\) represent the query, key, and value vectors, respectively, at time step \(t\). Matrix representations of the query, key, and value vectors are denoted as \(Q=[q_{1},q_{2},\ldots,q_{T}]\), \(K=[k_{1},k_{2},\ldots,k_{T}]\), and \(V=[v_{1},v_{2},\ldots,v_{T}]\), respectively. To compute the attention-weighted representation at a specific time step, we use the following equation:

\[Attention(Q,K,V)=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{4}\]

Here, \(d_{k}\) represents the dimension of the query, value, and key vectors, i.e., the dimension of the output vector obtained from the encoder. The Attention function performs scaled dot-product attention, where the query-key dot products are scaled by the square root of the key dimension (\(d_{k}\)) and normalized with a softmax to obtain the attention weights. The final output is obtained by multiplying the values with the softmax weights. In order to incorporate self-attention, we require three types of vectors: query, key, and value for each input in the sequence. To obtain these vectors, instead of a single layer at the end of the encoder, we utilize three feed-forward layers. This allows us to generate the necessary query-key-value triplet of vectors. By summing the attention-weighted vectors, we obtain the aggregated representation of the cropland. Figure 2(b) illustrates the single-head attention block.

**Multi-head Attention** Instead of having just one query-key-value triplet from the encoder, we obtain multiple triplets and compute the aggregated representation of all the queries, keys, and values. These representations are then concatenated and projected to the required dimension, resulting in the multi-head representation of the cropland illustrated in Figure 3b.
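The scaled dot-product attention of Equation (4) can be sketched in pure Python as follows (a shape-level illustration over \(T\) per-frame vectors, not the trained implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(v - m) for v in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Scaled dot-product attention (Equation 4): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(Q[0])
    out = []
    for q in Q:
        # one row of Q K^T / sqrt(d_k), i.e. this frame's similarity to every frame
        scores = [sum(qc * kc for qc, kc in zip(q, k)) / math.sqrt(d_k) for k in K]
        w = softmax(scores)
        # attention-weighted sum of the value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out

# One query/key/value vector per image frame (T = 2 frames, d_k = 2 here).
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
assert len(out) == 2 and len(out[0]) == 2
```

Each output row is a convex combination of the value vectors, so every coordinate stays within the range spanned by the corresponding value coordinates.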
This approach allows us to attach more importance to the more complex structures in the cropland.

\[head_{i}=Attention(Q\cdot W_{Q}^{i},K\cdot W_{K}^{i},V\cdot W_{V}^{i}) \tag{5}\]

Each \(head_{i}\) is computed by applying the Attention function to the transformed queries (\(Q\cdot W_{Q}^{i}\)), keys (\(K\cdot W_{K}^{i}\)), and values (\(V\cdot W_{V}^{i}\)).

\[MultiHead(Q,K,V)=FFN(Concat(head_{1},...,head_{h})) \tag{6}\]

Here, \(head_{i}\) represents the output of a single-head self-attention, and \(FFN\) refers to a feed-forward network used to downsample the concatenated representation. The MultiHead function calculates the multi-head self-attention by concatenating the individual attention heads (\(head_{i}\)) and applying a feed-forward network.

**Decoder** The aim of the decoder is to create the segmentation map at the same resolution as the input images. We use transposed convolution for this purpose. The architecture details can be seen in Figure 3d.

\[X=ReLU(BN(ConvTranspose(Z,W,S,padding)+B)) \tag{7}\]

In the above representation, \(Z\) represents the input feature map, \(W\) denotes the learnable weights of the transposed convolution operation, \(S\) represents the stride, and padding refers to the amount of zero-padding applied to the input feature map. The ConvTranspose operation performs the transposed convolution operation on \(Z\) using \(W\), \(S\), and the specified padding. It upsamples the input feature map by performing the reverse of the convolution operation, effectively increasing its spatial dimensions. The resulting output feature map \(X\) is then obtained by adding a bias term \(B\) to the transposed convolution output. BN denotes the batch normalization block, and ReLU denotes the rectified linear unit activation function.

## 5 Experimental Evaluation of Pipeline

### Training Procedure

The proposed architecture is trained using the sparse categorical cross-entropy loss function, which compares the softmax probabilities (\(P_{p}\)) with the ground truth labels (\(T_{p}\)) for each class.
The loss is calculated according to Equation (8):

\[L=-\sum_{p=1}^{N}(T_{p}*\log(P_{p})) \tag{8}\]

In this equation, \(P_{p}\) is obtained using the softmax function to normalize the class probabilities. The loss function assigns smaller values for smaller differences and larger values for larger differences between the predicted and actual values. The goal is to minimize the loss, with a perfect model achieving a loss value of zero. The segmentation network is trained on a split of 80% for the train set (20,982 inputs) and 20% for the test set (6,995 inputs), with each input consisting of 71 images capturing different time frames. During training, the network is optimized using the Adam optimizer with a learning rate of \(10^{-4}\), a batch size of 128, and a weight decay of 0.0001. These hyperparameters are determined through grid search cross-validation. A cosine learning rate scheduler is employed over 25 epochs, and the training utilizes two V100 32GB GPUs in a distributed setup. A stacked bidirectional LSTM with three layers is explored to capture temporal dependencies. The LSTM has a hidden state and output size of 256, and the LSTM block excludes bias terms in linear activations. The self-attention block employs the standard attention mechanism, producing an aggregated representation of 768-dimensional vectors, serving as the base version. The multi-head attention block utilizes six attention heads as the default configuration. Five models are used as the default number of base models in the ensemble paradigm experiments.

### Comparative Analysis of Architectures

Section 4.2 introduces the proposed approach, which achieves the best performance. Table 1 provides a summary of the experiments, utilizing multi-head attention with six attention heads. HRNet-base as the encoder outperforms other encoder variants.
Moreover, incorporating the self-attention mechanism to capture temporal dependencies in the underlying data improves the performance of each individual model. The ensembling approach demonstrates strong performance across the entire dataset. The experimental analysis demonstrates the performance of different encoder architectures with the ESD paradigm for crop mapping tasks. Among the models, HRNet-base achieves the highest performance across all metrics, with an accuracy of 0.975, precision of 0.701, recall of 0.733, F1-score of 0.717, and mIoU of 0.552. The use of self-attention for sequence modeling instead of LSTM improves the performance of all models. This finding highlights the effectiveness of the ESD paradigm and the self-attention mechanism for accurate crop mapping, enabling precise crop segmentation and classification. Figure 4 provides a comparative overview of the improvement in different metrics when modifying the pipeline. It demonstrates that using self-attention instead of an LSTM block better captures temporal dependencies across time frames.

\begin{table}
\begin{tabular}{c c|c c c c c} \hline **Encoder** & **Paradigm** & \multicolumn{5}{c}{**Metrics**} \\ **(with version)** & & **Accuracy** & **Precision** & **Recall** & **F1-score** & **mIoU** \\ \hline VGG19 & ESD & 0.769 & 0.42 & 0.431 & 0.425 & 0.322 \\ ResNet50 & ESD & 0.897 & 0.513 & 0.482 & 0.497 & 0.388 \\ InceptionV3 & ESD & 0.922 & 0.556 & 0.591 & 0.573 & 0.493 \\ DenseNet121 & ESD & 0.906 & 0.571 & 0.573 & 0.572 & 0.449 \\ EfficientNetV2 & ESD & 0.931 & 0.589 & 0.561 & 0.575 & 0.495 \\ HRNet-base & ESD & **0.975** & **0.701** & **0.733** & **0.717** & **0.552** \\ \hline \end{tabular}
\end{table}
Table 1: Performance evaluation among various versions of proposed approach

Furthermore, combining spatially separable convolution and standard convolution in the encoder architecture enables the model to understand the underlying cropland with higher precision and accuracy.
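For reference, the per-class metrics reported throughout these tables can be computed from pixel counts; mIoU is then the mean of the per-class IoU values. A minimal sketch (function name is ours, for illustration; it assumes the class occurs in both masks so no denominator is zero):

```python
def binary_seg_metrics(pred, true):
    """Per-class pixel metrics from flattened 0/1 masks of equal length."""
    tp = sum(1 for a, b in zip(pred, true) if a == 1 and b == 1)
    fp = sum(1 for a, b in zip(pred, true) if a == 1 and b == 0)
    fn = sum(1 for a, b in zip(pred, true) if a == 0 and b == 1)
    tn = sum(1 for a, b in zip(pred, true) if a == 0 and b == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)             # intersection over union
    accuracy = (tp + tn) / len(pred)
    return accuracy, precision, recall, f1, iou

acc, prec, rec, f1, iou = binary_seg_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
assert (acc, iou) == (0.6, 0.5)
```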
Based on the promising results of our proposed approach, we anticipate its generalizability to other datasets in the future.

### Visualization

In this section, Figure 5 provides the visualization of some crop maps generated, comparing the proposed model with the baseline models HRNet and UNet. The proposed model seems to display better-quality crop maps on the ZueriCrop dataset. This serves as a motivation to incorporate Spatially separable convolution in place of the standard convolution of several standard encoder architectures. Moreover, ensembling the base models also shows promising results.

## 6 Ablation Study

In this section, we present the results of various ablation experiments that demonstrate the enhanced performance of the models.

### Baselines

In this section, we present the baseline results of the three proposed architectures discussed in Section 4. In Table 2, "ELD" and "ED" refer to the Encoder-LSTM-Decoder and Encoder-Decoder paradigms, respectively.

**Encoder - LSTM - Decoder Architecture** To establish a baseline performance for spatio-temporal crop mapping and understand the behavior of different encoder architectures, we compare the results on the test set, as shown in Table 2. For all experiments, we use a stacked bidirectional LSTM with three layers. Upon evaluating the performance of various encoder architectures, we find that the HRNet-base encoder significantly outperforms other versions. HRNet maintains multi-resolution inputs by fusing information from multiple resolutions in parallel, enabling the model to capture both fine-grained and coarse information in the image. It also enhances the localization of land patches, resulting in promising crop mapping results.

Figure 4: Comparison of metrics for HRNet-base encoder

**Encoder - Decoder Architecture** We enumerate the evaluation metrics for different encoder-decoder architectures in this subsection.
As mentioned in Section 4.1, we obtain the final crop distribution by taking the mean of the crop maps for each time frame. From Table 2, we observe that UNet++ outperforms UNet. Consequently, we choose HRNet-base as the preferred encoder due to its superior overall performance.

\begin{table}
\begin{tabular}{c c|c c c c c} \hline **Encoder** & **Paradigm** & \multicolumn{5}{c}{**Metrics**} \\ **(with version)** & & **Accuracy** & **Precision** & **Recall** & **F1-score** & **mIoU** \\ \hline VGG19 & ELD & 0.695 & 0.362 & 0.343 & 0.352 & 0.258 \\ ResNet50 & ELD & 0.831 & 0.464 & 0.455 & 0.459 & 0.331 \\ InceptionV3 & ELD & 0.852 & 0.473 & 0.484 & 0.478 & 0.352 \\ DenseNet121 & ELD & 0.834 & 0.461 & 0.469 & 0.465 & 0.337 \\ EfficientNetV2 & ELD & 0.851 & 0.476 & 0.492 & 0.484 & 0.363 \\ HRNet-base & ELD & 0.912 & 0.623 & 0.663 & 0.642 & 0.481 \\ \hline UNet & ED & 0.846 & 0.473 & 0.468 & 0.470 & 0.352 \\ UNet++ & ED & 0.873 & 0.495 & 0.512 & 0.503 & 0.393 \\ \hline \end{tabular}
\end{table}
Table 2: Ablation 1: Baseline Paradigms

Figure 5: Predictions made on the test images of the ZueriCrop dataset

### Spatially Separable Convolution

We evaluate the performance by replacing standard convolution with Spatially Separable Convolution in the encoder architecture. Table 3 illustrates that this change also improves the performance. By capturing features along one dimension before moving to the other, the model effectively captures the contours of the cropland, resulting in better outcomes. The experimental analysis shows that HRNet-base with the ELD paradigm achieves the highest performance in terms of accuracy, precision, recall, F1-score, and mIoU. It outperforms other encoder architectures in accurately classifying crop types and identifying crop boundaries. UNet++ with the ED paradigm also demonstrates competitive performance.
The incorporation of spatially separable convolution improves the performance of both paradigms, highlighting its effectiveness in capturing fine-grained details and spatial relationships. Overall, HRNet-base with the ELD paradigm and UNet++ with the ED paradigm, incorporating spatially separable convolution, are effective for crop mapping tasks, providing accurate and detailed segmentation of crops in satellite images or other remote sensing data.

### Ensemble through Boosting

We explore whether the ensemble strategy improves upon the baseline experiments by employing the AdaBoost algorithm. Table 4 shows that the AdaBoost algorithm indeed enhances the performance of the corresponding baseline architectures. Among the ensembles, the HRNet-base ensemble demonstrates the best performance, as detailed in Section 6.1. The experimental analysis shows that HRNet-base with the ELD paradigm achieves the highest accuracy, precision, recall, F1-score, and mIoU among the encoder architectures and paradigms.
The ensemble strategy improves the performance of all models, with HRNet-base and UNet++ demonstrating competitive results. This highlights the effectiveness of the ensemble strategy for crop mapping tasks.

\begin{table}
\begin{tabular}{c c|c c c c c} \hline \hline **Encoder** & **Paradigm** & \multicolumn{5}{c}{**Metrics**} \\ **(with version)** & & **Accuracy** & **Precision** & **Recall** & **F1-score** & **mIoU** \\ \hline VGG19 & ELD & 0.721 & 0.388 & 0.401 & 0.394 & 0.297 \\ ResNet50 & ELD & 0.852 & 0.479 & 0.468 & 0.473 & 0.358 \\ InceptionV3 & ELD & 0.887 & 0.552 & 0.571 & 0.561 & 0.463 \\ DenseNet121 & ELD & 0.871 & 0.522 & 0.534 & 0.528 & 0.439 \\ EfficientNetV2 & ELD & 0.892 & 0.562 & 0.571 & 0.566 & 0.477 \\ HRNet-base & ELD & **0.941** & **0.662** & **0.691** & **0.676** & **0.524** \\ \hline UNet & ED & 0.876 & 0.542 & 0.577 & 0.559 & 0.479 \\ UNet++ & ED & 0.907 & 0.597 & 0.601 & 0.599 & 0.486 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation 2: Incorporation of Spatially Separable Convolution

\begin{table}
\begin{tabular}{c c|c c c c c} \hline \hline **Encoder** & **Paradigm** & \multicolumn{5}{c}{**Metrics**} \\ **(with version)** & & **Accuracy** & **Precision** & **Recall** & **F1-score** & **mIoU** \\ \hline VGG19 & ELD & 0.747 & 0.402 & 0.401 & 0.401 & 0.297 \\ ResNet50 & ELD & 0.863 & 0.482 & 0.468 & 0.475 & 0.358 \\ InceptionV3 & ELD & 0.895 & 0.531 & 0.571 & 0.550 & 0.463 \\ DenseNet121 & ELD & 0.892 & 0.552 & 0.534 & 0.543 & 0.439 \\ EfficientNetV2 & ELD & 0.921 & 0.582 & 0.571 & 0.576 & 0.477 \\ HRNet-base & ELD & **0.948** & **0.671** & **0.691** & **0.681** & **0.524** \\ \hline UNet & ED & 0.881 & 0.561 & 0.577 & 0.569 & 0.479 \\ UNet++ & ED & 0.917 & 0.602 & 0.601 & 0.601 & 0.486 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation 3: Ensembling
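The boosting scheme described in Section 4.2 assigns each training image an error of \(+1\) when more than 20% of its pixels are misclassified and \(-1\) otherwise. One plausible reading of the resulting sampling-probability update can be sketched as follows (the function name and the use of the standard AdaBoost vote \(\alpha=\frac{1}{2}\ln\frac{1-\epsilon}{\epsilon}\) are our assumptions for illustration, not the paper's exact implementation; the sketch also assumes \(0<\epsilon<1\)):

```python
import math

def boost_update(weights, pixel_error_rates, threshold=0.2):
    """One boosting round with the per-image +/-1 error rule described above:
    an image counts as an error (+1) when more than `threshold` of its pixels
    are misclassified, and as correct (-1) otherwise."""
    errs = [1 if r > threshold else -1 for r in pixel_error_rates]
    # weighted error rate of the current base model
    eps = sum(w for w, e in zip(weights, errs) if e == 1) / sum(weights)
    alpha = 0.5 * math.log((1 - eps) / eps)   # base-model vote (standard AdaBoost)
    # upweight hard images, downweight easy ones, then renormalise
    new_w = [w * math.exp(alpha * e) for w, e in zip(weights, errs)]
    total = sum(new_w)
    return [w / total for w in new_w], alpha

# Four images, one of which has more than 20% of its pixels misclassified.
weights, alpha = boost_update([0.25] * 4, [0.05, 0.30, 0.10, 0.15])
assert weights[1] == max(weights)    # the hard image is sampled more often
```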
Overall, HRNet-base with the ELD paradigm and UNet++ with the ED paradigm, combined through an ensemble strategy, offer accurate and detailed crop segmentation, enabling precise crop classification and boundary delineation.

### Separable Convolution with Boosting

We also see the impact of combining spatially separable convolution at the encoding stage, along with AdaBoost. We find that this improves performance across all baseline encoder architectures. The experimental analysis for Table 5 reveals that HRNet-base with the ELD paradigm achieves the highest accuracy, precision, recall, F1-score, and mIoU among the different encoder architectures and paradigms. The incorporation of spatially separable convolution and the ensembling approach further improves the performance of all models. HRNet-base with the ELD paradigm achieves remarkable results with an accuracy of 0.964, precision of 0.692, recall of 0.717, F1-score of 0.704, and mIoU of 0.547. The UNet++ model with the ED paradigm also demonstrates competitive performance. This demonstrates the effectiveness of the spatially separable convolution and ensembling approach in enhancing crop mapping tasks. Overall, the combination of HRNet-base with the ELD paradigm and UNet++ with the ED paradigm, incorporating spatially separable convolution and employing an ensembling approach, provides accurate and detailed crop segmentation, enabling precise classification and boundary delineation of crops.

### Self-attention versus LSTM

Instead of using LSTM to capture temporal dependencies between cropland representations at different time frames, we employ a self-attention mechanism to better weigh the contribution of each representation. In all experiments listed in Table 6, we utilize multi-head attention with six attention heads. As expected, HRNet-base as the encoder outperforms other encoder variants.
Additionally, incorporating the self-attention mechanism improves the performance of each individual model, enhancing the capture of temporal dependencies in the data.

\begin{table}
\begin{tabular}{c c|c c c c c} \hline **Encoder** & **Paradigm** & \multicolumn{5}{c}{**Metrics**} \\ **(with version)** & & **Accuracy** & **Precision** & **Recall** & **F1-score** & **mIoU** \\ \hline VGG19 & ELD & 0.752 & 0.41 & 0.4 & 0.405 & 0.299 \\ ResNet50 & ELD & 0.881 & 0.49 & 0.464 & 0.477 & 0.361 \\ InceptionV3 & ELD & 0.903 & 0.528 & 0.582 & 0.554 & 0.471 \\ DenseNet121 & ELD & 0.883 & 0.552 & 0.516 & 0.533 & 0.442 \\ EfficientNetV2 & ELD & 0.903 & 0.562 & 0.558 & 0.560 & 0.464 \\ HRNet-base & ELD & **0.964** & **0.692** & **0.717** & **0.704** & **0.547** \\ \hline UNet & ED & 0.889 & 0.568 & 0.585 & 0.576 & 0.484 \\ UNet++ & ED & 0.92 & 0.613 & 0.605 & 0.609 & 0.493 \\ \hline \end{tabular}
\end{table}
Table 5: Ablation 4: Incorporation of Spatially separable convolution and ensembling approach

Its experimental analysis reveals that HRNet-base with the ESD paradigm achieves the highest accuracy, precision, recall, F1-score, and mIoU among the encoder architectures, making it an effective choice for crop mapping. Comparing the ELD and ESD paradigms, the ESD paradigm consistently outperforms the ELD paradigm across various encoder architectures in terms of accuracy, precision, recall, F1-score, and mIoU, indicating its effectiveness for crop mapping tasks. HRNet-base demonstrates superior performance in accurately classifying crop types and identifying crop boundaries compared to other encoders. UNet++ with ED, EfficientNetV2, and InceptionV3 with ESD also show competitive performance, while VGG19 exhibits lower performance. HRNet-base with the ESD paradigm emerges as a powerful choice, offering high accuracy, precision, recall, F1-score, and mIoU, which are crucial for precise crop classification and boundary delineation.
## 7 Conclusion

The aim of this work is to generate high-resolution crop maps based on remote sensing imagery. We use image sequences collected over a period of time, and aim to incorporate this temporal information into the model for more robust estimation of the segmented crop maps. For this purpose, we proposed a deep learning pipeline using an Encoder-Self Attention-Decoder structure, which we named SepHRNet. For each of the parts, we compared multiple baselines on multiple criteria, and chose the best-performing options. In the encoder, HRNet along with spatially separable convolution is used, followed by Multi-Head Self-Attention, followed by a decoder based on Transposed Convolutions which produces the segmented map at the original resolution. Further, we see that the results can improve significantly by building an ensemble of SepHRNet via AdaBoost. The pipeline was tested on the ZueriCrop dataset for mapping 48 different types of crops over Switzerland. The proposed model demonstrated high accuracy, precision, recall, F1-score, and mIoU, making it an effective choice for crop mapping. This work highlights the importance of separable convolution for spatial modeling and multi-head self-attention for temporal modeling. Future work will include scaling up the proposed architecture for mapping of larger regions with more crop-types, and over different regions of the world.
\begin{table} \begin{tabular}{c c|c c c c c} \hline **Encoder** & **Paradigm** & \multicolumn{5}{c}{**Metrics**} \\ **(with version)** & & **Accuracy** & **Precision** & **Recall** & **F1-score** & **mIoU** \\ \hline VGG19 & ESD & 0.723 & 0.381 & 0.352 & 0.366 & 0.271 \\ ResNet50 & ESD & 0.857 & 0.472 & 0.648 & 0.546 & 0.353 \\ InceptionV3 & ESD & 0.871 & 0.513 & 0.509 & 0.511 & 0.394 \\ DenseNet121 & ESD & 0.842 & 0.476 & 0.482 & 0.479 & 0.361 \\ EfficientNetV2 & ESD & 0.875 & 0.494 & 0.511 & 0.502 & 0.403 \\ HRNet-base & ESD & **0.942** & **0.638** & **0.671** & **0.654** & **0.507** \\ \hline \end{tabular} \end{table} Table 6: Ablation 5: Using Self-attention for sequence modeling
2305.05179
Simplicial Hopfield networks
Hopfield networks are artificial neural networks which store memory patterns on the states of their neurons by choosing recurrent connection weights and update rules such that the energy landscape of the network forms attractors around the memories. How many stable, sufficiently-attracting memory patterns can we store in such a network using $N$ neurons? The answer depends on the choice of weights and update rule. Inspired by setwise connectivity in biology, we extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex. Simplicial complexes are higher dimensional analogues of graphs which naturally represent collections of pairwise and setwise relationships. We show that our simplicial Hopfield networks increase memory storage capacity. Surprisingly, even when connections are limited to a small random subset of equivalent size to an all-pairwise network, our networks still outperform their pairwise counterparts. Such scenarios include non-trivial simplicial topology. We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.
Thomas F Burns, Tomoki Fukai
2023-05-09T05:23:04Z
http://arxiv.org/abs/2305.05179v1
# Simplicial Hopfield networks

###### Abstract

Hopfield networks are artificial neural networks which store memory patterns on the states of their neurons by choosing recurrent connection weights and update rules such that the energy landscape of the network forms attractors around the memories. How many stable, sufficiently-attracting memory patterns can we store in such a network using \(N\) neurons? The answer depends on the choice of weights and update rule. Inspired by setwise connectivity in biology, we extend Hopfield networks by adding setwise connections and embedding these connections in a simplicial complex. Simplicial complexes are higher dimensional analogues of graphs which naturally represent collections of pairwise and setwise relationships. We show that our simplicial Hopfield networks increase memory storage capacity. Surprisingly, even when connections are limited to a small random subset of equivalent size to an all-pairwise network, our networks still outperform their pairwise counterparts. Such scenarios include non-trivial simplicial topology. We also test analogous modern continuous Hopfield networks, offering a potentially promising avenue for improving the attention mechanism in Transformer models.

## 1 Introduction

Hopfield networks (Hopfield, 1982)1 store memory patterns in the weights of connections between neurons. In the case of pairwise connections, these weights translate to the synaptic strength between pairs of neurons in biological neural networks. In such a Hopfield network with \(N\) neurons, there will be \(\binom{N}{2}\) of these pairwise connections, forming a complete graph. Each edge is weighted by a procedure which considers \(P\) memory patterns and which, based on these patterns, seeks to minimise a defined energy function such that the network's dynamics are attracted to and ideally exactly settle in the memory pattern which is nearest to the current states of the neurons.
The network therefore acts as a content addressable memory - given a partial or noise-corrupted memory, the network can update its states through recurrent dynamics to retrieve the full memory. Since its introduction, the Hopfield network has been extended and studied widely by neuroscientists (Griniasty et al., 1993; Schneidman et al., 2006; Sridhar et al., 2021; Burns et al., 2022), physicists (Amit et al., 1985; Agliari et al., 2013; Leonetti et al., 2021), and computer scientists (Widrich et al., 2020; Millidge et al., 2022). Of particular interest to the machine learning community is the recent development of modern Hopfield networks (Krotov and Hopfield, 2016) and their close correspondence (Ramsauer et al., 2021) to the attention mechanism of Transformers (Vaswani et al., 2017). Footnote 1: After the proposal of Marr (1971), many similar models of associative memory were proposed, e.g., those of Nakano (1972), Amari (1972), Little (1974), and Stanley (1976) – all before Hopfield (1982). Nevertheless, much of the research literature refers to and seems more proximally inspired by Hopfield (1982). Many of these models can also be considered instances of the Lenz-Ising model (Brush, 1967) with infinite-range interactions. An early (Amit et al., 1985; McEliece et al., 1987) and ongoing (Hillar and Tran, 2018) theme in the study of Hopfield networks has been their memory storage capacity, i.e., determining the number of memory patterns which can be reliably stored and later recalled via the dynamics. As discussed in Appendix A.1, this theoretical and computational exercise serves two purposes: (i) improving the memory capacity of such models for theoretical purposes and computational applications; and (ii) gaining an abstract understanding of neurobiological mechanisms and their implications for biological memory systems. 
Traditional Hopfield networks with binary neuron states, in the limit of \(N\rightarrow\infty\) and \(P\rightarrow\infty\), maintain associative memories for up to approximately \(0.14N\) patterns (Amit et al., 1985; McEliece et al., 1987), and fewer if the patterns are statistically or spatially correlated (Lowe, 1998). However, by a clever reformulation of the update rule based on the network energy, this capacity can be improved to \(N^{d-1}\), where \(d\geq 2\)(Krotov & Hopfield, 2016), and even further to \(2^{N/2}\)(Demircigil et al., 2017). Networks using these types of energy-based update rules are called modern Hopfield networks. Krotov & Hopfield (2016) (like Hopfield (1984)) also investigated neurons which took on continuous states. Upon generalising this model by using the softmax activation function, Ramsauer et al. (2021) showed a connection to the attention mechanism of Transformers (Vaswani et al., 2017). However, to the best of our knowledge, these modern Hopfield networks have not been extended further to include explicit setwise connections between neurons, as has been studied and shown to improve memory capacity in traditional Hopfield networks (Peretto & Niez, 1986; Lee et al., 1986; Baldi & Venkatesh, 1987; Newman, 1988). Indeed, Krotov & Hopfield (2016), who introduced modern Hopfield networks, make a mathematical analogy between their energy-based update rule and setwise connections given their energy-based update rule can be interpreted as allowing individual pairs of pre- and post-synaptic neurons to make multiple synapses with each other - making pairwise connections mathematically as strong as equivalently-ordered setwise connections2. Demircigil et al. (2017) later proved this analogy to be accurate in terms of theoretical memory capacity. 
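For concreteness, the traditional binary Hopfield network discussed above (pairwise Hebbian weights and asynchronous sign updates) can be sketched in pure Python; this toy uses two mutually orthogonal patterns, well within capacity, and is only an illustration, not the simplicial models introduced later:

```python
def hebbian_weights(patterns):
    """Pairwise Hebbian rule: w_ij = (1/N) * sum_mu xi_i * xi_j, with w_ii = 0."""
    N = len(patterns[0])
    W = [[0.0] * N for _ in range(N)]
    for xi in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += xi[i] * xi[j] / N
    return W

def recall(W, state, sweeps=5):
    """Asynchronous sign updates; converges to a nearby stored attractor."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

# Two mutually orthogonal patterns on N = 8 binary neurons.
p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, 1, -1, -1, 1, 1, -1, -1]
W = hebbian_weights([p1, p2])

probe = [-1] + p1[1:]            # p1 with its first bit flipped
assert recall(W, probe) == p1    # the corrupted memory is repaired
```

Note that the negation of each stored pattern is also an attractor of this energy landscape, a well-known symmetry of pairwise Hopfield networks.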
By adding explicit setwise connections to modern Hopfield networks, we essentially allow all connections (pairwise and higher) to increase their strength - following the same interpretation, this can be thought of as allowing both pairwise and setwise connections between all neurons, any of which may be precisely controlled.

Footnote 2: Work by Horn & Usher (1988) studies almost the same system but with a slight modification to the traditional update rule, whereas Krotov & Hopfield (2016) use their modern, energy-based update rule.

Functionally, setwise connections appear in abundance in biological neural networks. What's more, these setwise interactions often modulate and interact with one another in highly complex and nonlinear fashions, adding to their potential computational expressiveness. We discuss these biological mechanisms in Appendix A.2. There are many contemporary models in deep learning which implicitly model particular types of setwise interactions (Jayakumar et al., 2020). To explicitly model such interactions, we have multiple options. For reasons we discuss in Appendix A.3, we choose to model our setwise connections using a simplicial complex. We therefore develop and study _Simplicial Hopfield Networks_. We weight the simplices of the simplicial complex to store memory patterns and generalise the energy functions and update rules of traditional and modern Hopfield networks. Our main contributions are:

* _We introduce extensions of various Hopfield networks with setwise connections._ In addition to generalising Hopfield networks to include explicit, controllable setwise connections based on an underlying simplicial structure, we also study whether the topological features of the underlying structure influence performance.
* _We prove and discuss higher memory capacity in the general case of simplicial Hopfield networks._ For the fully-connected simplicial Hopfield network, we prove a larger memory capacity than previously shown by Newman (1988); Demircigil et al. (2017) for higher-degree Hopfield networks.
* _We empirically show improved performance under parameter constraints._ By restricting the total number of connections to that of pairwise Hopfield networks while using a mixture of pairwise and setwise connections, we show simplicial Hopfield networks retain a surprising amount of improved performance over pairwise networks but with fewer parameters, and are robust to topological variability.

## 2 Simplicial Hopfield networks

### Simplicial complexes

Simplicial complexes are mathematical objects which naturally represent collections of setwise relationships. Here we use the combinatorial form, called an _abstract simplicial complex_, although, to build intuition and visualise the simplicial complex, we also refer to their geometric realisations.

**Definition 2.1**.: Let \(K\) be a subset of \(2^{[N]}\). The subset \(K\) is an abstract simplicial complex if for any \(\sigma\in K\) and any \(\rho\subseteq\sigma\), we have \(\rho\in K\).

In other words, an abstract simplicial complex \(K\) is a collection of finite sets closed under taking subsets. A member of \(K\) is called a _simplex_\(\sigma\). A \(k\)-dimensional simplex (or \(k\)-simplex) has cardinality \(k+1\) and \(k+1\) faces which are \((k-1)\)-simplices (obtained by omitting one element from \(\sigma\)). If a simplex \(\sigma\) is a face of another simplex \(\tau\), we say that \(\tau\) is a _coface_ of \(\sigma\). We denote the set of all \(k\)-simplices in \(K\) as \(K_{k}\). Geometrically, for \(k=0,1,2,\) and \(3\), a \(k\)-simplex is, respectively, a point, line segment, triangle, and tetrahedron.
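These definitions translate directly into code. A minimal Python sketch (the helper names and the hollow-triangle example are ours, not the paper's):

```python
from itertools import combinations

def is_complex(K):
    """Check Definition 2.1: K is closed under taking (non-empty) subsets."""
    K = {frozenset(s) for s in K}
    return all(frozenset(r) in K
               for s in K for k in range(1, len(s))
               for r in combinations(s, k))

def facets(K):
    """Simplices of K which have no coface in K."""
    K = {frozenset(s) for s in K}
    return {s for s in K if not any(s < t for t in K)}

def dim(K):
    """dim(K): a simplex of cardinality k + 1 has dimension k."""
    return max(len(s) for s in K) - 1

# A hollow triangle: all vertices and edges of {0, 1, 2}, but not the 2-simplex
hollow = [{0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}]
```

Here `is_complex(hollow)` holds, the facets are the three edges, and `dim(hollow) == 1`; a bare edge without its vertices, such as `[{0, 1}]`, fails the closure condition.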
Therefore, one may visualise a simplicial complex as being constructed by gluing together simplices such that every finite set of vertices in \(K\) forms the vertices of at most one simplex. This structure makes it possible to associate every setwise relationship uniquely with a \(k\)-simplex identified by its elements, which in our case are neurons (see Figure 1A). Simplices in \(K\) which are contained in no higher dimensional simplices, i.e., they have no cofaces, are called the _facets_ of \(K\). The dimension of \(K\), \(\text{dim}(K)\), is the dimension of its largest facet. We call a simplicial complex \(K\) a \(k\)-_skeleton_ when all possible faces of dimension \(k\) exist and \(\text{dim}(K)=k\).

Figure 1: **A.** Comparative illustrations of connections in a pairwise Hopfield network (left) and a simplicial Hopfield network (right) with \(N=4\). In a simplicial Hopfield network, \(\sigma=\{i,j\}\) is an edge (\(1\)-simplex), \(\sigma=\{i,j,k\}\) is a triangle (\(2\)-simplex), \(\sigma=\{i,j,k,l\}\) is a tetrahedron (\(3\)-simplex), and so on. **B.** Connection weight histograms of \(1\)-, \(2\)-, and \(3\)-simplices in a simplicial Hopfield network. In the binary case, the x-axis range is \([-P/N,+P/N]\). Here, \(N=100\) and \(P=10\), thus the range is \([-0.1,+0.1]\). Note that each dimension shows a similar, Gaussian distribution of weights (although there are different absolute numbers of these weights; see "Mixed diluted networks" in Section 2.2). **C.** Illustration of the hierarchical relationship between elements in the complex, up to \(3\)-simplices, with arrows indicating potential sources of weight modulation or interaction, e.g., between (co)faces or using Hodge Laplacians within the same dimension. Such modulations and interactions (including their biological interpretations) are discussed in Appendices A.2 and A.3.

### Model

A network of \(N\) neurons is modelled by \(N\) spins. Let \(K\) be a simplicial complex on \(N\) vertices. In the binary neuron case, \(S_{j}^{(t)}=\pm 1\) at time-step \(t\). Given a set of neurons \(\sigma\) (which contains the neuron \(i\) and is a unique \((|\sigma|-1)\)-simplex in \(K\)), \(w(\sigma)\) is the associated simplicial weight and \(S_{\sigma}^{(t)}\) the product of their spins. Spin configurations correspond to patterns of neural firing, with dynamics governed by a defined energy. The traditional model is defined by energy and weight functions

\[E=-\sum_{\sigma\in K}w(\sigma)S_{\sigma}^{(t)}\qquad\qquad\qquad w(\sigma)=\frac{1}{N}\sum_{\mu=1}^{P}\xi_{\sigma}^{\mu}, \tag{1}\]

with the static variables \(\xi_{i}^{\mu}\) (\(=\pm 1\)) being the \(P\) binary memory patterns stored in the simplicial weights. As for spins, \(\xi_{\sigma}^{\mu}\) is the product of the static pattern variables for the set of neurons \(\sigma\) in the pattern \(\mu\). Figure 1B shows examples of the resulting Gaussian distributions of weights at each dimension of the simplicial complex. We use these weights to update the state of a neuron \(i\) by applying the traditional Hopfield update rule

\[S_{i}^{(t)}=\Theta\left(\sum_{\sigma\in K}w(\sigma)S_{\sigma\setminus i}^{(t-1)}\right)\qquad\qquad\qquad\Theta(x)=\begin{cases}1&\text{if }x\geq 0\\ -1&\text{if }x<0\end{cases}. \tag{2}\]

When \(K\) is a \(1\)-skeleton, this becomes the traditional pairwise Hopfield network (Hopfield, 1982). In the modern Hopfield case, the energy function and update rule are

\[E=-\sum_{\mu=1}^{P}\sum_{\sigma\in K}F(\xi_{\sigma}^{\mu}S_{\sigma}^{(t)}) \tag{3}\]

\[S_{i}^{(t)}=\text{sgn}\left[\sum_{\mu=1}^{P}\left(F(1\cdot\xi_{i}^{\mu}+\sum_{\sigma\in K}\xi_{\sigma\setminus i}^{\mu}S_{\sigma\setminus i}^{(t-1)})-F(-1\cdot\xi_{i}^{\mu}+\sum_{\sigma\in K}\xi_{\sigma\setminus i}^{\mu}S_{\sigma\setminus i}^{(t-1)})\right)\right], \tag{4}\]

where the function \(F\) can be chosen, for example, to be of a polynomial \(F(x)=x^{n}\) or exponential \(F(x)=e^{x}\) form.
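To make Eq. (4) concrete, here is a minimal NumPy sketch on a toy complex containing all pairs and triples of neurons. The sizes, the cubic choice of \(F\), and the helper names are our illustrative choices, and we read the inner sums as running over the simplices that contain \(i\):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, P = 20, 4                       # toy sizes, far smaller than the paper's N = 100

def F(x):
    return x ** 3                  # one polynomial choice, F(x) = x^n with n = 3

xi = rng.choice([-1, 1], size=(P, N))    # P binary memory patterns

# K: all 1- and 2-simplices; for each neuron i, precompute sigma \ {i}
K = [frozenset(c) for k in (2, 3) for c in itertools.combinations(range(N), k)]
rest_of = {i: [sorted(s - {i}) for s in K if i in s] for i in range(N)}

def modern_step(S):
    """One synchronous sweep of the energy-difference rule, Eq. (4)."""
    out = np.empty_like(S)
    for i in range(N):
        total = 0.0
        for mu in range(P):
            # sum over sigma containing i of xi^mu_{sigma\i} * S_{sigma\i}
            r = sum(np.prod(xi[mu, idx]) * np.prod(S[idx]) for idx in rest_of[i])
            total += F(xi[mu, i] + r) - F(-xi[mu, i] + r)
        out[i] = 1 if total >= 0 else -1
    return out

S = xi[0].copy()
S[:2] *= -1                        # corrupt two bits of pattern 0
for _ in range(5):
    S = modern_step(S)
overlap = abs(int(S @ xi[0])) / N  # 1.0 means pattern 0 was recalled exactly
```

With these small sizes the corrupted cue falls back into the stored pattern within a few sweeps.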
When \(K\) is a \(1\)-skeleton, this becomes the modern pairwise Hopfield network (Krotov & Hopfield, 2016). In the continuous modern Hopfield case, spins and patterns take real values \(S_{j},\xi_{j}^{\mu}\in\mathbb{R}\). Patterns are arranged in a matrix \(\Xi=(\xi^{1},...,\xi^{P})\) and we define the _log-sum-exp function_ (lse) for \(T^{-1}>0\) as

\[\text{lse}(T^{-1},\Xi^{T}S^{(t)},K)=T\;\text{log}\left(\sum_{\mu=1}^{P}\sum_{\sigma\in K}\text{exp}(T^{-1}\Xi_{\sigma}^{\mu}S_{\sigma}^{(t)})\right). \tag{5}\]

The energy function is

\[E=-\text{lse}(T^{-1},\Xi^{T}S^{(t)},K)+\frac{1}{2}S^{(t)T}S^{(t)}. \tag{6}\]

For each simplex \(\sigma\in K\), we denote the submatrix of the patterns stored on that simplex as \(\Xi_{\sigma}\) (which has dimensions \(P\times|\sigma|\)). Using the dot product to measure the similarity between the patterns and spins, the update rule is

\[S^{(t)}=\text{softmax}\left(T\sum_{\sigma\in K}\left(\Xi_{\sigma}^{T}S_{\sigma}^{(t-1)}\right)\right)\Xi. \tag{7}\]

In practice, however, the dot product has been found to under-perform in modern continuous Hopfield networks compared to Euclidean or Manhattan distances (Millidge et al., 2022). Transformer models in natural language tasks have also seen performance improvements by replacing the dot product with cosine similarity (Henry et al., 2020), again a measure with a more geometric flavour. However, these similarity measures generalise distances between pairs of elements rather than sets of elements. We therefore use higher-dimensional geometric similarity measures, _cumulative Euclidean distance (ced)_ and _Cayley-Menger distance (cmd)_. Let \(d_{\rho}\) be the (Euclidean or Manhattan) distance between pattern \(\xi_{\rho}^{\mu}\) and spins \(S_{\rho}^{(t)}\) for pattern \(\mu\) and a face \(\rho\subset\sigma\). Let \(K_{1}^{\sigma}\) be the subset of \(K\) such that all elements in \(K_{1}^{\sigma}\) are \(1\)-simplex faces of \(\sigma\).
We define the cumulative Euclidean distance as

\[\text{ced}(\xi_{\sigma}^{\mu},S_{\sigma}^{(t)})=\sqrt{\sum_{\rho\in K_{1}^{\sigma}}(d_{\rho})^{2}}. \tag{8}\]

We define \(\text{cmd}(\xi_{\sigma}^{\mu},S_{\sigma}^{(t)})\) as the Cayley-Menger determinant of all \(\rho\in K_{1}^{\sigma}\), with distances set as \(d_{\rho}\).

**Mixed diluted networks.** A computational concern in the above models is that the number of unique possible \(k\)-simplices is \(\binom{N}{k+1}\), e.g., with \(N=100\) there are approximately \(9.89\times 10^{28}\) possible \(50\)-simplices, compared to just \(4,950\) edges (\(1\)-simplices) found in a pairwise Hopfield network. If we allow all possible simplices for a simplicial Hopfield network with \(N\) neurons, the total number of simplices (excluding \(0\)-simplices, i.e., autapses) will be \(\sum_{d=2}^{N}\binom{N}{d}\). Simultaneously, there is also an open question as to how many setwise connections are biologically realistic to model. We also note that setwise connections can be functionally built from combinations of pairwise connections by introducing additional hidden neurons, as shown by Krotov & Hopfield (2021). Therefore, we might in fact be under-estimating the total number of 'functional' setwise connections, which may appear via common network motifs or 'synapsembles' (Buzsaki, 2010). Conservatively, we evaluate classes of simplicial Hopfield networks which are low-dimensional, i.e., \(\text{dim}(K)\) is small, and where the total number of weighted simplices is not greater than those normally found in a pairwise Hopfield network, i.e., the number of non-zero weights is \(\binom{N}{2}\). We randomly choose weights to be non-zero, with each weight of a given dimension having equal probability, according to Table 1. (See Appendix A.4 for a small worked example.) Such random networks have previously been studied in the traditional pairwise case as 'diluted networks' (Treves & Amit, 1988; Bovier & Gayrard, 1993a;b; Lowe & Vermet, 2011).
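Returning to the similarity measures of Eq. (8): both ced and cmd reduce to small computations over the \(1\)-simplex faces of \(\sigma\). A sketch, where the bordered-matrix convention for the Cayley-Menger determinant and the helper names are our reading of the definitions:

```python
import itertools
import numpy as np

def d_rho(xi, S, i, j):
    """Euclidean distance between pattern and spins restricted to the edge {i, j}."""
    return float(np.hypot(xi[i] - S[i], xi[j] - S[j]))

def ced(xi, S, sigma):
    """Cumulative Euclidean distance over the 1-simplex faces of sigma, Eq. (8)."""
    return float(np.sqrt(sum(d_rho(xi, S, i, j) ** 2
                             for i, j in itertools.combinations(sigma, 2))))

def cmd(xi, S, sigma):
    """Cayley-Menger determinant of the vertices of sigma, with pairwise
    distances set to the edge distances d_rho."""
    n = len(sigma)
    B = np.ones((n + 1, n + 1))    # bordered matrix: B[0, 0] = 0, border of ones
    B[0, 0] = 0.0
    B[1:, 1:] = 0.0                # squared distances fill the inner block
    for a, b in itertools.combinations(range(n), 2):
        B[a + 1, b + 1] = B[b + 1, a + 1] = d_rho(xi, S, sigma[a], sigma[b]) ** 2
    return float(np.linalg.det(B))

xi = np.array([0.2, 0.9, 0.4, 0.7])   # a stored continuous pattern (toy values)
S = xi.copy()                          # spins sitting exactly on the pattern
sigma = (0, 1, 2)                      # a 2-simplex on neurons {0, 1, 2}
```

When \(S=\xi\) every \(d_{\rho}\) vanishes, so both measures return \(0\); moving any spin away from the pattern makes ced strictly positive.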
Here we study _mixed diluted networks_, since we use a mixture of connections of different degrees. We believe we are also the first to study such networks beyond pairwise connections, as well as in modern and continuous cases.

**Topology.** Different collections of simplices in a simplicial complex can result in different Euler characteristics (a homotopy-invariant property). Table 1 shows this from a parameter perspective via counting only the simplices with non-zero weights. However, even when using the same proportion of \(1\)- and \(2\)-simplices, the choices of which vertices those simplices contain can be different due to randomness. Therefore, the topologies of each network may vary (and so too may their subsequent dynamics and performance). One well-studied and often important topological property in the context of simplicial complexes, homology, counts the number of _holes_ in each dimension. In the \(0\)th dimension, this is the number of connected components; in the \(1\)st dimension, this is the number of triangles formed by edges which don't also have a \(2\)-simplex 'filling in' the interior surface of that triangle; in the \(2\)nd dimension, this is the number of tetrahedra formed by triangles which don't also have a \(3\)-simplex 'filling in' the interior volume of that tetrahedron; and so on. The exact number of these holes in dimension \(k\) is given by the \(k\)_-th Betti number_, \(\beta_{k}\) (see Appendix A.5). We calculate these for our networks to observe the relationship between homology and memory capacity.

### Theoretical memory capacity

**Mixed networks.** Much is already known about the theoretical memory capacity of various Hopfield networks, including those with explicit (Newman, 1988) or implicit (Demircigil et al., 2017) setwise connectivity.
However, we wish to point out a somewhat underappreciated relationship between memory capacity and the explicit or implicit number of network connections - which, in the fully-connected network, is determined by the degree of the connections (see Appendix A.6 for proof).

**Corollary 2.2** (Memory capacity is proportional to the number of network connections).: _If the connection weights in a Hopfield network are symmetric, then the order of the network's memory capacity is proportional to the number of its connections._

What happens when there are connections between the same neurons at multiple degrees, i.e., what we call a mixed network? To the best of our knowledge, the theoretical memory capacity of such networks has not been well-studied. However, we found one classical study by Dreyfus et al. (1987) which showed, numerically, that adding triplet connections to a pairwise model improved attractivity and memory capacity. Most prior formal studies have only considered connections at single higher degrees (Newman, 1988; Bengtsson, 1990). Higher order neural networks have historically considered such mixtures of interactions on different degrees simultaneously (Zhang, 2012), but as regular neural networks (e.g., feed-forward networks), not Hopfield networks. Higher order Boltzmann machines (HOBMs) (Sejnowski, 1986) have also been studied with mixed connections (Amari et al., 1992; Leisink & Kappen, 1999)3. However, HOBMs are unlike Hopfield networks in that they typically have hidden units, are trained differently, and have stochastic neural activations4. Modern Hopfield networks also include an implicit mixture of connections of different degrees5 (but - and see Theorem 1 of Demircigil et al. (2017), which remains unproven - the mixture is unbalanced and not particularly natural, especially for \(F(x)=x^{n}\) when \(n\) is odd).
Therefore, we include the following result demonstrating fixed points, large basins of attraction (i.e., convergence to those fixed points) in mixed networks, and memory capacity which is linear in the number of fully-connected degrees of connections (a proof is provided in Appendix A.6).

Footnote 3: HOBMs also suffer the same problem as we face here, one of having many high-order parameters between the neurons to keep track of. Possibly a factoring trick like in Memisevic & Hinton (2010) for HOBMs could be helpful in simplicial Hopfield networks.

Footnote 4: Despite this, there are equivalences (Leonelli et al., 2021; Marullo & Agliari, 2021; Smart & Zilman, 2021).

Footnote 5: Recall that \(\left(\sum_{i}a_{i}\right)^{b}\) expands into a sum of products of \(b\) of the \(a_{i}\), i.e., implicit interactions of degree \(b\).

**Lemma 2.3** (Fully-connected mixed Hopfield networks).: _A fully-connected mixed Hopfield network based on a \(D\)-skeleton with \(N\) neurons and \(P\) patterns has, when \(N\rightarrow\infty\) and \(P\) is finite: (i) fixed point attractors at the memory patterns; and (ii) dynamic convergence towards the fixed point attractors within a finite Hamming distance \(\delta\). When \(P\rightarrow\infty\) with \(N\rightarrow\infty\), the network has capacity to store up to \((\sum_{d=1}^{D}N^{d})/(2\ln N)\) memory patterns (with small retrieval errors) or \((\sum_{d=1}^{D}N^{d})/(4\ln N)\) (without retrieval errors)._

This naturally comports with Theorem 2 from Demircigil et al. (2017), except here we show an increased capacity in the mixed network, courtesy of Corollary 2.2.

**Mixed diluted networks.** As mentioned earlier, full setwise connectivity is not necessarily tractable nor realistic. Lowe & Vermet (2011) show for pairwise diluted networks constructed as Erdos-Renyi graphs (constructed by including each possible edge on the vertex set with probability \(p\)) that the memory capacity is proportional to \(pN\). Crucial for this result is that the random graph must be asymptotically connected.
This makes sense, given that if any vertex was disconnected its dynamics could never be influenced. Empirically, it does seem that a certain threshold of mean connectivity in pairwise random networks is crucial for attractor dynamics (Treves & Amit, 1988). _Remark 2.4_.: By a straightforward generalisation of Lowe & Vermet (2011)'s result, diluted networks constructed as pure Erdos-Renyi hypergraphs may store on the order of \(pN^{d-1}\) memory patterns, where \(d\) is the degree of the connections. In the case of an unbounded number of allowable connections, Remark 2.4 would suggest picking as many higher-degree connections as possible when choosing between connections of lower or higher degrees in our mixed diluted networks. However, in the bounded case (our case), we are non-trivially changing the asymptotic behaviour in terms of connectivity and dynamics when we use a mixture of connection degrees. We also need to beware of asymmetries which may arise (Kanter, 1988). This makes the analysis of mixed diluted networks not particularly straightforward (also see Section 4). ### Numerical simulations and performance metrics Given the large space of possible network settings, in the main text we focus primarily on conditions listed in Table 1. Additional experiments are also shown in Appendix A.8. 
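Concretely, each condition in Table 1 allocates a fixed budget of \(\binom{N}{2}\) non-zero weights, split between \(1\)- and \(2\)-simplices chosen uniformly at random. A sketch of that construction; the function name and signature are ours:

```python
import math
import random
from itertools import combinations

def sample_mixed_diluted(N, frac_edges, seed=0):
    """Pick the non-zero simplices for one network condition: a total budget
    of C(N, 2) weights, with a fraction frac_edges spent on 1-simplices and
    the rest on 2-simplices, each chosen uniformly without replacement."""
    rng = random.Random(seed)
    budget = math.comb(N, 2)
    n_edges = round(frac_edges * budget)
    edges = rng.sample(list(combinations(range(N), 2)), n_edges)
    triangles = rng.sample(list(combinations(range(N), 3)), budget - n_edges)
    return edges, triangles

# e.g. a 50/50 condition on a small network
edges, triangles = sample_mixed_diluted(20, 0.5)
```

By construction the total number of weighted simplices always equals that of the fully-connected pairwise network on the same neurons.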
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \cline{2-6} \multicolumn{1}{c|}{} & K1 & R\(\overline{1}2\) & R\(\overline{1}\overline{2}\) & R1\(\overline{2}\) & R2 \\ \hline 1-simplices & \(\binom{N}{2}\) & \(0.75\binom{N}{2}\) & \(0.50\binom{N}{2}\) & \(0.25\binom{N}{2}\) & 0 \\ \hline 2-simplices & 0 & \(0.25\binom{N}{2}\) & \(0.50\binom{N}{2}\) & \(0.75\binom{N}{2}\) & \(\binom{N}{2}\) \\ \hline \(\chi\) & \(N-(1/2)C\) & \(N-0.25C\) & \(N\) & \(N+0.25C\) & \(N+(1/2)C\) \\ \hline \end{tabular}
\end{table}
Table 1: List of network condition keys (top row), their number of non-zero weights for 1- and 2-simplices (second and third rows), and their 'functional' Euler characteristic (\(\chi\), bottom row). \(N\) is the number of neurons. \(C=(N-1)N\). For simulation, the number of simplices at each dimension is rounded to the nearest integer.

In our numerical simulations, we perform updates synchronously until \(E\) is non-decreasing or until a maximum number of steps is reached, whichever comes first. When a simulation concludes we compute the _overlap_ (for binary patterns) or _mean squared error (MSE)_ (for continuous patterns) of the final spin configuration with respect to all stored patterns using

\[m^{\mu}=\left|\frac{1}{N}\sum_{i=1}^{N}S_{i}^{(t)}\xi_{i}^{\mu}\right|\qquad\qquad\text{MSE}^{\mu}=\frac{1}{N}\sum_{i=1}^{N}(S_{i}^{(t)}-\xi_{i}^{\mu})^{2}. \tag{9}\]

We say the network recalls (or attempts to recall) whichever pattern has the largest overlap (where \(m^{\mu}=1\) indicates perfect recall) or smallest MSE (where \(\text{MSE}^{\mu}=0\) indicates perfect recall).

## 3 Numerical simulations

### Binary memory patterns

After embedding random binary patterns, we started the network in random initial states and recorded the final overlap of the closest pattern. Table 2 shows the final overlaps for traditional simplicial Hopfield networks (\(N=100\)). Our simplicial Hopfield networks significantly outperform the pairwise Hopfield networks (K1).
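The scoring in Eq. (9) is a one-liner each; `recalled_pattern` below is our small helper for selecting the recalled memory:

```python
import numpy as np

def overlap(S, xi):
    """m^mu of Eq. (9) for binary (+/-1) patterns; 1 indicates perfect recall."""
    return abs(float(np.mean(S * xi)))

def mse(S, xi):
    """MSE^mu of Eq. (9) for continuous patterns; 0 indicates perfect recall."""
    return float(np.mean((S - xi) ** 2))

def recalled_pattern(S, patterns, binary=True):
    """Index of the pattern the network recalls (or attempts to recall)."""
    if binary:
        return int(np.argmax([overlap(S, p) for p in patterns]))
    return int(np.argmin([mse(S, p) for p in patterns]))
```

Note the absolute value in the overlap, so a state and its global sign flip score identically.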
In fact, the R1\(\overline{2}\) model performs as well at \(0.3N\) patterns as the pairwise network performs on \(0.05N\) patterns, a six-fold increase in the number of patterns and more than double the theoretical capacity of the pairwise network, \(\sim 0.14N\) (Amit et al., 1985). Surprisingly, Table 3 shows homology accounts for very little of the variance in network performance.

### Continuous memory patterns

**Energy landscape.** Using Equation 6 and given a set of patterns, a simplicial complex \(K\), and an inverse temperature \(T^{-1}\), we may calculate the energy of network states. To inspect changes in the energy landscapes of different network conditions, we set \(N=10\) and \(P=10\) random patterns. We performed principal component analysis (PCA) to create a low-dimensional projection of the patterns. Then, we generated network states evenly spaced in a \(10\times 10\) grid which spanned the projected memory patterns in the first two dimensions of PCA space. We calculated each state's energy by transforming these points from PCA space back into the \(N\)-dimensional space, across the network conditions at \(T^{-1}=1,2,10\) (Figure 7). At \(T^{-1}=1\), differences between the network conditions' energy landscapes are very subtle. However, at \(T^{-1}=2\) and \(T^{-1}=10\), we see a clear change: those with more \(2\)-simplices possess more sophisticated, pattern-separating landscapes.

**Recall as a function of memory loading.** We tested the performance of our simplicial Hopfield networks by embedding data from the MNIST (LeCun et al., 2010), CIFAR-10 (Krizhevsky and Hinton, 2009), and Tiny ImageNet (Le and Yang, 2015) datasets as memories. We followed the protocol of Millidge et al. (2022) to test recall under increasing memory load as an indication of the networks' memory capacities. To embed the memories, we normalise the pixel values between \(0\) and \(1\), and treat them as continuous-valued neurons, e.g., for MNIST we have \(N=28\times 28=784\) neurons.
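The corruption-and-scoring part of this protocol (pixels normalised to \([0,1]\), a cue equal to a stored pattern plus Gaussian noise of variance \(0.5\), and recall counted as correct when the summed squared difference is below \(50\)) can be sketched as follows; dataset loading is elided and the function names are ours:

```python
import numpy as np

def make_cue(pattern, rng, var=0.5):
    """Query state: the stored pattern corrupted by Gaussian noise of variance 0.5."""
    return pattern + rng.normal(0.0, np.sqrt(var), size=pattern.shape)

def correct_recall(retrieved, target, threshold=50.0):
    """The recall criterion used in the text: sum of squared differences < 50."""
    return float(np.sum((retrieved - target) ** 2)) < threshold

rng = np.random.default_rng(0)
img = rng.random(28 * 28)          # stands in for one normalised MNIST image
cue = make_cue(img, rng)           # the noisy initial state S
```

The fraction of correctly recalled memories is then the mean of `correct_recall` over all tested patterns.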
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline _No. patterns_ & \(0.05N\) & \(0.1N\) & \(0.15N\) & \(0.2N\) & \(0.3N\) \\ \hline K1 & \(0.87\pm 0.18\) & \(0.81\pm 0.16\) & \(0.66\pm 0.10\) & \(0.65\pm 0.10\) & \(0.59\pm 0.08\) \\ \hline R\(\overline{1}2\) & \(0.96\pm 0.10\) & \(0.94\pm 0.14\) & \(0.82\pm 0.20\) & \(0.71\pm 0.17\) & \(0.64\pm 0.13\) \\ \hline R\(\overline{1}\overline{2}\) & \(0.98\pm 0.10\) & \(\mathbf{0.99\pm 0.03}\) & \(0.97\pm 0.10\) & \(0.91\pm 0.15\) & \(0.76\pm 0.16\) \\ \hline **R1\(\overline{2}\)** & \(\mathbf{1\pm 0}\) & \(\mathbf{0.99\pm 0.04}\) & \(\mathbf{0.99\pm 0.05}\) & \(\mathbf{0.98\pm 0.08}\) & \(\mathbf{0.87\pm 0.16}\) \\ \hline R2 & \(\mathbf{1\pm 0}\) & \(\mathbf{0.99\pm 0.18}\) & \(0.94\pm 0.18\) & \(0.74\pm 0.29\) & \(0.53\pm 0.23\) \\ \hline \end{tabular}
\end{table}
Table 2: Mean \(\pm\) standard deviation of overlap distributions (\(n=100\)) from traditional simplicial Hopfield networks with varying numbers (top row) of random binary patterns. K1 is the traditional pairwise Hopfield network. R1\(\overline{2}\) significantly outperforms K1 at all tested levels (one-way t-tests \(p<10^{-11}\), \(F>50.13\)). At all pattern loadings, a one-way ANOVA showed significant variance between the networks (\(p<10^{-20}\), \(F>26.35\)). Box and whisker plots shown in Figure 6.

We initialise \(S\) as one of the memory patterns corrupted by Gaussian noise with variance \(0.5\). After allowing the network to settle in an energy minimum, we measured the performance as the fraction of correctly recalled memories (over all tested memories) of the uncorrupted patterns, where 'correct recall' was defined as the sum of squared differences being \(<50\). In all tests, we used \(T^{-1}=100\). Also see Appendix A.7 for further simulation details. Figure 2 compares a pairwise architecture, K1, with a higher-order architecture, R1\(\overline{2}\). The performance of the K1 networks is comparable to that shown in Millidge et al.
(2022); however, R1\(\overline{2}\) significantly outperforms K1 across all datasets. Since the MNIST dataset is relatively simple and K1 already performs well, the performance improvement is small, albeit significant. In the CIFAR-10 and Tiny ImageNet datasets, the performance improvements are more noticeable, with most distance functions seeing improvements of \(\geq 10\%\) in the fraction of correctly retrieved memories. Also noticeable in the results for CIFAR-10 and Tiny ImageNet (see Figure 2) is the relatively high performance of the ced and cmd distance measures. Indeed, cmd performs as well as or better than the Manhattan distance in our experiments. And both ced and cmd (along with the Euclidean and Manhattan distances) outperform the dot product in CIFAR-10 and Tiny ImageNet at high memory loadings. This further supports the intuition and results of Millidge et al. (2022), that more 'geometric' distances perform better as similarity measures in modern Hopfield networks.

Figure 2: Recall (mean \(\pm\) S.D. over \(10\) trials) as a function of memory loading using the MNIST, CIFAR-10, and Tiny ImageNet datasets, using different distance functions (see legend). Here we compare the performance of modern continuous pairwise networks (top row) and modern continuous simplicial networks (bottom row). The simplicial networks are R1\(\overline{2}\) networks (see Table 1 for information). R1\(\overline{2}\) significantly outperforms the pairwise network (K1) at all tested levels where there was not already perfect recall (one-way t-tests \(p<10^{-9}\), \(F>16.01\)). At all memory loadings, a one-way ANOVA showed significant variance between the networks (\(p<10^{-5}\), \(F>11.95\)). Tabulated results are shown in Tables 6, 7, and 8.

## 4 Discussion

We have introduced a new class of Hopfield networks which generalises and extends traditional, modern, and continuous Hopfield networks. Our main finding is that mixed diluted networks can improve performance in terms of memory recall, even when there is no increase in the number of parameters. This improvement therefore comes from the topology rather than additional information in the form of parameters. We also show how distance measures of a more 'geometric' flavour can further improve performance in these networks. This simplicial framework (in diluted or undiluted forms) now opens up new avenues for researchers in neuroscience and machine learning. In neuroscience, we can now model how setwise connections, such as those provided by astrocytes and dendrites, may improve memory function and may interact to form important topological structures to guide memory dynamics. In machine learning, such topological structures may now be utilised in improved attention mechanisms or Transformer models, such as in Ramsauer et al. (2021). At the intersection of these fields, we may now further study how the topology of networks in neuroscience and machine learning systems may correspond to one another and share functional characteristics, such as how the activity of 'pairwise' Transformer models has shown similarities to activities in auditory cortex (Millet et al., 2022). Could 'setwise' Transformer models correspond more closely? Or to a more diverse range of cell types? These and related questions are now open for exploration, and may lead to improved performance in applications (Clift et al., 2020).

**Convolution operations and higher-order neural networks.** From the perspective of modern deep learning, considering higher order correlations between downstream inputs to a neuron is quite classical. For example, convolutional neural networks have incorporated specialised setwise operations since their inception (Fukushima, 1980; Lecun et al., 1998), and more general setwise relationships have also been introduced in higher-order neural networks (Pineda, 1987; Reid et al., 1989; Ghosh and Shun, 1992; Zhang, 2012).
Although our setwise connections are not explicitly convolutional, they are in one notable sense conceptually similar: they collect information from a particular subset of neurons and only become active when those particular neurons are active in the right way. One of the main differences, however, is that - unlike typical convolution operations - we don't restrict the connections to particular locations or arrangements within the input space. Our results therefore suggest that, in some cases, replacing regular feedforward connections with random convolutions may offer improved performance.

**Improvements and extensions.** Our study focusses on random choices of weighted simplices. What if we choose more carefully? Indeed, it seems quite likely biological setwise connections are not random, and are almost certainly not randomly chosen to replace random pairwise connections. It now seems natural to study how online weight modulations (e.g., based on spectral theories) could generate new connections between Hopfield networks and, e.g., geometric deep learning. Such modulations may have novel biological interpretations, e.g., spatial and anti-Hebbian memory structures may be modelled by strategically inserting inhibitory interactions (Haga and Fukai, 2019; Burns et al., 2022) between higher simplices (and may also model disinhibition).

**Further analytic studies.** Our numerical results suggest diluted mixed networks have larger memory capacities than pairwise networks. In a fairly intuitive sense, this is not particularly surprising - we are adding degrees of freedom to the energy landscape, within which we may build more stable and nicely-behaved attractors. However, we have not yet proven this increased capacity analytically for the diluted case, only given some theoretical indications as to why this occurs and proven the undiluted case.
We hypothesise it is possible to do so using generalised methods from replica-symmetric theory (Amit et al., 1985) or self-consistent signal-to-noise analysis (Shiino and Fukai, 1993), in combination with methods from structural Ramsey theory. The capacity for modern simplicial networks may be on the order of a double exponential in the number of neurons (since, in the limit of \(N\to\infty\), there is an exponential relationship in the number of multispin interactions on top of an exponential relationship in the number of intra-multispin interactions, i.e., both pair-spins and multi-spins can have higher degrees of attraction). This capacity, however, will likely scale nonlinearly with the choice of (random) dilution, e.g., there may be a steep drop in performance around a critical dilution range, likely where some important dynamical guarantees are lost due to an intolerably small number of connections of a particular order. Even higher orders and diluted mixtures of setwise connections may also be studied. Such networks, per Lemma 2.3, will likely improve their performance as higher-degree connections are added (as shown in Appendix A.8). However, and as implied in Section 2.3, the number and distribution of these connections may need to be carefully chosen in highly diluted settings.

### Reproducibility Statement

To reproduce our results in the main text and appendices, we provide our Python code as supplementary material at [https://github.com/tfburns/simplicial-hopfield-networks](https://github.com/tfburns/simplicial-hopfield-networks). We have also provided a small worked example in Appendix A.4 to help clarify computational steps in the model construction. Assumptions made in our theoretical results are stated in Section 2.3 and Appendix A.6.
### Acknowledgements The first author thanks Milena Menezes Carvalho for graphic design assistance with Figure 1, as well as Robert Tang, Tom George, and members of the Neural Coding and Brain Computing Unit at OIST for helpful discussions. We thank anonymous reviewers for their feedback and suggestions. The second author acknowledges support from KAKENHI grants JP19H04994 and JP18H05213.
2302.07911
Giulio Caldarelli
2023-02-15T19:05:28Z
http://arxiv.org/abs/2302.07911v2
**From Reality Keys to Oraclize. A Deep Dive into the History of Bitcoin Oracles**

###### Abstract

Before the advent of alternative blockchains such as Ethereum, the future of decentralization was all in the hands of Bitcoin. Together with Nakamoto himself, early developers were trying to leverage Bitcoin's potential to decentralize traditionally centralized applications. However, Bitcoin being a decentralized machine, available non-trustless oracles were considered unsuitable. Therefore, strategies had to be elaborated to solve the so-called "oracle problem" in the newborn scenario. By interviewing early developers and crawling early forums and repositories, this paper aims to retrace and reconstruct the chain of events and contributions that gave birth to oracles on Bitcoin. The evolution of early trust models and approaches to solving the oracle problem are also outlined. Analyzing technical and social barriers to building oracles on Bitcoin, the transition to Ethereum will also be discussed.

_Bitcoin; Blockchain; Contracts; Oracles; Trust Models; Extrinsic data; Multi-signature; OP_Return_.

## 1 Introduction

_"That's cheating, though, isn't it?...but all of the really interesting complex contracts I can think of require data from outside the blockchain"_ [1]. "Cheating" is how Gavin Andresen provocatively referred to utilizing oracles on the blockchain to run smart contracts. The idea is that, in order to finally utilize the blockchain for something beyond cryptocurrencies, renouncing a degree of decentralization may be considered a fair trade-off. Whether it is right to give up hard-achieved decentralization, and the degree to which it should be renounced in exchange for more interoperability, is yet to be defined [2, 3, 4]. The literature on blockchain oracles is a small niche. Two recent studies show that the total number of academic papers concerning oracles barely exceeds two hundred [5, 6].
The academic and practitioner interest in blockchain oracles rose, in fact, after the 2017 ICO hype, when hundreds of blockchain integration proposals were launched in almost every sector [7]. Since many turned out to be fraudulent or unrealistic, studies arose on the motives behind their infeasibility [8, 9, 10, 11]. An emergent stream of literature guided by the works of Egberts [12], Frankenreiter [13], and Damjan [14] also started to investigate the role of oracles, along with their uses, risks, and legal implications in real-world blockchains. As general awareness of the so-called "oracle problem" increased, many other papers concerning oracle technical structure and classification emerged [2, 15]. The paper by Al-Breiki et al. [16] is one of the first to classify trustworthy blockchain oracle protocols by evaluating their security and the foundations of their trust models. Eskandari et al. [17] and Liu et al. [18] instead focus on oracles used in DeFi. The first provides a theoretical framework to classify them, while the second outlines, by gathering on-chain data, the deviation rate of the different oracle designs. Recent research by Pasdar et al. [19] instead involves a consistent number of oracles. It investigates the data types they can provide, their resistance to Sybil attacks, and their exposure to the so-called "verifier's dilemma." The dilemma concerns the verifier's preference for voting for the outcome that guarantees a reward rather than performing the work required for correctness. It has to be said that, since the academic literature on oracles only started in 2017-18, the whole decentralized infrastructure had already shifted from Bitcoin to Ethereum and other alt-chains by that time. Therefore, these studies mainly involved the infrastructure active and observable in that timeframe and onward, thus reflecting a specific philosophy and belief.
However, the concept of decentralizing applications with blockchain and the use of oracles is intuitively much older [20]. Before the advent of Ethereum, Bitcoin was the leading ecosystem on which early smart contract developers and blockchain enthusiasts experimented with decentralized applications. As data from the real world requires oracles, those had to be primarily theorized and built on Bitcoin. Although research in [12, 20] hinted that oracles on Bitcoin worked with multi-signature wallets, to the best of the author's knowledge, no broader studies can be retrieved on how those protocols were theorized and built, nor on how their trust models worked. Since Bitcoin is much older than Ethereum, it is reasonable to hypothesize that more than one oracle type was created and active on top of its chain. The idea of this paper is that the oracle literature broadly misses its Bitcoin history, which should also include the oracles' origin. For that reason, the oracles' theoretical background and evolution may be biased by an investigation of projects already developed on alt-chains. Research in [5] supports the view of excessive heterogeneity and confusion in oracle definitions and boundaries. Unclarity also emerges in the characteristics of their trust models. As the oracle origin and underlying idea have yet to be defined, it is arguable that those aspects may be clarified by investigating their history further. In the absence of dedicated academic or grey literature, the author opted for an exploratory study to investigate the oracles' origin. The data collection is therefore guided by experts in the field who were among the first to theorize and develop oracles on the Bitcoin blockchain. Their theoretical background and the history of their protocols are outlined to better understand how the oracle concept was born and how it theoretically evolved.
The technical structures are also outlined and compared to show how each oracle's trust model technically evolved. Their history and their protocols will be traced and described. Where possible, the data provided by the experts is double-checked against available written documents, repositories, and online materials, such as emails and forum posts. The research questions of this study are the following:

1. What is the exact origin of blockchain oracles, and how were they theorized?
2. How did early developers face the oracle problem?
3. How have trust models evolved?
4. Which factors mainly contributed to the shift of oracle development to the Ethereum ecosystem?

The paper proceeds as follows. Section two introduces the methodology as well as the data collection, while section three outlines the findings. Section four discusses the findings and answers the research questions. Section five concludes the paper by providing hints for further research.

## 3 Findings

This section provides an overview of early oracle protocols, starting from Reality Keys and ending with Oraclize. Every paragraph is divided into two sub-paragraphs: the first retraces the history of the oracle protocol from its underlying idea to its development and the difficulties faced, while the second technically describes the oracle module. Table 1 provides the list of experts and associated protocol/paragraph. Quotations in each paragraph (unless stated otherwise) are from the relative expert interviewed.

### The origin of blockchain oracles.

In 2012, during a Bitcoin conference in London, Mike Hearn, one of the first Bitcoin developers, declared that he had noticed some unusual pieces of code in Bitcoin. When Mike asked Nakamoto for clarifications, he replied that they were meant to execute "contracts" at a later time.
It appears that these so-called contracts were only visible to those who looked into the Bitcoin code, but proof of their existence can also be found in an email from Nakamoto dated the 9th of March 2011 [21]. In that document, Nakamoto defines contracts as follows: _"by signing, an input owner says 'I agree to put my money in, if everyone puts their money in and the outputs are this'"_. Unfortunately, by the time Nakamoto left, the code to support contracts was insufficient to execute them fully, and other developers had not been informed on how to continue their development. To help solve the problem, Mike Hearn started a BitcoinWiki page concerning those so-called distributed contracts on the 22nd of May 2011, defining them as "a method of using Bitcoin to form agreements with people, anonymously and without the need for trust." The wiki page saw the contributions of many other developers, of whom some probably used their real names and others just pseudonyms. Appendix 1 provides an overview of page contributors and contribution types. A classic example of the contracts described on the page was the promise of later payment, using data from the blockchain (e.g., the timestamp) to determine the exact time to unlock the money. Apart from this, however, a contract such as a "will" was also described. A will contract concerns the event of death and introduced arbitrary data on the blockchain for the first time. Building on that idea, on the 25th of July 2011, the concept of an oracle was added to the wiki page by Mike Hearn, explaining that "as Bitcoin nodes cannot measure arbitrary conditions, we must rely on an 'oracle'". In the same contribution, an oracle was defined as "a server that has a keypair, and signs transactions on request when a user-provided expression evaluates to true" [22]. In the will example described in the wiki, the oracle was the third key owner of an M-of-N multi-signature wallet that signs the transaction when the condition death=true is met.
| Expert | Protocol/Paragraph |
| --- | --- |
| Mike Hearn | Oracles' origin |
| Edmund Edgar | Reality Keys |
| Paul Sztorc | Truthcoin |
| Tomasz Kolinko | Orisi |
| Adam Krellenstein | Counterparty |
| Thomas Bertani | Oraclize |

Table 1: Experts interviewed and dedicated paragraph.

The contract illustrated in figure 1 is meant to work as follows. The creator of the will (e.g., a grandfather for his grandson) would create a transaction spending the output and setting the output to:

<oracle pubkey> <grandson pubkey> 2 CHECKMULTISIGVERIFY <hash> OP_TRUE

It means that the transaction is complete on the grandfather's side, but its spendability is conditional on the output of the script mentioned above. The script requires the two other key owners to sign the transaction when a specific hash is verified. The oracle then accepts the request and receives the expression and a copy of the partially complete transaction, along with the output script. The oracle pubkey would be published on the oracle website, which is meant to be a trusted data source (in this case, one concerning people's deaths). The sentence about the grandfather's death should then be in a form that the oracle can understand (e.g., a hash). Ideally, it may be a hashed form of the string:

has_died('john smith', born_on=1950/01/02)

The oracle then verifies whether the hash of the expression matches the hash in the output script, and if it does, it signs the transaction. Otherwise, it returns an error. Assuming that the grandson has already signed his part of the script, when the oracle successfully signs its part, the grandson can broadcast the contract transaction and claim the money [22, 23]. It must be noted that, in this example, the creator of the will contract decides which oracle can unlock the transaction.
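The oracle's verification step just described amounts to hashing the user-supplied expression and comparing it with the hash committed in the output script. A minimal sketch of that comparison (illustrative only; the function name and the boolean evaluation flag are assumptions, not the wiki's actual code):

```python
import hashlib

def oracle_should_sign(expression: str, committed_hash: str, evaluates_true: bool) -> bool:
    """Sign only if the expression matches the hash committed in the script
    and the oracle's data source says the condition currently holds."""
    digest = hashlib.sha256(expression.encode()).hexdigest()
    return digest == committed_hash and evaluates_true

# The will's creator commits to the hash of the agreed expression when funding the contract.
expression = "has_died('john smith', born_on=1950/01/02)"
committed = hashlib.sha256(expression.encode()).hexdigest()

# The oracle signs only for the exact agreed expression, and only when it evaluates to true.
assert oracle_should_sign(expression, committed, evaluates_true=True)
assert not oracle_should_sign(expression, committed, evaluates_true=False)
assert not oracle_should_sign("some other expression", committed, evaluates_true=True)
```

Committing only a hash keeps the condition compact enough for a Bitcoin output script while still binding the oracle to one exact, pre-agreed expression.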
By that time, the approach was purely theoretical, and Mike Hearn had never developed an actual oracle. While the oracle was known to be a black box, his efforts were toward "how to make that black box a little more transparent." As for why he gave this new concept in the blockchain space the name "oracle", he replied: "The name itself (oracle) is a bit of everything, just like contracts; all these things are metaphors. I think I used it because there is a history of using that term in the field of cryptography, and what I was developing was similar to the concept of random oracle". Hearn further realized that a working Bitcoin wallet was necessary for these new features to be successfully implemented. In 2012-13, however, the wallet market was small and fragmented. In 2014, Bitcoin wallets were even banned by Apple [24]. Aiming for a solution, Mike Hearn started the development of Bitcoin-J, a wallet that could successfully support contracts. Unfortunately, many other difficulties prevented the effective development of contracts and oracles at that time. In Mike's opinion, the following were the most problematic:

* Contracts, as a new type of programming, were not easy to develop. They involved cryptography, which is something programmers are not always very familiar with.
* As developers were building their own wallets, there was an evident lack of interoperability. Programs hardly worked on one wallet and rarely worked on others.
* Many contracts were just proofs of concept, not even integrated into a wallet, and often only executable from the command line. As the final integration into a wallet was difficult, people simply weren't doing it.

Figure 1: Multi-Sig based will Contract.

According to Mike Hearn, apart from being a big experiment, oracles will not be used more broadly in the future. He believes that the real solution lies in the "trusted computing" area, to which he has dedicated himself since leaving the Bitcoin community in 2015.
#### 3.1.1 Nakamoto's "contracts" and considerations on oracles.

Considerations coming directly from Nakamoto regarding transactions involving an external state or arbitrator can be traced back to an email of the 27th of April 2009. The email discussed Mike Hearn's idea of introducing chargebacks to Bitcoin transactions [25]. In 2009, Bitcoin was at its earliest stage, and no scripts had been implemented yet. In that email, Nakamoto stated that if an agent required the possibility of a chargeback, an "escrow" transaction (which was still not implemented) should be used. Furthermore, a third party with the power to decide whether to return or release the money had to be designated. The idea was also to add an expiration date to the escrow, so that the funds would be automatically returned if no option was exerted within the time limit. Interestingly, Nakamoto's original proposal was slightly different. He proposed, in fact, an escrow system in which the bitcoins were either released or burnt. It was a sort of "kill switch" that prevented thieves from gaining any benefit from cheating [26]. However, the community voted against the burning mechanism, opting for a chargeback mechanism instead. The kill switch was thought to excessively penalize the buyer (and also the whole community) by permanently removing bitcoin from circulation [27]. Except for the chargeback example, the idea of a third-party arbitrator cannot be retrieved anywhere else in Nakamoto's writings, even once Bitcoin scripts were finally available. As a matter of fact, in the same email in which Nakamoto describes the idea of contracts, the resolution provided does not involve an arbitrator. In his contract based on multi-sig, all the participants are key owners with the same rights, and the resolution is attained as soon as the required subset of parties signs the transaction. In a later communication, Nakamoto also considered the possibility of broadening the range of applications built on Bitcoin.
He declared that he was planning to build a marketplace "eBay style," built into the client, but with the same review/rating mechanisms of modern intermediary platforms. However, due to the "locked-in nature" of Bitcoin, he found it difficult for those applications to be directly built on top of the chain. Therefore, he shared the idea of utilizing other chains (e.g., sidechains) with more developer-friendly rules, but with the same miners as Bitcoin's. To achieve interconnectedness, he suggested that inputs for the other chains could be data from Bitcoin blocks (e.g., the nonce). The alternative chain Nakamoto referred to was named "BitDNS" at that time. Still, it was only a theoretical approach, and it seems unrelated to existing projects sharing the same name [28, 29]. A few days before his last email [30], Nakamoto also commented on the possibility of Bitcoin scripts not being stateless. His opinion was that if Bitcoin had access to outside data that may change between nodes, it could generate a fork in the chain. An exception is made for information that is always false up to a specific time and permanently true afterward (timelocks). In reply to another email from Mike Hearn, Nakamoto also provided his view concerning the involvement of trusted third parties, such as Google, in managing users' accounts. Nakamoto suggested that even in the presence of trusted third parties, contracts should be executed in a trustless way. This can be attained if the trusted third party signs the transaction before the contract is created. As this was the last contribution from Nakamoto on Bitcoin and external data, no information could be retrieved on how he wished to put arbitrary data on sidechains.

### Bridging Reality to the Bitcoin blockchain: Reality Keys

Fascinated by the episode of the WikiLeaks banking blockade, Edmund Edgar came across discussions about peer-to-peer currencies, eventually discovering Bitcoin in 2010.
His first contribution to the ecosystem concerns the attempt to introduce Bitcoin as an official currency in OpenSim (an open alternative to Second Life). He thought that since OpenSim was a decentralized life simulator, it could have benefitted from a form of decentralized currency. While working on this integration, he came across a 2012 video of Mike Hearn in London, in which he talked publicly about the possibility of using Bitcoin technology for real-world applications. He then started following many discussion threads on BitcoinTalk about the need for a trusted oracle for Bitcoin in order to build real-world applications [31]. He noticed, however, that there had yet to be an official practical implementation of this idea, so he started working on his own. In principle, he was working in parallel on both the OpenSim currency and an oracle for Bitcoin, but since OpenSim was not gaining attention and oracles were a more interesting subject, he decided to focus entirely on what later became Reality Keys. The first lines of code for Reality Keys were written in late 2013, and the project was released early in 2014. Being the first official working oracle, Reality Keys was not influenced or inspired by other projects. Its ecosystem was built in response to the need of that time to create a bridge from the blockchain to the real world, and to the strict Bitcoin technical constraints. Congesting and eventually breaking the chain with these new applications was the primary concern; therefore, Edmund thought about making his oracle ecosystem work entirely off-chain. Furthermore, given the technical limitations of Bitcoin, and to adhere to the available scripts, the oracle was set to answer only binary (yes/no) questions. In the first implementation of Reality Keys, the research for the data and the publication of the correct answer were done directly by the project team, at times by Edmund himself.
However, the users who asked the questions knew the data source from which the information was taken. Therefore, despite the system being relatively centralized and not automated, there was a certain degree of transparency. In this regard, however, Edmund specified that although his specific system design was centralized, he hoped the whole blockchain oracle ecosystem would eventually be decentralized. He expected, in fact, many other competitors to show up in the short term. If Reality Keys was just one of the available oracles, users could freely select among the most trusted and reliable alternatives. When the oracle system was ready and running, it served different kinds of requests, from bitcoin prices to soccer scores. There was, however, no specific application managed by Reality Keys until the team built a sponsorship integration. This used the RunKeeper API to promote walks and marathon-related events. Personal challenges concerning walks and runs with humanitarian aims were also sponsored. Someone could, for example, challenge himself so that if he didn't run a certain number of kilometers by a specific date, he had to send some BTC to a charity. Thanks to a system of APIs, Reality Keys could provide information on whether the user had reached his goal. With these new implementations, the team also had to face new challenges. The integration with RunKeeper required, in fact, a dedicated website for the application. The user was then supposed to generate a key, and the website eventually had to be able to perform a transaction with that key. Since no working wallet such as MetaMask was available for Bitcoin, nor coding tools, the whole implementation had to be written from scratch. In the end, they managed to complete the website, but as Edgar declared, "this is very hard, and this is very hard to do securely."
Unfortunately, the absence of a wallet supporting contracts and of development tools remained almost unchanged on Bitcoin. Since the system was inflexible, there was not much demand for contracts, and since there was not much demand, the interest in building those applications was eventually scarce. The system then remained almost unchanged until the advent of Ethereum, with only binary questions available, though improvement attempts were made on the range of available data types. Edmund added Freebase as a data source for Reality Keys. Freebase allowed a very wide variety of queries to be made using the structured data system run by Google. When Ethereum started, Edmund went to Devcon 1 (November 2015), where he described how Reality Keys could be used in Ethereum. Being entirely built off-chain, Reality Keys was not wholly tied to Bitcoin and could be implemented on Ethereum immediately. At Devcon 1, Edgar also met Thomas Bertani from Oraclize. From that point onward, the development of Reality Keys switched to Ethereum, mainly for three reasons. First, the Freebase website was shut down by Google. Since it constituted one of Reality Keys' primary data sources, its shutdown ultimately affected the service's overall utility. Second, the outcome of the block size war made Bitcoin more expensive to use for contracts, ultimately decreasing demand for the Reality Keys service. Lastly, the implementation on Ethereum allowed the switch from a yes/no-based platform to a system of signed data of any type to be used directly on-chain. Since the platform radically changed, the project was rebranded first to Realitio and finally to Reality.eth. Besides the technical differences, the new Ethereum version also had a different theoretical approach. What Reality Keys offered on Bitcoin was simply a bridge between real-world data and on-chain contracts: it grabbed existing data from trusted sources (e.g., Freebase) and made it available.
However, the team realized there was a need for data that didn't exist anywhere. Therefore, what Reality.eth was meant to do on Ethereum was to provide data that could not be pulled from APIs or websites. The philosophy of the platform thus switched from delivering data to creating data. Two factors mainly drove the design change: **1)** a trusted data source (API) could not be found for some specific applications; **2)** other projects became specialized in the bridging process (e.g., Oraclize), and it was not useful to provide a similar service. The project then evolved into its current version, in which it can answer any human-language question.

#### 3.2.1 How Reality Keys (Bitcoin) worked.

The first thing to consider in order to understand Reality Keys' mechanics is that, since it was an off-chain oracle, there was no direct interaction with Bitcoin. However, the way it worked was inherently influenced by Bitcoin's technical constraints and the scripts available at the time it was designed. The following example provides an overview of how Reality Keys could be used as an oracle on Bitcoin. Consider two agents, Alice and Bob, who wish to bet on the Bitcoin price. Alice bets that by the 1st of June 2014, the price of BTC will reach or exceed 4005, while Bob, on the contrary, bets that by the same date, the price of Bitcoin will be lower than 4005. The agents can solve the bet themselves or entrust it to a third-party oracle such as Reality Keys. If they decide to use the Reality Keys oracle, they must submit a simple binary (yes/no) question on the oracle website, asking whether by the 1st of June 2014 the bitcoin price is above or equal to 4005, which corresponds to Alice's bet. If the oracle replies no, then obviously Bob wins the bet. Both agents know how, and from which source, Reality Keys draws the answers, and trust both the source and the Reality Keys project. Otherwise, they would freely opt for another contract resolution method.
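A bet like Alice and Bob's reduces to a single yes/no evaluation at the agreed date. A toy sketch of that evaluation (illustrative; how the price is actually observed is left out, since in practice it came from the externally agreed data source):

```python
def resolve_binary_question(observed_price: float, threshold: float) -> str:
    """Answer the binary question 'is the price at or above the threshold?'.
    Returns 'yes' or 'no', the only two outcomes the oracle supported."""
    return "yes" if observed_price >= threshold else "no"

# Alice bet 'yes' (price >= 4005 on the agreed date), Bob bet 'no'.
assert resolve_binary_question(4100.0, 4005.0) == "yes"  # Alice's outcome
assert resolve_binary_question(3900.0, 4005.0) == "no"   # Bob's outcome
```

Restricting answers to this binary form is what let the resolution be encoded with the Bitcoin scripts available at the time, one key pair per outcome.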
Reality Keys creates two key pairs (public and private), one for the "yes" and one for the "no" answer. The two public keys are then published on its website. When the selected date comes (the 1st of June 2014), the Reality Keys system checks the price of Bitcoin on the proposed data source, but the result is published in two stages. First, the system automatically publishes the result (not the key) on the website and waits for an objection period. During this objection period, an agent can ask for a "human check" of the result, offering a tip of 10 mBTC [32]. Once the objection period has elapsed, the team publishes the correct private key and deletes the private key corresponding to the false outcome. From a technical point of view, the role of Reality Keys ends with the publication of the correct private key. However, it is vital to understand what happens, or can happen, between the publication of the public keys and of the private key on the Bitcoin network. Although some examples were offered on the Reality Keys website, there was actually no specific or "standard" way of implementing their oracle service. The choice of implementing a standard or non-standard multi-signature transaction, or using a script (P2SH) along with any specific conditions, was totally in the hands, and under the responsibility, of the users. Depending on the selected choices, different costs, technical difficulties, or security standards would be obtained, for which Reality Keys was not responsible. One of the few still-available demo scripts (realitykeysdemo.py) implements Reality Keys by creating a conditional contract on the outcome of the oracle using pybitcointools [33]. The commands mentioned below refer explicitly to the script described in the repository. From the user's side, the steps are as follows:

1. Alice creates a key pair with the command below and sends the public key to Bob. She then funds her address using any Bitcoin client. Bob does the same. ./realitykeysdemo.py makekeys
2. Alice and Bob register a Reality Key and get the ID <reality_key_id> from the URL.
3. In case one of the two parties (Alice or Bob) disappeared before completing the transaction, the other party could get the money back from the temporary address with the command: ./realitykeysdemo.py pay <address> -a <amount> -f [<fee>]
4. Alice creates a P2SH address spendable by combining (Alice key + reality key-yes) or (Bob key + reality key-no). Afterward, she creates a transaction spending the contents of both her and Bob's temporary addresses to the P2SH address, using her private key. The following output is then sent to Bob for him to sign and broadcast. ./realitykeysdemo.py setup <reality_key_id> <yes_winner_public_key> <yes_stake_amount> <no_winner_public_key> <no_stake_amount>
5. When Bob receives the partially signed transaction, he recreates it to check whether the output is the same. If everything is as expected, he signs the transaction and broadcasts it.
6. When the result is issued, whoever wins the bet, Alice or Bob, can execute the following script to unlock the funds from the contract and send them to another address of their choice. ./realitykeysdemo.py claim <reality_key_id> <yes_winner_public_key> <no_winner_public_key> -f [<fee>] -d [<destination_address>]

The procedure described above is now deprecated and is no longer available due to the transition to Reality.eth. The new oracle works under different logic and premises and, being no longer tied to Bitcoin, its analysis goes beyond the scope of this research.

### Enabling prediction markets on Bitcoin with Truthcoin

On the 26th of November 2012, the Commodity Futures Trading Commission (CFTC) claimed that intrade.com, a prediction market platform, was interfering with the CFTC's role of policing market activity and protecting market integrity [34]. Probably in response to this, on the 23rd of December of the same year, intrade.com closed all U.S.-based customer accounts.
On the 10th of March 2013, InTrade ceased all operations worldwide [35]. To Paul Sztorc, an expert in prediction markets, it was disappointing but not unexpected. Paul was interested in Bitcoin at the time and resolved to find a way to leverage Bitcoin technology to launch an open and uncensorable prediction market. Other than the markets themselves, it would have included a new peer-to-peer "oracle" to resolve them without trusted third parties. At that time, Reality Keys was a reliable oracle in development, but Sztorc was concerned that its system could be manipulated for information of high value. Alternatives based on multi-sig were also not viable in the long run: "I was convinced that multi-sig was not the solution to the oracle problem. If the oracle problem is like sending a man to the moon, using a multi-sig is like trying to do it with a catapult". Emerging projects were also more oriented to the idea of data feeds, that is, a constant update of data to the blockchain. Sztorc's approach was, however, the opposite: "we don't need a data feed, we don't need a frequent check...we only need to check if some information is true at a certain point". For that reason, although in principle not interested in oracles, he developed his own (Truthcoin) by the end of 2013, publishing the first version of the whitepaper in early 2014. The Truthcoin whitepaper and the project itself were influenced by what was called the "blocksize war", a fierce dispute between Bitcoin developers over block size growth, and by the emergence of numerous alternatives to Bitcoin of dubious value. As the intention was to formalize the Truthcoin idea and then have some other group do the actual development, Sztorc never launched the project. He could not take the risk of his ideas being manipulated by charlatans or of his project being erroneously labeled as a scam.
Therefore, the Truthcoin whitepaper was written in a highly scrupulous, detailed way, so that debates such as the block size war could never happen on his project. Furthermore, to avoid any association with alt/scam coins, he strictly adhered to Bitcoin and Nakamoto's ideas. In light of this, Truthcoin was planned to be developed as a Bitcoin sidechain. Besides being proposed by Nakamoto, sidechains were also valuable because they made it possible to avoid using complex Bitcoin scripts. In a script design of prediction markets, "each bitcoin transaction would be like an enormous computer program". The script's length is due not only to the application's data but also to the information on how to digest that data. Although better programmable, an all-purpose blockchain like Ethereum would have run into the same limitations. A dedicated sidechain was then seen as a better solution, since it already knows how to process the data and only needs minimal inputs to encode each user action. All the required code is preloaded, and the full node already knows where to find and how to process the incoming data. Although theorized and discussed in 2010, with Nakamoto's help, sidechains were still underdeveloped [29]. The first practical idea of a two-way-peg sidechain was discussed in December 2013 by Luke Dashjr. Together with others, he released the Blockstream whitepaper in October 2014, a system to enable blockchain innovation via pegged sidechains [36]. Sztorc started to develop a sidechain concept ("Drive Chain") in 2014, but being alone and aware of the work of Luke Dashjr, he decided to focus on other aspects of the Truthcoin project, hoping to use the Blockstream sidechain once completed. Therefore, inspired by the work of Robin Hanson, he refined Truthcoin's logarithmic market scoring rule (LMSR) [37]. Concerning Hanson's contribution, Sztorc stated: "each buy and sell happens unilaterally and atomically, so it was perfect for the blockchain."
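Hanson's LMSR mentioned above prices each outcome from the quantities of shares outstanding, and a trade costs the difference of a cost function before and after the trade. A minimal sketch of the rule (illustrative only, not Truthcoin's implementation; b is the market maker's liquidity parameter):

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_price(quantities, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    weights = [math.exp(q / b) for q in quantities]
    return weights[i] / sum(weights)

# A fresh binary market: both outcomes start at price 0.5.
q = [0.0, 0.0]
assert abs(lmsr_price(q, 0) - 0.5) < 1e-9

# Buying 10 'yes' shares costs C(q_after) - C(q_before) and raises the 'yes' price.
cost = lmsr_cost([10.0, 0.0]) - lmsr_cost(q)
assert cost > 0
assert lmsr_price([10.0, 0.0], 0) > 0.5
```

Because each buy or sell is a single deterministic update against the cost function, no counterparty matching is needed, which is what makes the rule fit a blockchain where, as Sztorc put it, each trade happens unilaterally and atomically.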
Apparently, the LMSR also inspired what are now called Automated Market Makers (AMMs) on Ethereum [38]. In 2015, the Truthcoin software was almost complete, but unfortunately, by that time, it was clear that the Blockstream sidechain project was not going to succeed in the short run. Therefore, Sztorc switched again to the development of a Bitcoin sidechain, "Drivechain" (spelled without a space this time), of which an advanced version was published in November 2015, so that it could benefit not only his now-rebranded project Hivemind but also the whole Bitcoin community.

#### 3.3.1 How the Truthcoin Oracle works.

In its whitepaper, Truthcoin is described as a "proof-of-work sidechain that collects information on the creation and state of Prediction Markets (PMs)" [39]. The Truthcoin protocol exploits the concept of "salience". The Oxford dictionary defines salience as the quality of being particularly noticeable or important [40]. Salient information is something that should be well known by anyone. The solution proposed in Truthcoin to achieve salience is based on time. The idea is that information is certain and true after a certain amount of time. Instead of providing a piece of information as soon as it is known, the idea is to provide it at a point where it is undoubtedly certain. Technically, it is organized as follows. Two coins are present on Truthcoin: CashCoins (CSH) and Votecoins (VTC). CSH is pegged 1:1 to bitcoin and allows users to create, buy, and sell PM shares. Votecoins represent each user's reputation, are tradable, and pay dividends over time. VTC allows users to vote on PM decisions and collect PM fees. The ownership of VTC can only change through voting activity. In the whitepaper, the totality of voters is referred to as a "corporation". The concept behind this is that the reputation of the entire system is more relevant than the individual reputation of each member.
Votecoins are not mined but are proportionally shared among voters, in such a way that if someone acquires some Votecoins, someone else has fewer, as the total amount remains constant. Two types of decisions are supported on Truthcoin, Binary (0, 1) and Scalar (Xmin, Xmax). A third state (.5) identifies decisions that are non-resolvable or confusing. Four entities are present on the platform:

* **Authors**: users that create a prediction market and provide initial liquidity. The difficult work of the author lies in finding a market that may attract many users and identifying a decision that will be well known to the voters after a specific time.
* **Votecoin owners**: users that vote on decisions. Their main task is to maintain or increase their reputation.
* **Traders**: users that trade on any PM; they are the customers of the platform.
* **Miners**: those who mine blocks on the sidechain. Being a Bitcoin sidechain that allows merged mining (hash reuse), Bitcoin miners can mine sidechain blocks at a negligible cost.

The resolution of a market, and the decision on its "true" outcome described in Figure 2, proceeds as follows. An Author adds a decision, specifying the topic and the time of resolution, and waits for the transaction to be included in a block. When a decision is added, the Author can also add a market, providing initial liquidity, and again waits for the transaction to be included in a block. When a market is added, trading begins. The market can be advertised so that users buy and sell the shares of the different market states (such as "yes" and "no"). Eventually, the event occurs and becomes "observable". After this, when the time specified by the creator has passed, the decision is considered "mature". The set of all mature decisions is called a "ballot". Staking their tokens, owners of Votecoins are called to vote on all the decisions in the ballot, and when votes are revealed, the Votecoins staked are frozen.
The decision is then resolved according to the consensus algorithm, which also reallocates VTC. After a decision is resolved, a waiting time of a week starts. Once the waiting time expires, another phase starts in which miners can veto the "resolved ballots". If more than 50% of the blocks of this period veto the ballot, then all the decisions inside the ballot must be re-voted. When all the above-mentioned phases are concluded, the redemption phase starts, in which all the winning shares are given a price and users can redeem them for CSH. Crucial for the oracle's good outcome is the role of the creator, who must choose an event whose outcome will be "salient" at a certain point in time, and of the voters, who must correctly predict the answer of the majority of voters. If an event is not salient, voters will not be able to vote, leading to an unresolved market. Otherwise, if voters are incapable of coordinating, their tokens will be slashed. Failing to report, or reporting outside the accepted value range, results de facto in a slash. The amount of tokens slashed is proportional to the distance of the reported value from the one that is identified as the true outcome. Finally, the Truthcoin ecosystem can be broken if someone obtains 51% of the corporation. That means being able to control 51% of the system's economic value, which is considered unlikely to happen.

### Multiple independent oracles on Bitcoin: Orisi

Tomasz Kolinko has been interested in Bitcoin since its early days in 2012. He had the idea of launching a stablecoin that could be transacted on the Bitcoin network. However, to build a stablecoin, a data feed was necessary to constantly update its exchange rate. By that time, the only available and known oracles were Reality Keys and Truthcoin. Reality Keys was an already operating and reliable oracle project. However, in its early versions, it did not offer the possibility of a data feed, and it was mainly oriented to one-off events such as election results.
The other oracle, Truthcoin, was still under development and, similarly to Reality Keys, was more oriented to one-off events than data feeds. For that reason, he decided to develop his own decentralized oracle, Orisi, based on multi-signature. He also added an entry for this concept in the "contract" BitcoinWiki page on the 9th of June 2014. In his idea, the oracles were meant to be trusted entities from the financial world (e.g., banks and other financial institutions) sending data about real-world asset prices. So, instead of adding only one oracle key, he wanted to add multiple keys to make the oracle as decentralized as possible. However, with decentralization based on multi-sig, technical limits prevented the number of oracles from being as large as Kolinko had initially in mind. The m-of-n multi-sig protocol is in fact not entirely customizable, and there is a limited set of key combinations from which users can choose. The standard multi-signature was two out of three keys, and more complex ones allowed a maximum of up to 15 keys. This structural limitation made it impossible to add more reporters to the oracle, dramatically limiting the functionalities of Orisi. Unfortunately, Kolinko encountered another issue in the development of Orisi: the scripts of the Orisi oracle were unlikely to be mined, for two inherent reasons.

1. First, there was an economic disincentive to mine Orisi transactions because the scripts were larger than usual. As they occupied the weight of many simple Bitcoin transactions, miners could collect more fees by selecting other transactions instead of Orisi's. Therefore, even though Orisi transactions paid slightly higher fees, miners were unlikely to mine the scripts voluntarily.
2. Second, and most important, by that time scripts were not a common transaction type.

Figure 2: Truthcoin market resolution phases [39].
Pay-to-script-hash was introduced in 2012, and part of the community was not keen on inserting scripts on Bitcoin. Many wanted to keep it as a payment system only. Miners feared that processing scripts on the Bitcoin network could have broken the chain and altered the payment system. Therefore, the scripts were simply not selected by the miners and were left in the mempool. If, after some time, a transaction was not mined, it was automatically rejected. It is not generally well known, but transactions on the Bitcoin blockchain are not only divided into valid and invalid. A valid transaction is one that complies with the rules of the protocol and whose outputs total less than or equal to its inputs. A transaction is instead invalid if it violates protocol rules or has outputs higher than its inputs. However, among valid transactions, miners can exert a sort of "veto", by which they can arbitrarily decide which transactions to put in their block. Censorship resistance is nevertheless guaranteed, since if a miner refuses to put a transaction in a block, there will be others that will insert it so that it is eventually mined. The chance of all miners colluding to reject a specific transaction is ideally remote. However, there were also transactions considered "non-standard" (e.g., multi-sig above three keys) that, although perfectly valid, were unlikely to be mined [41]. Even though the miners did not deliberately collude to reject those transactions, they were so unusual that miners naturally decided not to include them. It was a sort of Schelling point [42]. Given the fact that Orisi scripts were something quite new on Bitcoin, despite being totally legitimate, Orisi transactions were generally not mined. To be able to have them mined, the Orisi team had to search for an agreement with a mining pool.
Explaining the potential of the oracle and the underlying stablecoin project, they managed to involve Eligius Pool, which had around 7% of the Bitcoin hashrate (as of June 2014). Having 7% of the hashrate meant that an Orisi transaction had a 7% chance of being mined in each block, which resulted in one transaction every 8-15 blocks on average. Finally, due to the multi-signature limitations and the difficulties in including Orisi transactions in blocks, the Orisi project was abandoned. The multi-signature limits prevented more honest and trusted oracles from joining the project, and the resulting frequency of updates (every one or two hours) made it impossible for Orisi to serve as a price feed for a stablecoin.

#### 3.4.1 How the Orisi oracle worked.

Although the initial idea was to create a price feed, the Orisi whitepaper showed a far more ambitious project. The website shows active or planned support for timelock verdicts, a BTC price feed, website Boolean/integer values, dedicated feeds (e.g., a weather feed), and arbitration support. The key innovation of the Orisi oracle, compared to previous proofs-of-concept and proposals based on multi-sig, was to add a "set" of independent oracles. The idea was that it would be difficult to bribe more than half of the oracles and that, being different entities, they would implement different hardware and applications, thus reducing the chance of all being hacked. A list of trusted oracles was proposed on the platform website, but other trusted nodes could also be selected by the users. The protocol also implemented the Bitmessage protocol instead of direct IP communication, to protect the identity of oracle nodes and to prevent spam thanks to a proof-of-work mechanism [43]. The majority of oracles needed to agree to perform a transaction and, given the multi-signature limitations, they could in theory be a maximum of 8 out of 15. In practice, however, not all the keys could belong to oracles.
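The reason is that the oracles alone must never be able to move the funds, so extra agent keys have to be added, which quickly exhausts the 15-key limit. A minimal sketch of this key arithmetic, in Python (the function names are mine; the construction follows the 1+(m of n) → (n+1 of 2n-m+1) scheme from the whitepaper):

```python
# Sketch of the key arithmetic that constrained Orisi (illustrative names).
# To keep m-of-n agreement among n oracles while guaranteeing that the
# oracles alone can never move the funds, the address holds n oracle keys
# plus (n - m + 1) agent keys, with a threshold of n + 1.

MAX_KEYS = 15  # practical multi-sig key limit on Bitcoin at the time

def expanded_multisig(m: int, n: int):
    # Turn 1 + (m of n) into (n + 1) of (2n - m + 1).
    threshold = n + 1
    total_keys = 2 * n - m + 1  # n oracle keys + (n - m + 1) agent keys
    return threshold, total_keys

def fits_on_bitcoin(m: int, n: int) -> bool:
    _, total = expanded_multisig(m, n)
    return total <= MAX_KEYS
```

The whitepaper's example works out as `expanded_multisig(4, 7) == (8, 11)`: seven oracles with 4-of-7 agreement need an 8-of-11 address, and even all seven oracle signatures fall short of the threshold of 8, so at least one agent key is always required to spend.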
For example, if a multi-sig wallet required 4 of 7 keys and all seven keys belonged to oracles, then the oracles alone could decide to send the funds to an arbitrary address. Therefore, to have seven oracles, at least an 8-of-11 multi-sig address had to be created, of which four keys belonged to the agents. Thus 1+(m of n) is turned into (n+1 of 2n-m+1). That is another factor that, given the limit of 15 keys, further constrained the usability of Orisi. The following example (Figure 3) clarifies how Orisi differs from previous oracles based on multi-sig. Alice promises Bob that if candidate A wins the election, she will give him 10 BTC. Both agree that the condition for the payment is that a specific website declares the election of candidate A. They then both agree on a set of 7 oracle nodes. Alice then deposits 10 BTC in a multi-sig wallet that acts as a "safe" until the oracles decide to forward the funds to Bob or to return them to Alice (in case candidate A loses the election). To do so, Alice creates an "unlock" transaction to forward the funds from the safe and pays the fees to the oracles and the Orisi project. The oracles then verify the transaction and the validity of the request. If valid, they add the transaction and notify the agents. If all the oracles acknowledge the validity of the transaction, Alice transfers the 10 BTC to the wallet for the contract to finally become active. If candidate A wins the election, then as soon as the oracle nodes notice the information on the website, they sign the transaction, which is also broadcast through Bitmessage. Once enough oracles sign the transaction, Bob can also sign it with his keys and broadcast it to the Bitcoin network to finally unlock the funds.

### Bitcoin oracles through meta-chains: Counterparty

In 2012, J.R. Willet published "The second Bitcoin Whitepaper."
The document theorized the launch of Mastercoin, whose idea was to use Bitcoin as a protocol layer on top of which higher-level protocols could be built [45]. Instead of having multiple blockchains, the Bitcoin blockchain could be used as a foundation layer to launch new currencies, even with experimental new rules [46]. Willet is also well known because, to fund Mastercoin, he launched the first-ever ICO in 2013, raising 4740 BTC (nearly $500K at that time) [47]. Mastercoin (rebranded to Omni in 2015) obtained moderate attention, but the protocol's development was slow at launch despite the hype and the successful ICO. When Mastercoin was launched, the only alternative to extend Bitcoin's functionalities was the colored coin protocol, which was limited and had no support for oracles; its sole purpose was to tokenize assets on a blockchain such as Bitcoin.

Figure 3: Multisignature contract with multiple independent oracles [44].

Therefore, understanding the potential of Mastercoin but disappointed with its slow development, Adam Krellenstein, along with Evan Wagner and Robby Dermody, decided to develop a new protocol on Bitcoin called Counterparty, inspired by Mastercoin's premises. In particular, the goal was to be able to use information that originates off-blockchain to produce betting, gaming, or other financial instruments on Bitcoin. Unlike other projects, Counterparty was launched with full functionalities on day one. Features were added quickly except for one, as the chief developer said: "the most difficult thing to add was the decentralized and trustless gaming in the form of rock paper scissors. But we had a decentralized exchange on day one, and it was already working when we launched it". In line with other projects at that time, Counterparty was announced on Bitcointalk.
Still, the developer team decided to stay anonymous at the beginning and to opt for a proof-of-burn to launch their currency (XCP), de facto renouncing any fundraising for their project. The reasons for those choices were mainly the following:

1. The choice reflected what Nakamoto did with Bitcoin, staying anonymous and renouncing any reward for his project.
2. There were high concerns about the legal implications of raising capital with cryptocurrencies, "which turned out to be not very serious."
3. There were personal concerns about the project's development and how it could have turned out, as unforeseen events may have damaged personal reputations.
4. Replicating bitcoin issuance (electricity consumption), they wanted to burn resources (bitcoins) instead of transferring resources.
5. They were against raising capital for a project in the alpha-beta stage: "We didn't want to raise money during the development as we thought it was dishonest."

Initially, the proof-of-burn was supposed to work by consuming bitcoins as fees for the miners, de facto not really destroying them. However, many community members on Bitcointalk argued that miners could have exploited their position to produce Counterparty tokens unlimitedly. Having understood that it was an actual threat, Adam Krellenstein decided to change the burning mechanism so that it transferred BTC to an address whose private keys were unknown (e.g., an impossible vanity address), de facto making them permanently unspendable. On top of the hardships of developing a project without funding, Counterparty suffered the effects of a dispute labeled the OP_Return war [48]. In order to add the required transaction data, Counterparty needed an OP_Return size greater than the 40 bytes the Bitcoin Core developers set in the official v0.9.0 release [49]. Counterparty utilized the shrunk OP_Return feature, but the limited size also forced them to use other features, such as multi-sig, to make their protocol work.
Multi-sig was designed for features such as escrow payments, but the second signature could be leveraged to store data instead [50]. However, this workaround contributed to drawing the attention of the opposing faction of the OP_Return war. From March 2014, in fact, Luke Dashjr, a Bitcoin Core developer and owner of a mining pool, started to filter (eventually without success) all Counterparty transactions. As Luke declared, the motivation for this censorship was to prevent the exploitation of network resources by Counterparty [48, 51]. However, although probably beneficial for Bitcoin nodes, this decision was criticized because Luke was also a co-founder of Blockstream, a major Counterparty competitor [52, 53]. Although the OP_Return size was increased at a later date, the history and development of Counterparty were inevitably affected by this limit [48]. Furthermore, beyond the constraints resulting from the reduced payload size, the most significant consequence of this debate was the fear of censorship of the Counterparty protocol. The widespread climate of uncertainty prevented developers from building on Counterparty, further impacting its development and competitiveness. It is to be noticed that Ethereum, the second biggest network after Bitcoin, was also negatively affected by the OP_Return limit in its development, as Vitalik Buterin argued on social media. Although some considered it an overclaim, Vitalik declared that the original idea of Ethereum was a "counterparty-style metacoin on top of primecoin. Not Bitcoin because the OP_RETURN wars were happening" [54, 55]. Crucial in the history of Counterparty was also the advent of Ethereum. The main innovation of Ethereum was not smart contracts, de facto already available with Counterparty, but the virtual machine along with the language to write smart contracts. Counterparty had smart contracts, but only those written and supported by its developers.
It was possible to code more smart contracts, but every application built had to be part of the protocol. Ethereum, on the other hand, had an extensible infrastructure that allowed anyone to write their own smart contracts: only the language was part of the protocol, and applications could be deployed on top of it. _"It is a more elegant and flexible system...but ultimately does the same thing"_. Aware of the value of the EVM, the Counterparty team ported it to their protocol (EVMParty) so that Ethereum smart contracts could be run on Bitcoin via Counterparty [56]. However, an official version was never released, due to multiple factors. At the beginning, there was the idea that the user base would be minimal, since few people were building on Ethereum and most of the applications were still on Bitcoin. Later, when developers started to move to Ethereum, it became clear that Bitcoin could not compete in the smart contract field. First, contracts would have been slow due to Bitcoin's block time, even if more user-friendly with the introduction of the EVM. Secondly, due to the block size war, the cost of using Bitcoin increased while Ethereum was very cheap. Therefore, nobody would have preferred Bitcoin to build contracts. As Nakamoto did with Bitcoin, Adam Krellenstein left the Counterparty project to the community in late 2014. To date, Counterparty is still an active project on Bitcoin and retains the same structure and premises as when it was built.

#### 3.5.1 Counterparty Oracle Module explained

Counterparty is a meta-chain that runs on top of the Bitcoin blockchain. A meta-chain is a chain whose transaction data is contained on another chain, called the master-chain. Meta-chain transactions run only after the corresponding master-chain transactions are complete [57]. Counterparty transactions are Bitcoin transactions but with extra metadata in them. If a blockchain is a book and blocks are its pages, the Counterparty software writes information in the margins of those pages.
Intuitively, the Bitcoin software ignores that extra data; therefore, specific software is needed to read it. The Counterparty protocol's idea is that when someone signs a Bitcoin transaction, he adds some metadata to it. The content of this metadata is then verified by all Counterparty users to ensure the transaction is valid. This architectural pattern is called state machine replication. Let's assume that Alice wishes to transfer 5 XCP to Bob. She will sign a Bitcoin transaction whose OP_Return (or multi-sig) data declares the willingness to transfer 5 XCP tokens to Bob. Since Counterparty is a meta-chain, the related Bitcoin transaction will always be confirmed as long as Alice pays the necessary transaction fees. Therefore, if Alice decides to spend 5 XCP and only owns 2, the transaction on the Bitcoin blockchain will be confirmed anyway; the corresponding Counterparty transaction will instead be marked as invalid. The Counterparty transaction data is retrievable with a block explorer, but it is encoded and appears in a format such as the following [58]:

434e545250525459 | FFFF | xxxxxxx...

* the string CNTRPRTY (8 bytes);
* the transaction type identifier (4 bytes);
* transaction-specific data (different for each transaction type).

The format is not human-readable and must be decoded by the Counterparty engine to be read and digested. The string, in fact, needs to be deobfuscated with the ARC4 cipher and checked to verify that it starts with CNTRPRTY (first 8 bytes). From the 9th byte, information on the transaction type (send, broadcast, issuance) is retrieved, followed by the specific transaction data. Once deciphered, the transaction is digested by the Counterparty engine, which also verifies its validity. Being able to inject extrinsic data into the blockchain, the Counterparty engine may already be considered an oracle for Bitcoin.
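The decoding steps above can be sketched in a few lines of Python. This is a minimal illustration, not Counterparty's actual code: RC4/ARC4 is implemented from its standard definition, the big-endian 4-byte type identifier follows the scheme above, and the cipher key is passed as an explicit parameter here (in the real protocol the key is derived from the transaction itself).

```python
PREFIX = b"CNTRPRTY"

def arc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4 stream cipher; encryption and decryption are identical.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def parse_counterparty_chunk(obfuscated: bytes, key: bytes):
    # De-obfuscate, check the 8-byte CNTRPRTY marker, then split off the
    # 4-byte transaction-type identifier and the type-specific payload.
    plain = arc4(key, obfuscated)
    if not plain.startswith(PREFIX):
        return None  # not a Counterparty message
    type_id = int.from_bytes(plain[8:12], "big")
    return type_id, plain[12:]
```

Since RC4 is its own inverse, obfuscating `CNTRPRTY` + type id + payload with a key and feeding the result back through `parse_counterparty_chunk` with the same key recovers the type id and payload.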
As explained before, however, on top of Counterparty, applications such as prediction markets or decentralized exchanges can be built that further require data from the outside. Data such as a price feed for a decentralized exchange is injected into the protocol thanks to the "broadcast" transaction type. A broadcast message publishes textual and numerical information along with a timestamp. A series of broadcasts from the same address is called a "feed." Intuitively, the timestamps of a feed should increase monotonically [59]. On the Counterparty explorer (xchain.io), users can leave feedback for the address that publishes feeds, along with comments. Figure 4 provides an overview of what a broadcast price feed shows (4a), the details of the transaction (4b), and the feedback (4c).

Figure 4a: Broadcast information (BTC-USD price feed)

Figure 4b: Broadcast transaction details

### A provably honest oracle: Oraclize

Thomas Bertani's main interest was the use of Graphical Processing Units (GPUs) for scientific calculations. Since GPU computation was similar to the concept of mining, he soon became interested in Bitcoin when he heard about it in 2012. He had direct experience in the development and production of ASIC miners with Counterra in 2013 and with exchanges as the founder of BitBoat, the first Italian company that allowed the purchase of bitcoin with cash. As an innovator, he was touring to present blockchain as well as other technology updates. While studying to prepare his speech for Codemotion, an event in Milan, he came across the BitcoinWiki page started by Mike Hearn that discussed oracles as co-signers and was "fascinated by those complex conditional transactions". He then began to advertise those concepts at conferences as well. Bertani noticed that while presenting blockchain features, the audience was highly interested in automated transactions (smart contracts) based on real-world events.
Since oracles were crucial to achieving those automated transactions, he thought they would become something huge in the short term. Looking back on this early thought, however, he realized: "I was wrong. Oracles are still a very long-term problem; I don't think there is a convincing solution at the moment. It's a partially solved problem for very simple use cases such as price feeds". By that time, Bertani's main concern and purpose were to find a practical solution to the problem of feeding automated transactions with real-world data. The first version of what later became Oraclize, in fact, was based on the concept of creating a pre-authorized Bitcoin transaction by partially signing it and having the oracle add the second signature when a certain condition was met. It was, however, hard to find developers to work on the project: although they understood the potential of Oraclize, they were skeptical because of script length, high costs, and network congestion. Against these skeptical opinions, Bertani managed to continue the development of Oraclize, thanks also to some hackathon awards, the first of which was actually won by proposing a half-life insurance model based on Oraclize. When Ethereum was launched in May 2015, the Oraclize interface was adapted to run on the new blockchain, with the first tests done in August directly on the mainnet. It was soon actively used in smart contracts; as Bertani declared, "the oracle to get data from the APIs was something that got much traction. We had a peak of tens of thousands of transactions every month, to get data about all different things" [60]. The team continued the development of insurance as well as other ideas, but they soon realized that every application needed a specific solution to the oracle problem. Therefore, they decided to drop all the side projects and focus exclusively on the oracle module.
Figure 4c: Rating of the broadcaster address

By then, both the Bitcoin and Ethereum versions were live and available on the Oraclize website. For Bitcoin, there was an API with a point-and-click interface and a dedicated library. For Ethereum, instead, there was a Solidity integration. Due to the already discussed problems of bitcoin conditional transactions (length, costs, congestion), the bitcoin integration was eventually dismissed. It must also be noted that it never went into actual production besides being used for testing purposes. The team also considered integrations with Reality Keys and Amazon Mechanical Turk as data sources, but given the scarcity of requests, these features also didn't go into production. To expand the use of the oracle, Oraclize implemented an authenticity proof to validate the fact that the oracle behaved honestly. It was called "honesty proof" at the beginning, but the name was soon changed because data-source reliability is out of the oracle's control, and the original name could have created false expectations. This feature guarantees that the data provided matches the data drawn at the source. Given the new implementation, the project was itself rebranded as Provable, as its utility was also seen outside the blockchain domain. The idea was to exploit this feature to produce proofs that could support a claim in Trusted Execution Environments (TEEs). Provable was, in fact, used to create key-based attestation proofs for Ledger devices. However, the first and foremost applications that succeeded in the test phase and went officially into production were developed on Ethereum and concerned random number generation for online casinos. The advantage of Provable was that, unlike an auditing attestation service that verifies online casino platforms every now and then, the Provable engine guaranteed that every number generation was executed safely. In the blockchain field, gambling was already a trend, with applications such as Satoshi Dice.
Gambling already had a large user base, unlike insurance and decentralized finance, which came much later. Despite the increase in efficiency of the whole project, sadly, Provable could not keep up in popularity and traction with other modern oracles due to different business choices: mainly, the absence of a token favored competitors that used one for marketing purposes, eventually influencing oracle market shares and distribution. That said, the oracle is still under development, and since its inception it has processed millions of transactions on the Ethereum blockchain, making it one of the longest-running and most widely used oracles to date.

#### 3.6.1 How Oraclize worked on Bitcoin.

Oraclize on Bitcoin could be leveraged using conditional transactions and P2SH. The Bitcoin script shall include the condition (or set of conditions), the required signatures to redeem it, the data source, the outcome (or set of outcomes), and possibly an expiration date so that, in case of oracle malfunction or inability of the parties to sign the transactions, the funds are returned to their owners. To better clarify the use of Oraclize on Bitcoin, an example can be taken from the protocol library, involving a bet between two agents, Alice and Bob [61]. The two agents establish that if the temperature in Milan (Italy) is above 10 degrees, or if it rains between the time the contract is established and the next 24 hours, Bob can unlock the funds; otherwise, Alice can. They establish that the conditions are checked hourly via Wolfram Alpha until a condition is matched or the time limit elapses. In this contract, we then have the following:

* A number of **agents**: Alice, Bob, and Oraclize. A fourth agent (e.g., Carol) can be the arbitrator in case one of the others is unreachable or inactive; otherwise, an nLockTime script can establish the refund of the money after a certain amount of time.
* Pre-established **conditions** and **outcomes**.
The conditions are the temperature in Milan being above ten degrees and the event of rain. In both cases, the outcome is that Bob can unlock the funds. If neither condition is verified within the timeframe, the outcome is that Alice takes the funds. Outcomes can overlap, but conditions should not, in order to avoid ambiguity in the contracts and certain types of attacks. For example, if a condition is ambiguous, an oracle can select the most convenient result for selfish purposes.
* A **data source**, which is Wolfram Alpha in this case, but the parties can agree upon any other source.
* A **timeframe** in which the contract is active.

The contract resolves when two of the three key owners sign the transaction. Truthfully, the contract can be written and established even without the help of Oraclize, as it can directly point to a web API, and the data provider may offer its signature both on the data itself and on the transaction. What Oraclize does, however, is standardize the data transfer and the authenticity proof, so that any web API can be a data source for the blockchain without any adaptation on their end. The authenticity proof is also available in case the data source does not sign the data. Leveraging Qualcomm TEE technology, Oraclize can also guarantee that the data drawn from the web API has not been manipulated. Intuitively, only data sources with SSL encryption can be utilized with Oraclize since, in the absence of this level of security, the protocol would be unable to guarantee the reliability of the data against man-in-the-middle attacks. An additional level of security, called "ProofShield", was also developed. With this feature, a signed transaction ensured that the proof of authenticity had already been correctly verified. Without ProofShield, on a chain like Bitcoin, the verification could not be enforced but only verified and audited manually at a later time.
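Putting the pieces of this subsection together, the Alice/Bob bet can be sketched in two parts. Both are illustrative assumptions based on standard Bitcoin constructions, not Oraclize's actual code: a redeem-script outline for the 2-of-3 spending path with a timelock refund, and the resolution logic the oracle applies hourly.

```python
# Hypothetical redeem-script layout for the bet (illustrative only, using
# standard Bitcoin opcodes): the normal path needs 2-of-3 signatures
# (Alice, Bob, Oraclize); a timelock fallback refunds Alice after expiry.
REDEEM_SCRIPT_OUTLINE = """
OP_IF
    OP_2 <alice_pub> <bob_pub> <oraclize_pub> OP_3 OP_CHECKMULTISIG
OP_ELSE
    <expiry> OP_CHECKLOCKTIMEVERIFY OP_DROP <alice_pub> OP_CHECKSIG
OP_ENDIF
"""

def resolve_bet(temperature_c: float, rained: bool, hours_elapsed: int):
    # Conditions checked hourly against the data source (Wolfram Alpha):
    # Bob wins if either condition is matched within the 24-hour window;
    # once the timeframe elapses without a match, Alice takes the funds.
    if hours_elapsed <= 24 and (temperature_c > 10 or rained):
        return "bob"
    if hours_elapsed > 24:
        return "alice"
    return None  # still pending, check again next hour
```

In the winning case the oracle co-signs with Bob on the 2-of-3 path; if the window closes with no condition matched, Alice co-signs with the oracle (or waits for the timelock fallback).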
Although available and working, the Bitcoin integration of Oraclize was dismissed since there were not enough requests to justify putting it into production.

## 4 Discussion

This section elaborates on the findings that emerged to answer this study's research questions. The first paragraph discusses the origin of the oracle idea, the intuitions at the base of the following proposals, and the debates that emerged. The second paragraph discusses the characteristics of trust models and how those protocols addressed the conundrum of the oracle problem. The third provides an overview of the limitations of building oracles on Bitcoin and further elaborates on the passage to Ethereum/alt-chains.

### Oracles and extrinsic data on-chain debate

As per the experts' experience, the oracle concept first appeared with the developer Mike Hearn, who formalized it in a BitcoinWiki post. In his interview, Hearn stated that he was not inspired by the work of someone else but borrowed part of the idea directly from the computer science concept of the "random oracle model." It also emerged that the name was meant to be provisional, since oracles in computer science referred to something quite the opposite of what he wanted to elaborate. Arguably, if an "oracle" is a black box that feeds a centralized machine with trusted data, Hearn's proposal of a transparent box that feeds a decentralized application with trustless data should have been given a different name. However, the name has stuck to date, and the heterogeneity in blockchain oracle definitions found in [5] may also be due to this taxonomical overlap. Intuitively, if a developer is asked to write an "oracle" for a blockchain and the principle of the white box is not explicitly explained, he will probably write the type of oracle learned from legacy computer science. Truthfully, both oracle types have the same purpose but work in different ways and under different logic.
Interestingly, it emerged that the word "smart contract" had also been improperly used, as Nakamoto named applications built on Bitcoin just "contracts." In his contract example, of a transaction that is executed as soon as enough signatures are placed, no particular "smart" feature emerges. They appear to be digitalized representations of ordinary contracts based on a blockchain. In this form of contract, all the parties, ideally, share the same power (keys). Considering their characteristics, in the official wiki written by Mike Hearn, they are, in fact, called "distributed contracts," which seems to be more adherent to the original idea [22]. According to the reminiscences of Mike Hearn, the prefix "smart" started to be used a bit later. On the one hand, because it slightly resembled the concept of smart money/smart contracts developed by Nick Szabo [62], and on the other hand, because it was seen as necessary to distance the word contracts from its "legal aura." However, the same as oracles, smart contracts in origin referred to a slightly different concept [62, 63]. Concerning Nakamoto's idea on oracles, we cannot fully speculate on his opinion given the limited amount of available messages and posts certainly traceable to him. From some emails, however, it emerged that he was reluctant to add data other than time to the Bitcoin mainchain. His eBay-style marketplace was, in fact, proposed on a sidechain and not on the Bitcoin mainnet. However, a marketplace such as the one he proposed requires extrinsic data on products and feedback. Still, no explanation is given on how this extrinsic data should have been fetched. It is arguable, but not provable, that Nakamoto was not planning any specific data transferring system for his platform, different from the traditional ones. Nakamoto's vision undoubtedly influenced early developers and enthusiasts. 
For Bitcoin Core developers, in fact, extrinsic data injection into the blockchain was often seen as an improper use of the ledger [48]. Following the idea of Nakamoto, real-world applications should have been developed only through sidechains. This general mindset affected oracles' history in many ways. Reality Keys was, in fact, developed entirely off-chain to avoid messing with Bitcoin. Truthcoin was developed as a Bitcoin sidechain to adhere strictly to Nakamoto's ideas, but due to sidechains' slow development, it is still not an active project to date. Orisi transactions were discarded, despite being legitimate, eventually leading to the project's demise. Oraclize struggled to find developers due to the skepticism toward large Bitcoin scripts. Finally, Counterparty was dragged into the OP_Return war, which is also said to have impacted the Ethereum launch and development. The OP_Return war is a matter that requires further elaboration. It was always "technically possible" to add data unrelated to bitcoin transactions on the Bitcoin blockchain. Although achievable with other features, such as the one Nakamoto utilized to add the famous string "The Times 03/Jan/2009 Chancellor on brink of second bailout for banks" [64] to the genesis block, OP_Return was the easiest way to perform this operation [65, 66]. Still, adding extrinsic data with OP_Return resulted in a transaction considered unusual or non-standard. As explained in the Orisi case, non-standard transactions are transactions that, despite being perfectly valid and minable, are not relayed by ordinary Bitcoin nodes and, therefore, are unlikely to be included in blocks. However, with a compliant miner (or by mining the transaction autonomously), it was possible to add any sort of data, such as hashes, pieces of articles, song lyrics, pieces of poetry, or pieces of whitepapers [48]. There are, in fact, online repositories such as bitcoinstrings.com that keep track of all this extrinsic data on Bitcoin. 
Fearing network bloat and to discourage widespread adoption of this practice, the Bitcoin Core developers, with the 2014 version v0.9.0, reduced the OP_Return payload size to 40 bytes (after a pre-release testing phase at 80 bytes), which made it practical for storing a hash plus some small metadata [67, 68]. Significant on-chain data was thought to impact transaction fees and network performance negatively. However, with this update, transactions with an OP_Return of 40 bytes (or less) were considered standard and relayed by nodes with default settings. This piece of Bitcoin history is interesting since, from this study, a discrepancy emerges in how the events are described and recalled by experts in the industry. According to the official BitcoinWiki and reliable works of literature on the Bitcoin protocol, the OP_Return operator was inserted with version v0.9.0 and directly at 40 bytes [69], [70]. OP_Return at 80 bytes was described as an early hypothesis that was soon discarded and then accepted as an improvement in February 2015 with the v0.10.0 release. This view of history, however, clashes with some information found online and with what some experts recall. The OP_Return operator, in fact, appears to have already been part of the Bitcoin code developed by Nakamoto in 2009 [71]. As also discussed in official forums, it was leveraged as a "non-standard" feature long before 2014 [68]. Nonetheless, the main "trigger" of the OP_Return war appears to be an early release of v0.9.0, which de facto included a standardized OP_Return operator with a payload size of 80 bytes in mid-2013. According to what was declared by Bitcoin Core developers, 80 bytes was a random value picked for testing purposes, and 40 bytes was then thought to be a fair amount to be finally included in the official release [68]. 
However, since the testing release was not widely announced and advertised, to avoid overuse of the experimental features, other protocol developers building on Bitcoin in 2013 were unaware that the features they were using were meant to be "provisional." Therefore, when v0.9.0 was officially released with the OP_Return payload at 40 bytes, they interpreted the "slash" as a deliberate censorship attempt, resulting in a fierce debate (therefore labeled as a "war") within the community [48], [49]. Furthermore, from a technical point of view, the standardization of OP_Return promoted with the 2014 v0.9.0 was de facto inserted on Bitcoin in 2013 (for testing purposes) with pull request #2738 [72]. Therefore, for developers already leveraging OP_Return in 2013 as a standard feature, the 2014 standardization announcement was just a "lie," further fueling the harshness of the debate [73]. From a strictly technical perspective, in fact, the main change in OP_Return with the v0.9.0 2014 official release was just the halving of the payload size [74]. According to other views, however, the OP_Return debate was an exaggeration, seen just as an excuse for some people to promote their alt-chains while blaming Bitcoin Core developers for creating division in the community [48], [54]. Further details on the OP_Return debate are out of the scope of this research. What clearly emerges is that the question of whether it is right or not to inject extrinsic data into Bitcoin is a conundrum that is difficult to resolve. If, on the one hand, Nakamoto's vision is evident in the fact that Bitcoin should remain pure (free of data except for time), on the other hand, inventions in history are not always used as the inventor intended. In the case of Bitcoin, however, being decentralized and maintained by the community, any network "misuse" is paid for by all the nodes regardless of their approval. 
As some objected, however, the exponential growth of the Bitcoin blockchain is also due to an increase in its use rather than just arbitrary data injection [73], [75]. Although still unsolved, nowadays the debate is of less interest since real-world applications are mainly built on alt-chains such as Ethereum. In the author's opinion, however, the idea of Nakamoto to keep the main chain pure and experiment on additional layers, alt-chains, or sidechains could have been a reasonable, fair take in the end. ### Approaches to the oracle problem and trust models Given the nascency of decentralized machines and Bitcoin's constraints, approaching the oracle problem was undoubtedly a hard task for early developers. In the contract proposals by Nakamoto, the concept of trust was completely absent, as he probably aimed for a purely trustless environment. If a contract was based on an external party, that party had to sign its part before the contract was broadcasted, therefore excluding any possible moral hazard [30]. The case described in [21] instead shows the outcome of the contract subject to the approval of the majority of its signers, arguably a voting-based system. There is, however, a substantial difference between the system described by Nakamoto and the voting-based systems proposed in later oracle trust models. In the voting system proposed by Nakamoto, the outcome of the contract was meant to take effect only for its signers. Later oracle proposals instead outline oracle systems in which voters' actions take effect both for themselves and for platform users [76, 77]. The trust model proposed by Paul Sztorc is the one that most reflects Nakamoto's idea, although it extends the role of voters. His approach addresses the oracle problem by trying to eliminate the concept of a single actor as a central point of trust. Of course, the corporation proposed by Sztorc is itself a central authority, but the economic incentive is meant to prevent a possible takeover by a single actor. 
Since it is based on sidechains, the system's security is still to be evaluated. As a technology that is not yet available, it is hard to predict how it may affect the overall safety of the oracle. The trust models proposed in the other oracle mechanisms in this study leverage different factors to guarantee the reliability of data on-chain. Reality Keys, for example, aimed at implementing a data source that would be widely recognized as reliable (e.g., Freebase). The reliability of the oracle protocol would then have benefited from the trustworthiness of the data source itself. In case of mistakes due to automation, Reality Keys also included the possibility of a manual check upon the payment of a fee. Of course, this reduces the externalities of a machine failure but also introduces the chance of human failure. Trust in the very actors running the protocol is then required. The smart contract's security is trustless to a certain extent: as the contract setup is to be made directly by the clients, they are responsible for their own security. Orisi's trust model needs to be seen and analyzed under its own logic. Orisi proposes a system based on multiple data feeds of high reputation from the traditional financial world. Arguably, Bitcoin was supposed to propose an alternative to the existing financial world; however, it could not offer the stability (in terms of value) guaranteed by traditional finance. Therefore, the idea of Orisi to launch a fiat-pegged stablecoin based on reliable data feeds from traditional finance is entirely coherent with its aim. A trustless data feed would not have been a logical conclusion. On the other hand, its design based on multiple oracles offers a certain degree of decentralization, which should guarantee a stable and reliable feed in case of malfunction or unavailability of some of the data sources. The trust model offered by Oraclize found its premises in the power of technology. 
Leveraging Trusted Execution Environments, it aims at guaranteeing that the data fetched from a reliable data source, such as Wolfram Alpha, has not been manipulated. Its design is undoubtedly centralized, but its features are meant to prove, in a fully auditable way, that it does not suffer from the usual weaknesses of a centralized source while enjoying the relative advantages. It went officially into production first for gaming applications, for which a centralized but secure design was plainly appropriate. The oracle module available in Counterparty has quite a simple structure, which is explainable by the complexity of the whole protocol. Counterparty oracle evaluation is open to the public judgment of xchain.io users who, with their feedback, can increase the rating of a data feed. However, the power and reliability of this system are subject to the size of the active community. If the community is restricted and non-active (e.g., made of speculators), it is unlikely that those oracles are evaluated or that competition between them arises. Arguably, however, with a low TVL, the chance of manipulation is also low. In case the active community is considerably large in size, more comments and feedback on the oracles are expected, therefore increasing their meaningfulness. However, with a higher TVL, the chance of manipulation will also increase, but arguably more oracle alternatives should also be available. The outcome of this study suggests that the third research question cannot be answered by relying on a chronological order. Trust model design did not evolve with time but adapted almost instantaneously according to specific applications and needs. After the idea of oracles was launched by Mike Hearn in 2011 and further advertised in his 2012 London talk, a certain amount of time was required for developers to elaborate further on these new concepts and come up with a solution. 
As discussed by all the experts, 2013 was the year they elaborated their projects, to have those published with a proposal and/or a whitepaper in 2014. Therefore, all the different trust models came out almost simultaneously, de facto weakening the hypothesis of evolution. The newborn trust models were untied from each other and reflected their practical uses and purposes. The models analyzed in this study have their own peculiarities and uniqueness; therefore, none of these can be considered an improvement of another one. An analysis of second-generation oracles, such as those native to the Ethereum blockchain, could show some inspiration from or improvement on those analyzed in this study. However, an investigation into these is beyond the scope of this study. ### Limitations and difficulties of building on Bitcoin and the transition to Ethereum It is widely known that building on Bitcoin in the early days was a difficult task; therefore, oracle development was also problematic. According to the experts' opinions, the main difficulties concerned: * The absence of developing tools and wallets * The large size and costs of Bitcoin scripts * Concerns about net congestion * Skepticism about storing extrinsic data in Bitcoin. The existence of applications such as Satoshi Dice or Lighthouse supports the view that it was actually possible to build on Bitcoin, but the development was not standardized, and every developer had to find the proper workaround for their application. Although Hearn developed Bitcoin-J as a wallet aimed at playing the role MetaMask later played for Ethereum, it lacked the contribution of other developers to further build on top of it. The experience of Edmund Edgar with the Runkeeper API also confirms that although it was possible to build with Bitcoin, the absence of a proper wallet and developing tools constituted a critical limitation. With its developing tools and MetaMask, the EVM undoubtedly constituted an incentive for developers to migrate to Ethereum. 
The concerns about net congestion and transaction costs were another element that further contributed to pushing real-world application development outside the Bitcoin domain. Despite the interest and prizes that Oraclize managed to obtain, it could never put the Bitcoin version into production due to low usage and the struggle to find developers. The Orisi project was abandoned due to the skepticism around scripts and the inability to have their transactions mined. Their experience made clear that no application could entirely rely on non-standard scripts for its survival. Truthcoin was built following Nakamoto's advice, but to date, a working version is still unavailable due to the struggle to build a proper sidechain. Although not facing the issues of building directly on Bitcoin, it is suffering from the issue of not having an existing chain to build upon. In fact, other projects inspired by Truthcoin, such as Augur, could be successfully launched on Ethereum already in 2016 [78]. However, complying with different standards, it is debatable whether that choice constitutes an improvement of the original design. The actual limit of building on Bitcoin, however, emerges through the experience of Counterparty. Regardless of whether they were using or misusing the OP_Return feature, their history sheds light on the fact that building on Bitcoin was simply not "welcome" [48]. The Ethereum gas system is a compromise that allows anyone to program any type of application as long as the proper gas fees are paid. Ethereum was born with this system, and its nodes know their role and purpose. On Bitcoin, different philosophies and visions coexist, and not all the nodes/miners share the same idea. Although Bitcoin has a system of fees that varies according to net congestion and transaction type, it was not meant to include applications in the first place. Therefore, the payment of a fee is not necessarily a good compromise for those who wish for a light and mono-purpose chain. 
When Ethereum was launched, then, the environment was divided into two main platforms: one tormented by disputes over extrinsic data usage and block size, and another that was cheap and full of enthusiasts building and experimenting [79, 80]. Above all, much funding was also coming to the Ethereum platform, so it was understandable to expect a consistent migration of developers [81]. Nowadays, many improvements have been made to the Bitcoin network with the development of second layers, such as the Lightning Network [82]. Ideally, those are capable of bringing the entire ecosystem built on Ethereum to the Bitcoin network. Advancements have also been made by Blockstream concerning sidechains. One can therefore arguably expect a working version in the near future. Nonetheless, due to the shift from an electronic cash system to a safe-haven asset, as a consequence of the block size war, many of those who own a significant amount of bitcoins share the philosophy of HODL (hold on for dear life). Therefore, even if working decentralized applications are eventually built on Bitcoin, skepticism emerges about the existence of a solid user base willing to spend their assets on them. ## 5 Conclusions This study provides an overview of the history of Bitcoin oracles, from the first theoretical idea to the early practical applications until the advent of Ethereum. In the absence of dedicated literature, experts who worked on oracles in the early days were interviewed, and the information provided was enriched with the available written material found online. From the research, it emerges that the idea of an oracle mechanism came from Mike Hearn, who formalized it in an early BitcoinWiki page. The concept was then further elaborated theoretically by other experts and then translated into actual software by a few enthusiasts. The year in which those projects were developed was 2014. 
All approaches to solving the oracle problem bear their own peculiarities, which are primarily due to the specific applications for which they were designed. The idea of a chronological evolution of trust models is instead not verified, as those investigated were apparently untied from each other. Another aspect that emerges from this research is the difficulty in building oracles and, in general, applications on Bitcoin. Interestingly, the hardest difficulties to overcome were not technical. A part of the Bitcoin community was, in fact, reluctant to introduce extrinsic data on the chain due to concerns about network growth/congestion and transaction fees. The same goes for some non-standard Bitcoin scripts. The passage to Ethereum was, therefore, inevitable. The present research contributes to the academic literature by filling the gap that exists between the origin of oracles on Bitcoin and modern oracles on Ethereum/alt-chains. The original concepts of oracles, as well as smart contracts, are clarified. The theoretical background of future academic papers can therefore build on the findings of this research. Practitioners can also benefit from this research by understanding how oracles were theorized at early stages and how they were initially adapted to different applications. The present study also has limitations since, although data were double-checked and verified by the author, the history is described through the eyes of the experts interviewed; therefore, it can be biased by their personal views and backgrounds. Furthermore, given the impossibility of interviewing Nakamoto and the scarcity of the retrieved material concerning his opinions and ideas on oracles, the accuracy of the interpretation provided cannot be guaranteed. Further studies can build on this one by comparing the oracles and trust models analyzed in this paper with those developed afterward on Ethereum and other alt-chains. 
| Name and/or Pseudonym | First contribution | Last Contribution | Contribution type |
|---|---|---|---|
| Mike Hearn (Mike) | The 22nd of May 2011 | The 25th of May 2014 | Creator and main contributor to the page, added the main ideas. |
2305.12405
Rational approximations of operator monotone and operator convex functions
Operator convex functions defined on the positive half-line play a prominent role in the theory of quantum information, where they are used to define quantum $f$-divergences. Such functions admit integral representations in terms of rational functions. Obtaining high-quality rational approximants of operator convex functions is particularly useful for solving optimization problems involving quantum $f$-divergences using semidefinite programming. In this paper we study the quality of rational approximations of operator convex (and operator monotone) functions. Our main theoretical results are precise global bounds on the error of local Pad\'e-like approximants, as well as minimax approximants, with respect to different weight functions. While the error of Pad\'e-like approximants depends inverse polynomially on the degree of the approximant, the error of minimax approximants has root exponential dependence and we give detailed estimates of the exponents in both cases. We also explain how minimax approximants can be obtained in practice using the differential correction algorithm.
OisΓ­n Faust, Hamza Fawzi
2023-05-21T08:54:27Z
http://arxiv.org/abs/2305.12405v1
# Rational approximations of operator monotone and operator convex functions ###### Abstract Operator convex functions defined on the positive half-line play a prominent role in the theory of quantum information, where they are used to define quantum \(f\)-divergences. Such functions admit integral representations in terms of rational functions. Obtaining high-quality rational approximants of operator convex functions is particularly useful for solving optimization problems involving quantum \(f\)-divergences using semidefinite programming. In this paper we study the quality of rational approximations of operator convex (and operator monotone) functions. Our main theoretical results are precise global bounds on the error of local Pade-like approximants, as well as minimax approximants, with respect to different weight functions. While the error of Pade-like approximants depends inverse polynomially on the degree of the approximant, the error of minimax approximants has root exponential dependence and we give detailed estimates of the exponents in both cases. We also explain how minimax approximants can be obtained in practice using the differential correction algorithm. ## 1 Introduction Matrix functions have countless applications in applied mathematics [14]. Given a function \(f:I\to\mathbb{R}\) defined on an interval \(I\) of \(\mathbb{R}\), one can extend \(f\) to act on Hermitian matrices by applying \(f\) to the eigenvalues. More precisely, if \(A\) is a Hermitian matrix (of any finite size) with spectral decomposition \[A=\sum_{i}\lambda_{i}v_{i}v_{i}^{\dagger}\] where \(\lambda_{i}\in I\), and \(\{v_{i}\}\) is an orthonormal family of eigenvectors, we define \(f(A)\) by \[f(A)=\sum_{i}f(\lambda_{i})v_{i}v_{i}^{\dagger}.\] Operator monotone and operator convex functions. The space of Hermitian matrices is equipped with a partial order, known as the Lowner order whereby \(A\succeq B\) if and only if \(A-B\) is positive semidefinite. 
In his seminal 1934 paper, Lowner [13] introduced and characterized so-called _operator monotone_ functions \(h:I\to\mathbb{R}\) which satisfy \[A\succeq B\implies h(A)\succeq h(B)\] for all Hermitian matrices \(A,B\) of any size, whose spectra lie in \(I\). He showed that the class of operator monotone functions coincides precisely with the class of _Pick functions_ from complex analysis which admit an analytic continuation to the open upper half plane. Importantly, such functions admit an integral representation in terms of rational functions. In the case where \(h\) is defined on \(I=(0,\infty)\), which will be the main setting of this paper, Lowner's theorem asserts that one can write \[h(x)=h(1)+\int_{0}^{1}\frac{x-1}{1+t(x-1)}d\nu(t) \tag{1}\] for some finite measure \(\nu\) supported on \([0,1]\). For each \(t\in[0,1]\) the rational integrand (in \(x\)) is operator monotone, and Lowner's theorem asserts that any operator monotone function is essentially a positive linear combination of such rational functions. Prominent examples of operator monotone functions are \(h(x)=\log x\), and \(h(x)=x^{\alpha}\) for \(\alpha\in[0,1]\), the latter example being known as the Lowner-Heinz inequality. Closely related to operator monotone functions are _operator convex_ functions \(f:I\to\mathbb{R}\) which satisfy Jensen's inequality in the Lowner order \[f(\lambda A+(1-\lambda)B)\preceq\lambda f(A)+(1-\lambda)f(B),\] for all \(\lambda\in[0,1]\) and all Hermitian matrices \(A,B\) having a spectrum contained in \(I\). Such functions were studied by Lowner's doctoral student Kraus in 1936 [14], where he established a characterization similar to the above. Any operator convex function \(f:(0,\infty)\to\mathbb{R}\) can be expressed as \[f(x)=f(1)+f^{\prime}(1)(x-1)+\int_{0}^{1}\frac{(x-1)^{2}}{1+t(x-1)}d\mu(t) \tag{2}\] where \(\mu\) is a finite measure supported on \([0,1]\). 
Examples of operator convex functions are \(f(x)=x\log x\) and \(f(x)=x^{\alpha}\) for all \(\alpha\in[1,2]\). We note that all operator monotone functions (1) are necessarily operator _concave_, however the converse is not true. Quantum \(f\)-divergencesOperator convexity plays a crucial role in the area of quantum information theory. A _density matrix_ is a Hermitian positive semidefinite matrix with trace equal to \(1\). Density matrices are the quantum analogue of classical probability distributions, and represent probabilistic mixtures of quantum states. If \(\rho\) and \(\sigma\) are two density matrices, a fundamental quantity in quantum information is the _quantum relative entropy_ defined by \[S(\rho\|\sigma)=\operatorname{Tr}[\rho(\log\rho-\log\sigma)], \tag{3}\] which is the quantum counterpart of the classical Kullback-Leibler divergence. More generally for \(\alpha\in(1,2]\), the \(\alpha\)-quasi-entropy of the pair \((\rho,\sigma)\) is defined by \[S_{\alpha}(\rho\|\sigma)=\frac{1}{\alpha-1}(\operatorname{Tr}[\rho^{\alpha} \sigma^{1-\alpha}]-\operatorname{Tr}\sigma) \tag{4}\] which converges to \(S(\rho\|\sigma)\) as \(\alpha\to 1\). A key fact about \(S_{\alpha}\) and \(S\) is that they are joint convex functions in \((\rho,\sigma)\); this is a (nontrivial) consequence of the operator convexity of the functions \(x^{\alpha}\) for \(\alpha\in[1,2]\), see [10, 11]. The \(\alpha\)-quasi entropy defined above is only a special case of so-called quantum \(f\)-divergences [20], defined for any operator convex \(f:(0,\infty)\to\mathbb{R}\), whose precise definition we omit here. Let us just mention that these are the quantum analogues of the well-known \(f\)-divergences defined in classical probability and information theory for probability distributions \(p=\{p_{i}\}\) and \(q=\{q_{i}\}\) via the expression \[S_{f}(p\|q)=\sum_{i}q_{i}f(p_{i}/q_{i}),\] which is convex in \((p,q)\) for any choice of convex function \(f:(0,\infty)\to\mathbb{R}\). 
Optimization and semidefinite programmingMany problems in quantum information are naturally expressed as optimization problems involving a quantum \(f\)-divergence, and in particular the quantum entropies (3) or (4). This includes for example the problem of evaluating the efficiency of a quantum key distribution protocol in cryptography [13], measuring the amount of entanglement in a quantum state [15], or the evaluation of quantum channel capacities [21]. Given the complex nature of some of these optimization problems, it is highly desirable to express them in a standard form for which efficient and reliable algorithms exist. _Semidefinite programming_[22] has emerged as a natural way to formulate convex optimization problems arising in quantum information theory, given its ability to deal with Hermitian positive semidefinite variables. A semidefinite program is a convex optimization problem of the form \[\min_{x\in\mathbb{R}^{n}}\quad c^{T}x\quad:\quad A_{0}+x_{1}A_{1}+\cdots+x_{n }A_{n}\succeq 0 \tag{5}\] where \(c\in\mathbb{R}^{n}\), and \(A_{0},A_{1},\ldots,A_{n}\) are given Hermitian matrices. The constraint (5) in a semidefinite program is known as a _linear matrix inequality_ and it describes a convex region in \(\mathbb{R}^{n}\). Semidefinite programs can be solved efficiently using a variety of algorithms such as interior-point methods [23] or first-order splitting methods [1]. Optimization problems involving the quantum entropy function (3) however cannot be directly expressed in semidefinite form since the feasible set of a semidefinite optimization problem is necessarily semialgebraic [1], while the quantum entropy function is not. One approach around this problem is to work with rational approximations of the entropy function. This approach was adopted in [14, 15] where the approximations were obtained from quadrature rules applied to the integral representations (1) and (2). 
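For the concrete case \(h(x)=\log x\), which corresponds to \(d\nu(t)=dt\) in (1), this discretization is easy to try numerically. The sketch below is our own illustration (the function name is ours, not from the paper); it builds the Gaussian-quadrature approximant using NumPy's Gauss-Legendre nodes mapped to \([0,1]\):

```python
import numpy as np

def rational_log_approx(x, m):
    """Discretize log(x) = int_0^1 (x-1)/(1+t(x-1)) dt with an m-point
    Gauss-Legendre rule mapped from [-1,1] to [0,1].  The result is a
    rational function of x: a positive sum of operator monotone terms."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    t = 0.5 * (nodes + 1.0)  # quadrature nodes in [0,1]
    w = 0.5 * weights        # rescaled weights, summing to 1
    return float(np.sum(w * (x - 1.0) / (1.0 + t * (x - 1.0))))

# Accuracy is excellent near x = 1 and degrades for extreme arguments,
# consistent with the approximant being a local (Pade-type) one.
for x in (0.5, 2.0, 10.0):
    print(f"x={x}: approx={rational_log_approx(x, 5):.8f}, log={np.log(x):.8f}")
```

Since the rule is a Gaussian quadrature for the Lebesgue measure, this reproduces the diagonal Padé approximant of \(\log\) at \(x=1\) mentioned above.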
The key fact is that while a general operator convex function \(f\) may not be amenable to semidefinite programming, the rational integrand \[f_{t}:x\mapsto\frac{(x-1)^{2}}{1+t(x-1)}\] is. Indeed, observe that a convex constraint of the form \[f_{t}(x)\leq\tau\] can be equivalently described by the \(2\times 2\) linear matrix inequality \[\begin{bmatrix}1+t(x-1)&x-1\\ x-1&\tau\end{bmatrix}\succeq 0.\] Such a _semidefinite programming representation_ of \(f_{t}\) can be extended to any finite positive sum of \(\{f_{t_{i}}\}\). Furthermore, with some additional (nontrivial) work these representations can be extended to matrix arguments, and to quantum \(f\)-divergences as shown in [14, 14, 15]. As such, it is of significant interest to understand how to best approximate an operator monotone or convex function by discretizing the integral (1) or (2), i.e., (in the case of an operator convex functions) \[f(x)\approx\sum_{i=1}^{m}u_{i}\frac{(x-1)^{2}}{1+t_{i}(x-1)}\] for some weights \(u_{i}>0\) and nodes \(t_{i}\in[0,1]\). Such approximations can in turn be used to approximate quantum \(f\)-divergences (such as the quantum relative entropy (3)) via functions that admit a semidefinite programming representation. In [14] it was observed that applying Gaussian quadrature to the integral (1), with respect to the measure \(d\nu(t)\), yields a diagonal Pade approximant to the function \(h\). One drawback of this approximation is that it is neither an upper bound, nor a lower bound on \(h\), a feature which is often desired in optimization. Later, it was realized in [15, 14] that if one uses the Gauss-Radau quadrature instead for \(h(x)=\log x\) then one obtains rigorous upper/lower bounds. Main contributionsThe goal of this paper is to systematically study rational approximations of operator monotone and operator convex functions defined on the positive half-line, with a view towards applications in semidefinite optimization and quantum information. 
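The equivalence between the scalar constraint \(f_t(x)\leq\tau\) and the \(2\times 2\) linear matrix inequality above is a Schur complement argument (valid when \(1+t(x-1)>0\), which holds for \(x>0\) and \(t\in[0,1]\)). A quick numerical sanity check, ours rather than the paper's:

```python
import numpy as np

def f_t(x, t):
    # rational integrand f_t(x) = (x-1)^2 / (1 + t*(x-1)), operator convex on (0, inf)
    return (x - 1.0) ** 2 / (1.0 + t * (x - 1.0))

def lmi_holds(x, t, tau, tol=1e-12):
    # the 2x2 linear matrix inequality [[1+t(x-1), x-1], [x-1, tau]] >= 0
    M = np.array([[1.0 + t * (x - 1.0), x - 1.0],
                  [x - 1.0, tau]])
    return np.linalg.eigvalsh(M)[0] >= -tol

# Schur complement: for 1 + t*(x-1) > 0, the LMI holds exactly when tau >= f_t(x).
x, t = 3.0, 0.4
tau_star = f_t(x, t)
assert lmi_holds(x, t, tau_star + 1e-9)       # feasible just above f_t(x)
assert not lmi_holds(x, t, tau_star - 1e-3)   # infeasible just below f_t(x)
```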
We study two types of approximations and we precisely quantify the approximation errors of each type.

* We first study Gaussian quadrature-based approximations, where the integrals (1) and (2) are discretized via some Gaussian quadrature rule. We show that by choosing suitable quadrature rules, one obtains upper/lower bounds on the function that are _locally optimal around_ \(x=1\), i.e., they agree with the Taylor expansion to the highest possible order and coincide with certain Pade approximants. We further quantify the approximation error as a function of the number of discretization points. Our first main theorem is stated below for operator monotone functions--a version for operator convex functions appears later in Theorem 8. Recall that an \(m\)-point Gauss-Radau quadrature rule requires one of the nodes to be an endpoint of the integration interval, and is exact for all polynomials of degree up to \(2m-2\) (see Section 2.1 for the precise definition). **Theorem 1**.: _Let \(\nu\) be a finite Borel measure on \([0,1]\), and let \(h_{\nu}(x)=\int_{0}^{1}\frac{(x-1)}{1+t(x-1)}\mathrm{d}\nu(t)\) be a corresponding operator monotone function satisfying \(h_{\nu}(1)=0\). Let \(\nu_{m}^{0}\) and \(\nu_{m}^{1}\) be discrete measures associated to the \(m\)-node Gauss-Radau quadrature rules for \(\nu\) with fixed node at 0 and at 1, respectively. Let \(h_{\nu_{m}^{0}}\) and \(h_{\nu_{m}^{1}}\) be the rational operator monotone approximations to \(h_{\nu}\) arising from \(\nu_{m}^{0}\) and \(\nu_{m}^{1}\). Then \(h_{\nu_{m}^{0}}\) is a \([m/m-1]\) rational function, \(h_{\nu_{m}^{1}}\) is a \([m/m]\) rational function, and we have:_ 1. _For_ \(m\geq 1\)_,_ \[h_{\nu_{m}^{0}}(x)\geq h_{\nu_{m+1}^{0}}(x)\geq h_{\nu}(x)\geq h_{\nu_{m+1}^{1}}(x)\geq h_{\nu_{m}^{1}}(x).\] (6) 2. _Locally around_ \(x=1\)_,_ \[h_{\nu_{m}^{0}}(x)-h_{\nu}(x)=O((x-1)^{2m})\] (7) \[h_{\nu_{m}^{1}}(x)-h_{\nu}(x)=O((x-1)^{2m}).\] 3.
_For any_ \(x>0\)_,_ \[h_{\nu_{m}^{0}}(x)-h_{\nu_{m}^{1}}(x)\leq\max\{\nu_{m}^{0}(\{0\}),\nu_{m}^{1}(\{1\})\}\frac{(x-1)^{2}}{x}.\] (8) Equation (6) says that the sequence of functions \((h_{\nu_{m}^{0}})\) (resp. \((h_{\nu_{m}^{1}})\)) is monotonic nonincreasing (resp. nondecreasing), and is an upper bound (resp. lower bound) on \(h_{\nu}\). Equation (7) asserts that \(h_{\nu_{m}^{0}}\) is the order \([m/m-1]\) _Pade approximant_ to \(h_{\nu}(x)\) at \(x=1\), and \(xh_{\nu_{m}^{1}}(x)\) is the order \([m/m-1]\) Pade approximant to \(xh_{\nu}(x)\) at \(x=1\).1 Most importantly for us, (8) gives a global approximation bound on the gap Footnote 1: We should mention that monotonicity results along the lines of (6) are well known [10] for the Pade approximants of _Stieltjes functions_, a class of functions which bear a strong resemblance to operator monotone functions. \[h_{\nu_{m}^{0}}(x)-h_{\nu_{m}^{1}}(x)=(h_{\nu_{m}^{0}}(x)-h_{\nu}(x))+(h_{\nu}(x)-h_{\nu_{m}^{1}}(x)),\] in terms of the weight of the endpoint in the Gauss-Radau quadrature rule, and relative to the function \((x-1)^{2}/x\). Since \(h_{\nu}(x)\) can be interpreted as an average of the functions \(\{x\mapsto\frac{x-1}{1+t(x-1)}\,:\,t\in[0,1]\}\), which are pointwise decreasing in \(t\), a natural choice of function relative to which the error can be measured is the difference between the maximum and minimum of these functions: \((x-1)-(x-1)/x=(x-1)^{2}/x\). In Section 3 we work out the explicit values of \(\nu_{m}^{0}(\{0\})\) and \(\nu_{m}^{1}(\{1\})\) for the important example of \(\alpha\)-divergences, which allows us to show that the convergence rate is given by \(\approx 1/m^{2(1-|\alpha|)}\) for \(\alpha\in(-1,1)\).

* Theorem 1 quantifies the global accuracy of the best local approximants around \(x=1\). A natural question is to understand which approximants satisfy a global bound of the form (8) with the _best possible_ dependence on \(m\).
In other words, given an operator convex function \(f:(0,\infty)\to\mathbb{R}\), and a nonnegative weight function \(b:(0,\infty)\to\mathbb{R}_{\geq 0}\), we seek to characterize the quantity \[E_{m_{1},m_{2}}=\inf_{r\in\mathcal{R}_{m_{1},m_{2}}}\sup_{x\in(0,\infty)} \frac{|f(x)-r(x)|}{b(x)}\] (9) where \(\mathcal{R}_{m_{1},m_{2}}\) is the set of rational functions that can be expressed as \(p(x)/q(x)\) where \(\deg p\leq m_{1}\) and \(\deg q\leq m_{2}\). Leveraging existing results on best rational approximations we first prove, under mild conditions on the weight function \(b\), that the best rational approximant in (9) exists and can be obtained by applying a suitable discretization of the integral representation (2). **Theorem 2** (See Theorem 11 for details).: _Let \(f:(0,\infty)\to\mathbb{R}\) be operator convex with \(f(1)=f^{\prime}(1)=0\) and let \(b:(0,\infty)\to\mathbb{R}\) be a continuous weight function which is positive except at \(x=1\). Under conditions (22)-(25) the best order \([m+1/m]\) rational approximation to \(f\) relative to \(b\) exists and has the form_ \[\tilde{f}(x)=\sum_{i=1}^{m}u_{i}\frac{(x-1)^{2}}{1+t_{i}(x-1)}\] _for weights \(u_{i}\geq 0\) and \(0\leq t_{1}<\cdots<t_{m}\leq 1\)._ Next, we focus on the nonnegative operator convex functions \[f_{\alpha}(x)=\frac{1}{\alpha(\alpha-1)}(x^{\alpha}-\alpha(x-1)-1)\] which generate the so-called \(\alpha\)-divergences for \(\alpha\in[-1,2]\). For \(\alpha=0\) and \(\alpha=1\) we have \[f_{0}(x)=-\log x+x-1,\quad f_{1}(x)=x\log x-x+1.\] We define for \(\alpha,\beta\in[-1,2]\) the quantity \[\epsilon^{[m]}_{\alpha,\beta}:=\inf_{\begin{subarray}{c}0\leq t_{1}<\dots<t_ {m}\leq 1\\ u_{i}\geq 0,\;i=1,\dots,m\end{subarray}}\left\{\sup_{x\in(0,\infty)}\Big{|}\frac{f _{\alpha}(x)-\tilde{f}(x)}{f_{\beta}(x)}\Big{|}\;:\;\tilde{f}(x)=\sum_{i=1}^{m }\frac{u_{i}(x-1)^{2}}{1+t_{i}(x-1)}\right\}. \tag{10}\] Our results quantify the behaviour of \(\epsilon^{[m]}_{\alpha,\beta}\) as \(m\to\infty\).
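The quantity (10) can be explored numerically: for fixed nodes and weights, the supremum can be estimated from below on a wide logarithmic grid. A minimal Python sketch (the helper names and the grid are ours, and the grid maximum is only an estimate of the true supremum):

```python
import math

def f_alpha(x, a):
    # The alpha-divergence generator f_alpha, with the limiting cases a = 0 and a = 1.
    if a == 0:
        return -math.log(x) + x - 1.0
    if a == 1:
        return x * math.log(x) - x + 1.0
    return (x**a - a * (x - 1.0) - 1.0) / (a * (a - 1.0))

def f_tilde(x, nodes, weights):
    # Quadrature-type rational approximant sum_i u_i (x-1)^2 / (1 + t_i (x-1)).
    return sum(u * (x - 1.0) ** 2 / (1.0 + t * (x - 1.0))
               for t, u in zip(nodes, weights))

def rel_err_estimate(a, b, nodes, weights, lo=1e-3, hi=1e3, n=2000):
    # Grid-based lower estimate of sup_x |f_a(x) - f_tilde(x)| / f_b(x).
    xs = (lo * (hi / lo) ** (k / (n - 1)) for k in range(n))
    return max(abs(f_alpha(x, a) - f_tilde(x, nodes, weights)) / f_alpha(x, b)
               for x in xs if x != 1.0)

# f_2(x) = (x-1)^2/2 is matched exactly by the single node t=0 with weight 1/2,
# while weight 0.4 gives relative error |0.4 - 0.5| / 0.5 = 0.2 for every x:
assert rel_err_estimate(2, 2, [0.0], [0.5]) < 1e-8
assert abs(rel_err_estimate(2, 2, [0.0], [0.4]) - 0.2) < 1e-6
```

Such a grid estimate is what one would compare against the theoretical decay rates stated next.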
Note that by choosing the \(\{u_{i},t_{i}\}\) in (10) via Gaussian quadrature (as in Theorems 1 and 8), we can get _upper bounds_ on \(\epsilon^{[m]}_{\alpha,\beta}\). However, these upper bounds turn out to be far from tight. For example, one can show that Gaussian-quadrature based approximations yield upper bounds of the form \(\epsilon^{[m]}_{\alpha,\beta}\lesssim Cm^{-k}\) for some constants \(C\) and \(k\) that depend on \(\alpha,\beta\) (for certain values of \(\alpha,\beta\)). As the next result shows, this inverse polynomial dependence on \(m\) is far from optimal. Our first theorem concerns the case \(\alpha\in(0,1)\), and shows that we can get root-exponential convergence instead. **Theorem 3**.: _For each \(\alpha\in(0,1)\), there is a constant \(C_{\alpha}>0\) such that_ \[\epsilon^{[m]}_{\alpha,\alpha}\leq C_{\alpha}e^{-2\pi\sqrt{\alpha(1-\alpha)m}}\] _for all \(m\in\mathbb{N}\)._ The decay rate in \(\exp(-c\sqrt{m})\) is well-known to approximation theorists and is due to the presence of singularities. (Analytic functions can be approximated at a rate \(\exp(-cm)\) by polynomials.) The constant \(\sqrt{\alpha(1-\alpha)}\) comes from the presence of _two_ singularities for \(f_{\alpha}\), at \(x=0\) and \(x=\infty\). Our second theorem concerns the case \(\alpha\notin[0,1]\). **Theorem 4**.: _For \(1\leq\alpha<\beta\leq 2\), there is a constant \(C_{\alpha,\beta}>0\) such that_ \[\epsilon^{[m]}_{\alpha,\beta}\leq C_{\alpha,\beta}\,m^{3/2}e^{-\pi\sqrt{2 \alpha(\beta-\alpha)m/\beta}} \tag{11}\] _for all \(m\geq 1\). For \(-1\leq\beta<\alpha\leq 0\), we have_ \[\epsilon^{[m]}_{\alpha,\beta}\leq C_{1-\alpha,1-\beta}\,m^{3/2}e^{-\pi\sqrt{2 (1-\alpha)(\alpha-\beta)m/(1-\beta)}}. \tag{12}\] We suspect that these bounds can be improved; for example, we believe that the right-hand side of (11) can be replaced by \(C_{\alpha,\beta}e^{-2\pi\sqrt{\alpha(\beta-\alpha)m/\beta}}\).
Note that this would be a genuine improvement, since the factor of \(2\) in the exponent moves outside of the square root. See Conjectures 1 and 2 for more details. We have made computer code used to calculate the quadrature rules \((u_{i},t_{i})_{i=1}^{m}\) and errors \(\epsilon^{[m]}_{\alpha,\beta}\) available at [https://www.github.com/oisinfaust/alpha-divergence-quad](https://www.github.com/oisinfaust/alpha-divergence-quad). As an example of how our results are of practical relevance in numerical quantum information science, we offer (without proof) the following result based on Theorem 4 and the forthcoming work [10], showing that one can get efficient semidefinite approximations of the quantum relative entropy function. We denote by \(\mathbb{H}^{n}\) the space of \(n\times n\) Hermitian matrices, and by \(\mathbb{H}^{n}_{++}\) the set of positive definite \(n\times n\) Hermitian matrices. **Theorem 5**.: _For any \(m\geq 1\), there is a convex function \(D^{[m]}(\rho\|\sigma)\) defined for \((\rho,\sigma)\in\mathbb{H}_{++}^{n}\times\mathbb{H}_{++}^{n}\) such that_ * \(D^{[m]}\) _has an explicit semidefinite programming representation with_ \(O(m)\) _blocks of size_ \(2n\times 2n\) _each,_ * _For any pair_ \((\rho,\sigma)\in\mathbb{H}_{++}^{n}\times\mathbb{H}_{++}^{n}\) _such that_ \(\operatorname{Tr}\rho=\operatorname{Tr}\sigma=1\)_,_ \[\left|D(\rho\|\sigma)-D^{[m]}(\rho\|\sigma)\right|\leq\frac{\epsilon_{1,2}^{[m ]}}{2}\left(\operatorname{Tr}[\rho^{2}\sigma^{-1}]-1\right)\] (13) _where_ \(\epsilon_{1,2}^{[m]}=O(m^{3/2}e^{-\pi\sqrt{m}})\)_._ The Gaussian quadrature-based approximations which have been used in previous works [14, 15, 16, 17] have a much slower convergence with \(m\), namely in \(1/m^{2}\). In fact, if our Conjecture 1 is true (supported by the numerical evidence in Section 5), then the bound in (13) improves to \(\epsilon_{1,2}^{[m]}=O(e^{-\pi\sqrt{2m}})\).
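The \(2\times 2\) linear matrix inequality representation of \(f_{t}\) given earlier in this introduction can be checked directly: for \(1+t(x-1)>0\), the Schur complement shows that the matrix is positive semidefinite exactly when \(\tau\geq f_{t}(x)\). A small self-contained sketch (using the fact that a symmetric \(2\times 2\) matrix is PSD iff its diagonal entries and its determinant are nonnegative):

```python
def is_psd_2x2(a, b, c):
    # PSD test for the symmetric matrix [[a, b], [b, c]].
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def f_t(x, t):
    return (x - 1.0) ** 2 / (1.0 + t * (x - 1.0))

# The LMI [[1+t(x-1), x-1], [x-1, tau]] >= 0 holds iff tau >= f_t(x):
for x in (0.3, 0.9, 1.7, 5.0):
    for t in (0.0, 0.25, 0.5, 1.0):
        for tau in (0.0, 0.1, 1.0, 10.0):
            lmi = is_psd_2x2(1.0 + t * (x - 1.0), x - 1.0, tau)
            assert lmi == (f_t(x, t) <= tau)
```

This scalar equivalence is the building block; the extensions to matrix arguments and to sums \(\sum_i u_i f_{t_i}\) are what make the semidefinite formulations of this paper possible.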
**Organization.** Section 2 covers preliminaries concerning Gaussian quadrature and best rational approximations. Section 3 deals with Gaussian quadrature approximations for operator monotone and convex functions, and Section 4 deals with best rational approximants. Finally, Section 5 contains numerical illustrations of the results.

## 2 Preliminaries

In this section, we review some important material concerning Gaussian quadrature, Pade approximants, and best rational approximation theory. Given nonnegative integers \(m_{1},m_{2}\), let \(\mathcal{R}_{m_{1},m_{2}}\) denote the set of rational functions \(r(x)=\frac{p(x)}{q(x)}\), where \(p\in\mathbb{R}_{m_{1}}[x]\), \(q\in\mathbb{R}_{m_{2}}[x]\) are polynomials with \(\deg p\leq m_{1}\) and \(\deg q\leq m_{2}\). We will sometimes call \(\mathcal{R}_{m_{1},m_{2}}\) the set of rational functions of order \([m_{1}/m_{2}]\).

### 2.1 Gauss quadrature and Pade approximants

Let \(\mu\) be a finite measure on \([0,1]\) which is not supported on a finite set of points. For each positive integer \(m\), there is a quadrature rule on \(m\) nodes (the \(m\)-node Gauss quadrature rule for \(\mu\)) such that for each \(k=0,1,\ldots,2m-1\), \[\int_{[0,1]}t^{k}\mathrm{d}\mu(t)=\sum_{i=1}^{m}u_{i}t_{i}^{k}. \tag{14}\] The \(m\) nodes \(t_{i}\) are precisely the roots of the degree-\(m\) orthogonal polynomial with respect to the measure \(\mu\). It will be convenient to use the notation \(\mu_{m}:=\sum_{i}u_{i}\delta_{t_{i}}\) for the \(m\)-node Gauss quadrature rule for \(\mu\). Alternatively, one can fix one or more of the nodes in advance, and choose the weights and remaining nodes such that (14) is satisfied for \(k\) as large as possible. This leads to quadrature rules such as the Gauss-Radau or Gauss-Lobatto rules defined next. The _Gauss-Radau_ quadrature rule for \(\mu\) fixes _either_ \(t_{1}=0\) or \(t_{m}=1\), and satisfies (14) for \(k=0,\ldots,2m-2\).
The interior nodes are the roots of the degree-\((m-1)\) orthogonal polynomial with respect to the modified measure whose density with respect to \(\mu\) is \(t\) or \(1-t\) (depending on whether the fixed node is \(0\) or \(1\)). We will write \(\mu_{m}^{0}\), \(\mu_{m}^{1}\) for the corresponding discrete measures. The _Gauss-Lobatto_ quadrature rule for \(\mu\) fixes _both_ \(t_{1}=0\) and \(t_{m}=1\), and satisfies (14) for \(k=0,\ldots,2m-3\). The interior nodes are the roots of the degree-\((m-2)\) orthogonal polynomial with respect to the modified measure whose density with respect to \(\mu\) is \(t(1-t)\). We will write \(\mu_{m}^{0,1}\) for the corresponding discrete measure. Given a function \(f\), smooth in a neighbourhood of \(1\), and nonnegative integers \(m_{1},m_{2}\), the _Pade approximant_ to \(f\) at \(1\) of order \([m_{1}/m_{2}]\) is the rational function \(r(x)=\frac{p(x)}{q(x)}\) which satisfies \[q(x)f(x)-p(x)=O((x-1)^{m_{1}+m_{2}+1})\qquad\text{as }x\to 1.\] With this definition, the Pade approximant of each order \([m_{1}/m_{2}]\) always exists and is unique. Usually, Pade approximants satisfy the slightly stronger condition \(r(x)-f(x)=O((x-1)^{m_{1}+m_{2}+1})\), but this is not always possible for certain functions \(f\).

### 2.2 Best rational approximations

Given an interval \(I\subseteq\mathbb{R}\), and a continuous function \(f:I\to\mathbb{R}\), define the best rational approximation error \[E_{m_{1},m_{2}}(f,I)=\inf_{r\in\mathcal{R}_{m_{1},m_{2}}}\;\sup_{x\in I}\;|r(x)- f(x)|.\] Bernstein [1] already showed that if \(I\) is bounded and if \(f\) can be analytically continued to an open interval strictly containing \(I\), then \(f\) can be approximated by _polynomials_ with a geometric rate of convergence, i.e. \[E_{m,0}(f,I)=O(\rho^{m})\quad\text{ for some }\rho\in(0,1).\] Unfortunately, the best polynomial approximants to functions with endpoint singularities can converge much more slowly.
For example [1], we have \(E_{m,0}(\sqrt{x},[0,1])=\Omega(m^{-1})\). On the other hand, rational approximations can be much better, as shown by Stahl in [11] (see also [12]): \[E_{m,m}(\sqrt{x},[0,1])\sim 8e^{-\pi\sqrt{2m}}.\] This root-exponential convergence is typical for functions admitting a special integral form, namely _Stieltjes transforms_ of well-behaved measures. Though distinct, these functions have a strong connection with operator monotone and convex functions, a connection that we exploit heavily in this paper. The following result is easily deduced from Theorems 1 and 2 in [13] concerning the best rational approximation of Markov-Stieltjes functions. Similar results are obtained in [1, 2, 1, 10]. **Theorem 6**.: _For \(\alpha,\beta>0\), let \(\phi:[-1,1]\to\mathbb{R}\) be a Borel-measurable function satisfying \(0\leq\phi(\lambda)\leq c(1-\lambda)^{\alpha}(1+\lambda)^{\beta}\) for some \(c>0\). Let \(G:[-1,1]\to\mathbb{R}\) be given by_ \[G(w)=\int_{-1}^{1}\frac{\phi(\lambda)}{1-\lambda w}\mathrm{d}\lambda. \tag{15}\] _Then, for some constant \(C>0\) and all \(m\geq 0\),_ \[E_{m,m}(G,[-1,1])\leq Ce^{-2\pi\sqrt{\kappa m}}\] _where \(\kappa:=\frac{\alpha\beta}{\alpha+\beta}\) is half the harmonic mean of \(\alpha\) and \(\beta\)._ Note that the function \(G\) of (15) has two singularities at \(w=-1\) and \(w=+1\); indeed its \(\lceil\alpha\rceil^{\text{th}}\) derivative blows up as \(w\to 1\), while its \(\lceil\beta\rceil^{\text{th}}\) derivative blows up as \(w\to-1\). Proof.: This result appears in the literature [13] when the function \(G(w)\) admits a single singularity at \(w=+1\), which corresponds to the case "\(\beta=+\infty\)" above. To deal with functions admitting two singularities we split the integral representation (15) into three terms \(G=G^{-}+G^{0}+G^{+}\) where \(G^{0}\) is analytic and \(G^{-}\) and \(G^{+}\) each have a single singularity at \(-1\) and \(+1\) respectively.
Applying existing results to each individual function yields the desired result. The details are worked out in Appendix A. A natural question is whether the best rational approximants to the function \(G\) in (15) can be obtained by discretizing the integral form. In fact, this is the case, as shown in the next theorem from [1]. **Theorem 7** (See [1, Theorem V.3.6]).: _Let \(G\) be a function of the form (15). Then for each \(m\in\mathbb{N}\), \(E_{m-1,m}(G;[-1,1])\) is attained by a rational function \(R_{m}\) which has the form_ \[R_{m}(w)=\sum_{i=1}^{m}\frac{a_{i}}{1-\lambda_{i}w},\] _for \(a_{i}\geq 0\) and \(\lambda_{i}\in(-1,1)\). Moreover \(R_{m}\) is unique._ It will later be convenient to express the above results in terms of functions defined on \((0,\infty)\). By applying the change of variables \(x=\frac{1+w}{1-w}\), which maps \(w\in(-1,1)\) to \(x\in(0,\infty)\), the function \(G(w)\) in (15) is mapped to \[g(x)=\int_{0}^{1}\frac{x+1}{1+t(x-1)}d\mu(t)\] where \(\frac{d\mu(t)}{dt}\leq ct^{\alpha}(1-t)^{\beta}\). The integral above is closely related to the integral representation of operator monotone and operator convex functions we saw earlier; more precisely, we see that \(\frac{x-1}{x+1}g(x)\) has exactly the form (1), and \(\frac{(x-1)^{2}}{x+1}g(x)\) has exactly the form (2). This allows us to state the following corollary concerning quadrature approximations of certain operator convex functions. **Corollary 1**.: _Let \(f:(0,\infty)\to\mathbb{R}\) be an operator convex function with \(f(1)=f^{\prime}(1)=0\) that admits an integral representation \(f(x)=\int_{0}^{1}\frac{(x-1)^{2}}{1+t(x-1)}\psi(t)dt\), where the density \(\psi\) satisfies \(0\leq\psi(t)\leq ct^{\alpha}(1-t)^{\beta}\) for some \(c,\alpha,\beta>0\).
Then there is a constant \(C\), and for each \(m\in\mathbb{N}\) there are weights \(u_{i}\geq 0\) and nodes \(t_{i}\in(0,1)\), such that_ \[\left|f(x)-\sum_{i=1}^{m}u_{i}\frac{(x-1)^{2}}{1+t_{i}(x-1)}\right|\leq Ce^{-2\pi\sqrt{\frac{\alpha\beta m}{\alpha+\beta}}}\frac{(x-1)^{2}}{x+1}.\] Proof of Corollary 1.: The proof is a direct consequence of Theorems 6 and 7, via a change of variables. Indeed, note that \[\frac{x+1}{(x-1)^{2}}f(x)=\int_{0}^{1}\frac{x+1}{1+t(x-1)}\psi(t)dt=\int_{-1}^ {1}\frac{1}{1-\lambda\frac{x-1}{x+1}}\phi(\lambda)d\lambda=G((x-1)/(x+1))\] where \(G(w):=\int_{-1}^{1}\frac{1}{1-\lambda w}\phi(\lambda)d\lambda\), and \(\phi(\lambda):=\psi((1-\lambda)/2)\leq c^{\prime}(1-\lambda)^{\alpha}(1+ \lambda)^{\beta}\). By Theorems 6 and 7 we know that the best order \([m-1/m]\) rational approximants to \(G(w)\) have the form \(R_{m}(w)=\sum_{i=1}^{m}\frac{a_{i}}{1-\lambda_{i}w}\), and satisfy \[\sup_{w\in[-1,1]}|G-R_{m}|=E_{m-1,m}(G,[-1,1])\leq E_{m-1,m-1}(G,[-1,1])\leq C^{\prime}e^{-2\pi \sqrt{\kappa(m-1)}}\leq Ce^{-2\pi\sqrt{\kappa m}}\] for all \(m\geq 1\), where \(\kappa=\frac{\alpha\beta}{\alpha+\beta}\). It follows that, defining \(r_{m}(x)=R_{m}((x-1)/(x+1))\), we get \[\sup_{x\in(0,\infty)}\left|\frac{x+1}{(x-1)^{2}}f(x)-r_{m}(x)\right|=\sup_{w \in(-1,1)}|G(w)-R_{m}(w)|\leq Ce^{-2\pi\sqrt{\kappa m}}.\] Note that \(r_{m}(x)=\sum_{i=1}^{m}\frac{a_{i}}{1-\lambda_{i}(\frac{x-1}{x+1})}=\sum_{i=1} ^{m}u_{i}\frac{x+1}{1+t_{i}(x-1)}\), where \(t_{i}=(1-\lambda_{i})/2\) and \(u_{i}=a_{i}/2\).

**Weighted approximations.** In this paper we will mostly deal with best rational approximations _relative_ to a nonnegative weight function \(b:I\to\mathbb{R}_{\geq 0}\).
We define the relative approximation error by \[E_{m_{1},m_{2}}(f,I;b)=\inf_{r\in\mathcal{R}_{m_{1},m_{2}}}\inf\left\{ \epsilon\;:\;|r(x)-f(x)|\leq\epsilon\,b(x)\;\forall x\in I\right\}.\] Note that Corollary 1 already says that for operator convex \(f:(0,\infty)\to\mathbb{R}\) such that \(f(1)=f^{\prime}(1)=0\) whose representing measure satisfies the bound \(d\mu(t)/dt=\psi(t)\leq ct^{\alpha}(1-t)^{\beta}\), \[E_{m+1,m}\left(f,(0,\infty);\frac{(x-1)^{2}}{x+1}\right)\leq Ce^{-2\pi\sqrt{ \alpha\beta m/(\alpha+\beta)}}.\]

## 3 Best local approximants

In this section we prove Theorem 8, the analogue of Theorem 1 for operator convex functions. The proof of Theorem 1 itself is very similar, so we omit it. **Theorem 8**.: _Let \(\mu\) be a finite Borel measure on \([0,1]\), and let \(f_{\mu}(x)=\int_{0}^{1}\frac{(x-1)^{2}}{1+t(x-1)}\mathrm{d}\mu(t)\) be the corresponding operator convex function satisfying \(f_{\mu}(1)=f^{\prime}_{\mu}(1)=0\). Let \(\mu_{m}\) and \(\mu_{m}^{0,1}\) be the discrete measures associated to the \(m\)-node Gauss and Gauss-Lobatto quadrature rules for \(\mu\), respectively. Let \(f_{\mu_{m}}\) and \(f_{\mu_{m}^{0,1}}\) be the rational operator convex approximations to \(f_{\mu}\) arising from \(\mu_{m}\) and \(\mu_{m}^{0,1}\). Then_ 1. _For_ \(m\geq 2\)_,_ \[f_{\mu_{m}}(x)\leq f_{\mu_{m+1}}(x)\leq f_{\mu}(x)\leq f_{\mu_{m+1}^{0,1}}(x) \leq f_{\mu_{m}^{0,1}}(x)\] 2. _Locally around_ \(x=1\)_, we have_ \[f_{\mu_{m}}(x)-f_{\mu}(x) =O((x-1)^{2m+2})\] \[f_{\mu_{m}^{0,1}}(x)-f_{\mu}(x) =O((x-1)^{2m})\] 3. _For any_ \(x>0\)_,_ \(f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x)\leq\mu_{m+1}^{0,1}(\{0\})(x-1)^{2}+\mu_{ m+1}^{0,1}(\{1\})\frac{(x-1)^{2}}{x}\)_._ Proof.: We will first prove 2, then 1, then 3. 2.
For \(x\in(0,2)\) we can write \[f_{\mu}(x)-f_{\mu_{m}}(x) =\int\frac{(x-1)^{2}}{1+t(x-1)}\mathrm{d}\mu(t)-\int\frac{(x-1)^{ 2}}{1+t(x-1)}\mathrm{d}\mu_{m}(t)\] \[=(x-1)^{2}\int\sum_{k=0}^{\infty}t^{k}(1-x)^{k}\mathrm{d}\mu(t)-( x-1)^{2}\int\sum_{k=0}^{\infty}t^{k}(1-x)^{k}\mathrm{d}\mu_{m}(t)\quad[\text{ since }|x-1|<1]\] \[=(x-1)^{2m+2}\sum_{k=0}^{\infty}(1-x)^{k}\left[\int t^{k+2m} \mathrm{d}\mu(t)-\int t^{k+2m}\mathrm{d}\mu_{m}(t)\right]\qquad\qquad[\text{ using (14)}]\] (16) \[=O((x-1)^{2m+2}).\] Similarly, \[f_{\mu}(x)-f_{\mu_{m}^{0,1}}(x) =(x-1)^{2m}\sum_{k=0}^{\infty}(1-x)^{k}\left[\int t^{k+2m-2} \mathrm{d}\mu(t)-\int t^{k+2m-2}\mathrm{d}\mu_{m}^{0,1}(t)\right]\] (17) \[=O((x-1)^{2m}),\] since the \(m\)-node Gauss-Lobatto quadrature is exact for polynomials up to degree \(2m-3\). 1. We will prove that \(f_{\mu_{m+1}}(x)\geq f_{\mu_{m}}(x)\) for each \(m\) and \(x>0\). Since \(f_{\mu_{m}}\) converges to \(f_{\mu}\) pointwise, this also proves that \(f_{\mu_{m}}(x)\leq f_{\mu}(x)\). Let \(0<s_{1}^{m}<\cdots<s_{m}^{m}<1\) be the nodes of \(\mu_{m}\) and let \(0<s_{1}^{m+1}<\cdots<s_{m+1}^{m+1}<1\) be the nodes of \(\mu_{m+1}\). Then \[f_{\mu_{m+1}}(x)-f_{\mu_{m}}(x)=\frac{p(x)(x-1)^{2}}{\prod_{i=1}^{m}[1+s_{i}^{ m}(x-1)]\prod_{j=1}^{m+1}[1+s_{j}^{m+1}(x-1)]}\] for some polynomial \(p\) of degree at most \(2m\). On the other hand, using (16) twice, we have \[f_{\mu_{m+1}}(x)-f_{\mu_{m}}(x)=\left[\int t^{2m}\mathrm{d}\mu(t)-\int t^{2m} \mathrm{d}\mu_{m}(t)\right](x-1)^{2m+2}+O((x-1)^{2m+3}).\] It follows that \(p(x)=c(x-1)^{2m}\), where \(c:=\int t^{2m}\mathrm{d}\mu(t)-\int t^{2m}\mathrm{d}\mu_{m}(t)\). For any smooth function \(f\), the residual \(\int f(t)\mathrm{d}\mu(t)-\int f(t)\mathrm{d}\mu_{m}(t)\) has the same sign as \(f^{(2m)}(\eta)\), for some \(\eta\in(0,1)\) [11, Eq. 8.4.10]. In our case, \(f^{(2m)}(\eta)=(2m)!>0\) for any \(\eta\), hence \(c>0\). Therefore, \(f_{\mu_{m+1}}(x)\geq f_{\mu_{m}}(x)\) for each \(m\) and \(x>0\).
Now, let \(0<t_{1}^{m}<\cdots<t_{m-2}^{m}<1\) be the internal nodes of \(\mu_{m}^{0,1}\) and let \(0<t_{1}^{m+1}<\cdots<t_{m-1}^{m+1}<1\) be the internal nodes of \(\mu_{m+1}^{0,1}\). Then \[f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}^{0,1}}(x)=\frac{p(x)(x-1)^{2}}{x\prod_{i=1}^{ m-2}[1+t_{i}^{m}(x-1)]\prod_{j=1}^{m-1}[1+t_{j}^{m+1}(x-1)]}\] for some polynomial \(p\) of degree at most \(2m-2\). From (17), we deduce that \(p(x)=\bar{c}\,(x-1)^{2m-2}\), where \(\bar{c}:=\int t^{2m-2}\mathrm{d}\mu(t)-\int t^{2m-2}\mathrm{d}\mu_{m}^{0,1}(t)\). In contrast with Gaussian quadrature, for Gauss-_Lobatto_ quadrature the residual \(\int f(t)\mathrm{d}\mu(t)-\int f(t)\mathrm{d}\mu_{m}^{0,1}(t)\) of a smooth function \(f\) has the _opposite_ sign to \(f^{(2m-2)}(\eta)\), for some \(\eta\in(0,1)\) [11, Eq. 8.10.22]. Hence, \(\bar{c}<0\). Therefore \(f_{\mu_{2}^{0,1}}(x)\geq f_{\mu_{3}^{0,1}}(x)\geq\cdots\geq f_{\mu}(x)\) for all \(x>0\). 3. Let \(0<s_{1}<\cdots<s_{m}<1\) be the nodes of \(\mu_{m}\), and let \(0<t_{1}<\cdots<t_{m-1}<1\) be the interior nodes of \(\mu_{m+1}^{0,1}\). Also, write \(u_{0}=\mu_{m+1}^{0,1}(\{0\})\) and \(u_{1}=\mu_{m+1}^{0,1}(\{1\})\). We can write \[f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x) =(x-1)^{2}\left[u_{0}+\frac{u_{1}}{x}+\frac{p(x)}{\prod_{i=1}^{m- 1}[1+t_{i}(x-1)]\prod_{j=1}^{m}[1+s_{j}(x-1)]}\right]\] \[=(x-1)^{2}\left[\frac{u_{0}xQ(x)+u_{1}Q(x)+xp(x)}{xQ(x)}\right],\] (18) where \(p\) is a polynomial of degree \(2m-2\) and \(Q(x)\equiv\prod_{i=1}^{m-1}[1+t_{i}(x-1)]\prod_{j=1}^{m}[1+s_{j}(x-1)]\). By item 2, \(f_{\mu_{m+1}^{0,1}}(x)-f_{\mu}(x)=O((x-1)^{2m+2})\) and \(f_{\mu_{m}}(x)-f_{\mu}(x)=O((x-1)^{2m+2})\), so \(f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x)=O((x-1)^{2m+2})\) as \(x\to 1\). Therefore, \[f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x)=\frac{c(x-1)^{2m+2}}{xQ(x)},\] for some \(c\geq 0\). Note that, for \(x\geq 1\), the function \(x\mapsto\frac{x-1}{1+t(x-1)}\) is increasing for every \(t\in[0,1]\).
Therefore, the product \(\frac{c(x-1)^{2m}}{xQ(x)}\) is increasing on the semi-infinite interval \(x\geq 1\). By considering the term of leading order in (18), we see that \(\frac{c(x-1)^{2m}}{xQ(x)}\to u_{0}\) as \(x\rightarrow\infty\). Therefore, for every \(x\geq 1\), we have \[f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x)\leq u_{0}(x-1)^{2}.\] On the other hand, \(Q(x)\) is increasing in \(x\), so for every \(x\in(0,1]\), \(\frac{c(x-1)^{2m+2}}{xQ(x)}\leq\frac{c(x-1)^{2m+2}}{xQ(0)}\leq\frac{c(x-1)^{2} }{xQ(0)}\). Again comparing with (18), we see that \(\frac{c}{Q(0)}=u_{1}\), so for every \(x\in(0,1]\), we have \[f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x)\leq u_{1}\frac{(x-1)^{2}}{x}.\] It follows that for any \(x>0\), \[f_{\mu_{m+1}^{0,1}}(x)-f_{\mu_{m}}(x)\leq(x-1)^{2}\max\{u_{0},\frac{u_{1}}{x} \}\leq u_{0}(x-1)^{2}+u_{1}\frac{(x-1)^{2}}{x}.\] **Remark 1**.: \(f_{\mu_{m}}(x)\) is a rational function of order \([m+1/m]\). Combining this with the second part of the theorem, \(f_{\mu_{m}}(x)\) is the order \([m+1/m]\) Pade approximant to \(f_{\mu}(x)\) at \(x=1\). Also, \(f_{\mu_{m}^{0,1}}(x)\) is a rational function of order \([m+1/m-1]\) with a simple pole at \(x=0\). Therefore, \(xf_{\mu_{m}^{0,1}}(x)\) is a rational function of order \([m+1/m-2]\). Multiplying by \(x\), we have \(xf_{\mu_{m}^{0,1}}(x)-xf_{\mu}(x)=O((x-1)^{2m})\) as \(x\to 1\). This shows that \(xf_{\mu_{m}^{0,1}}(x)\) is the order \([m+1/m-2]\) Pade approximant to \(xf_{\mu}(x)\) at \(x=1\). **Remark 2**.: Some functions such as \(x\mapsto\log x\) or \(x\mapsto x^{\alpha}\) for \(\alpha\in(0,1)\) are both operator monotone, and operator concave. One can thus apply either Theorem 1 or Theorem 8 to obtain rational approximations. In general one obtains _different_ rational approximations. However the upper approximations from Theorem 1 (based on Gauss-Radau with fixed node \(t=0\)) and Theorem 8 (based on Gauss quadrature) coincide, since they are the \([m+1/m]\) Pade approximants to these functions. 
Note that, since \(x\mapsto\log x\) and \(x\mapsto x^{\alpha}\) for \(\alpha\in(0,1)\) are operator _concave_ (not convex), the Gauss quadrature approximant from Theorem 8 is indeed an _upper_ approximant. The lower approximations from these theorems are different, however. **Remark 3**.: In practice, the \(m\)-node Gauss, Radau, and Lobatto quadrature rules can be obtained by solving an eigenvalue problem [11, 12]. The matrix defining the eigenvalue problem has entries related to the measure \(\mu\) (they are the coefficients of the recurrence relation satisfied by the orthogonal polynomials with respect to \(\mu\)).

### 3.1 The special case of \(h(x)=\frac{x^{\alpha}-1}{\alpha(1-\alpha)}\)

In this section we consider the particular functions \[h_{\alpha}(x)=\frac{x^{\alpha}-1}{\alpha(1-\alpha)}\qquad\Big{[}=\frac{x-1}{ 1-\alpha}-f_{\alpha}(x)\Big{]}\] which are operator monotone for all \(\alpha\in(-1,1)\). Note that \(h_{\alpha}(x)\to\log x\) as \(\alpha\to 0\). Since \(h_{\alpha}\) is operator monotone, it has an integral representation which is explicitly given by \[h_{\alpha}(x)=\int_{0}^{1}\frac{x-1}{1+t(x-1)}d\nu_{\alpha}(t) \tag{19}\] where \(d\nu_{\alpha}(t)=\frac{\sin(\alpha\pi)}{\alpha(1-\alpha)\pi}t^{-\alpha}(1-t)^ {\alpha}dt\) (with \(d\nu_{0}(t)=dt\)). The next theorem makes the convergence bound (8) explicit in terms of \(m\) and \(\alpha\). **Theorem 9**.: _Let \(\alpha\in(-1,1)\), and let \(\nu^{0}_{\alpha,m}\) and \(\nu^{1}_{\alpha,m}\) be respectively the \(m\)-point Gauss-Radau discrete measures obtained from the integral representation (19). Then_ \[\nu^{0}_{\alpha,m}(\{0\}) =\frac{\Gamma(1-\alpha)\Gamma(m+\alpha)}{\Gamma(1+\alpha)\Gamma( m+1-\alpha)\,m}\sim\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)\,m^{2(1-\alpha)}}\] \[\nu^{1}_{\alpha,m}(\{1\}) =\nu^{0}_{-\alpha,m}(\{0\})\sim\frac{\Gamma(1+\alpha)}{\Gamma(1 -\alpha)\,m^{2(1+\alpha)}}.\] _When \(\alpha=0\), the asymptotic equalities are exact, i.e.
\(\nu^{0}_{\alpha,m}(\{0\})=\nu^{1}_{\alpha,m}(\{1\})=m^{-2}\)._ Proof.: This is an application of [12, Eq. 3.10] where it is shown that for the measure on \([-1,1]\) with density \((1-\lambda)^{\alpha}(1+\lambda)^{\beta}\), the \(m\)-point Gauss-Radau quadrature rule has weight \[u^{\alpha,\beta}_{m}:=\frac{2^{\alpha+\beta+1}\Gamma(1+\beta)\Gamma(2+\beta) \Gamma(m+\alpha)\Gamma(m)}{\Gamma(m+\beta+1)\Gamma(m+\alpha+\beta+1)}\] at the endpoint \(\lambda=-1\). Note that the pushforward of \(\nu_{\alpha}\) by \(\lambda(t)=2t-1\) has density \(\frac{\sin(\alpha\pi)}{2\alpha(1-\alpha)\pi}(1-\lambda)^{\alpha}(1+\lambda)^{-\alpha}\). Therefore, \[\nu^{0}_{\alpha,m}(\{0\}) =\frac{\sin(\alpha\pi)}{2\alpha(1-\alpha)\pi}\cdot u^{\alpha,- \alpha}_{m}\] \[=\frac{\sin(\alpha\pi)}{\alpha(1-\alpha)\pi}\cdot\frac{\Gamma(1- \alpha)\Gamma(2-\alpha)\Gamma(m+\alpha)\Gamma(m)}{\Gamma(m-\alpha+1)\Gamma(m +1)}\] \[=\frac{1}{(1-\alpha)\Gamma(1-\alpha)\Gamma(1+\alpha)}\cdot\frac{ \Gamma(1-\alpha)\Gamma(2-\alpha)\Gamma(m+\alpha)}{\Gamma(m-\alpha+1)m}\] \[=\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)}\cdot\frac{\Gamma(m+ \alpha)}{\Gamma(m-\alpha+1)m}.\] By Stirling's formula, \(\Gamma(x+\gamma)\sim\Gamma(x)x^{\gamma}\) as \(x\to\infty\), so we obtain the asymptote \[\nu^{0}_{\alpha,m}(\{0\})\sim\frac{\Gamma(1-\alpha)}{\Gamma(1+\alpha)\,m^{2(1- \alpha)}}\] as \(m\to\infty\). Finally, since \(\nu_{\alpha}\) has density proportional to \(t^{-\alpha}(1-t)^{\alpha}\), the pushforward of \(\nu_{\alpha}\) by \(t\mapsto 1-t\) is \(\nu_{-\alpha}\). It follows that \[\nu^{1}_{\alpha,m}(\{1\})=\nu^{0}_{-\alpha,m}(\{0\})\sim\frac{\Gamma(1+\alpha) }{\Gamma(1-\alpha)\,m^{2(1+\alpha)}}.\]

## 4 Best global approximants of \(\alpha\)-divergences

In this section we study best global rational approximants. We focus in particular on the functions \[f_{\alpha}(x)=\frac{x^{\alpha}-\alpha(x-1)-1}{\alpha(\alpha-1)} \tag{20}\] for \(\alpha\in[-1,2]\), which generate the so-called \(\alpha\)-divergences.
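For a few values of \(\alpha\), definition (20) collapses to simple closed forms that serve as convenient sanity checks: \(f_{-1}(x)=\frac{(x-1)^{2}}{2x}\), \(f_{1/2}(x)=2(\sqrt{x}-1)^{2}\), and \(f_{2}(x)=\frac{(x-1)^{2}}{2}\). A quick numerical check of these identities (the helper name is ours):

```python
import math

def f_alpha_direct(x, a):
    # Direct evaluation of (20); valid for a not in {0, 1}.
    return (x**a - a * (x - 1.0) - 1.0) / (a * (a - 1.0))

for x in (0.2, 0.7, 1.5, 4.0):
    assert abs(f_alpha_direct(x, -1) - (x - 1.0) ** 2 / (2.0 * x)) < 1e-12
    assert abs(f_alpha_direct(x, 0.5) - 2.0 * (math.sqrt(x) - 1.0) ** 2) < 1e-12
    assert abs(f_alpha_direct(x, 2) - (x - 1.0) ** 2 / 2.0) < 1e-12
```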
We note that \(f_{\alpha}\) is operator convex on \((0,\infty)\), and that \(f_{\alpha}(x)\geq 0\) for all \(x>0\) with \(f_{\alpha}(1)=f_{\alpha}^{\prime}(1)=0\) and \(f_{\alpha}^{\prime\prime}(1)=1\). As such, \(f_{\alpha}\) has an integral representation of the form \[f_{\alpha}(x)=\int_{0}^{1}\frac{(x-1)^{2}}{1+t(x-1)}d\mu_{\alpha}(t)\] with \[\frac{d\mu_{\alpha}}{dt}=\frac{\sin[(\alpha-1)\pi]}{\alpha(\alpha-1)\pi}t^{1- \alpha}(1-t)^{\alpha}. \tag{21}\] Note that for \(\alpha\in(0,1)\), the asymptotes of \(f_{\alpha}\) as \(x\to 0^{+}\) and as \(x\to\infty\) are integral powers of \(x\). Indeed, we have \(f_{\alpha}(x)\to\frac{1}{\alpha}\) as \(x\to 0^{+}\) and \(f_{\alpha}(x)\sim\frac{x}{1-\alpha}\) as \(x\to\infty\). This is not the case for \(\alpha\in(-1,0]\cup[1,2)\). As such, the approximation results we have for \(f_{\alpha}\) will differ depending on whether \(\alpha\in(0,1)\) or not. Recall from Section 2 that \(E_{m_{1},m_{2}}(f,I)\) is the smallest error in approximating \(f\) by a \([m_{1}/m_{2}]\) rational function on \(I\subset\mathbb{R}\), and that when \(b\geq 0\) on \(I\), \(E_{m_{1},m_{2}}(f,I;b)\) is the smallest error relative to \(b\) in approximating \(f\) by a \([m_{1}/m_{2}]\) rational function on \(I\). First we show that it is not possible to find a uniform rational approximation to \(f_{\alpha}\) on the infinite interval \((0,\infty)\) for any \(\alpha\in(-1,2)\). **Theorem 10**.: _For any \(\alpha\in(-1,2)\), \(E_{m_{1},m_{2}}(f_{\alpha},(0,\infty))=\infty\) for any \(m_{1},m_{2}\)._ Proof.: For \(\alpha\notin\{0,1\}\), it suffices to show that \(E_{m_{1},m_{2}}(x^{\alpha},(0,\infty))=\infty\). Let \(r(x)\) be a nonzero rational function, and observe that there is a number \(c\neq 0\) and an integer \(k\) such that \(r(x)\sim cx^{k}\) as \(x\to\infty\). There is also a number \(\bar{c}\neq 0\) and an integer \(\bar{k}\) such that \(r(x)\sim\bar{c}x^{\bar{k}}\) as \(x\to 0^{+}\).
We have \[x^{\alpha}-r(x)\sim\begin{cases}x^{\alpha}(1-cx^{k-\alpha})&x\to\infty\\ x^{\alpha}(1-\bar{c}x^{\bar{k}-\alpha})&x\to 0^{+}.\end{cases}\] If \(\alpha\in(-1,0)\), then \(x^{\alpha}\to\infty\) as \(x\to 0^{+}\), so \(x^{\alpha}-r(x)\to\infty\) unless \(\bar{c}x^{\bar{k}-\alpha}\to 1\). This is impossible, since \(\bar{k}\neq\alpha\). If \(\alpha\in(0,1)\cup(1,2)\), then \(x^{\alpha}\to\infty\) as \(x\to\infty\), so \(x^{\alpha}-r(x)\to\infty\) unless \(cx^{k-\alpha}\to 1\). This is impossible, since \(k\neq\alpha\). Finally, if \(\alpha\in\{0,1\}\) essentially the same argument works, with \(x^{\alpha}\) replaced by \(\log x\) or \(x\log x\). We now turn our attention to rational approximations _relative_ to a weight function \(b\). Our first theorem shows that under some mild conditions on \(f\) and \(b\), the best rational approximant is of quadrature type, i.e., can be obtained by discretizing the integral representation of \(f\). The next theorem is general, and is not restricted to the functions \(f_{\alpha}\). **Theorem 11**.: _Let \(f:(0,\infty)\to\mathbb{R}\) be operator convex with \(f(1)=f^{\prime}(1)=0\). Let \(b:(0,\infty)\to\mathbb{R}\) be continuous, and positive except at \(x=1\). Assume that_ \[\lim_{x\to 1}b(x)/(x-1)^{2}>0, \tag{22}\] _that the limits_ \[\lim_{x\to 0^{+}}f(x)/b(x),\quad\lim_{x\to\infty}f(x)/b(x) \tag{23}\] \[\lim_{x\to 0^{+}}1/b(x),\quad\lim_{x\to\infty}x/b(x) \tag{24}\] _also exist and are finite, and that_ \[\lim_{x\to 0^{+}}xb(x)=0,\quad\lim_{x\to\infty}b(x)/x^{2}=0. \tag{25}\] _A best order \([m+1/m]\) rational approximation to \(f\) relative to \(b\) exists. Moreover, if \(\tilde{f}\) is such a best approximation, then it has the form_ \[\tilde{f}(x)=\sum_{i=1}^{m}\frac{u_{i}(x-1)^{2}}{1+t_{i}(x-1)} \tag{26}\] _for weights \(u_{i}\geq 0\) and nodes \(0\leq t_{1}<\dots<t_{m}\leq 1\)._ Proof.: See Appendix B.
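As a concrete illustration (our addition, not part of the original text), the integral representation can be verified numerically for \(\alpha=\tfrac{1}{2}\), where \(f_{1/2}(x)=2(\sqrt{x}-1)^{2}\) in closed form and, with the measure normalized so that \(\mu_{1/2}([0,1])=f_{1/2}^{\prime\prime}(1)/2=\tfrac{1}{2}\), the density works out to \(\tfrac{4}{\pi}\sqrt{t(1-t)}\).

```python
from math import sqrt, pi

def f_half(x):
    # f_{1/2}(x) = (sqrt(x) - (x-1)/2 - 1) / ((1/2)(1/2 - 1)) = 2*(sqrt(x)-1)^2
    return 2.0 * (sqrt(x) - 1.0) ** 2

def f_half_integral(x, n=200_000):
    # Midpoint rule for int_0^1 (x-1)^2/(1+t(x-1)) dmu(t),
    # with dmu/dt = (4/pi)*sqrt(t*(1-t)), so that mu([0,1]) = 1/2.
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += sqrt(t * (1.0 - t)) / (1.0 + t * (x - 1.0))
    return (4.0 / pi) * (x - 1.0) ** 2 * total * h

for x in (0.25, 2.0, 10.0):
    print(x, f_half(x), f_half_integral(x))  # the two columns agree
```

For instance, at \(x=4\) both expressions evaluate to \(2\), matching the exact value \(f_{1/2}(4)=2(\sqrt{4}-1)^{2}\).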
Best rational approximants of \(\alpha\)-divergences. Recall from (10) that \(\epsilon_{\alpha,\beta}^{[m]}\) is the error of the best approximation of quadrature type to \(f_{\alpha}\) relative to the weight function \(f_{\beta}\), i.e. \[\epsilon_{\alpha,\beta}^{[m]}:=\inf_{\begin{subarray}{c}0\leq t_{1}<\dots<t_ {m}\leq 1\\ u_{i}\geq 0,\,i=1,\dots,m\end{subarray}}\left\{\sup_{x>0}\left|\frac{f_{\alpha }(x)-\tilde{f}(x)}{f_{\beta}(x)}\right|\;:\;\tilde{f}(x)=\sum_{i=1}^{m}\frac{ u_{i}(x-1)^{2}}{1+t_{i}(x-1)}\right\}.\] As a direct corollary to Theorem 14, we have **Corollary 2**.: _Suppose that either \(\alpha,\beta\in(0,1)\), or \(1\leq\alpha<\beta<2\), or \(-1<\beta<\alpha\leq 0\). Then_ \[\epsilon_{\alpha,\beta}^{[m]}=E_{m+1,m}(f_{\alpha},(0,\infty);f_{\beta}).\] Our main theorems in this section concern the rate of decay of \(\epsilon_{\alpha,\beta}^{[m]}\). Our first theorem deals with the case \(\alpha\in(0,1)\). The theorem will be proved in Section 4.1. **Theorem 3**.: _For each \(\alpha\in(0,1)\), there is a constant \(C_{\alpha}>0\) such that_ \[\epsilon_{\alpha,\alpha}^{[m]}\leq C_{\alpha}e^{-2\pi\sqrt{\alpha(1-\alpha)m}}\] _for all \(m\in\mathbb{N}\)._ For \(\alpha\in(-1,0]\cup[1,2)\), it turns out that \(f_{\alpha}\) cannot be well approximated by rational functions \(\tilde{f}\) in relative error: **Theorem 12**.: _For \(\alpha\in(-1,0]\cup[1,2)\), we have \(\epsilon_{\alpha,\alpha}^{[m]}=1\)._ Proof.: First consider \(\alpha\in(-1,0]\), and consider a rational approximation \(\tilde{f}\) defined by nodes \(0\leq t_{1}<\dots<t_{m}\leq 1\) and weights \(u_{i}\geq 0\). If \(t_{m}=1\) and \(u_{m}>0\), we have \(\lim_{x\to 0^{+}}x\tilde{f}(x)=u_{m}>0\), and since \(\lim_{x\to 0^{+}}xf_{\alpha}(x)=0\), \(\sup_{x>0}\left|\frac{\tilde{f}(x)}{f_{\alpha}(x)}-1\right|=\infty\).
Otherwise (if \(t_{m}<1\) or \(u_{m}=0\)), then \(\lim_{x\to 0^{+}}\tilde{f}(x)=\sum_{i}\frac{u_{i}}{1-t_{i}}<\infty\), and since \(\lim_{x\to 0^{+}}f_{\alpha}(x)=\infty\), we have \(\sup_{x>0}\left|\frac{\tilde{f}(x)}{f_{\alpha}(x)}-1\right|\geq 1\). By setting all the weights to zero (so that \(\tilde{f}(x)\equiv 0\)), we can always obtain \(\sup_{x>0}\left|\frac{\tilde{f}(x)}{f_{\alpha}(x)}-1\right|=1\), so \(\epsilon_{\alpha,\alpha}^{[m]}=1\). For \(\alpha\in[1,2)\), analogous considerations of the behaviour of \(\tilde{f}(x)\) and \(f_{\alpha}(x)\) as \(x\to\infty\) show that \(\epsilon_{\alpha,\alpha}^{[m]}=1\) in this case as well. This suggests considering approximations which minimise the error \(\epsilon_{\alpha,\beta}^{[m]}\) for \(\beta\neq\alpha\). The following will be proved in Section 4.2 by explicitly constructing (suboptimal) quadrature rules. **Theorem 4**.: _For \(1\leq\alpha<\beta\leq 2\), there is a constant \(C_{\alpha,\beta}>0\) such that_ \[\epsilon_{\alpha,\beta}^{[m]}\leq C_{\alpha,\beta}\,m^{3/2}e^{-\pi\sqrt{2 \alpha(\beta-\alpha)m/\beta}} \tag{11}\] _for all \(m\geq 1\). For \(-1\leq\beta<\alpha\leq 0\), we have_ \[\epsilon_{\alpha,\beta}^{[m]}\leq C_{1-\alpha,1-\beta}\,m^{3/2}e^{-\pi\sqrt{2 (1-\alpha)(\alpha-\beta)m/(1-\beta)}}. \tag{12}\] Numerical experiments (see Figure 1, right) suggest that the rate of root exponential convergence in Theorem 4 is too pessimistic by a factor of \(\sqrt{2}\). Further evidence of the suboptimality of this result is that, when modified to provide a bound on \(\epsilon_{\alpha,\alpha}^{[m]}\) for \(\alpha\in(0,1)\), the technique used to prove Theorem 4 yields only a bound of the form \[\epsilon_{\alpha,\alpha}^{[m]}\leq C_{\alpha}\,m^{3/2}e^{-\pi\sqrt{2\alpha(1- \alpha)m}},\] but we know from Theorem 3 that the correct behaviour is \(e^{-2\pi\sqrt{\alpha(1-\alpha)m}}\).
Therefore we conjecture the following **Conjecture 1**.: _In Theorem 4, the right-hand-side of (11) can be replaced by_ \[C_{\alpha,\beta}e^{-2\pi\sqrt{\alpha(\beta-\alpha)m/\beta}},\] _and the right-hand side of (12) can be replaced by_ \[C_{1-\alpha,1-\beta}e^{-2\pi\sqrt{(1-\alpha)(\alpha-\beta)m/(1-\beta)}}.\] We note that to prove the conjecture above, it would be sufficient to prove the following result: **Conjecture 2**.: _Let \(-1\leq\beta<\alpha<0\). There is a constant \(C\) such that_ \[E_{m,m}(x^{\alpha},(0,1];x^{\beta})\leq C\,e^{-2\pi\sqrt{(\alpha-\beta)m}}.\] The above can be seen as an extension to negative powers \(\alpha\) of the following famous estimate in approximation theory [10]: \[\forall\alpha>0,\quad E_{m,m}(x^{\alpha},[0,1])\sim 4^{1+\alpha}|\sin[\alpha \pi]|\,e^{-2\pi\sqrt{\alpha m}}.\] ### Proof of Theorem 3 (case \(\alpha\in(0,1)\)) Since \(\alpha,1-\alpha>0\), we can readily apply Corollary 1 which says that for any \(m\in\mathbb{N}\) we can find weights \(u_{i}\geq 0\) and nodes \(t_{i}\in(0,1)\) such that \[\left|f_{\alpha}(x)-\sum_{i=1}^{m}\frac{u_{i}\,(x-1)^{2}}{1+t_{i}(x-1)}\right| \leq Ce^{-2\pi\sqrt{\alpha(1-\alpha)m}}\cdot\frac{(x-1)^{2}}{x+1}\quad\forall x >0. \tag{27}\] The key insight is to observe that the function \[\frac{(x+1)}{(x-1)^{2}}f_{\alpha}(x)\] is bounded below by a strictly positive constant, in fact by \(1/3\) (see below). This implies \[\frac{|f_{\alpha}(x)-\tilde{f}(x)|}{f_{\alpha}(x)}\leq 3Ce^{-2\pi\sqrt{\alpha(1- \alpha)m}}\] where \(\tilde{f}(x)=\sum_{i=1}^{m}\frac{u_{i}\,(x-1)^{2}}{1+t_{i}(x-1)}\) has the required form. It remains to prove that \(\frac{(x+1)}{(x-1)^{2}}f_{\alpha}(x)\geq\frac{1}{3}\) for all \(x>0\). This is a special case of Lemma 5. 
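The lower bound used in the proof above is easy to probe numerically. The following sketch is ours (not part of the paper): it scans a logarithmic grid of \(x>0\) and several \(\alpha\in(0,1)\) and checks that \(\frac{(x+1)}{(x-1)^{2}}f_{\alpha}(x)\) never drops below \(\tfrac{1}{3}\).

```python
from math import exp

def f_alpha(a, x):
    # f_alpha(x) = (x^a - a(x-1) - 1) / (a(a-1)), for a not in {0, 1}
    return (x**a - a * (x - 1.0) - 1.0) / (a * (a - 1.0))

# Scan a log-spaced grid of x > 0 (skipping x = 1, where the ratio tends to 1)
# and several alpha in (0,1); the claim is that the minimum ratio is >= 1/3.
worst = float("inf")
for a in (0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95):
    for k in range(-600, 601):
        x = exp(k / 50.0)  # x ranges from e^{-12} to e^{12}
        if abs(x - 1.0) < 1e-9:
            continue
        ratio = (x + 1.0) * f_alpha(a, x) / (x - 1.0) ** 2
        worst = min(worst, ratio)
print(worst)  # observed minimum over the grid; it stays above 1/3
```

On this grid the minimum is attained well above \(\tfrac{1}{3}\) (it approaches \(1\) near \(x=1\), since \(f_{\alpha}^{\prime\prime}(1)=1\)).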
### Proof of Theorem 4 (case \(\alpha\in(-1,0]\cup[1,2)\)) We start by making a change of variables in the integral representation of \(f_{\alpha}(x)\), so that the integral is over \(\mathbb{R}\) instead of \((0,1)\): \[f_{\alpha}(x) =\frac{(x-1)^{2}}{Z_{\alpha}}\int_{0}^{1}\frac{t^{1-\alpha}(1-t)^ {\alpha}}{1+t(x-1)}\mathrm{d}t \text{where }Z_{\alpha}:=\frac{\alpha(\alpha-1)\pi}{\sin[(\alpha-1)\pi]}\] \[=\frac{(x-1)^{2}}{Z_{\alpha}}\int_{-\infty}^{\infty}\frac{e^{u} (\frac{1}{1+e^{-u}})^{1-\alpha}(\frac{e^{-u}}{1+e^{-u}})^{\alpha}}{(1+e^{u})(1+ xe^{u})}\mathrm{d}u \text{change of variable }t=\frac{1}{1+e^{-u}}\] \[=\frac{(x-1)^{2}}{Z_{\alpha}}\int_{-\infty}^{\infty}\frac{e^{(2- \alpha)u}}{(1+e^{u})^{2}(1+xe^{u})}\mathrm{d}u. \tag{28}\] An outline of the construction is as follows. We approximate the integral (28) using the trapezoidal rule at \(m\) equispaced nodes a distance \(h\) apart. The total error of this approximation is the sum of the discretization error (going from the integral to an infinite sum) and the truncation error (from truncating the sum to \(m\) terms). Relative to the function \(f_{\beta}(x)\), the discretization error is of order \(h^{-3}e^{-2\pi^{2}/h}\) (Lemma 1). The truncation error is of order \(e^{-\alpha(\beta-\alpha)mh/\beta}\) (Lemma 2). The exponential parts of these estimates are approximately balanced when \(h=\pi\sqrt{\frac{2\beta}{\alpha(\beta-\alpha)m}}\), and the overall error is then of order \(m^{3/2}e^{-\pi\sqrt{2\alpha(\beta-\alpha)m/\beta}}\) as claimed. The fundamental approach of this construction is due to Stenger [10]; Trefethen gives a simplified exposition in [10, Chapter 25]. Our analysis is slightly different because we are interested in best uniform rational approximations _relative_ to a function \(f_{\beta}(x)\). The key technical result is Lemma 6, which is used in Lemma 2 to bound the truncation error relative to \(f_{\beta}(x)\). We continue with a more detailed presentation of the construction.
The integral in (28) can be approximated by the discrete sum \[S_{\alpha}^{h}(x):=\frac{(x-1)^{2}}{Z_{\alpha}}\sum_{n=-\infty}^{\infty}\frac {he^{(2-\alpha)nh}}{(1+e^{nh})^{2}(1+xe^{nh})}, \tag{29}\] for some small \(h>0\), which can in turn be truncated to a sum \[S_{\alpha}^{h,m_{-},m_{+}}(x):=\frac{(x-1)^{2}}{Z_{\alpha}}\sum_{n=m_{-}}^{m_ {+}}\frac{he^{(2-\alpha)nh}}{(1+e^{nh})^{2}(1+xe^{nh})} \tag{30}\] with \(m=m_{+}-m_{-}+1\) terms, where \(m_{-}\leq 0\leq m_{+}\). Note that \(S_{\alpha}^{h,m_{-},m_{+}}(x)\) has the form \(\sum_{n}u_{n}\cdot\frac{(x-1)^{2}}{1+t_{n}(x-1)}\), where \(t_{n}=(1+e^{-nh})^{-1}\) and \(u_{n}=\frac{he^{(2-\alpha)nh}}{Z_{\alpha}(1+e^{nh})^{3}}\). To complete the proof of Theorem 4, we will need two main lemmas. These will also guide our choice of the parameters \(h\) (the discretisation scale), and \(m_{-},m_{+}\) (which determine the truncation of (29)). **Lemma 1**.: _There is an explicit absolute constant \(C>0\) such that for every \(\alpha\in[-1,2]\) and \(h<\frac{\pi^{2}}{2}\),_ \[\big{|}f_{\alpha}(x)-S_{\alpha}^{h}(x)\big{|}\leq Ch^{-3}e^{\frac{-2\pi^{2}}{h }}f_{\alpha}(x)\qquad\forall\,x>0. \tag{31}\] _Here \(S_{\alpha}^{h}\) is as defined in (29)._ **Lemma 2**.: _Let \(1\leq\alpha<\beta\leq 2\), \(h>0\), and \(m_{-}\leq 0\leq m_{+}\). Then, for each \(x>0\),_ \[0\leq S_{\alpha}^{h}(x)-S_{\alpha}^{h,m_{-},m_{+}}(x)\leq\frac{3}{Z_{\alpha}} \left(\frac{e^{(\beta-\alpha)hm_{-}}}{\beta-\alpha}+\frac{e^{-\alpha hm_{+}} }{\alpha}\right)f_{\beta}(x).\] Proof of Theorem 4.: Assume first that \(1\leq\alpha<\beta\leq 2\). Let \(m_{+}\) be the largest integer which is less than \((1-\frac{\alpha}{\beta})m\), and let \(m_{-}\) be the smallest integer greater than \(-\frac{\alpha}{\beta}m\). 
Then the sum \(S_{\alpha}^{h,m_{-},m_{+}}(x)\) has at most \(m\) terms, and by Lemma 2, \[|S_{\alpha}^{h}(x)-S_{\alpha}^{h,m_{-},m_{+}}(x)| \leq\frac{3}{Z_{\alpha}}\left(\frac{e^{(\beta-\alpha)hm_{-}}}{ \beta-\alpha}+\frac{e^{-\alpha hm_{+}}}{\alpha}\right)f_{\beta}(x)\] \[\leq\frac{3}{Z_{\alpha}}\left(\frac{e^{-(\beta-\alpha)h[\frac{ \alpha}{\beta}m-1]}}{\beta-\alpha}+\frac{e^{-\alpha h[(1-\frac{\alpha}{\beta})m-1]}} {\alpha}\right)f_{\beta}(x)\] \[=\frac{3}{Z_{\alpha}}\left(\frac{e^{(\beta-\alpha)h}}{\beta- \alpha}+\frac{e^{\alpha h}}{\alpha}\right)e^{-\frac{\alpha(\beta-\alpha)}{\beta}mh} f_{\beta}(x).\] By Lemma 1, \(\left|f_{\alpha}(x)-S_{\alpha}^{h}(x)\right|\leq Ch^{-3}e^{\frac{-2\pi^{2}}{h}} f_{\alpha}(x)\). Combining this with the fact that \(f_{\alpha}(x)\leq\frac{\beta}{\alpha}f_{\beta}(x)\) (see Lemma 4), we get \[\left|f_{\alpha}(x)-S_{\alpha}^{h}(x)\right|\leq\frac{C\beta}{\alpha}h^{-3}e^ {\frac{-2\pi^{2}}{h}}f_{\beta}(x).\] Choosing \(h=\pi\sqrt{\frac{2\beta}{\alpha(\beta-\alpha)m}}\) (so that \(e^{-\frac{\alpha(\beta-\alpha)}{\beta}mh}=e^{\frac{-2\pi^{2}}{h}}\)), we combine our estimates to obtain the bound \[|f_{\alpha}(x)-S_{\alpha}^{h,m_{-},m_{+}}(x)|\leq\underbrace{\left(\frac{C\beta }{\alpha}h^{-3}+\frac{3}{Z_{\alpha}}\left(\frac{e^{(\beta-\alpha)h}}{\beta- \alpha}+\frac{e^{\alpha h}}{\alpha}\right)\right)}_{=O(m^{\frac{3}{2}})\text{ since }h=\pi\sqrt{\frac{2\beta}{\alpha(\beta-\alpha)m}}}e^{-\pi\sqrt{2\alpha(\beta- \alpha)m/\beta}}f_{\beta}(x).\] This proves (11). In the case where \(-1\leq\beta<\alpha\leq 0\), (12) follows immediately by observing that \(\epsilon_{\alpha,\beta}^{[m]}=\epsilon_{1-\alpha,1-\beta}^{[m]}\). This is because \(f_{1-\alpha}(x)\equiv xf_{\alpha}(\frac{1}{x})\). To prove Lemma 1, we will use the following result about the accuracy of the trapezoidal rule for analytic integrands.
**Theorem 13** (Theorem 5.1 in [14]).: _Let \(\omega\) be a function analytic in the strip \(|\operatorname{Im}(z)|<a\), and such that \(\omega(z)\to 0\) uniformly as \(|z|\to\infty\) in the strip. Suppose further that for some \(M>0\),_ \[\int_{-\infty}^{\infty}|\omega(u+bi)|\mathrm{d}u\leq M \tag{32}\] _for every \(b\in(-a,a)\). Then for any \(h>0\),_ \[\left|\int_{-\infty}^{\infty}\omega(u)\mathrm{d}u-h\sum_{j=-\infty}^{\infty} \omega(jh)\right|\leq\frac{2M}{e^{2\pi a/h}-1}.\] Proof of Lemma 1.: Define the functions \[\omega_{x}(u):=\frac{e^{(2-\alpha)u}}{(1+e^{u})^{2}(1+xe^{u})}. \tag{33}\] Note that (31) is equivalent to \[\left|\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u-h\sum_{j=-\infty}^{ \infty}\omega_{x}(jh)\right|\leq Ch^{-3}e^{\frac{-2\pi^{2}}{h}}\int_{-\infty}^ {\infty}\omega_{x}(u)\mathrm{d}u.\] For each \(x\), \(\omega_{x}\) is analytic in the strip \(|\operatorname{Im}(z)|<\pi\), and \(\omega_{x}(z)\to 0\) uniformly as \(|z|\to\infty\) in the strip. However there is no finite \(M\) satisfying (32) for every \(|b|<\pi\). On the other hand, by Lemma 3, for \(\epsilon\in(0,1)\) we have \[\int_{-\infty}^{\infty}|\omega_{x}(u+bi)|\mathrm{d}u\leq\cos[(1-\epsilon)\pi/2]^ {-3}\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u\leq\frac{1}{\epsilon^{3}} \int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u\] whenever \(b\in((\epsilon-1)\pi,(1-\epsilon)\pi)\). Consequently, for each \(\epsilon\in(0,1)\) we have \[\left|\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u-h\sum_{j=- \infty}^{\infty}\omega_{x}(jh)\right|\leq\frac{2}{\epsilon^{3}[e^{2\pi^{2}(1- \epsilon)/h}-1]}\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u.\] We are free to choose \(\epsilon=\frac{3h}{2\pi^{2}}<1\) (this value is chosen to maximise \(\epsilon^{3}e^{2\pi^{2}(1-\epsilon)/h}\)). With this choice of \(\epsilon\) we have \(2\pi^{2}(1-\epsilon)/h=2\pi^{2}/h-3>1\) (since \(h<\frac{\pi^{2}}{2}\)).
Therefore \(e^{2\pi^{2}(1-\epsilon)/h}>e>2\), and consequently, \(2(e^{2\pi^{2}(1-\epsilon)/h}-1)>e^{2\pi^{2}(1-\epsilon)/h}\). We can now write \[\left|\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u-h\sum_{j=- \infty}^{\infty}\omega_{x}(jh)\right|<\frac{4}{\epsilon^{3}e^{2\pi^{2}(1- \epsilon)/h}}\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u=4\left(\frac{2\pi^{2}e}{3h}\right)^{3}e^{-\frac{2\pi^{2}}{h}}\int_{-\infty}^{\infty}\omega_{x}(u)\mathrm{d}u.\] Lemma 1 now follows, and we can take \(C=4\left(\frac{2\pi^{2}e}{3}\right)^{3}\). Proof of Lemma 2.: We have \[\frac{Z_{\alpha}}{(x-1)^{2}}\left[S_{\alpha}^{h}(x)-S_{\alpha}^{ h,m_{-},m_{+}}(x)\right] =\sum_{n<m_{-}}\frac{he^{(2-\alpha)nh}}{(1+e^{nh})^{2}(1+xe^{nh})} +\sum_{n>m_{+}}\frac{he^{(2-\alpha)nh}}{(1+e^{nh})^{2}(1+xe^{nh})}\] \[=\sum_{n<m_{-}}\frac{he^{(2-\alpha)nh}}{(1+e^{nh})^{2}(1+xe^{nh}) }+\sum_{n>m_ {+}}\frac{he^{-\alpha nh}}{(e^{-nh}+1)^{2}(1+xe^{nh})}\] \[\leq\sum_{n<m_{-}}\frac{he^{(2-\alpha)nh}}{1+xe^{nh}}+\sum_{n>m_ {+}}\frac{he^{-\alpha nh}}{1+xe^{nh}}.\] By Lemma 6, for \(n>0\) we have \(\frac{1}{3}\cdot\frac{(x-1)^{2}}{1+xe^{nh}}\leq f_{\beta}(x)\), while for \(n<0\) we have \(\frac{(x-1)^{2}}{3}\cdot\frac{e^{(2-\beta)nh}}{1+xe^{nh}}\leq f_{\beta}(x)\). Therefore \[Z_{\alpha}\left[S_{\alpha}^{h}(x)-S_{\alpha}^{h,m_{-},m_{+}}(x)\right] \leq 3h\left(\sum_{n<m_{-}}e^{(\beta-\alpha)nh}+\sum_{n>m_{+}}e^{ -\alpha nh}\right)f_{\beta}(x)\] \[=3h\left(\sum_{n>-m_{-}}e^{-(\beta-\alpha)nh}+\sum_{n>m_{+}}e^{ -\alpha nh}\right)f_{\beta}(x)\] \[=3h\left(\frac{e^{(\beta-\alpha)hm_{-}}}{e^{(\beta-\alpha)h}-1}+ \frac{e^{-\alpha hm_{+}}}{e^{\alpha h}-1}\right)f_{\beta}(x)\] \[\leq 3\left(\frac{e^{(\beta-\alpha)hm_{-}}}{\beta-\alpha}+\frac{e ^{-\alpha hm_{+}}}{\alpha}\right)f_{\beta}(x).\] ## 5 Numerical illustration In this section we numerically validate the convergence results obtained in this paper. To compute the best global approximants \(\tilde{f}\) from Section 4, we used the differential correction algorithm which we briefly review below in Section 5.1.
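As a complementary numerical sketch (ours, not part of the paper's experiments), the explicit quadrature rule from Section 4.2 can be implemented directly for \(\alpha=1\), \(\beta=2\), where \(f_{1}(x)=x\log x-x+1\), \(f_{2}(x)=\tfrac{1}{2}(x-1)^{2}\), and \(Z_{1}=1\) (the \(\alpha\to 1\) limit of \(Z_{\alpha}\)).

```python
from math import exp, log, pi, sqrt, floor, ceil

def f1(x):
    # alpha = 1 limit of f_alpha: x*log(x) - x + 1
    return x * log(x) - x + 1.0

def f2(x):
    # alpha = 2: f_2(x) = (x-1)^2 / 2
    return 0.5 * (x - 1.0) ** 2

def approx_f1(m, alpha=1.0, beta=2.0):
    # Truncated sum S^{h,m-,m+} from (30), with the parameter choices made in
    # the proof of Theorem 4 (and Z_1 = 1).
    h = pi * sqrt(2.0 * beta / (alpha * (beta - alpha) * m))
    m_plus = ceil((1.0 - alpha / beta) * m) - 1   # largest integer < (1-a/b)m
    m_minus = floor(-(alpha / beta) * m) + 1      # smallest integer > -(a/b)m
    nodes, weights = [], []
    for n in range(m_minus, m_plus + 1):
        e = exp(n * h)
        nodes.append(1.0 / (1.0 + exp(-n * h)))          # t_n
        weights.append(h * e ** (2.0 - alpha) / (1.0 + e) ** 3)  # u_n
    def S(x):
        return sum(u * (x - 1.0) ** 2 / (1.0 + t * (x - 1.0))
                   for u, t in zip(weights, nodes))
    return S

def rel_error(m):
    # sup over a log-spaced grid of |f_1(x) - S(x)| / f_2(x), skipping x = 1
    S = approx_f1(m)
    xs = [exp(k / 25.0) for k in range(-200, 201) if k != 0]
    return max(abs(f1(x) - S(x)) / f2(x) for x in xs)

print(rel_error(36))  # roughly e^{-pi*sqrt(m)} up to polynomial factors
```

Increasing \(m\) drives the relative error down at the root-exponential rate of Theorem 4.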
In Figure 1 (left) we illustrate Theorem 3, by comparing the exact relative approximation errors \(\epsilon_{\alpha,\alpha}^{[m]}\) of the best quadrature-based approximation of \(f_{\alpha}\) with the predicted asymptotic error (according to Theorem 3). We illustrate the case where \(\alpha\geq\frac{1}{2}\) only, since \(\epsilon_{\alpha,\alpha}^{[m]}=\epsilon_{1-\alpha,1-\alpha}^{[m]}\). In Figure 1 (right) we consider the approximation errors \(\epsilon_{1,\beta}^{[m]}\) for \(\beta\in(1,2]\), which are relevant in the approximation of quantum relative entropy. These values are compared visually with the asymptotic errors predicted by Conjecture 1. The obtained approximation errors match very well with the predicted rates of convergence. In Figure 2 we plot the error of the best \(15\)-node approximation to \(f_{1}\) relative to \(f_{2}\), and we observe the expected equioscillation. In Figure 3, we compare the location of the nodes of this approximation with the nodes of the best _local_ approximation around \(x=1\) (obtained by Gaussian quadrature on \([0,1]\) of the measure \(\mathrm{d}\mu_{1}(t)=(1-t)\mathrm{d}t\), see Section 3). In fact, the nodes of the best local approximation are none other than the roots of the Jacobi polynomial \(J_{15}^{0,1}\) after a linear transformation of the domain \([-1,1]\) to \([0,1]\). These roots can be obtained from the eigendecomposition of a certain tridiagonal matrix [10]. ### Obtaining best global approximants The best relative approximations (i.e., the minimizing \(\tilde{f}\) in (10)) in our numerical illustrations were computed using the _differential correction_ algorithm for uniform rational approximation [11, 1] - specifically the version which allows for best _weighted_ uniform approximation and linear constraints in the numerator and denominator polynomials [13].
Indeed, the approximation \(\tilde{f}\) can be obtained as the best order \([m+1/m]\) rational approximation to \(f_{\alpha}(x)\) relative to the nonnegative function \(f_{\beta}(x)\) (see Corollary 2). Since \(f_{\alpha}(x)\) and \(f_{\beta}(x)\) have double roots at \(x=1\), in practice it is preferable to compute the best order \([m-1/m]\) rational approximation to \(f_{\alpha}(x)/(x-1)^{2}\) relative to the positive function \(f_{\beta}(x)/(x-1)^{2}\). The differential correction algorithmThe differential correction algorithm is an iterative algorithm for finding the best order \([m_{1},m_{2}]\) rational approximation to a function \(f(x)\) on an interval \(I\), given a discretization \((x_{i})_{i=1}^{N}\) of \(I\). At iteration \(t+1\), given polynomials \(p_{t},q_{t}\) of degree \(m_{1},m_{2}\) respectively such that \(q_{t}(x_{i})>0\) for every \(i\in[N]\), let \(\Delta_{t}=\max_{i\in[N]}\{|f(x_{i})-\frac{p_{t}(x_{i})}{q_{t}(x_{i})}|\}\). We aim to find polynomials \(p,q\) "close" to \(p_{t},q_{t}\) such that \(\Delta=\max_{i\in[N]}\{|f(x_{i})-\frac{p(x_{i})}{q(x_{i})}|\}<\Delta_{t}\). We can rephrase this as \[\min_{p\in\mathbb{R}_{m_{1}}[x],q\in\mathbb{R}_{m_{2}}[x],\Delta\in\mathbb{R} }\Delta\quad\text{subject to}\quad\frac{|f(x_{i})q(x_{i})-p(x_{i})|}{q_{t}(x_{ i})}\leq\Delta\frac{q(x_{i})}{q_{t}(x_{i})}. \tag{34}\] The problem (34) is almost a linear program in the variables \((p,q,\Delta)\), except that the right-hand side is quadratic. Assuming that the second order term \(\frac{(\Delta-\Delta_{t})(q-q_{t})}{q_{t}}\) is small, we can linearise the right-hand side \[\Delta\frac{q(x_{i})}{q_{t}(x_{i})}\approx\Delta_{t}\frac{q(x_{i})}{q_{t}(x_{ i})}+(\Delta-\Delta_{t}),\] to get the iteration \[(p_{t+1},q_{t+1})\in\operatorname*{argmin}_{p\in\mathbb{R}_{m_{1}}[x],q\in \mathbb{R}_{m_{2}}[x]}\max_{i\in[N]}\frac{|f(x_{i})q(x_{i})-p(x_{i})|-\Delta_{t} q(x_{i})}{q_{t}(x_{i})}\quad\text{subject to}\quad\|q\|_{\infty}\leq 1. 
\tag{35}\] Here \(\|q\|_{\infty}\) is the \(\ell_{\infty}\) norm of the coefficients of \(q\), so (35) is a linear program. The normalization condition \(\|q\|_{\infty}\leq 1\) is necessary since the objective function is homogeneous in \((p,q)\). Although this derivation was rather informal, it can be proved that \((p_{t},q_{t},\Delta_{t})\) form a minimizing sequence [1]. To find the best rational approximation relative to a function \(b(x)>0\), (35) is modified to \[(p_{t+1},q_{t+1})\in\operatorname*{argmin}_{p\in\mathbb{R}_{m_{1}}[x],q\in \mathbb{R}_{m_{2}}[x]}\max_{i\in[N]}\frac{|f(x_{i})q(x_{i})-p(x_{i})|-\Delta_ {t}q(x_{i})b(x_{i})}{q_{t}(x_{i})b(x_{i})}\quad\text{subject to}\quad\|q\|_{ \infty}\leq 1. \tag{36}\] Comments on the practical implementationOur implementation is made available at [https://www.github.com/oisite](https://www.github.com/oisite). The main reason for choosing the differential correction algorithm (instead of, e.g., the Remez algorithm) is that it is guaranteed to converge for any feasible initialization. A drawback of the differential correction algorithm which is often mentioned is that, since the approximation domain must be discretized, and the linear program solved at each iteration scales with the size of the discretization, it can be quite slow. However, since we know in advance that the only singularities of the functions \(f_{\alpha}(x)/(x-1)^{2}\) are at \(x=0\) and at \(x=\infty\), we are free to choose discretizations with points exponentially distributed near these points. This means that the number of discretization points in total can be quite modest (\(\sim 500\)) and still give very accurate results. Good numerical stability was obtained by representing the rational approximants using barycentric coordinates, as described in [12]. ## Acknowledgments HF would like to thank Y. Nakatsukasa and L.N. Trefethen for discussions about best rational approximations. We would also like to thank J. 
Saunderson for comments that helped improve the exposition. HF acknowledges funding from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee EP/X032051/1.
2306.03058
Shoal: Improving DAG-BFT Latency And Robustness
The Narwhal system is a state-of-the-art Byzantine fault-tolerant scalable architecture that involves constructing a directed acyclic graph (DAG) of messages among a set of validators in a Blockchain network. Bullshark is a zero-overhead consensus protocol on top of the Narwhal's DAG that can order over 100k transactions per second. Unfortunately, the high throughput of Bullshark comes with a latency price due to the DAG construction, increasing the latency compared to the state-of-the-art leader-based BFT consensus protocols. We introduce Shoal, a protocol-agnostic framework for enhancing Narwhal-based consensus. By incorporating leader reputation and pipelining support for the first time, Shoal significantly reduces latency. Moreover, the combination of properties of the DAG construction and the leader reputation mechanism enables the elimination of timeouts in all but extremely uncommon scenarios in practice, a property we name "Prevalent Responsiveness" (it strictly subsumes the established and often desired Optimistic Responsiveness property for BFT protocols). We integrated Shoal instantiated with Bullshark, the fastest existing Narwhal-based consensus protocol, in an open-source Blockchain project and provide experimental evaluations demonstrating up to 40% latency reduction in the failure-free executions, and up to 80% reduction in executions with failures against the vanilla Bullshark implementation.
Alexander Spiegelman, Balaji Arun, Rati Gelashvili, Zekun Li
2023-06-05T17:29:33Z
http://arxiv.org/abs/2306.03058v2
# Shoal: Improving DAG-BFT Latency And Robustness ###### Abstract. The Narwhal system is a state-of-the-art Byzantine fault-tolerant scalable architecture that involves constructing a directed acyclic graph (DAG) of messages among a set of validators in a Blockchain network. Bullshark is a zero-overhead consensus protocol on top of the Narwhal's DAG that can order over 100k transactions per second. Unfortunately, the high throughput of Bullshark comes with a latency price due to the DAG construction, increasing the latency compared to the state-of-the-art leader-based BFT consensus protocols. We introduce Shoal, a protocol-agnostic framework for enhancing Narwhal-based consensus. By incorporating leader reputation and pipelining support for the first time, Shoal significantly reduces latency. Moreover, the combination of properties of the DAG construction and the leader reputation mechanism enables the elimination of timeouts in all but extremely uncommon scenarios in practice, a property we name "prevalent responsiveness" (it strictly subsumes the established and often desired "optimistic responsiveness" property for BFT protocols). We integrated Shoal instantiated with Bullshark, the fastest existing Narwhal-based consensus protocol, in an open-source Blockchain project and provide experimental evaluations demonstrating up to 40% latency reduction in the failure-free executions, and up to 80% reduction in executions with failures against the vanilla Bullshark implementation. Key words and phrases: Consensus Protocol, Byzantine Fault Tolerance + Footnote †: 2023: Shoal: Improving DAG-BFT Latency And Robustness ## 1.
Introduction Byzantine fault tolerant (BFT) systems, including consensus protocols (Bauer et al., 2012; Krizhevsky et al., 2012; Krizhevsky et al., 2013; Krizhevsky et al., 2014; Krizhevsky et al., 2015) and state machine replication (Krizhevsky et al., 2013; Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015), have been a topic of research for over four decades as a means of constructing reliable distributed systems. Recently, the advent of Blockchains has underscored the significance of high performance. While Bitcoin handles approximately 10 transactions per second (TPS), the proof-of-stake committee-based blockchains (Krizhevsky et al., 2013; Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2015; Krizhevsky et al., 2015) are now engaged in a race to deliver a scalable BFT system with the utmost throughput and minimal latency. Historically, the prevailing belief has been that reducing communication complexity was the key to unlocking high performance, leading to the pursuit of protocols with linear communication. However, this did not result in drastic enough improvements in the throughput, falling significantly short of the current blockchain network targets. For example, the state-of-the-art Hotstuff (Hotstuff, 2015) protocol in this line of work only achieves a throughput of 3500 TPS (Bauer et al., 2012). A recent breakthrough, however, stemmed from the realization that data dissemination is the primary bottleneck for leader-based protocols, and it can benefit from parallelization (Krizhevsky et al., 2013; Krizhevsky et al., 2014; Krizhevsky et al., 2015; Krizhevsky et al., 2015). The Narwhal system (Krizhevsky et al., 2015) separated data dissemination from the core consensus logic and proposed an architecture where all validators simultaneously disseminate data, while the consensus component orders a smaller amount of metadata. 
A notable advantage of this architecture is that not only does it deliver impressive throughput on a single machine, but it also naturally supports scaling out each blockchain validator by adding more machines. The Narwhal paper (Krizhevsky et al., 2015) evaluated the system in a geo-replicated environment with 50 validators and reported a throughput of 160,000 TPS with one machine per validator, which further increased to 600,000 TPS with 10 machines per validator. These numbers are more in line with the ambitions of modern blockchain systems. Consequently, Narwhal has garnered significant traction within the community, resulting in its deployment in Sui (Sui, 2016) and ongoing development in Aptos (Shi et al., 2016) and Celo (Celo, 2017). Developing a production-ready reliable distributed system is challenging, and integrating intricate consensus protocols only adds to the difficulty. Narwhal addresses this issue by abstracting away networking from the consensus protocol. It constructs a non-equivocating round-based directed acyclic graph (DAG), a concept initially introduced by Aleph (Aler et al., 2017). In this design, each validator contributes one vertex per round, and each vertex links to \(n-f\) vertices in the preceding round. Each vertex is disseminated via an efficient reliable broadcast implementation, ensuring that malicious validators cannot distribute different vertices to different validators within the same round. With networking abstraction separated from the details of consensus, the DAG can be constructed without contending with complex mechanisms like view-change or view-synchronization. During periods of network asynchrony, each validator may observe a slightly different portion of the DAG at any given time. However, the structure facilitates a simpler ordering mechanism compared to monolithic BFT protocols.
In DAG-based consensus protocols, vertices represent proposals, edges represent votes, and the concept of quorum intersection guarantees that validators can consistently order all DAG vertices. This provides efficient consensus because ordering is done via local computation only, without any additional communication cost. _Narwhal-based consensus protocols._ As discussed, the idea shared by Narwhal-based consensus protocols is to interpret the DAG structure as the consensus logic [17; 27; 35; 36], but they differ in the networking assumptions and the number of rounds required for vertex ordering. However, all three protocols share a common structure. Prior to the protocol initiation, there is an a-priori mapping from specific rounds to leaders shared among all validators. In the asynchronous protocols (DAG-Rider and Tusk), this mapping to the sequence of leaders is hidden behind threshold cryptography and revealed throughout the protocol. We use the term _anchor_ to refer to the vertex associated with the round leader in each relevant round. The DAG local ordering process by each validator is divided into two phases. First, each validator determines which anchors to order (the rest are skipped). Then, the validators sequentially traverse the ordered anchors, deterministically ordering all DAG vertices contained within the causal histories of the respective anchors. The primary considerations that affect the protocol latency are as follows.

1. _Bad leaders._ When a validator is malicious or not fast enough, its vertex may not be included in the DAG. In the case of leaders, the absence of anchors affects the ordering latency of all vertices in previous rounds that are not already ordered. These vertices can only be ordered as a part of a causal history of a future anchor, directly impacting their latency.
2. _Sparse anchors._ In Narwhal-based consensus protocols, not every round includes an anchor.
Consequently, vertices located farther from the next anchor must wait for additional rounds before they can be ordered. _Shoal framework._ This paper presents Shoal: a framework addressing the aforementioned challenges by incorporating leader reputation and pipelining mechanisms into all Narwhal-based consensus protocols. So far, all available open-source implementations of Narwhal and Bullshark, including Meta 1, and the production deployment on Sui 2 lack these features, while our evaluations demonstrate they can provide significant performance improvements. Footnote 1: [https://github.com/facebookresearch/narwhal/blob/main/consensus/src/lib.rs](https://github.com/facebookresearch/narwhal/blob/main/consensus/src/lib.rs) Footnote 2: [https://github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark](https://github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark) Leader reputation is an often overlooked concept in theoretical research, yet it holds crucial importance for practical performance. In practice, Byzantine failures are rare due to robust protection and economic incentives for validators to adhere to the protocol. (Moreover, Narwhal-based DAG constructions, which provide non-equivocation, significantly reduce the range of potential Byzantine behavior). Thus, the most common failure scenarios in Blockchain (esp. in Narwhal-based) systems involve validators who struggle to keep up, which can occur due to temporary crashes, slower hardware, or geographical distance. If unresponsive validators repeatedly become leaders, progress is inevitably impeded and degrades system performance. The leader reputation schemes select leaders based on the history of their recent activity, as introduced in Diem [42] and later formalized in [16]. In the context of Narwhal-based consensus, pipelining means having an anchor in every round, which would result in improved latency for non-anchor vertices. 
_The main challenge._ While the ability to order the DAG locally, without extra communication, contributes to the scalability of Narwhal-based consensus, it poses a significant challenge to supporting leader reputation and pipelining. The leader reputation problem is simpler to solve for monolithic BFT consensus protocols. While the validators may disagree on the history that determines the next leader's identity, the worst that can happen is a temporary loss of liveness until view synchronization, i.e. the quorum of validators can eventually recover by agreeing on a fall-back leader. This exact method was utilized in [16], electing the fall-back leaders by a simple round-robin. In contrast, when all communication is done upfront for building the DAG, the safety of a consensus protocol relies on a key property of the local computation: all validators must decide to order the same set of anchors. This must hold despite the local views of the DAG possibly differing among the validators across multiple rounds. Hence, selecting the round leaders dynamically based on reputation (as opposed to the a-priori mapping) seems impossible due to a circular dependency: we need to agree on the mapping to solve consensus, but we need consensus to agree on a new mapping. For pipelining, even if all validators agree on the mapping, they also must agree on whether to order or skip each anchor. Our attempts to solve the problem by delving into the inner workings of the protocol and exploring complex quorum intersection ordering rules have not been fruitful. Intuitively, this is because consensus requires a voting round after each anchor proposal, and the next anchor should link to the decisions (votes) on the previous one. _Our solution._ In Shoal, we lean into the power of performing computations on the DAG, in particular the ability to preserve and re-interpret information from previous rounds.
For leader reputation, this allows bootstrapping the seemingly circular dependency on consensus, while for pipelining, it allows combining multiple instances of the protocol in a suitable manner. In fact, Shoal runs multiple instances of the protocol one after the other, where the trick is to agree on the switching point based on the following observation: _For any Narwhal-based consensus protocol, since all validators agree on which anchors to order vs skip, they in particular agree on the first ordered anchor._ With this observation in mind, each validator can start locally interpreting its view of the DAG by running an instance of its favorite protocol until it determines the first ordered anchor. Since validators agree on this anchor, they can all deterministically start a new protocol instance in the following round. Note that this, too, happens locally, from a validator's perspective, as a part of re-interpreting the DAG. As a result, Shoal ensures the following:

1. Leader reputation: validators select new anchors for future rounds based on the information available in the causal history of the ordered anchors.
2. Pipelining: allocate an anchor in the first round of the new instance. That way, if the first anchor in every instance is ordered, we get an anchor in every round, providing the pipelining effect.

_Our system and prevalent responsiveness._ We implemented Shoal in the open-source codebase of one of the live Blockchain networks and instantiated it with the partially synchronous version of Bullshark3. In this setting, we also discovered a way to eliminate timeouts in all except extremely rare scenarios, a property we refer to as prevalent responsiveness. The design with prevalent responsiveness demonstrates further performance improvements in our evaluations.
Added motivation to avoid timeouts in as many situations as possible comes from a purely practical point of view, as (1) when timeouts are common, the timeout duration affects system performance in a way that is non-trivial to configure optimally, as it is highly environment (network) dependent; and (2) timeout handling is known to add significant complexity to the implementation logic for managing the potential state space of validators. Footnote 3: Shoal of bull sharks. Monolithic leader-based BFT protocols use timeouts to trigger protocol progress every time a leader is faulty or slow, while the optimistic responsiveness property, popularized by the HotStuff (Hotstuff, 2011) protocol, effectively eliminates timeout implications in ideal scenarios when the network is synchronous and there are no failures. However, when failures do occur, all validators must still wait until the timeout expires before transitioning to the next leader. Utilizing the inherent properties of the DAG construction and the leader reputation mechanism, we ensure that Shoal makes progress at network speed under a much larger set of scenarios than optimistically responsive protocols would, which makes Shoal with partially synchronous Bullshark prevalently responsive. In Shoal, validators do wait for timeouts when a few leaders crash and the corresponding anchors are not ordered. While the FLP (Shalaf et al., 2017) impossibility result dictates that there has to be a scenario that requires a timeout, Shoal's design aligns this FLP scenario to be extremely improbable in practice (multiple, e.g., 10, consecutive skipped anchors). Conceptually, this is similar to how randomized protocols align FLP scenarios to have 0 probability in solving asynchronous consensus with probability 1 (Bullshaw, 2011). All available Bullshark implementations use timeouts to ensure honest validators wait for slow anchors even if \(2f+1\) other vertices were already delivered.
By eliminating timeouts, Shoal immediately reduces latency when a leader is faulty, as the corresponding anchors would never be delivered and it is best to advance to the next round as fast as possible. If the leader is not crashed but just slower, validators may skip anchors that they could order if they waited a little bit longer. This is, however, where the leader reputation mechanism of Shoal shines, filtering out slow validators that constantly delay new rounds and allowing the DAG to proceed at network speed while ordering most anchors. Our experimental evaluation demonstrates up to a 40% reduction in latency against the vanilla Bullshark protocol implementation when there are no failures in the system, and up to an 80% reduction in latency when there are failures. We provide experiments specifically designed to give insights into the impact of the improvements separately, i.e. pipelining, leader reputation and eliminating the timeouts (prevalent responsiveness). In summary, the paper focuses on improving latency and robustness in DAG-based protocols. It provides Shoal, a framework to enhance any Narwhal-based consensus protocol with (1) a leader reputation mechanism that prevents slow, isolated, or crashed validators from becoming leaders, (2) pipelining support that ensures every round on the DAG has an anchor, and (3) the elimination of timeouts in many cases, further reducing the latency. The remaining sections of the paper are organized as follows: Section 2 provides background information on DAG-BFT and highlights the main property utilized in this paper. Section 3.1 introduces our pipelining approach, while Section 3.2 presents the leader reputation solution in Shoal. In Section 4, we prove correctness of the proposed framework. Section 5 describes the implementation details and discusses timeouts. Section 6 presents the results of our evaluation. Section 7 discusses related work, and finally, Section 8 concludes the paper. ## 2.
DAG BFT We start by providing the necessary background on Narwhal-based BFT consensus (Section 2.1) and define a common property (Section 2.2) satisfied by such consensus protocols. We rely on this property while designing Shoal to enhance a given baseline protocol with pipelining and leader reputation, thereby reducing latency. ### Background The concept of DAG-based BFT consensus, initially introduced by HashGraph (Hash et al., 2017), aims to decouple the network communication layer from the consensus logic. In this approach, each message consists of a collection of transactions and references to previous messages. These messages collectively form an ever-growing DAG, with messages serving as vertices and references between messages serving as edges. In Narwhal, the DAG is round-based, similar to Aleph (Aleph, 2018). In this approach, each vertex within the DAG is associated with a round number. In order to progress to round \(r\), a validator must first obtain \(n-f\) vertices (from distinct validators) belonging to round \(r-1\). Every validator can broadcast one vertex per round, with each vertex referencing a minimum of \(n-f\) vertices from the previous round. The _causal history_ of a vertex \(v\) refers to the sub-graph that starts from \(v\). Figure 1 illustrates a validator's local view of a round-based DAG. To disseminate messages, Narwhal uses an efficient reliable broadcast implementation that guarantees: **Validity:** if an honest validator has a vertex \(v\) in its local view of the DAG, then it also has all the causal history of \(v\). **Eventual delivery:** if an honest validator has a vertex in round \(r\) by validator \(p\) in its local view of the DAG, then eventually all honest validators have a vertex in round \(r\) by validator \(p\) in their local views of the DAG. **Non-equivocation:** if two honest validators have a vertex in round \(r\) by validator \(p\) in their local views of the DAG, then the vertices are identical.
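The round-based construction just described can be sketched as a small data structure. Everything below (the class and method names, the Python representation of vertices) is illustrative and not taken from the Narwhal codebase; it only encodes the two rules from the text: a validator advances to round \(r\) after holding \(n-f\) vertices of round \(r-1\), and a vertex is only added once its \(n-f\) strong links to the previous round are locally available.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vertex:
    round: int
    author: int
    strong_links: frozenset  # authors of the linked vertices in round - 1

class LocalDag:
    """A validator's local view of a round-based DAG (illustrative sketch)."""

    def __init__(self, n: int, f: int):
        self.n, self.f = n, f
        self.by_round = {}  # round -> {author: Vertex}

    def try_add(self, v: Vertex) -> bool:
        """Add v only if its causal history is already locally available."""
        prev = self.by_round.get(v.round - 1, {})
        if v.round > 0 and (len(v.strong_links) < self.n - self.f or
                            any(a not in prev for a in v.strong_links)):
            return False
        self.by_round.setdefault(v.round, {})[v.author] = v
        return True

    def can_advance_to(self, r: int) -> bool:
        """A validator moves to round r after seeing n - f vertices in r - 1."""
        return r == 0 or len(self.by_round.get(r - 1, {})) >= self.n - self.f

    def causal_history(self, v: Vertex):
        """All vertices reachable from v (including v) via strong links."""
        seen, stack = set(), [v]
        while stack:
            u = stack.pop()
            if (u.round, u.author) in seen:
                continue
            seen.add((u.round, u.author))
            for a in u.strong_links:
                w = self.by_round.get(u.round - 1, {}).get(a)
                if w is not None:
                    stack.append(w)
        return seen
```

With \(n=4\), \(f=1\), a round-1 vertex linking to three round-0 vertices has a causal history of four vertices, and a vertex with only two strong links is rejected.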
Inductively applying Validity and Non-equivocation, we get: **Completeness:** if two honest validators have a vertex \(v\) in round \(r\) by validator \(p\) in their local views of the DAG, then \(v\)'s causal histories are identical in both validators' local views of the DAG. In simple words, the Narwhal construction guarantees that (1) all validators eventually see the same DAG; and (2) any two validators that have the same vertex \(v\) locally also agree on the whole causal history of \(v\) (the contents of vertices and edges between them). _DAG-Rider / Tusk / Bullshark._ DAG-Rider, Tusk, and Bullshark are all algorithms to agree on the total order of all vertices in the DAG with no additional communication overhead. Each validator independently looks at its local view of the DAG and orders the vertices without sending a single message. This is done by interpreting the structure of the DAG as a consensus protocol, where a vertex represents a proposal and an edge represents a vote. DAG-Rider (Rider, 2018) and Tusk (Tusk, 2018) are randomized protocols designed to tolerate full asynchrony, which necessitates a larger number of rounds and, consequently, a higher latency. Bullshark (Bullshark, 2018) also provides a deterministic protocol variant with a faster ordering rule, relying on partial synchrony for liveness. While the specific details are not required to understand this paper, next we explain the high-level structure of these protocols and define a property they all share. ### Common framework Narwhal-based consensus protocols have the following common abstract structure: 1. Pre-determined anchors. Every few rounds (the number depends on the protocol) there is a round with a pre-determined leader. The vertex of the leader is called an _anchor_. In the partially synchronous version of Bullshark, the leaders are a-priori known. In the asynchronous protocols (DAG-Rider, Tusk, asynchronous Bullshark) the leaders are hidden and revealed during the DAG construction. 2.
Order the anchors. All validators independently decide which anchors to skip and which to order. The details differ among the protocols, although they all rely on quorum intersection in the DAG structure. The key aspect is that each honest validator locally decides on a list of anchors, and all lists share the same prefix. 3. Order causal histories. Validators process their list of ordered anchors one by one, and for each anchor order all previously unordered vertices in its causal history by some deterministic rule. By Completeness, all validators see the same causal history for any anchor, so all validators agree on the total order. An illustration of the ordering logic appears in Figure 2. The key correctness argument for all the above-mentioned consensus protocols relies on the fact that all validators agree on which anchors to order and which to skip. In particular, they will all agree on the first anchor that no validator skips.

Figure 1. A possible local view of a round-based DAG. The causal history of the vertex identified by validator 2 in round 2 is highlighted in green.

More formally, the abstract property of the Narwhal-based consensus protocols that our Shoal framework relies on is the following: **Property 1**.: _Given a Narwhal-based protocol \(\mathcal{P}\), if all honest validators agree on the mapping from rounds to leaders before the beginning of an instance of \(\mathcal{P}\), then they will agree on the first anchor each of them orders during the execution of \(\mathcal{P}\)._ The proof follows immediately from Proposition 2 in DAG-Rider (Rider, 2017) and Corollary C. in Bullshark (Bullshaw, 2018). ## 3. Shoal Shoal is protocol agnostic and can be directly applied to all Narwhal-based consensus protocols, i.e., DAG-Rider, Tusk, and Bullshark. It makes no changes to the protocols but rather combines their instances in essentially a "black-box" manner. The entire correctness argument can be derived solely from Property 1.
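Step 3 of the common framework, linearizing the causal histories of the agreed anchor list, can be sketched in a few lines. The `(round, author)` sort below is a hypothetical stand-in for whichever fixed deterministic rule a deployed protocol uses; any rule works as long as every validator applies the same one.

```python
# Illustrative sketch of step 3 of the common framework: walk the agreed
# list of ordered anchors and, for each, deterministically linearize every
# not-yet-committed vertex in its causal history. `causal_history` is
# assumed to map an anchor to the set of vertices reachable from it.

def order_dag(ordered_anchors, causal_history):
    committed, total_order = set(), []
    for anchor in ordered_anchors:
        fresh = [v for v in causal_history(anchor) if v not in committed]
        # Deterministic tie-breaking: here by (round, author); any fixed
        # rule shared by all validators yields the same total order.
        fresh.sort(key=lambda v: (v[0], v[1]))  # v = (round, author)
        committed.update(fresh)
        total_order.extend(fresh)
    return total_order
```

Because each anchor only appends vertices not already committed, the output over a longer anchor list extends the output over any prefix of it, mirroring the "same prefix" guarantee of step 2.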
### Pipelining A natural progression after the high-throughput scalability of BFT consensus achieved by Narwhal is to reduce latency as much as possible. To this end, Bullshark already halved DAG-Rider's latency for ordering anchors from 4 rounds to 2 by adding an optimistic path under the partially synchronous network communication assumption. Intuitively, it is hard to imagine latency lower than 2 rounds: in the interpretation of the DAG structure as a consensus protocol, one round is needed to "propose" the anchor, while another is needed for "voting". However, only anchors can be ordered in 2 rounds. The rest of the vertices are ordered as part of the causal history of some anchor and require a minimum latency of 3 or 4 rounds. This is because the vertices in a "voting" round require (minimum) 3 rounds, while vertices that share a round with an anchor have to wait for at least the next anchor to be ordered, thus requiring (minimum) 4 rounds. An illustration of the ordering latency for different vertices appears in Figure 3. Ideally, to reduce the latency of ordering vertices we would like to have an anchor in every round. This would allow non-anchor vertices to be ordered as a part of some anchor's causal history in each and every round, making the latency and throughput of the protocol less spiky. In Bullshark, it would become possible for every non-anchor vertex to be ordered in 3 rounds (see Figure 3), while in DAG-Rider the latency may be reduced from 10 rounds to 7 in expectation. _Solution._ Let \(\mathcal{P}\) be any Narwhal-based consensus protocol. On a high level, the core technique in Shoal is to execute \(\mathcal{P}\) until it, as a consensus protocol, guarantees agreement on some part of the DAG for all validators. Starting from the round following the agreed part of the DAG, all validators can switch over and start executing a new instance of \(\mathcal{P}\) (or a different Narwhal-based consensus protocol, if desired) from scratch.
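The switching scheme can be sketched as a simple loop. Here `run_until_first_ordered_anchor` is a hypothetical stand-in for the baseline protocol \(\mathcal{P}\): it interprets the DAG from a given round with the anchor mapping \(F\) and, by Property 1, is assumed to return the same first ordered anchor (and its round) on every validator. The names and the bounded-rounds termination condition are for illustration only.

```python
# Hedged sketch of Shoal's re-interpretation loop: run P until the first
# ordered anchor, linearize its causal history, then restart P from the
# following round. `rounds` bounds the simulation for this sketch.

def shoal(dag, F, run_until_first_ordered_anchor, order_history, rounds):
    committed, current_round = [], 0
    while current_round <= rounds:
        result = run_until_first_ordered_anchor(dag, F, current_round)
        if result is None:              # no anchor ordered in this view yet
            break
        anchor, r = result
        committed.extend(order_history(anchor))  # deterministic linearization
        current_round = r + 1           # next instance starts right after r
    return committed
```

In the good case the stand-in orders an anchor in its starting round, so the loop commits one anchor per round; if some anchors are skipped, the loop simply resumes after whichever anchor was ordered first.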
While the instances are not executing concurrently, this scheme effectively pipelines the "proposing" and "voting" rounds. As a result, in Shoal, in the good case an anchor is ordered in every round. The pseudocode appears in Algorithm 1. In the beginning of the protocol, all validators interpret the DAG from round 0, and the function \(F\) is some pre-defined deterministic mapping from rounds to leaders. Each validator locally runs \(\mathcal{P}\), using \(F\) to determine the anchors, until it orders the first anchor, denoted by \(A\), in round \(r\). The key is that, by the correctness of \(\mathcal{P}\) as stated in Property 1, all validators agree that \(A\) is the first ordered anchor (previous anchors are skipped by all validators). Consequently, each validator can re-interpret the DAG from the next round (round \(r+1\)) according to a new instance of the protocol \(\mathcal{P}\) (or another Narwhal-based protocol) executing from scratch from round \(r+1\). To order the DAG, much like in the original \(\mathcal{P}\), the validators deterministically order \(A\)'s causal history, and by the Completeness property, arrive at the same total order over the same vertices. Note that without re-interpreting the DAG according to a new instance of \(\mathcal{P}\) starting from round \(r+1\), the next anchor according to the previously executing instance of the protocol would appear in a strictly later round (e.g. \(r+4\) for DAG-Rider and \(r+2\) for Bullshark). The above process can continue for as long as needed. An illustration appears in Figure 4.

Figure 2. A possible local view of the DAG in the partially synchronous Bullshark protocol. Filled squares represent the pre-defined anchors. In this example, the validator orders the red and yellow anchors, while the green anchor (which is not in the DAG) is skipped. To order the DAG, the validator deterministically orders the red anchor's causal history (the unfilled red vertices) and immediately after the yellow anchor's causal history (the unfilled yellow vertices).

Figure 3. Illustration of the number of rounds required for each vertex in the DAG to be ordered in the best case, according to the Bullshark protocol. The number in each vertex represents its minimum latency. For example, the anchor of round \(i+1\) can be ordered in round \(i+2\), but the other vertices in this round require at least 4 rounds to be ordered.

Note that in Algorithm 1, function \(F\) is fixed and used by each instance of protocol \(\mathcal{P}\). In a true "black-box" implementation, the round numbers could be different from the perspective of the executing protocol instance (i.e. start from 0 for each new instance). However, \(F\) is fixed and always assigns the same anchor to any given round \(r\) in Shoal regardless of the protocol instance used for this round. Note that with Shoal, ordering an anchor vertex requires 2 rounds, while all other vertices require 3. In Section A we discuss a potential direction to reduce the latency for non-anchor vertices by treating all vertices as anchors. Intuitively, we can use Property 1 to instantiate a binary agreement to decide whether to commit each vertex individually. ### Leader Reputation BFT systems are designed to tolerate Byzantine failures in order to provide as strong as possible worst-case reliability guarantees. However, actual Byzantine failures rarely occur in practice. This is because validators are highly secured and have strong economic incentives to follow the protocol. Slow or crashed leaders are a much more frequent occurrence, which can significantly degrade system performance. In Narwhal-based BFT, if the leader of round \(r\) crashes, no validator will have the anchor of round \(r\) in its local view of the DAG. Thus, the anchor will be skipped, and no vertices in the previous round can be ordered until some later point due to an anchor in a future round.
The way to deal with missing anchors is to somehow ensure that the corresponding leaders are less likely to be elected in the future. A natural approach to this end is to maintain a reputation mechanism, assigning each validator a score based on the history of its recent activity. A validator that has been participating in the protocol and has been responsive would be assigned a high score. Otherwise, the validator is either crashed, slow, or malicious and a low score is assigned. The idea is then to deterministically re-compute the pre-defined mapping from rounds to leaders every time the scores are updated, biasing towards leaders with higher scores. In order for validators to agree on the new mapping, they should agree on the scores, and thus on the history used to derive the scores. Such a mechanism was previously proposed in (DagRider and Barabasi, 2017) and implemented in the Diem Blockchain (DagRider and Barabasi, 2017) to enhance the performance of Jolteon (DagRider and Barabasi, 2017), a leader-based consensus protocol. One important property of Jolteon is that safety is preserved even if validators disagree on the identity of the leader, while liveness is guaranteed as long as they eventually converge. Hence, validators could re-assign the reputation scores every time a new block was committed, even though during asynchronous periods it was possible for different validators to commit the same block in different rounds. Unfortunately, this is not the case for Narwhal-based BFT. If validators disagree on the anchor vertices, they will order the DAG differently and thus violate safety. This makes the leader reputation problem strictly harder in Narwhal-based BFT.
_Solution._ Shoal constructs a protocol identical to a given Narwhal-based consensus protocol \(\mathcal{P}\), but to support leader reputation, anchors are selected according to a function \(F\) that takes into account validators' recent activity, e.g., the number of vertices they have successfully added to the DAG. The function \(F\) should be updated as frequently as possible and aim to select validators with a better reputation as leaders more often than their counterparts with a lower reputation. In Shoal, pipelining and leader reputation can be naturally combined as they both utilize the same core technique of re-interpreting the DAG after agreeing on the first ordered anchor. In fact, the pseudocode for Shoal, which appears in Algorithm 2, differs from Algorithm 1 only by the addition of line 8. The idea is that the validators simply need to compute a new mapping, starting from round \(r+1\), based on the causal history of the ordered anchor \(A\) in round \(r\) (which they are guaranteed to agree on by Property 1). Then, the validators start executing a new instance of \(\mathcal{P}\) from round \(r+1\) with the updated anchor selection function \(F\).

Figure 4. Illustration of Shoal's pipelining integrated into Bullshark. The vertices that are fixed to be anchors by \(F\) are marked by a crown. The protocol starts by interpreting the DAG with anchors in rounds \(1,3\), and \(5\). Bullshark determines that the anchor in round \(1\), marked by a green checkmark, is the first to be ordered. Then, a new instance of Bullshark starts at round \(2\) with the anchors marked in rounds \(2\) and \(4\).

```
1: current_round <- 0
2: F : R -> A                  (deterministic rounds to anchors mapping)
3: while true do
4:     execute P, selecting anchors by F, starting from current_round
       until the first ordered (not skipped) anchor is determined
5:     let A be the first ordered anchor in round r
6:     order A's causal history according to P
7:     current_round <- r + 1
8:     update F according to A's causal history
```
**Algorithm 2** Shoal

Our solution is protocol agnostic and can be directly applied to all Narwhal-based consensus protocols, i.e., DAG-Rider, Tusk, and Bullshark. An illustration can be found in Figure 5. Shoal makes no changes to the protocols but rather combines their instances, and the entire correctness argument can be derived solely from Property 1. ## 4. Correctness To prove the correctness of Shoal (Algorithm 2) we assume that the underlying protocol satisfies Property 1, which we will use inductively. **Lemma 4.1**.: _Let \(P\) be a Narwhal-based DAG-BFT protocol that satisfies Property 1. Let \(D\) be a round-based DAG, and assume a function \(F\), known to all, that maps rounds to anchors. Then all the locally ordered lists of anchors by honest validators executing Shoal with \(P\) according to \(F\) share the same prefix._ Proof.: The proof is by induction on the ordered anchors. **Base:** We need to show that all honest validators agree on the first anchor. Since Shoal starts by running \(P\) until the first anchor is ordered, the base case follows immediately from Property 1. **Step:** Assume all honest validators agree on the first \(k\) ordered anchors; we need to prove that they agree on anchor \(k+1\). First, we show that all honest validators agree on the new function \(F\) (Line 8 in Algorithm 2). This holds because the new function \(F\) is deterministically computed according to the information in anchor \(k\)'s causal history, and by the Completeness property of the DAG, all honest validators have the same causal history of anchor \(k\) in their local view. Next, let \(r\) be the round of anchor \(k\). By the inductive assumption, all honest validators agree on \(r\).
Thus, all honest validators start the next instance of \(P\) in the same round \(r+1\). Now consider a DAG \(D^{\prime}\) that is identical to \(D\) except it does not have the first \(r\) rounds. By Property 1, all validators that run \(P\) with the new function \(F\) on \(D^{\prime}\) agree on the first ordered anchor in \(D^{\prime}\). Therefore, all validators agree on anchor \(k+1\) in \(D\). **Theorem 4.2**.: _Let \(P\) be a Narwhal-based DAG-BFT protocol that satisfies Property 1. Shoal with \(P\) satisfies total order._ Proof.: By Lemma 4.1, all validators order the same anchors. The theorem follows from the DAG Completeness property as all validators follow the same deterministic rule to order the respective causal histories of the ordered anchors. ## 5. Implementation and Prevalent Responsiveness We have implemented Narwhal and the partially synchronous version of Bullshark as part of a publicly available open-source blockchain project4. This blockchain is live and the process of productionizing our implementation is underway. The code is written in Rust, utilizing Tokio5 for asynchronous networking, BLS (Rust et al., 2018) implemented over BLS12-381 curves for signatures, RocksDB6 for persistent data storage, and the Noise7 protocol for authenticated messages. Footnote 4: In order to uphold the anonymity requirement of the submission, we do not disclose the name of the blockchain project. Footnote 5: [https://tokio.rs](https://tokio.rs) Footnote 6: [https://rocksdb.org](https://rocksdb.org) Footnote 7: [https://github.com/noiseprotocol/noise_spec](https://github.com/noiseprotocol/noise_spec) ### Vanilla Bullshark We implemented Bullshark according to (Rust et al., 2018), but additionally incorporated weak links per (Vanilla, 2018) in our DAG construction. Observing \(n-f\) vertices in a round is sufficient for progressing to the next round.
Therefore, without weak links, slow validators may consistently lag behind others in broadcasting their vertices and thus may consistently fail to add their vertices to the DAG. This will incur significant latency for their client transactions. Weak links from a vertex can reference vertices from earlier rounds in addition to the normal (strong) links to \(n-f\) vertices from the previous round. These weak links are used when establishing the causal history of ordered anchors and thus facilitate the inclusion of transactions contributed by the slow validators into the total order. We refer to this implementation as Vanilla Bullshark. It is important to note that adding the support for weak links increases the average latency compared to the figures presented in (Vanilla et al., 2018), which did not employ the weak links.

Figure 5. Illustration of Shoal's leader reputation integrated into Bullshark (no pipelining). First, the DAG is interpreted via the Bullshark protocol and the red anchors. The anchor in round i+1, A1, is determined to be the first ordered anchor. Then, based on A1's causal history, new anchors are selected for future rounds (marked in green). Note that validator 4, which had an anchor according to the red selection, no longer has an anchor according to the new mapping (it was not performing well). Then, A1's causal history is deterministically ordered as in the original Bullshark, and a new instance of Bullshark starts at round \(i+2\) based on the green anchors.

### Eliminating Timeouts The short paper for the stand-alone partially synchronous version of Bullshark (Bullshark, 2018) assumes the DAG is given and focuses on the ordering of its vertices. On the other hand, full Bullshark is an asynchronous protocol with a fast path under partial synchrony. The full Bullshark paper (Vanilla et al., 2018) describes how to build the DAG and, in particular, the incorporation of timeouts to support the fast path.
Validators in Bullshark must observe \(n-f\) vertices in a round to advance to the next round. Even rounds have anchors, while vertices in odd rounds determine the "voting" pattern. Full Bullshark uses the following timeouts for every validator to support the fast path:

* Even-round: wait until the anchor of the round is delivered (or the timeout expires).
* Odd-round: wait until \(2f+1\) vertices that link to the anchor in the previous round are delivered (or the timeout expires).

The rationale for the above logic is to help order the anchor within 2 rounds. However, part of the contribution of this paper is to eliminate these timeouts in a way that actually significantly improves latency, according to our evaluation. Having fewer cases where timeouts can occur also inherently simplifies the potential state space and thus the implementation of the protocol. In Section 6, we refer to even rounds as _anchor rounds_ and to odd rounds as _vote rounds_. _Vanilla Bullshark w/o vote timeout._ In the full Bullshark, \(2f+1\) votes are required to order anchors. Without timeouts in odd rounds, a Byzantine adversary can prevent the fast path from making progress even during synchrony. As long as Byzantine validators deliberately do not link to the anchor, and even 1 of their vertices gets delivered among the first \(2f+1\) to an honest validator in an odd round, the honest validator will not be able to order the anchor. However, we discovered that we can completely eliminate timeouts in odd rounds in the partially synchronous variant of Bullshark. The anchor ordering rule in this case is \(f+1\) votes (Bullshark, 2018). As a result, even if \(f\) out of the first \(2f+1\) vertices delivered to a validator in a round are from Byzantine validators (and do not link to the anchor), the remaining \(f+1\) vertices will link to the anchor due to the even-round timeout and be sufficient to order it.
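The counting argument above can be checked with a few lines of arithmetic, under the standard assumption \(n = 3f+1\) with at most \(f\) Byzantine validators: among the first \(2f+1\) vote-round vertices a validator delivers, even if all \(f\) Byzantine vertices are among them and omit the link, the honest remainder still meets the \(f+1\) ordering threshold.

```python
# Worked check of the f+1 vote threshold without odd-round timeouts,
# assuming n = 3f + 1 and at most f Byzantine validators.

def min_honest_votes(f: int) -> int:
    delivered = 2 * f + 1        # vertices needed to advance the round
    byzantine_among_them = f     # worst case: all Byzantine vertices arrive first
    return delivered - byzantine_among_them

# The honest remainder always reaches the f + 1 ordering threshold.
for f in range(1, 100):
    assert min_honest_votes(f) >= f + 1
```

This is exactly why the \(2f+1\)-vote rule of full Bullshark does not admit the same argument: removing \(f\) Byzantine vertices from \(2f+1\) leaves only \(f+1\) guaranteed links, short of \(2f+1\).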
_Baseline Bullshark._ The FLP impossibility result (Bullshark, 2018) dictates that any deterministic protocol providing liveness under partial synchrony must use timeouts. In Bullshark, without timeouts in the even rounds, an honest leader that is even slightly slower than the fastest \(2f+1\) validators will struggle to get its anchor linked by other vertices. As a result, the anchor is unlikely to be ordered. The timeout, therefore, ensures that all honest validators link to anchors during periods of synchrony (as long as the leader has not crashed and actually broadcasts the anchor vertex). Even though timeouts are unavoidable in the worst case, we observe that the DAG construction combined with the leader reputation mechanism allows avoiding them in the vast majority of cases in practice. This is in contrast to leader-based monolithic consensus protocols, where timeouts are the only tool to bypass the rounds with bad leaders. Without timeouts, a monolithic protocol could stall forever as there is no other mechanism to stop waiting for a crashed leader. It is also hard to set the timeouts appropriately: conservative timeouts lead to excessive waiting for crashed leaders, while aggressive timeouts lead to bypassing slower validators (and hence unnecessarily failed rounds). In contrast, the DAG construction provides a "clock" that estimates the network speed. Even without timeouts, the rounds keep advancing as long as \(2f+1\) honest validators continue to add their vertices to the DAG. As a result, the DAG can evolve despite some leaders being faulty. Eventually, when a non-faulty leader is fast enough to broadcast the anchor, the ordering will also make progress. Recall that to be ordered, in partially synchronous Bullshark, an anchor needs \(f+1\) votes (links) out of the \(3f+1\) vertices. Therefore, as our evaluation demonstrates, in the failure-free case, most of the anchors are ordered in the next round. The benefits are even more pronounced when there are failures.
This is because a crashed validator causes a timeout to expire, stalling the protocol for the entire timeout duration. Without a timer, however, the DAG will advance rounds at network speed and the Bullshark protocol is able to immediately move to the next anchor.

_Timeouts as a fallback._ By the FLP (Bullshark, 2018) impossibility result, there exists an adversarial schedule of events that can prevent all anchors from getting enough votes to be ordered. This scenario is extremely unlikely to occur in practice, but to be on the safe side, the protocol can deal with it by falling back to using timeouts after a certain number of consecutive skipped anchors.

### Shoal of Bullsharks

A realistic case in which timeouts can help the performance of a Narwhal-based consensus protocol is when the leader is slower than other validators. Then, as discussed earlier, waiting for an anchor to be delivered even after \(2f+1\) other vertices can allow the anchor to be committed in the next round. While we eliminated timeouts from partially synchronous Bullshark, note that, due to the leader reputation mechanism, Shoal instantiated with Bullshark does better than repeatedly waiting for slow leaders. Instead, the leader reputation mechanism excludes (or at least significantly reduces the chances of) slow validators from being selected as leaders. This way, the system takes advantage of the fast validators to operate at network speed.

_Prevalent Responsiveness._ Shoal provides network speed responsiveness under all realistic failure and network scenarios, a property we name _Prevalent Responsiveness_. Specifically, compared to optimistic responsiveness, Shoal continues to operate at network speed even during asynchronous periods or if leaders fail for a configurable number of consecutive rounds. We implemented leader reputation and pipelining on top of the Baseline Bullshark and compared it to the baseline (no timeouts) implementation.
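The reputation-biased leader selection mentioned above must remain deterministic so that all validators compute the same leader locally, without exchanging extra messages. Here is a minimal sketch of such a seeded, weighted choice; it is our own illustration with made-up weights, not the implementation's code.

```python
import random

# Hypothetical weights: high-score validators are favored, but low-score
# validators keep a non-zero weight so they can recover their reputation.
HIGH_WEIGHT, LOW_WEIGHT = 10, 1

def choose_leader(validators, scores, round_number):
    """Deterministic given the (agreed-upon) scores and the round number,
    so all validators pick the same leader without extra communication."""
    rng = random.Random(round_number)  # shared pseudo-randomness
    weights = [HIGH_WEIGHT if scores[v] == "high" else LOW_WEIGHT
               for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]
```

Because the scores are derived from the commonly ordered DAG prefix and the seed is just the round number, two honest validators running this function agree on every leader.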
_Leader reputation logic._ As explained in Section 3.2, Shoal ensures all validators agree on the information used to evaluate the recent activity and to bias the leader selection process accordingly towards healthier validators. Any deterministic rule to determine the mapping from rounds to leaders (i.e., the logic in pseudocode Line 8 in Algorithm 2) based on this shared and agreed upon information would satisfy the correctness requirements. Next, we discuss the specific logic used in our implementation. At any time, each validator is assigned either a high or a low score, and all validators start with a high score. After ordering an anchor \(v\), each validator examines \(v\)'s causal history \(H\). Every skipped anchor in \(H\) is (re-)assigned a low score, and every ordered anchor in \(H\) is (re-)assigned a high score. Then, the new sequence of anchors is pseudo-randomly chosen based on the scores, such that a validator with a high score is more likely to be a leader in any given round. Note that while the validators use the same pseudo-randomness (so that they agree on the anchors), the computation is performed locally without extra communication. Assigning higher scores to validators whose anchors get ordered ensures that future anchors correspond to faster validators, thus increasing their probability of being ordered. However, we ensure that the low score is non-zero, and thus underperforming validators also get a chance to be leaders. This crucially gives a temporarily crashed or underperforming validator a chance to recover its reputation.

## 6. Evaluation

We evaluated the performance of the aforementioned variants of Bullshark and Shoal on a geo-replicated environment in Google Cloud. In order to show the improvements from pipelining and leader reputation independently, we also evaluate Shoal PL, which is a Shoal instantiation with only pipelining enabled, and Shoal LR, which is a Shoal instantiation with only leader reputation enabled.
With our evaluation, we aim to show that (i) Shoal maintains the same throughput guarantees as Bullshark, (ii) Shoal can provide significantly lower latency than Bullshark and its variants, and (iii) Shoal is more robust to failures and can improve latency with the help of leader reputation. For completeness, we also compare against Jolteon (Zhou et al., 2017), which is the current consensus protocol of the production system we use. Jolteon combines the linear fast path of Tendermint/HotStuff with a PBFT-style view-change, and as a result, reduces HotStuff latency by 33%. The implementation extends the original Jolteon protocol with a leader reputation mechanism, which prioritizes well-behaved leaders from previous rounds for future rounds. In addition, to mitigate the leader bottleneck and support high throughput, the implementation uses the Narwhal technique to decouple data dissemination via a pre-step component (called Quorum Store (Stoer, 2018)). We evaluate prevalent responsiveness by presenting experiments that compare variants of Bullshark w/o timeouts in different rounds versus Shoal, as discussed in Section 5.

**Experimental Setup.** Our experimental setup consists of t2d-standard-32 type virtual machines spread equally across three different Google Cloud regions: us-west1, europe-west4, asia-east1. Each virtual machine has 32 vCPUs, 128GB of memory, and can provide up to 10Gbps of network bandwidth. The round-trip latencies are: 118ms between us-west1 and asia-east1, 251ms between europe-west4 and asia-east1, and 133ms between us-west1 and europe-west4. The experiments involve three different values of N (the number of validators): 10, 20, and 50, tolerating up to 3, 6, and 16 failures, respectively. We only measure the consensus performance to avoid introducing noise from other parts of the production system, such as execution and storage. The transactions are approximately 270B in size. We set a maximum batch size of 5000 transactions.
In our experiments, we measure _latency_ as the time elapsed from when a vertex is created from a batch of client transactions to when it is ordered by a validator. The timeouts for moving to the next round, when applicable, are set to 1s, which is less than the 1.5s timeout used by the production Blockchain system we use.

### Baseline Performance

First, we evaluate the performance of the Bullshark variants, namely Vanilla Bullshark, Vanilla Bullshark w/ Anchor Timeouts, and Baseline Bullshark, to align on a baseline performance to evaluate Shoal in the rest of the experiments. The results are in Figures 6 and 7.

Figure 6. Baseline performance under no failures

Figure 6 shows the throughput and average latencies of the three Bullshark variants as the system size increases. The presence of timeouts in Vanilla Bullshark forces it to build the DAG slowly, which, combined with the fact that fewer validators contribute vertices to the DAG when \(N=10\), results in lower throughput than the other variants, which have fewer or no timeouts. The latencies for Vanilla Bullshark are up to 88% higher due to the timeouts. Interestingly, the latencies are similar for Baseline Bullshark and Vanilla Bullshark w/o Vote Timeout in the normal case because there is a trade-off between building a DAG at network speed while skipping an anchor and waiting slightly longer for the anchor to be part of the votes. We also evaluated the vanilla variants and the baseline for \(N=50\) while varying the number of failures, in Figure 7. We observe that Baseline Bullshark provides lower latency than the other variants by virtue of being able to build the DAG at network speed, skipping failed anchors and ordering using the alive ones. Therefore, in the rest of the section, we use Baseline Bullshark as the baseline to evaluate Shoal.

### Performance of Shoal under fault-free case

We now evaluate the Shoal variants against the baseline under the normal case where there are no failures.
The results are in Figure 8. As expected, the throughput of the Shoal variants is similar as the number of validators increases. It can be observed that each variant of Shoal decreases the latency, culminating in the full Shoal protocol. In summary, we observe that Shoal's average latency decreases by up to 20% compared to Baseline Bullshark. On the other hand, Jolteon (Juleton, 2018), despite its use of Narwhal's data dissemination decoupling, is only able to achieve a peak throughput of less than 60k, about 40% lower than Shoal. This is because under high load leaders become the bottleneck again, as they are not able to deal with the required network bandwidth, and as a result, are unable to drive progress before timeouts expire. Furthermore, in terms of latency, Jolteon is \(\approx\)50% better than Vanilla Bullshark, but only \(\approx\)20% better than Shoal. Note that the latencies presented do not include the pre-step Quorum Store's latencies, because all the compared protocols include this optimization. However, in the case of Shoal, this latency can be avoided by merging Quorum Store into the DAG construction, as done in Narwhal, which will further close the latency gap to Jolteon. In Figures 8(c) and 8(d), we distinguish the latencies of transactions in the vote-round vertices from those in anchor-round vertices, in order to show the effect of the pipelining approach. The vote and anchor round latencies for Shoal PL, as well as Shoal, are similar, which helps provide predictable and smooth latency for transactions in real production systems. In contrast, the vote and anchor round latencies for Baseline Bullshark and Shoal LR differ by 5-20% depending on the number of failures.

Figure 8. Shoal performance under no failures with 10, 20 and 50 validators.

Figure 7. Baseline performance under failures (N=50)

### Performance of Shoal under faults

Figure 9 shows the behavior of the baseline and Shoal variants under faults.
For this experiment, \(N=50\) and the failures are increased from 4 to 16 (the maximum tolerated). This is the case where the leader reputation mechanism helps to improve the latency significantly by reducing the likelihood of failed validators being selected as anchors. Notice that without leader reputation, the latencies of Baseline Bullshark and Shoal PL increase significantly as the number of failures increases. Shoal provides up to 65% lower latencies than Baseline Bullshark under failures. Figure 10 shows the impact of skipping leaders on the latency by comparing Vanilla Bullshark with Shoal on a timeline plot under failures. We have a system of 50 validators, 8 of which have failed. The x-axis represents a part of the experiment time window and the y-axis shows the latency. The presence of timeouts and the need to skip anchors causes Vanilla Bullshark's latency to fluctuate. In our experiment, we observed latency jitter of approximately one second, which makes it impossible to provide predictable latency in production systems. In contrast, Shoal maintains consistently low latency without any jitter.

### Summary

In contrast to Vanilla Bullshark, Shoal provides up to 40% lower latency in the fault-free case and up to 80% lower latency under failures. Furthermore, we show that Shoal provides predictable latency and is able to commit at network speed in most cases and without waiting for timeouts.

## 7. Related work

### BFT systems for Blockchains

Byzantine fault tolerance (BFT) has been an active area of research for over four decades, with a significant body of literature in both theory (Billshark, 2010) and systems (Billshark, 2010; Bisserman and Bisserman, 2010; Bisserman and Bisserman, 2011; Bisserman et al., 2012). With the advent of Blockchain systems in recent years, the focus on performance and scalability has notably increased.
Initial efforts to enhance throughput and scalability attempted to reduce the communication complexity of leader-based eventually synchronous protocols. This resulted in a considerable body of work aiming to achieve communication complexity linear in the number of validators (Bisserman and Bisserman, 2010; Bisserman and Bisserman, 2011; Bisserman et al., 2012; Bisserman et al., 2013). Despite the sound theoretical premise, the practical implications arguably fell short of expectations. An independent evaluation and comparison conducted by (Bisserman and Bisserman, 2010) revealed that the well-known HotStuff (Hotstuff, 2010) protocol achieved a throughput of only 3,500 TPS on a geo-replicated network. The practical breakthrough occurred a few years later with the realization that the main bottleneck in BFT systems, particularly those relying on leaders, is data dissemination. Mir-BFT (Mir-BFT, 2010) introduced an innovative approach by running multiple PBFT (Bisserman and Bisserman, 2011) instances in parallel. Independently, Narwhal (Narwhal, 2010) and later DispersedLedger (Hotstuff, 2010) decoupled data dissemination from the consensus logic. These advancements showcased impressive results, with Narwhal achieving a peak throughput of 160,000 TPS. There has been both systems (Billshark, 2010; Bisserman et al., 2012; Bisserman et al., 2013) and theoretical (Bisserman and Bisserman, 2010; Bisserman et al., 2013; Bisserman et al., 2013) research on asynchronous BFT protocols. However, to the best of our knowledge, no asynchronous protocol is deployed in production in an industrial system. Another appealing property of Narwhal is the support of partially synchronous (Bisserman et al., 2013) as well as asynchronous (Narwhal et al., 2010; Bisserman et al., 2012; Bisserman et al., 2013) (as long as randomness is available) protocols, and the ability to easily switch among them.

Figure 10. Latency timeline under 8 failures with \(N=50\).
Figure 9. Shoal performance under 4, 8, and 16 failures (N=50)

### Timeouts and responsiveness

The FLP (FLP, 2017) impossibility result states that there is no deterministic consensus protocol that can tolerate a fully asynchronous network. The proof relies on the fact that it is impossible to distinguish between crashed and slow validators during asynchronous periods. The immediate implication for partially synchronous networks, therefore, is that all deterministic protocols must rely on timeouts in some way to guarantee liveness against a worst-case adversary. Indeed, to the best of our knowledge, all previous deterministic BFT protocols, including the partially synchronous version of Bullshark (Bullshark, 2015), relied on timeouts to implement a simple version of a failure detector (Bullshark, 2015). This mechanism monitors the leaders and triggers view-changes when timeouts expire, i.e., when faults are suspected. The optimistic responsiveness property, popularized by HotStuff (Hotstuff, 2015), avoids timeouts in the best-case failure-free scenario. However, when failures do occur, all validators wait until the timeout expires before view-changing to the next leader, introducing a significant slowdown in the protocol execution. Moreover, as discussed in Section 5, setting a proper timeout duration is a non-trivial problem in its own right. Shoal provides prevalent responsiveness, which is a strictly better property than optimistic responsiveness, as it guarantees network speed progress in case of healthy leaders and zero delays in case of failures. Shoal achieves this by relying on the network speed "clock" inherent in the DAG construction itself (Dag, 2017), combined with the leader reputation mechanism.
While, due to the FLP result, the worst case in which a timeout would be required for maintaining the liveness of the protocol cannot be completely eliminated, Shoal successfully relegates such cases to specific, extremely uncommon scenarios from a practical point of view (multiple consecutive unordered anchors).

### DAG-based BFT

DAG-based consensus in the context of BFT was first proposed by HashGraph (Hash, 2015). The idea is to separate the network communication layer, i.e., efficiently constructing a system that forms a DAG of messages, and the consensus logic, which can involve complex pieces such as view-change and view-synchronization. The consensus logic is performed locally, whereby a validator examines its local view of the DAG and orders the vertices without sending any messages. The challenge arises from the asynchronous nature of the network, which may cause different validators to observe slightly different portions of the DAG. To address this, the DAG structure is interpreted as a consensus protocol, wherein a vertex represents a proposal and an edge represents a vote. Aleph (Spiegel, 2017) introduced a round-based DAG structure. Such a structure simplifies support for garbage collection and non-equivocation, which in turn simplifies the consensus logic that orders the vertices. Narwhal implements a round-based DAG, and three Narwhal-based consensus protocols have been previously proposed. The first is DAG-Rider (Rider, 2017), which introduced a quantum-safe asynchronous protocol with optimal amortized communication complexity and \(O(1)\) latency. Tusk (Tusk, 2017) improved latency in the best case. An asynchronous version of Bullshark (Bullshark, 2015; Bullshark, 2015) includes a fast path (Bullshark, 2015), while a stand-alone partially synchronous protocol (Bullshark, 2015) also exists and is currently deployed in production in Sui (Sui, 2016).
Shoal presents a framework that applies to all Narwhal-based protocols, enhancing their latency through a more efficient ordering rule and a leader reputation mechanism. An orthogonal theoretical effort (Tendermint, 2017) trades off the non-equivocation property of the DAG construction (which typically requires reliable broadcast), as well as the separation from the consensus logic, in order to reduce latency.

### Pipelining

To the best of our knowledge, pipelining in the BFT context was first proposed by Tendermint (Tendermint, 2017), and later utilized in HotStuff (Hotstuff, 2015) and Diem (Diem, 2016). State machine replication (SMR) systems can be constructed from multiple instances of single-shot consensus (Dag, 2017); e.g., one approach to build Byzantine SMR is by running a PBFT instance (Dag, 2016) for each slot. Tendermint introduced the elegant idea of chaining proposals, or piggybacking single-shot instances, such that a value for a new slot could be proposed before the value for the previous slot was committed. In this approach, a message in the \(i^{th}\) round of the \(k^{th}\) instance can be interpreted as a message in round \(i-1\) of instance \(k+1\). While the latency for each instance remains unchanged, clients experience improved latency as their transactions can be proposed earlier. In DAG-based consensus, the concept of piggybacking proposals is inherent in the design, as each vertex in the DAG links to vertices in previous rounds. However, previous protocols did not allow having an anchor in every round. The Shoal framework supports having an anchor in each round in the good case for any Narwhal-based protocol, providing a "pipelining effect".

### Leader reputation

Leader reputation is often overlooked in theory, yet it plays a crucial role in performance in practice. While Byzantine failures are rare, as validators are highly protected, isolated, and economically incentivized to follow the protocol, unresponsive validators are more common.
This may be because they have temporarily crashed, are running on slow hardware, or are simply located farther away. If leader/anchor election is done naively, unresponsive validators will unavoidably stall progress and lead to significant performance impact. A practical approach, implemented in Diem (Diem, 2016) and formalized in (Dag, 2017), is to exclude underperforming validators from leader election. This is achieved by updating the set of candidates after every committed block based on the recent activity of validators. In a chained protocol, if all validators observe the same committed block, they can deterministically elect future leaders based on the information in the chain. However, in some cases, certain validators may see a commit certificate for a block earlier than others. This can lead to disagreements among validators regarding the list of next leaders, causing a temporary loss of liveness. For DAG-based protocols, disagreements on the identity of round leaders can lead the validators to order the DAG completely differently. This poses a challenge for implementing leader reputation on the DAG. As evidence, a Narwhal and Bullshark implementation currently deployed in production in the Sui blockchain does not support such a feature8. Shoal enables leader reputation in Narwhal-based BFT protocols without any additional overhead.

Footnote 8: github.com/MystenLabs/sui/blob/main/narwhal/consensus/src/bullshark.rs

## 8. Discussion

Shoal can be instantiated with any Narwhal-based consensus protocol, and can even switch between protocols during the DAG retrospective re-interpretation step. Shoal uniformizes the latency and throughput across the validators and eliminates the use of timeouts except in very rare cases, which contributes to the robustness and performance of the system. Predictable and smooth latency and throughput patterns have major practical benefits for real systems. They facilitate setting up effective monitoring and alerts for anomaly detection.
This is crucial for ensuring security and quality of service by enabling timely response and any necessary intervention, be it manual or automated. Predictable consensus throughput also facilitates pipelining the ordering of transactions with other components of the Blockchain, e.g., transaction execution and commit. Shoal satisfies the property we name prevalent responsiveness, ensuring that the worst-case executions that must use timeouts due to the FLP impossibility result coincide with scenarios that are extremely improbable from a practical standpoint. Moreover, the design without timeouts plays into the strengths of the leader reputation mechanism of Shoal, and as a result, provides further latency improvements.
2305.02616
Sparsity Domain Smoothing Based Thresholding Recovery Method for OFDM Sparse Channel Estimation
Due to the ever increasing data rate demand of beyond 5G networks and considering the wide use of the Orthogonal Frequency Division Multiplexing (OFDM) technique in cellular systems, it is critical to reduce the pilot overhead of OFDM systems in order to increase the data rate of such systems. Due to the sparsity of multipath channels, sparse recovery methods can be exploited to reduce pilot overhead. OFDM pilots are utilized as random samples for channel impulse response estimation. We propose a three-step sparsity recovery algorithm which is based on sparsity domain smoothing. Time domain residue computation, sparsity domain smoothing, and adaptive thresholding sparsifying are the three steps of the proposed scheme. To the best of our knowledge, the proposed sparsity domain smoothing based thresholding recovery method known as SDS-IMAT has not been used for OFDM sparse channel estimation in the literature. Pilot locations are also derived based on the minimization of the measurement matrix coherence. Numerical results verify that the performance of the proposed scheme outperforms other existing thresholding and greedy recovery methods and has a near-optimal performance. The effectiveness of the proposed scheme is shown in terms of mean square error and bit error rate.
Mohammad Hossein Bahonar, Reza Ghaderi Zefreh, Rouhollah Amiri
2023-05-04T07:45:53Z
http://arxiv.org/abs/2305.02616v1
# Sparsity Domain Smoothing Based Thresholding Recovery Method for OFDM Sparse Channel Estimation

###### Abstract

Due to the ever increasing data rate demand of beyond 5G networks and considering the wide use of the Orthogonal Frequency Division Multiplexing (OFDM) technique in cellular systems, it is critical to reduce the pilot overhead of OFDM systems in order to increase the data rate of such systems. Due to the sparsity of multipath channels, sparse recovery methods can be exploited to reduce pilot overhead. OFDM pilots are utilized as random samples for channel impulse response estimation. We propose a three-step sparsity recovery algorithm which is based on sparsity domain smoothing. Time domain residue computation, sparsity domain smoothing, and adaptive thresholding sparsifying are the three steps of the proposed scheme. To the best of our knowledge, the proposed sparsity domain smoothing based thresholding recovery method known as SDS-IMAT has not been used for OFDM sparse channel estimation in the literature. Pilot locations are also derived based on the minimization of the measurement matrix coherence. Numerical results verify that the performance of the proposed scheme outperforms other existing thresholding and greedy recovery methods and has a near-optimal performance. The effectiveness of the proposed scheme is shown in terms of mean square error and bit error rate.

OFDM, sparse channel estimation, sparse domain smoothing, thresholding.

Footnote †: 978-1-6654-8087-1/22/$31.00 ©2022 IEEE

## I Introduction

Considering the increasing number of users in cellular networks and their increasing data rate demand, it is critical to develop new technologies and network architectures in order to support the ever increasing data rate demand of beyond 5G networks.
Many concepts and technologies such as multiple-input-multiple-output (MIMO) systems [1], device-to-device (D2D) communications [2], dense cellular networks [3], cellular vehicle-to-everything [4] communications, and intelligent reflecting surfaces [5] have been proposed in the literature to increase the capacity of cellular networks. On the other hand, data overhead reduction can also be considered a tool to provide coverage to a larger number of users. Due to the capability of an orthogonal frequency division multiplexing (OFDM) system to combat multipath fading channels by providing flat fading channels over all subcarriers [6], this technique has been widely used in communication systems. Hence, data overhead reduction of OFDM-based communication systems is an important issue. Some of the subcarriers of an OFDM-based communication system are allocated to pilots, which are data overheads of such systems. The number of pilots can be reduced under certain conditions such as the existence of sparse wireless channels. A wireless channel is usually a multipath channel and can be modeled as a sparse channel having a small number of significant paths [7]. The equivalent discrete-time channel will also have a sparse impulse response in the time domain. In order to estimate the sparse channels of an OFDM-based communication system, pilot-assisted sparse channel estimation techniques can be used [8, 9]. The schemes proposed in some studies such as [10, 11] are based on Compressed Sensing (CS), which states that a sparse signal can be successfully reconstructed from very few samples [12]. These methods estimate the sparse channel impulse response by taking into account the inherent sparsity and result in a better estimation performance in terms of mean square error (MSE) or bit error rate (BER) than conventional channel estimation methods such as Least Square (LS) and Minimum MSE (MMSE).
The performance of sparse channel estimation in an OFDM-based communication system depends on the reconstruction method and the pilot placement algorithm [10, 11, 13]. In [14], by minimizing the MMSE of the channel estimation using Cross-Entropy Optimization (CEO), a pilot placement algorithm for OFDM is proposed. By minimizing the coherence, a random search method [15] and a deterministic allocation method [16, 17] are proposed for pilot placement in OFDM systems. In [17], it is shown that non-uniform patterns based on Cyclic Difference Sets (CDS) are optimal. As a solution to the pilot placement problem in MIMO-OFDM systems, a multiple random search and a genetic algorithm based method are suggested in [18]. Also, a random-generation-based method is proposed in [10]. Equispaced pilots are investigated in [19]. The authors consider the orthogonal matching pursuit (OMP) algorithm, which is a greedy sparse reconstruction algorithm. Thresholding-based sparse recovery methods such as the Iterative Method with Adaptive Thresholding (IMAT) [20] have also been proposed in the literature as sparse reconstruction tools. In this paper, we investigate sparse channel estimation of OFDM-based communication systems. Due to the superiority of thresholding-based sparse recovery methods compared to greedy approaches, we propose a three-step thresholding-based sparse recovery algorithm. Time domain residue computation, sparsity domain smoothing, and adaptive thresholding sparsifying are the three steps of the proposed scheme. We also investigate pilot location design based on measurement matrix coherence minimization. We employ a windowing approach as the smoothing operation in the second step of the proposed scheme. The smoothing operation should be implemented in the sparsity domain, and thus the frequency domain representation of the OFDM system is also elaborated in the system model.
To the best of our knowledge, thresholding recovery methods based on sparsity domain smoothing have not been previously employed for sparse channel estimation of OFDM systems in the literature. Simulation results indicate that the proposed scheme outperforms other existing thresholding and greedy sparse reconstruction methods in terms of MSE and BER. The proposed scheme is also compared with an oracle estimator and the near-optimal performance of the proposed scheme is demonstrated. The rest of the paper is organized as follows. Section II describes the channel impulse response measurement of the OFDM-based communication system. Section III describes the proposed recovery method that utilizes sparsity domain smoothing. The pilot location design is investigated in Section III. The simulation results are reported in Section IV. Section V concludes the paper.

## II System Model

Consider the downlink spectrum of a cellular system where a single-antenna base station (BS) transmits data to multiple cellular user equipments (CUEs). There exist \(N\) CUEs in the cell, where the \(i\)-th CUE is denoted as \(c_{i}\). Similar to currently utilized cellular networks, it is assumed that orthogonal cellular links (CLs) are used by the CUEs. Hence, the interference management issue among CUEs is solved. The OFDM technique is also used for data transmission between the BS and each CUE due to its capability to combat multipath fading. The channel between the BS and \(c_{i}\), which is denoted as \(h_{i}\), has frequency-selective fading and its coherence time is much larger than the OFDM symbol duration. The continuous time domain impulse response of \(h_{i}\) can be formulated as \[h_{i}(t)=\sum_{k=0}^{K-1}\bar{\alpha}_{i,k}\delta(t-t_{i,k}), \tag{1}\] where \(\bar{\alpha}_{i,k}\in\mathbb{C}\) is the channel coefficient of the \(k\)-th path of the CL between the BS and \(c_{i}\). The time delay of the path is also denoted as \(t_{i,k}\).
Equation (1) can also be represented in discrete form as \[h_{i}[n]=\sum_{l=0}^{L-1}\alpha_{i,l}\delta[n-l], \tag{2}\] where \(\alpha_{i,l}\in\mathbb{C}\) denotes the complex channel gain coefficient of the \(l\)-tap delay path of the CL between the BS and \(c_{i}\). Equation (2) is derived by sampling the continuous channel impulse response. This representation, which is a finite impulse response filter of length \(L\), can be used in discrete and sparse signal processing applications. It can be concluded that the channel is sparse, which means the channel vector \(\mathbf{h}_{i}=[h_{i}[0],h_{i}[1],...,h_{i}[L-1]]^{T}\in\mathbb{C}^{L\times 1}\) has only \(K\) nonzero elements, where \(K\ll L\). It is assumed that the OFDM system has a total number of \(N\) subcarriers, where \(N_{p}\) of them are used for pilot symbols. The received data at the \(k\)-th subcarrier and \(n\)-th OFDM frame of \(c_{i}\) \((1\leq k\leq N)\) can be formulated as \[r_{i}(n,k)=X_{i}(n,k)H_{i}(n,k)+V_{i}(n,k), \tag{3}\] where \(X_{i}(n,k)\), \(H_{i}(n,k)\), and \(V_{i}(n,k)\) are the \(k\)-th elements of \(\mathbf{\tilde{X}_{i}}(n)\in\mathbb{C}^{N\times 1}\), \(\mathbf{H_{i}}(n)\in\mathbb{C}^{N\times 1}\), and \(\mathbf{V_{i}}(n)\in\mathbb{C}^{N\times 1}\), which are the frequency domain representations of the \(n\)-th transmitted OFDM symbol, the multipath channel at the time of the OFDM symbol, and the additive white Gaussian noise (AWGN) at the time of the OFDM symbol, respectively.
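As a concrete numeric illustration of the \(K\)-sparse channel of Eq. (2) and its zero-padded DFT of Eq. (5), the snippet below builds a toy instance (all sizes and tap values here are made up for illustration):

```python
import numpy as np

# Illustrative K-sparse channel of Eq. (2): only K of the L taps are active.
rng = np.random.default_rng(0)
N, L, K = 256, 64, 4                # subcarriers, channel length, paths
h = np.zeros(L, dtype=complex)
taps = rng.choice(L, size=K, replace=False)
h[taps] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

# Frequency response as in Eq. (5): DFT of the zero-padded impulse response.
H = np.fft.fft(h, n=N)
```

The vector `h` is sparse in the time domain (\(K\) nonzeros out of \(L\)), while `H` is generally dense over all \(N\) subcarriers, which is why recovery is performed in the time (sparsity) domain.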
The frequency domain representations of the transmitted OFDM symbol and the multipath channels can be expressed as \[\mathbf{\tilde{X}_{i}}(n) =\mathbf{F}\mathbf{x}_{i}(n), \tag{4}\] \[\mathbf{H}_{i}(n) =\mathbf{F}\mathbf{\tilde{h}}_{i}(n), \tag{5}\] where \(\mathbf{x}_{i}(n)\in\mathbb{C}^{N\times 1}\) is the \(n\)-th OFDM symbol that is transmitted from the BS to \(c_{i}\) and \(\mathbf{\tilde{h}_{i}}(n)\in\mathbb{C}^{N\times 1}\) is the zero-padded discrete impulse response of the channel between the BS and \(c_{i}\) as expressed in (2). The standard \(N\times N\) DFT matrix is also denoted as \(\mathbf{F}\in\mathbb{C}^{N\times N}\). The transmitted data of each CUE and pilot symbols form a \(N\times 1\) vector which is modulated according to the OFDM technique using the Inverse Fast Fourier Transform (IFFT). Each OFDM symbol is transmitted on \(N\) subcarriers. The pilot symbols are values of the subcarriers of the OFDM symbol which are placed at specific locations called pilot locations. The pilot symbols of the CL between the BS and \(c_{i}\) are \(x_{i}[n],\ n\in\Lambda_{i}\). The set of pilot locations of the CL between the BS and \(c_{i}\) is denoted as \(\Lambda_{i}\) and can be expressed as \[\Lambda_{i}=\{\lambda_{i,1},\lambda_{i,2},...,\lambda_{i,N_{p}}\}, \tag{6}\] where \(\lambda_{i,k}\) is the location of the \(k\)-th pilot of the CL between the BS and \(c_{i}\). Equation (6) states that the cardinality of the pilot location set for all CLs is equal to \(N_{p}\). Let \(\mathbf{r}_{i}(n)\in\mathbb{C}^{N\times 1}\) denote the vector of the \(n\)-th OFDM symbol received samples at \(c_{i}\), and let \(\mathbf{R}_{i}(n)=\mathbf{F}\mathbf{r}_{i}(n)\) be its frequency domain representation. As a result, equation (3) can be expressed in a matrix form as \[\mathbf{R}_{i}(n)=\mathbf{X}_{i}(n)\mathbf{W}\mathbf{h}_{i}(n)+\mathbf{V}_{i}(n) \tag{7}\] where \(\mathbf{X}_{i}(n)=\mathrm{diag}\{\tilde{X}_{i}(n,1),\tilde{X}_{i}(n,2),...,\tilde{X}_{i}(n,N)\}\in\mathbb{C}^{N\times N}\) is the diagonal matrix of the subcarriers of the \(n\)-th OFDM symbol of \(c_{i}\).
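As an illustrative sanity check (not part of the paper), the relations above can be verified numerically: the zero-padded \(N\)-point DFT of the channel in (5) coincides with the partial-DFT product \(\mathbf{W}\mathbf{h}_{i}\) (with \(\mathbf{W}\) the first \(L\) columns of \(\mathbf{F}\)), and the per-subcarrier model (3) matches the matrix form (7). All sizes, tap positions, and symbol values below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 64, 16, 4          # subcarriers, channel length, nonzero taps

# K-sparse channel impulse response h_i (illustrative values)
h = np.zeros(L, dtype=complex)
support = rng.choice(L, K, replace=False)
h[support] = rng.standard_normal(K) + 1j * rng.standard_normal(K)

F = np.fft.fft(np.eye(N))    # standard N x N DFT matrix
W = F[:, :L]                 # partial DFT: first L columns

# Frequency response via zero padding, eq. (5), vs. the partial DFT form
H_full = F @ np.concatenate([h, np.zeros(N - L)])
H_partial = W @ h
assert np.allclose(H_full, H_partial)

# Per-subcarrier model (3) equals the matrix form (7): R = X W h + V
x = np.sign(rng.standard_normal(N)) + 0j   # illustrative BPSK symbols
X = np.diag(x)
V = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
R = X @ W @ h + V
assert np.allclose(R, x * H_partial + V)
```

The same check carries over to (8) by restricting the rows of \(\mathbf{W}\) to any pilot index set.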
The \(K\)-sparse impulse response of the multipath channel during the transmission of the \(n\)-th OFDM symbol is denoted as \(\mathbf{h}_{i}(n)\in\mathbb{C}^{L}\). The partial FFT matrix which includes the first \(L\) columns of a standard \(N\times N\) FFT matrix is denoted as \(\mathbf{W}\in\mathbb{C}^{N\times L}\). We aim to investigate the performance of our proposed pilot allocation method using a sparsity domain interpolation based recovery method. As a result, equation (7) can be considered only at the pilot positions instead of all subcarriers. In addition to that, we solve the pilot allocation problem for all OFDM symbols. Hence, without loss of generality, equation (7) can be expressed as \[\mathbf{R}_{i,\Lambda_{i}}=\mathbf{X}_{i,\Lambda_{i}}\mathbf{W}_{\Lambda_{i}}\mathbf{h}_{i}+\mathbf{V}_{i,\Lambda_{i}}, \tag{8}\] where \(\mathbf{R}_{i,\Lambda_{i}}\in\mathbb{C}^{N_{p}\times 1}\) is the vector of received pilot subcarriers at \(c_{i}\), \(\mathbf{W}_{\Lambda_{i}}\in\mathbb{C}^{N_{p}\times L}\) is the DFT submatrix with \(N_{p}\) rows associated with the pilot subcarriers, and \(\mathbf{V}_{i,\Lambda_{i}}\) is the AWGN associated with the pilot subcarriers in the frequency domain. Assuming equal value pilot symbols, the received data corresponding to pilot symbols, denoted as \(\mathbf{\tilde{H}}_{i,\Lambda_{i}}\), can be expressed as \[\mathbf{\tilde{H}}_{i,\Lambda_{i}}=\mathbf{W}_{\Lambda_{i}}\mathbf{h}_{i}+\mathbf{V}_{i,\Lambda_{i}}, \tag{9}\] where \(\mathbf{\tilde{H}}_{i,\Lambda_{i}}\), \(\mathbf{W}_{\Lambda_{i}}\), and \(\mathbf{h}_{i}\) are the observation vector, measurement matrix, and \(K\)-sparse input vector, respectively, according to the CS framework. ## III Proposed Scheme In this section, our sparse channel estimation approach is proposed.
In the first subsection, the proposed sparse reconstruction algorithm is discussed and the usage of sparsity domain smoothing and interpolation is investigated. In order to evaluate the performance of the proposed scheme, OFDM pilot allocation should be investigated. Hence, the pilot allocation problem is discussed in the second subsection by minimizing the coherence of the measurement matrix. ### _Sparse Reconstruction Using Sparsity Domain Interpolation_ Many sparse recovery algorithms have been used to estimate sparse multipath channels. It has been shown that thresholding-based algorithms have better performance compared to greedy methods. Hence, we select IMAT as our initial sparse recovery algorithm and improve its performance by adding a sparsity domain interpolation operation to it. The IMAT algorithm is a sparsity-based random sampling recovery method. Our main goal is to estimate the sparse impulse response of the channel between the BS and \(c_{i}\) (\(\mathbf{h}_{i}\)) from noisy random samples of the CFR (\(\mathbf{\tilde{H}}_{i,\Lambda_{i}}\)). A successive reconstruction method is utilized in the IMAT algorithm in order to perform sparse signal recovery, where an adaptive threshold is used to sparsify the signal at each iteration [20]. The inverse or pseudo-inverse of the measurement matrix is needed in the IMAT algorithm. Therefore, the pseudo-inverse of the measurement matrix can be derived using the Moore-Penrose pseudo-inverse [21] as \[\mathbf{W}_{\Lambda_{i}}^{+} =\mathbf{W}_{\Lambda_{i}}^{H}(\mathbf{W}_{\Lambda_{i}}\mathbf{W}_{\Lambda_{i}}^{H})^{-1}\] \[=\mathbf{W}_{\Lambda_{i}}^{H}(\frac{1}{N}\mathbf{I}_{N_{p}\times N_{p}})=\frac{1}{N}\mathbf{W}_{\Lambda_{i}}^{H}, \tag{10}\] where the pseudo-inverse of the measurement matrix is denoted as \(\mathbf{W}_{\Lambda_{i}}^{+}\). Our proposed recovery scheme consists of three steps.
To the best of our knowledge, the following approach, which consists of a smoothing step, has not been proposed in the literature for OFDM sparse channel estimation. The first step is to find the time domain residue using the measurement matrix pseudo-inverse as \[\tilde{\tilde{\mathbf{h}}}_{i,k} =\lambda\mathbf{W}_{\Lambda_{i}}^{+}(\mathbf{\tilde{H}}_{i,\Lambda_{i}}-\mathbf{W}_{\Lambda_{i}}\mathbf{\tilde{h}}_{i,k-1})\] \[=\frac{\lambda}{N}\mathbf{W}_{\Lambda_{i}}^{H}(\mathbf{\tilde{H}}_{i,\Lambda_{i}}-\mathbf{W}_{\Lambda_{i}}\mathbf{\tilde{h}}_{i,k-1}), \tag{11}\] where \(\tilde{\tilde{\mathbf{h}}}_{i,k}\in\mathbb{C}^{L}\) is the time domain residue of the estimated impulse response at the \((k-1)\)-th iteration and \(\mathbf{\tilde{h}}_{i,k-1}\in\mathbb{C}^{L}\) is the sparse estimated channel impulse response at the \((k-1)\)-th iteration. The second step is to use a smoothing function which is formulated as \[\mathbf{\hat{h}}_{i,k}=f(\tilde{\tilde{\mathbf{h}}}_{i,k})+\mathbf{\tilde{h}}_{i,k-1}, \tag{12}\] where \(\mathbf{\hat{h}}_{i,k}\) is the non-sparse estimated channel impulse response at the \(k\)-th iteration and the smoothing function is denoted as \(f(.):\mathbb{C}^{L}\rightarrow\mathbb{C}^{L}\). Windowing can be used as a potential smoothing operation. The third step is to sparsify the non-sparse \(\mathbf{\hat{h}}_{i,k}\) using an adaptive threshold described as \[\tilde{h}_{i,k}(l)=\begin{cases}\hat{h}_{i,k}(l)&|\hat{h}_{i,k}(l)|>\beta\mathrm{e}^{-\alpha k}\\ 0&\mathrm{otherwise}\end{cases} \tag{13}\] where \(l\) is the tap index and \(\alpha\) and \(\beta\) are adaptive thresholding parameters. The recovered signal converges to the original sparse signal when the number of iterations is sufficient and the parameters \(\lambda\), \(\alpha\), and \(\beta\) are chosen appropriately. ### _Pilot Allocation Using Measurement Matrix Coherence Minimization_ In this section, we illustrate the proposed pilot allocation method.
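Before turning to pilot allocation, the three steps (11)–(13) can be sketched in code. This is a minimal illustration with our own helper name (`sds_imat`), an identity function standing in for the smoothing operator \(f(\cdot)\) (the paper suggests windowing as one option), and illustrative parameter values; it is not the authors' reference implementation.

```python
import numpy as np

def sds_imat(H_pil, W_pil, N, L, n_iter=30, lam=1.0, alpha=0.4, beta=2.0,
             smooth=lambda r: r):
    """Sketch of the three-step recovery:
    step 1: back-projected time-domain residue via (lam/N) W^H, as in (11);
    step 2: smoothing (12) -- identity placeholder here;
    step 3: adaptive exponential threshold (13)."""
    h = np.zeros(L, dtype=complex)
    for k in range(1, n_iter + 1):
        residue = (lam / N) * W_pil.conj().T @ (H_pil - W_pil @ h)
        h_hat = smooth(residue) + h
        h = np.where(np.abs(h_hat) > beta * np.exp(-alpha * k), h_hat, 0)
    return h

N, L = 128, 16
h_true = np.zeros(L, dtype=complex)
h_true[[0, 3, 7, 12]] = [1.0, -0.8j, 0.5, 0.3 + 0.3j]   # illustrative taps
F = np.fft.fft(np.eye(N))
W = F[:, :L]

# Sanity check: when all N subcarriers are observed, (1/N) W^H is the exact
# pseudo-inverse and the iteration recovers h exactly once the threshold
# has decayed below the smallest tap magnitude.
h_est = sds_imat(W @ h_true, W, N, L)
assert np.allclose(h_est, h_true)

# Partial observation: N_p random pilot rows (lam = N/N_p rescales (11)).
rng = np.random.default_rng(1)
pilots = np.sort(rng.choice(N, 48, replace=False))
h_part = sds_imat((W @ h_true)[pilots], W[pilots], N, L, lam=N / 48)
print("NMSE:", np.linalg.norm(h_part - h_true) / np.linalg.norm(h_true))
```

In the partial-observation case the back-projection is only approximately inverse, which is exactly where the iterative thresholding does its work.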
The performance of the proposed scheme depends on the recovery method as well as the pilot locations. Therefore, minimization of the coherence of the measurement matrix is utilized as a metric to find the pilot locations. The measurement matrix preserves the information from the \(K\)-sparse input vector in the observed vector. In order to determine a proper measurement matrix, its coherence [22], denoted by \(\mu\), can be computed. The \(K\)-sparse channel impulse response is guaranteed to be perfectly recovered when \(\mu_{\mathbf{W}_{\Lambda_{i}}}<\frac{1}{2K}\)[23]. The pilot positions can be optimized using the coherence of each CUE which is defined as \[\mu_{\mathbf{W}_{\Lambda_{i}}}=\max_{\begin{subarray}{c}1\leq m,n\leq L\\ m\neq n\end{subarray}}|\sum_{\lambda\in\mathbf{\Lambda}_{i}}\frac{1}{N_{p}}\mathrm{e}^{-j\frac{2\pi}{N}\lambda(m-n)}|, \tag{14}\] where the DFT submatrix \(\mathbf{W}_{\Lambda_{i}}\) is formulated as \[\mathbf{W}_{\Lambda_{i}}=\begin{bmatrix}1&\mathrm{e}^{-j\frac{2\pi}{N}\lambda_{i,1}}&\cdots&\mathrm{e}^{-j\frac{2\pi}{N}\lambda_{i,1}(L-1)}\\ 1&\mathrm{e}^{-j\frac{2\pi}{N}\lambda_{i,2}}&\cdots&\mathrm{e}^{-j\frac{2\pi}{N}\lambda_{i,2}(L-1)}\\ \vdots&\vdots&\ddots&\vdots\\ 1&\mathrm{e}^{-j\frac{2\pi}{N}\lambda_{i,N_{p}}}&\cdots&\mathrm{e}^{-j\frac{2\pi}{N}\lambda_{i,N_{p}}(L-1)}\end{bmatrix}. \tag{15}\] According to the periodic structure of the DFT submatrix \((\mathbf{W}_{\Lambda_{i}})\) and (15), (14) can be simplified as \[\mu_{\mathbf{W}_{\Lambda_{i}}}=\frac{1}{N_{p}}\max_{1\leq r\leq L-1}\Big{|}\sum_{\lambda\in\mathbf{\Lambda}_{i}}\mathrm{e}^{-j\frac{2\pi}{N}\lambda r}\Big{|}
\tag{16}\] By assuming \(w=\mathrm{e}^{-j2\pi/N}\) and using (16) and the fact that the optimum set of pilots (\(\mathbf{\Lambda}_{opt}\)) which minimizes the maximum of all coherences also minimizes the maximum of all squared coherences, the optimum set of pilots will be derived from \[\mathbf{\Lambda}_{opt}=\operatorname*{arg\,min}_{\mathbf{\Lambda}_{i}}\max_{1\leq r\leq L-1}\Big{|}\sum_{\lambda\in\mathbf{\Lambda}_{i}}w^{r\lambda}\Big{|}^{2}. \tag{17}\] Inspired by [17], we define the CDS of \(\mathbf{\Lambda}_{i}\) as the multiset \(\mathbf{D}_{i}=\{\lambda_{i,l}-\lambda_{i,k}|1\leq l,k\leq N_{p}\}\). By denoting the number of repetitions of a number \(0\leq d\leq N-1\) as a member of the set \(\mathbf{D}_{i}\) as \(\alpha_{d,i}\), the objective in (17) can be expanded as \[\Big{|}\sum_{\lambda\in\mathbf{\Lambda}_{i}}w^{r\lambda}\Big{|}^{2} =\sum_{d=0}^{N-1}\alpha_{d,i}w^{rd} \tag{18}\] \[=N_{p}+\sum_{d=1}^{N-1}\alpha_{d,i}w^{rd}. \tag{19}\] According to the definition of the set \(\mathbf{D}_{i}\), the number of 0s in the set is \(N_{p}\), which can be expressed as \[\alpha_{0,i}=N_{p},1\leq i\leq N. \tag{20}\] The cardinality of the set \(\mathbf{D}_{i}\) is equal to \(N_{p}^{2}\) since \[\sum_{d=0}^{N-1}\alpha_{d,i}=N_{p}^{2},1\leq i\leq N. \tag{21}\] Therefore, it can be concluded that \[\sum_{d=1}^{N-1}\alpha_{d,i}=N_{p}(N_{p}-1),1\leq i\leq N. \tag{22}\] According to equation (19), the value of \(|\sum_{\lambda\in\mathbf{\Lambda}_{i}}w^{r\lambda}|^{2}\) depends on \(r\), \(i\), and \(\{\alpha_{d,i}\}\), which is equivalent to the pilot positions.
By defining \[g(r,i,\mathbf{\Lambda})=N_{p}+\sum_{d=1}^{N-1}\alpha_{d,i}w^{rd} \tag{23}\] and due to the fact that \(r\) and \(i\) are discrete variables, it can be concluded that there exists a lower bound for \(\max_{1\leq r\leq L-1}g(r,i,\mathbf{\Lambda})\), which follows from the fact that the maximum is always at least the mean value, so that \[\max_{1\leq r\leq L-1}g(r,i,\mathbf{\Lambda})\geq\frac{N_{p}(N-N_{p})}{N-1}. \tag{24}\] The problem of finding optimal pilot positions is the minimization of \(\max_{1\leq r\leq L-1}g(r,i,\mathbf{\Lambda})\) and can be solved when the lower bound explained in (24) is achieved. It is obvious that the equality holds when \[g(1,i,\mathbf{\Lambda})=\cdots=g(L-1,i,\mathbf{\Lambda})=\frac{N_{p}(N-N_{p})}{N-1}, \tag{25}\] which results in \[\alpha_{1,i}=\cdots=\alpha_{N-1,i}=\frac{N_{p}(N_{p}-1)}{N-1}. \tag{26}\] Since the total number of subcarriers and pilots are respectively equal to \(N\) and \(N_{p}\), equation (26) states that the best choice of pilot positions in the described OFDM system is achieved when the set of pilot positions forms a CDS. A \((v,k,\lambda)\) CDS can exist only if \(k^{2}-k=(v-1)\lambda\). Given one CDS, other CDSs with the same \((v,k,\lambda)\) parameters can be produced by adding a constant to all of its elements or by multiplying all of its elements by a suitable constant (modulo \(v\)); these can be used as pilot positions of different CUEs. ## IV Simulation Results In this section, the simulation results are reported. The parameters of the evaluated OFDM system are reported in Table I. In order to observe the performance of the proposed channel estimation algorithm, no coding is used in the system. The total number of subcarriers and pilots are chosen according to a (91,10,1) CDS. The initial CDS used for the proposed pilot method is \(\{1,3,7,8,19,22,32,55,64,72\}\).
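As a small illustration (the helper functions `coherence` and `is_cds` are ours, not from the paper), the coherence metric (16) and the constant-difference-count property behind (26) can be checked numerically. The classic \((7,3,1)\) difference set \(\{1,2,4\}\) is used as a known reference: each nonzero difference mod 7 occurs exactly once, so \(|\sum_{\lambda}w^{r\lambda}|=\sqrt{k-\lambda}=\sqrt{2}\) for every \(r\) and \(\mu=\sqrt{2}/3\).

```python
import numpy as np

def coherence(pilots, N, L):
    # mu = (1/N_p) * max_{1<=r<=L-1} | sum_lambda e^{-j 2 pi lambda r / N} |, eq. (16)
    lam = np.asarray(pilots)
    r = np.arange(1, L)
    s = np.exp(-2j * np.pi * np.outer(r, lam) / N).sum(axis=1)
    return np.abs(s).max() / len(lam)

def is_cds(pilots, v):
    # CDS property: every nonzero residue mod v appears equally often
    # as a difference of two distinct elements
    d = [(a - b) % v for a in pilots for b in pilots if a != b]
    counts = np.bincount(d, minlength=v)[1:]
    return counts.min() == counts.max()

# Known (7,3,1) difference set: constant difference counts, mu = sqrt(2)/3
assert is_cds([1, 2, 4], 7)
assert abs(coherence([1, 2, 4], 7, 7) - np.sqrt(2) / 3) < 1e-9

# Pilot set used in the simulations (N = 91, N_p = 10); L here is illustrative
pilots = [1, 3, 7, 8, 19, 22, 32, 55, 64, 72]
print("CDS:", is_cds(pilots, 91), " mu:", coherence(pilots, 91, 20))
```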
The channels are independent Rayleigh multipath fading with 4 significant nonzero taps. The performance of the system is evaluated in terms of the normalized MSE and the BER for a zero-forcing equalizer based on the channel estimation. The multiple random search method of [18] is used as a benchmark for our suggested scheme. Fig. 1 compares the minimum coherence obtained by the multiple random search [18] with that of the proposed scheme. It is observed that the suggested CDS scheme results in a lower coherence while multiple iterations are required for the random search algorithm. Fig. 2 compares the BER performance of our proposed recovery method with other competing algorithms. Our sparsity domain smoothing based recovery method is denoted as sparsity domain smoothing IMAT (SDS-IMAT). The Interpolation method is equivalent to a reconstruction method in which the whole channel impulse response is constructed by interpolation of the limited random samples corresponding to the pilot locations. The Exact method is equivalent to an oracle estimator which knows the channel impulse response perfectly. The OMP and IMAT algorithms are the proposed schemes of [24] and [20], respectively. It is shown that our proposed recovery method outperforms other existing methods and has a near-optimal performance. Fig. 3 compares the MSE performance of our proposed SDS-IMAT algorithm with that of the OMP algorithm when random, random search, and cyclic pilot locations are used. The superiority of our proposed scheme compared to the OMP method for sparse channel estimation in OFDM systems is shown in this figure. According to this figure, we see that our proposed CDS-based pilot allocation scheme has a better performance than the random search and random pilot allocation methods. Fig. 4 demonstrates the effect of having a large number of subcarriers in an OFDM system. A large size OFDM system with 2257 subcarriers and 48 pilots is assumed.
We compare our optimized CDS based pilot allocation scheme with the random pilot allocation technique, while the SDS-IMAT technique is used as the recovery technique for both. The BER versus SNR is depicted in Fig. 4. The simulation results show that the performance of these two methods is almost the same in large size OFDM systems. Hence, a random pilot allocation scheme combined with our proposed SDS-IMAT recovery technique can be used in a large size OFDM system to obtain a high quality reconstruction of the channel taps. ## V Conclusion In this paper, sparse channel estimation and pilot allocation in an OFDM system have been investigated. Due to the superior performance of thresholding-based methods, we have modified the IMAT algorithm to present our proposed scheme. Considering the sparsity domain, we proposed a sparsity domain smoothing-based thresholding recovery method, denoted as SDS-IMAT, which consists of three steps. The proposed scheme detects nonzero taps of the channel impulse response and their corresponding values effectively. Pilot locations are also found by minimizing the measurement matrix coherence, which results in cyclic difference set based pilot locations. It was shown that the proposed scheme outperforms state-of-the-art methods in terms of BER and MSE.
Fig. 1: Degradation of coherence vs. number of iterations for the utilized CDS and random search method.
Fig. 2: BER vs. SNR of the cyclic pilot allocation algorithm (different reconstruction methods).
Fig. 3: MSE vs. SNR of SDS-IMAT and OMP (different pilot allocation methods).
Fig. 4: BER of the system vs. SNR for random and cyclic SDS-IMAT.
2301.01830
Search for an Ultraviolet Zero in the Seven-Loop Beta Function of the $λφ^4_4$ Theory
We investigate whether the seven-loop beta function of the $\lambda \phi^4_4$ theory exhibits evidence for an ultraviolet zero. In addition to a direct analysis of the beta function, we calculate and study Pad\'e approximants and discuss effects of scheme transformations on the results. Confirming and extending our earlier studies of the five-loop and six-loop beta functions, we find that in the range of $\lambda$ where the perturbative calculation of the seven-loop beta function is reliable, the theory does not exhibit evidence for an ultraviolet zero.
Robert Shrock
2023-01-04T21:44:16Z
http://arxiv.org/abs/2301.01830v1
Search for an Ultraviolet Zero in the Seven-Loop Beta Function of the \(\lambda\phi_{4}^{4}\) Theory ###### Abstract We investigate whether the seven-loop beta function of the \(\lambda\phi_{4}^{4}\) theory exhibits evidence for an ultraviolet zero. In addition to a direct analysis of the beta function, we calculate and study Pade approximants and discuss effects of scheme transformations on the results. Confirming and extending our earlier studies of the five-loop and six-loop beta functions, we find that in the range of \(\lambda\) where the perturbative calculation of the seven-loop beta function is reliable, the theory does not exhibit evidence for an ultraviolet zero. ## I Introduction In this paper we consider the renormalization-group (RG) behavior of the \(\lambda\phi^{4}\) field theory in \(d=4\) spacetime dimensions, where \(\phi\) is a real scalar field. This theory, commonly denoted \(\phi_{4}^{4}\), is described by the Lagrangian \[{\cal L}=\frac{1}{2}(\partial_{\nu}\phi)(\partial^{\nu}\phi)-\frac{m^{2}}{2} \phi^{2}-\frac{\lambda}{4!}\,\phi^{4}. \tag{1}\] The Lagrangian (1) is invariant under the global discrete \(\mathbb{Z}_{2}\) symmetry \(\phi\to-\phi\). Quantum loop corrections lead to a dependence of the physical quartic coupling \(\lambda=\lambda(\mu)\) on the Euclidean energy/momentum scale \(\mu\) at which this coupling is measured. The dependence of \(\lambda(\mu)\) on \(\mu\) is described by the RG beta function of the theory, \(\beta_{\lambda}=d\lambda/dt\), or equivalently, \(\beta_{a}=da/dt\), where \(dt=d\ln\mu\)[1] and \[a\equiv\frac{\lambda}{(4\pi)^{2}}. \tag{2}\] (The argument \(\mu\) will often be suppressed in the notation.) Since we will investigate the properties of the theory for large \(\mu\) in the ultraviolet (UV), the value of \(m^{2}\) will not play an important role in our analysis. For technical convenience, we assume that \(m^{2}\) is positive. 
At a reference scale \(\mu_{0}\), the quartic coupling \(\lambda(\mu_{0})\) is taken to be positive for the stability of the theory. The one-loop term in this beta function has a positive coefficient, so that for small \(\lambda\), \(\beta_{\lambda}>0\) and hence as \(\mu\to 0\), the coupling \(\lambda(\mu)\to 0\), i.e., the theory is infrared (IR)-free. This perturbative result is in agreement with nonperturbative approaches [2]; some reviews include [3; 4]. The beta function \(\beta_{a}\) has the series expansion \[\beta_{a}=a\sum_{\ell=1}^{\infty}b_{\ell}\,a^{\ell}. \tag{3}\] The \(n\)-loop (\(n\ell\)) beta function, denoted \(\beta_{a,n\ell}\), is given by Eq. (3) with the upper limit of the loop summation index \(\ell=n\) instead of \(\ell=\infty\). The one-loop and two-loop terms in \(\beta_{a}\) are independent of the scheme used for regularization and renormalization, while terms of loop order \(\ell\geq 3\) are scheme-dependent [5; 6]. For the O(\(N\)) \(\lambda|\vec{\phi}|^{4}\) theory with an \(N\)-component field, \(\vec{\phi}=(\phi_{1},...,\phi_{N})\), the coefficients \(b_{1}\), \(b_{2}\), and \(b_{3}\) were calculated in [5]. Higher-loop coefficients \(b_{\ell}\) with \(\ell\geq 3\) have been computed using the \(\overline{\rm MS}\) minimal subtraction scheme [7; 8]. A calculation of \(b_{5}\) and discussion of earlier computations of \(b_{4}\) and \(b_{5}\) (e.g., [9; 10; 11]) was given in [4; 12]. The coefficient \(b_{6}\) was calculated for \(N=1\) in [13] and for general \(N\) in [14]. Most recently, the seven-loop coefficient \(b_{7}\) was calculated in [15]. In analyzing the series expansion (3), one recalls that it is an asymptotic expansion and the large-order behavior has been the subject of extensive study [16], including [17] and references therein.
An interesting question is whether, for the region of \(\lambda\) where a perturbative calculation of \(\beta_{\lambda}\) is reliable, this beta function exhibits evidence for a zero at some (positive) value of the quartic coupling. This would be an ultraviolet fixed point (UVFP) of the renormalization group, i.e., as \(\mu\to\infty\), \(\lambda(\mu)\) would approach this value (from below). In previous work we have investigated this question up to the five-loop order for the O(\(N\)) \(\lambda|\vec{\phi}|^{4}\) theory in [18] and up to the six-loop order for the real \(\lambda\phi^{4}\) theory in [19] and the O(\(N\)) \(\lambda|\vec{\phi}|^{4}\) theory in [20], finding evidence against such a UVFP. In the present paper, using the results of [15], we extend our analysis to the seven-loop level. Our analysis in [20] covered a large range of specific \(N\) values and also included an argument for the absence of a UV zero in the (rescaled) \(n\)-loop beta function at large \(N\) (see Eqs. (3.12)-(3.13) in [20]). Thus, it will suffice to focus on the \(N=1\) theory here. In view of this previous evidence against a UV zero in \(\beta_{\lambda}\) and associated UVFP in the O(\(N\)) \(\lambda|\vec{\phi}|^{4}\) theory, it is worthwhile to mention one case where an IR-free quantum field theory is known to have a UVFP, namely, the nonlinear O(\(N\)) \(\sigma\) model in \(d=2+\epsilon\) spacetime dimensions. In this theory, an exact solution was obtained in the limit \(N\to\infty\) with \(\lambda(\mu)N=x(\mu)\) a fixed function of \(\mu\) and yielded the beta function \[\beta_{x}=\frac{dx}{dt}=\epsilon x\Big{(}1-\frac{x}{x_{{}_{UV}}}\Big{)} \tag{4}\] for small \(\epsilon\), where \(x_{{}_{UV}}=2\pi\epsilon\) is a UV fixed point of the renormalization group [21]. Since the leading term in \(\beta_{x}\) is positive for \(\epsilon>0\), this theory is IR-free. 
Thus, in this nonlinear O(\(N\)) \(\sigma\) model in \(d=2+\epsilon\) dimensions, the coupling \(x(\mu)\) flows (monotonically) from \(x=0\) at \(\mu=0\) to \(x=x_{{}_{UV}}\) as \(\mu\to\infty\). Note that by making \(\epsilon\ll 1\) one can arrange that the UVFP at \(x_{{}_{UV}}=2\pi\epsilon\) occurs at an arbitrarily small value of the scaled coupling \(x\). This paper is organized as follows. In Section II we review some relevant background. In Section III we present the results of our analysis of the seven-loop beta function. Section IV contains a further analysis of this question of a UV zero using Pade approximants, while Section V discusses effects of scheme transformations. Our conclusions are given in Section VI. ## II Beta function The \(n\)-loop truncation of (3), denoted \(\beta_{a,n\ell}\), is a polynomial in \(a\) of degree \(n+1\) having an overall factor of \(a^{2}\). We may extract this factor and define a reduced beta function \[\beta_{a,r} = \frac{\beta_{a}}{\beta_{a,1\ell}}=\frac{\beta_{a}}{b_{1}a^{2}} \tag{4}\] \[= 1+\frac{1}{b_{1}}\,\sum_{\ell=2}^{\infty}b_{\ell}a^{\ell-1}\.\] The \(n\)-loop truncation of \(\beta_{a,r}\), denoted \(\beta_{a,r,n\ell}\equiv R_{n}\), is defined by taking the upper limit of the sum in (4) to be \(\ell=n\) rather than \(\ell=\infty\). The first two coefficients in the beta function of this theory are \(b_{1}=3\) and \(b_{2}=-17/3\)[5]. The coefficients \(b_{\ell}\) with \(3\leq\ell\leq 7\), and the resultant higher-loop beta function discussed below, are calculated in the \(\overline{\rm MS}\) scheme.
The coefficients up to the five-loop level are [4; 5; 9; 12] \[b_{3}=\frac{145}{8}+12\zeta_{3}=32.5497\, \tag{5}\] \[b_{4} = -\frac{3499}{48}-78\zeta_{3}+18\zeta_{4}-120\zeta_{5} \tag{6}\] \[= -271.606\,\] and \[b_{5} = \frac{764621}{2304}+\frac{7965}{16}\zeta_{3}-\frac{1189}{8}\zeta_ {4}+987\zeta_{5}+45\zeta_{3}^{2} \tag{7}\] \[- \frac{675}{2}\zeta_{6}+1323\zeta_{7}\] \[= 2848.57\,\] where the floating-point values are given to the indicated accuracy and \[\zeta_{s}=\sum_{n=1}^{\infty}\frac{1}{n^{s}} \tag{8}\] is the Riemann zeta function. If \(s=2r\) is even, then \(\zeta_{s}\) can be expressed as a rational number times \(\pi^{2r}\), namely \(\zeta_{2r}=(-1)^{r+1}B_{2r}(2\pi)^{2r}/[2(2r)!]\), where \(B_{n}\) are the Bernoulli numbers; however, we leave these \(\zeta_{2r}\) in their generic form here and below. The six-loop coefficient is [13; 14] \[b_{6} = -\frac{18841427}{11520}-\frac{779603}{240}\zeta_{3}+\frac{16989}{1 6}\zeta_{4}-\frac{63723}{10}\zeta_{5}-\frac{8678}{5}\zeta_{3}^{2}+\frac{6691}{ 2}\zeta_{6}+162\zeta_{3}\zeta_{4}-\frac{63627}{5}\zeta_{7} \tag{9}\] \[- 4704\zeta_{3}\zeta_{5}+\frac{264543}{25}\zeta_{8}-\frac{51984}{ 25}\zeta_{3,5}-768\zeta_{3}^{3}-\frac{46112}{3}\zeta_{9}\] \[= -34776.13\,\] where [22] \[\zeta_{3,5}=\sum_{m>n\geq 1}\frac{1}{n^{3}m^{5}}. \tag{10}\] The seven-loop coefficient is considerably more complicated than \(b_{6}\), and we refer the reader to [15] for the analytic expression. The numerical value is \[b_{7}=474651.0. \tag{11}\] Thus, in summary, the seven-loop beta function of the \(\lambda\phi^{4}\) theory (calculated in the \(\overline{\rm MS}\) scheme), is \[\beta_{a,7\ell} = a^{2}\Big{(}3-\frac{17}{3}a+32.5497a^{2}-271.606a^{3} \tag{12}\] \[+ 2848.57a^{4}-34776.1a^{5}+474651a^{6}\Big{)}\.\] ## III Zeros of the \(n\)-loop beta function up to loop order \(n=7\) In this section we investigate a possible UV zero, denoted \(a_{{}_{UV,n\ell}}\), of the \(n\)-loop beta function, \(\beta_{a,n\ell}\). 
The double zero of \(\beta_{a,n\ell}\) at \(a=0\) is always present (independent of \(n\)); this is an infrared zero and hence will not be of interest here. A necessary condition for there to be robust evidence for a UV zero in the beta function of an IR-free theory is that the values calculated at successive loop orders should be close to each other. Although the two-loop beta function \(\beta_{a,2\ell}\) does have a UV zero, at \(a_{{}_{UV,2\ell}}=9/17=0.52941\), we found that the three-loop beta function \(\beta_{a,3\ell}\) has no UV zero and, while a UV zero is present in \(\beta_{a,4\ell}\), it occurs at a considerably smaller value, namely \(a_{{}_{UV,4\ell}}=0.23332\). At the five-loop level, \(\beta_{a,5\ell}\) has no UV zero, while at the six-loop level, although \(\beta_{a,6\ell}\) has a UV zero, it occurs at a still smaller value, \(a_{{}_{UV,6\ell}}=0.16041\)[18; 19]. Thus, the results of this analysis show that the necessary condition that the beta function calculated to successively higher loop order should exhibit values of \(a_{{}_{UV,n\ell}}\) that are close to each other is not satisfied by this theory. At seven-loop order, using \(\beta_{a,7\ell}\) from [15], we find that this function has no physical UV zero. Instead, the zeros are comprised of three complex-conjugate pairs, \(-0.102135\pm 0.079848i\), \(0.0142348\pm 0.136854i\), and \(0.124533\pm 0.0659940i\). Summarizing, \[a_{{}_{UV,2\ell}}=0.52941,\quad a_{{}_{UV,4\ell}}=0.23332,\quad a_{{}_{UV,6\ell}}=0.16041\,\] with no \(a_{{}_{UV,n\ell}}\) for \(n=3,~{}5,~{}7\). The calculations up to seven loops show a pattern, namely that for even \(n=2,~{}4,~{}6\), \(\beta_{a,n\ell}\) has a zero, \(a_{{}_{UV,n\ell}}\), but the values for different \(n\) are not close to each other, while for odd \(n=1,~{}3,~{}5,~{}7\), \(\beta_{a,n\ell}\) has no UV zero. In Fig. 1 we plot the \(n\)-loop beta functions for \(2\leq n\leq 7\) loops.
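The pattern of zeros above can be reproduced by numerical root-finding on the truncated series; the sketch below uses the rounded coefficient values quoted in the text, so the last digits may differ slightly from the exact results.

```python
import numpy as np

# Seven-loop MS-bar coefficients b_1..b_7, as quoted (rounded) in the text
b = [3.0, -17.0 / 3.0, 32.5497, -271.606, 2848.57, -34776.13, 474651.0]

def uv_zeros(n):
    # real positive roots of beta_{a,n-loop}/a^2 = sum_{l=1}^{n} b_l a^{l-1}
    # (np.roots expects the highest-degree coefficient first)
    roots = np.roots(b[:n][::-1])
    return sorted(r.real for r in roots
                  if abs(r.imag) < 1e-8 and r.real > 0)

for n in range(2, 8):
    print(n, uv_zeros(n))
```

Running this reproduces the even/odd pattern: a single candidate zero at \(n=2,4,6\) (at \(9/17\), \(\approx 0.2333\), \(\approx 0.1604\)) and an empty list for \(n=3,5,7\).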
Another way to show this information is via the \(n\)-loop reduced beta function, \(\beta_{a,r,n\ell}=R_{n}\). We plot \(R_{n}\) in Fig. 2 for \(2\leq n\leq 7\). The results discussed above are evident in these figures. First, one may inquire how large is the interval in a over which the calculations of \(\beta_{a,n\ell}\) to the respective \(n\)-loop orders are in mutual agreement. As one can see from Figs. 1 and 2, the \(n\)-loop beta functions \(\beta_{a,n\ell}\) with \(2\leq n\leq 7\) only agree with each other well over the small interval of couplings \(0\leq a\lesssim 0.05\). As shown in Fig. 1, the \(\beta_{a,n\ell}\) with even \(n=2,~{}4,~{}6\) reach maxima and then decrease, crossing the (positive) real axis at different values listed in Eq. (3.1), while the \(\beta_{a,n\ell}\) with odd \(n\) increase monotonically with \(a\). This seven-loop analysis confirms and extends our conclusions in [19; 20] at the six-loop level that the zero in the two-loop beta function of the \(\lambda\phi^{4}\) theory occurs at too large a value of \(a\) for the perturbative calculation to be reliable. ## IV Analysis with Pade Approximants One can gain further insight into the behavior of the beta function by the use of Pade approximants (PAs). We carried out this analysis up to the six-loop level in [19; 20], finding no indication of a physical UV zero, and here we extend it to the seven-loop level. Since the double zero in \(\beta_{a,n\ell}\) at \(a=0\) is not relevant to the question of a UV zero, we use the reduced beta function \(\beta_{a,r,n\ell}\) for this Pade analysis. The \([p,q]\) Pade approximant to \(\beta_{a,r,n\ell}\) is the rational function [23] \[[p,q]_{\beta_{a,r,n\ell}}=\frac{1+\sum_{j=1}^{p}\,r_{j}a^{j}}{1+\sum_{k=1}^{q} \,s_{k}\,a^{k}} \tag{4.1}\] with \(p+q=n-1\), where the coefficients \(r_{j}\) and \(s_{j}\) are independent of \(a\). 
Figure 1: Plot of the \(n\)-loop \(\beta\) function \(\beta_{a,n\ell}\) as a function of \(a\) for (i) \(n=2\) (red, solid), (ii) \(n=3\) (green, dashed), (iii) \(n=4\) (blue, dotted), (iv) \(n=5\) (black, dot-dashed), (v) \(n=6\) (cyan, solid), and (vi) \(n=7\) (brown, solid). At \(a=0.16\), going from bottom to top, the curves are for \(n=6\), \(n=4\), \(n=2\), \(n=3\), \(n=5\), \(n=7\).
Figure 2: Plot of the ratio \(R_{n}\) of the \(n\)-loop beta function \(\beta_{a,n\ell}\) divided by \(\beta_{a,1\ell}\), as a function of \(a\) for (i) \(n=2\) (red, solid), (ii) \(n=3\) (green, dashed), (iii) \(n=4\) (blue, dotted), (iv) \(n=5\) (black, dot-dashed), (v) \(n=6\) (cyan, solid), and (vi) \(n=7\) (brown, solid). At \(a=0.16\), going from bottom to top, the curves are for \(n=6\), \(n=4\), \(n=2\), \(n=3\), \(n=5\), and \(n=7\).
At seven-loop order, we can calculate the Pade approximants \([p,q]_{\beta_{a,r,7\ell}}\) with \([p,q]\) taking on the values [6,0], [5,1], [4,2], [3,3], [2,4], [1,5], and [0,6]. Since the loop order is understood, we write \([p,q]_{\beta_{a,r,7\ell}}\equiv[p,q]\) for brevity of notation. The PA [6,0] is equivalent to \(\beta_{a,r,7\ell}\) itself, which we have already analyzed, and the PA [0,6] has no zeros, so we focus here on the remaining five Pade approximants. We list our results for these Pade approximants to \(\beta_{a,r,7\ell}\) below: \[[5,1]=\frac{1+11.760a-14.931a^{2}+57.552a^{3}-286.17a^{4}+1367.8a^{5}}{1+13.649a}\, \tag{10}\] \[[4,2]=\frac{1+20.541a+75.687a^{2}-49.670a^{3}+81.973a^{4}}{1+22.430a+107.21a^{2}}\, \tag{11}\] \[[3,3]=\frac{1+25.073a+152.81a^{2}+155.99a^{3}}{1+26.962a+192.89a^{2}+318.33a^{3}}\, \tag{12}\] \[[2,4]=\frac{1+22.314a+103.55a^{2}}{1+24.203a+138.42a^{2}+89.390a^{3}-91.252a^{4}}\, \tag{13}\] \[[1,5]=\frac{1+14.023a}{1+15.912a+19.205a^{2}-45.828a^{3}+196.10a^{4}-910.03a^{5}}. \tag{14}\] We recall some necessary requirements for a zero of a \([p,q]\) Pade approximant to be physically relevant.
These include the requirement that this zero should occur on the positive real axis in the complex \(a\) plane and the requirement that this zero of the PA should be closer to the origin \(a=0\) than any pole on the real positive \(a\)-axis, since otherwise the pole would dominate the IR to UV flow starting at the origin. If a Pade approximant were to exhibit such a zero, then one would proceed to inquire how close it is to any of the \(a_{{}_{UV,n\ell}}\) listed above. However, we find that none of these Pade approximants (10)-(14) has a zero on the positive real \(a\) axis. Explicitly, the [5,1] PA has two complex-conjugate pairs of zeros at \(a=-0.12719\pm 0.26046i\) and \(a=0.26922\pm 0.20930i\), together with a real zero at \(a=-0.074837\). This real zero is part of a nearly coincident pole-zero pair, with the pole of the [5,1] PA being located at \(a=-0.073267\). The appearance of a nearly coincident pole-zero pair close to a point \(a_{0}\) in a \([p,q]\) Pade approximant is typically an indication that the function that the PA is fitting has neither a pole nor a zero in the local neighborhood of \(a_{0}\), since as the locations of the nearly coincident pole-zero pair approach each other, they simply divide out in the ratio (4.1). Each of the Pade approximants that we calculate here has a pole-zero pair. The [4,2] PA has zeros at the complex-conjugate pair \(a=0.42009\pm 0.96575i\), together with the real values \(a=\{-0.16929,\ -0.064970\}\) and poles at \(a=\{-0.14481,\ -0.064414\}\). The [3,3] PA has zeros at \(a=\{-0.78531,\ -0.13282,-0.061458\}\), and poles at \(a=\{-0.42342,\ -0.12140,\ -0.061112\}\). The [2,4] PA has zeros at \(a=\{-0.15193,\ -0.063563\}\), and poles at \(a=\{-0.69186,\ -0.13432,\ -0.063100,\ 1.8689\}\). Finally, the [1,5] PA has a zero at \(a=-0.071313\) and poles at \(a=\{-0.22780,\ -0.070185,\ 0.44160,\ 0.035937\pm 0.39287i\}\).
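The Pade coefficients quoted above can be reproduced from the reduced series by solving the standard linear system for the denominator coefficients; the helper `pade` below is a sketch using the rounded \(b_{\ell}\) values from the text, so the last digits differ slightly from the exact results.

```python
import numpy as np

# Rounded MS-bar coefficients b_1..b_7 from the text
b = [3.0, -17.0 / 3.0, 32.5497, -271.606, 2848.57, -34776.13, 474651.0]
c = [bl / b[0] for bl in b]   # reduced series: beta_{a,r} = sum_j c_j a^j, c_0 = 1

def pade(c, p, q):
    # denominator s_1..s_q from the linear system
    #   sum_{k=0}^{q} s_k c_{p+m-k} = 0,  m = 1..q,  with s_0 = 1
    A = np.array([[c[p + m - k] for k in range(1, q + 1)] for m in range(1, q + 1)])
    rhs = -np.array([c[p + m] for m in range(1, q + 1)])
    s = np.linalg.solve(A, rhs) if q else np.array([])
    # numerator r_j = sum_{k=0}^{min(j,q)} s_k c_{j-k}
    s_full = np.concatenate([[1.0], s])
    r = [sum(s_full[k] * c[j - k] for k in range(min(j, q) + 1)) for j in range(p + 1)]
    return np.array(r), s

r, s = pade(c, 5, 1)
print("numerator:", r)        # compare with the [5,1] entry in (10)
print("denominator s1:", s)   # compare with 13.649 in (10)
```

The same routine with \((p,q)=(4,2),\dots,(1,5)\) reproduces the remaining approximants (11)–(14) to the displayed precision.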
Thus, our analysis with Pade approximants of the seven-loop beta function yields the same conclusion as our analysis of the beta function itself, namely that there is no evidence for a stable, reliably perturbatively calculable UV zero up to this seven-loop level. ## V Effects of scheme transformations Since the terms in the beta function at loop order \(n\geq 3\) are scheme-dependent, it is necessary to assess the effect of scheme transformations in an analysis of zeros of a higher-loop beta function. A scheme transformation can be expressed as a mapping between \(a\) and a transformed coupling \(a^{\prime}\), \[a=a^{\prime}f(a^{\prime})\, \tag{15}\] where \(f(a^{\prime})\) is the scheme transformation function. Since this transformation has no effect in the free theory, one has \(f(0)=1\). We consider \(f(a^{\prime})\) functions that are analytic about \(a=a^{\prime}=0\) and hence can be expanded in the form \[f(a^{\prime})=1+\sum_{s=1}^{s_{\rm max}}k_{s}(a^{\prime})^{s}\, \tag{16}\] where the \(k_{s}\) are constants and \(s_{max}\) may be finite or infinite. The beta function in the transformed scheme, \(\beta_{a^{\prime}}=da^{\prime}/d\ln\mu\), has the expansion \[\beta_{a^{\prime}}=a^{\prime}\sum_{\ell=1}^{\infty}b^{\prime}_{\ell}(a^{\prime })^{\ell}. \tag{10}\] In [24], formulas were derived for the \(b^{\prime}_{\ell}\) in terms of \(b_{\ell}\) and the \(k_{s}\). In addition to \(b^{\prime}_{1}=b_{1}\) and \(b^{\prime}_{2}=b_{2}\), these are \[b^{\prime}_{3}=b_{3}+k_{1}b_{2}+(k_{1}^{2}-k_{2})b_{1}\, \tag{11}\] \[b^{\prime}_{4}=b_{4}+2k_{1}b_{3}+k_{1}^{2}b_{2}+(-2k_{1}^{3}+4k_{1}k_{2}-2k_{3 })b_{1}\, \tag{12}\] and so forth for higher \(\ell\). These results are applicable to the study of both an IR zero in the beta function of an asymptotically free theory and a possible UV zero in the beta function of an IR-free theory. 
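These transformation formulas can be checked mechanically; a minimal sketch (coefficient values below are arbitrary test inputs, not taken from the paper), verifying that the identity transformation, \(f(a^{\prime})=1\) with all \(k_{s}=0\), leaves the higher-loop coefficients unchanged:

```python
def transformed_coeffs(b, k):
    """Scheme-transformed coefficients b'_3, b'_4 from the formulas above,
    given b = (b1, b2, b3, b4) and k = (k1, k2, k3)."""
    b1, b2, b3, b4 = b
    k1, k2, k3 = k
    bp3 = b3 + k1 * b2 + (k1**2 - k2) * b1
    bp4 = (b4 + 2 * k1 * b3 + k1**2 * b2
           + (-2 * k1**3 + 4 * k1 * k2 - 2 * k3) * b1)
    return bp3, bp4

# Identity transformation (all k_s = 0) must give b'_3 = b_3, b'_4 = b_4:
assert transformed_coeffs((1.0, -2.0, 3.0, -4.0), (0.0, 0.0, 0.0)) == (3.0, -4.0)
```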
They were extensively applied to assess scheme dependence in higher-loop studies of an IR fixed point in asymptotically free non-Abelian gauge theories [24; 25; 26; 27; 28]. For the present \(\lambda\phi^{4}\) theory, a study of scheme dependence was carried out in [18]. It was shown that even when one shifts to a scheme different from the usual \(\overline{\text{MS}}\) scheme, the beta function still does not satisfy a requisite condition for a physical UV zero, namely that the value of this zero (in a given scheme) should not change strongly when it is calculated to successive loop orders. This result from [18] also holds in the same way in the present seven-loop context. ## VI Conclusions In this paper we have investigated whether the real scalar field theory with a \(\lambda\phi^{4}\) interaction exhibits evidence of an ultraviolet zero in the beta function. Using the seven-loop coefficient \(b_{7}\) from [15], our present study extends our previous six-loop study in [19; 20] to the seven-loop level. Our work includes a study of the seven-loop beta function itself, together with an analysis of Pade approximants. We conclude that, for the range of couplings where the perturbative calculation of this beta function may be reliable, it does not exhibit robust evidence for an ultraviolet zero. ###### Acknowledgements. I would like to thank Oliver Schnetz for valuable discussions on [15]. This research was supported in part by the U.S. National Science Foundation Grant NSF-PHY-22-15093.
2310.17913
An Advanced Fuel Efficiency Optimization Model with Fractional Programming
Reducing the fuel consumption within a power network is crucial to enhance the overall system efficiency and minimize operating costs. Fuel consumption minimization can be achieved through different optimization techniques where the output power of the generators is regulated based on their individual efficiency characteristics. Existing studies primarily focus either on maximizing the efficiency function or minimizing the operating cost function of the generators to minimize fuel consumption. However, for practical implementation, it becomes imperative to incorporate a function within the optimization framework to represent the fuel consumption rate directly. This study introduces a novel approach by formulating a minimization problem with a sum-of-ratios objective function representing the fuel consumption rate. However, optimization problems with sum-of-ratios objective functions or constraints are extremely challenging to solve because of their strong nonlinearity. To efficiently solve the formulated problem, a fractional programming (FP) approach is adopted in this study. This reformulation technique significantly reduces the solution time of the optimization problem and provides a better solution than nonlinear programming (NLP). In addition, the reformulated problem can also be applied to large-scale systems where the NLP fails to converge. The proposed methodology of this study is tested on the notional MVAC ship system, modified IEEE 30-bus and IEEE 118-bus systems. The results demonstrate that the model successfully minimizes fuel consumption by effectively scheduling the generator and ESS dispatch.
Md Isfakul Anam, Tuyen Vu
2023-10-27T06:15:18Z
http://arxiv.org/abs/2310.17913v1
# An Advanced Fuel Efficiency Optimization Model with Fractional Programming ###### Abstract Reducing the fuel consumption within a power network is crucial to enhance the overall system efficiency and minimize operating costs. Fuel consumption minimization can be achieved through different optimization techniques where the output power of the generators is regulated based on their individual efficiency characteristics. Existing studies primarily focus either on maximizing the efficiency function or minimizing the operating cost function of the generators to minimize fuel consumption. However, for practical implementation, it becomes imperative to incorporate a function within the optimization framework to represent the fuel consumption rate directly. This study introduces a novel approach by formulating a minimization problem with a sum-of-ratios objective function representing the fuel consumption rate. However, optimization problems with sum-of-ratios objective functions or constraints are extremely challenging to solve because of their strong nonlinearity. To efficiently solve the formulated problem, a fractional programming (FP) approach is adopted in this study. This reformulation technique significantly reduces the solution time of the optimization problem and provides a better solution than nonlinear programming (NLP). In addition, the reformulated problem can also be applied to large-scale systems where the NLP fails to converge. The proposed methodology of this study is tested on the notional MVAC ship system, modified IEEE 30-bus and IEEE 118-bus systems. The results demonstrate that the model successfully minimizes fuel consumption by effectively scheduling the generator and ESS dispatch. Fuel consumption, system efficiency, energy management system, sum-of-ratio problem, fractional programming. 
## Nomenclature

* \(P_{i,g},Q_{i,g}\): Real and reactive power output of \(i\)-th generator
* \(P_{i,l},Q_{i,l}\): Real and reactive load at \(i\)-th bus
* \(P_{i,inj},Q_{i,inj}\): Real and reactive power injection at \(i\)-th bus
* \(V_{i},\theta_{ik}\): Voltage and voltage angle difference
* \(G_{ik},B_{ik}\): Conductance and susceptance of line \(ik\)
* \(P_{ik}\): Real power capacity of line \(ik\)
* \(DR_{i},UR_{i}\): Down and up ramp rate of \(i\)-th generator
* \(P_{i}^{C,max},P_{i}^{D,max}\): Maximum charging and discharging rate of \(i\)-th ESS
* \(E_{i,b}\): Energy stored at \(i\)-th ESS
* \(P_{i,b}^{r}\): Output power of \(i\)-th ESS
* \(SOC_{i}\): State of charge of \(i\)-th ESS
* \(\alpha\): Fuel energy density (MWh/L)
* \(N_{g}\): Number of generators
* \(T\): Total planning horizon
* \(\triangle t\): Time step
* \(B\): Number of buses

## I Introduction

Continuous increments in load demand on power systems due to the growing number of consumers pose a great challenge for the engineers and operators in the field. It is essential to supply the increased load requirements of a power system territory by introducing additional energy sources and/or expanding the capacity of the existing sources [1]. Both solutions result in higher fuel costs incurred for operating a large number of distributed generators. Furthermore, the survival period of stand-alone power networks, such as islanded microgrids or ship electrical power systems, largely depends on the fuel consumption of the distributed generators. As a result, a state-of-the-art system efficiency model is required to optimize the fuel consumption of a system. System efficiency optimization techniques refer to minimizing the fuel consumption rate by regulating the output power of the generators within the system. Over the past decades, a significant amount of research has been conducted to improve system efficiency using various optimization methods.
One notable contribution is presented in [2], where the authors propose a novel approach to analyze the characteristics of the efficiency function, leading to the determination of the maximum total power supply and overall efficiency for such systems. In [3], a genetic algorithm modeling framework is presented where the optimization problem involves minimizing the thermal cost function. The cost function is constructed based on the power generation characteristics of the hydropower plant. A similar approach can be found in [4], where the daily optimal generation scheduling problem (DOHGSB) is solved by implementing a unique differential evolution algorithm. The objective function is formulated by analyzing the hydropower plant characteristics or input-output curve. In [5], the authors introduce a novel distributed algorithm to maximize system efficiency. A fourth-order and a third-order efficiency function for the main and auxiliary power generation module (PGM), respectively, are optimized using a distributed crow search algorithm (DCSA). The approaches in [2]-[5] focus on optimizing the efficiency function or the generation characteristics of the generators, which can be complex to apply to systems consisting of generators with different ratings. Also, since the generator efficiency function is not a straightforward representation of the system's fuel consumption, it is impractical to optimize the efficiency function with the objective of improving fuel efficiency. Some literature can be found where the optimization problem is formulated to minimize the fuel consumption rate or fuel cost directly. In [6], four power-sharing schemes are presented to establish a unit commitment strategy to minimize fuel costs. However, instead of considering the generator efficiency function, the authors utilized a typical fuel consumption characteristic curve, which largely depends on the capacity and model of a generator.
The authors in [7] implement a recursive method to estimate a second-order polynomial model of specific fuel consumption. The model is later used to determine the optimal load distribution between the different generators. A similar approach is found in [8], where dynamic programming is used to solve the formulated problem. However, both optimization models are suitable only for simple power networks since they do not include AC power flow or energy storage models. A minimum hourly fuel consumption curve interpolated by a quadratic equation is used in [9] to minimize the fuel consumption of hybrid electric vehicles. This method cannot be extended to transmission or distribution systems since most power system constraints are not included in the model. In [10], the authors utilize a 3D map of brake-specific fuel consumption (BSFC) in terms of the rotating speed of the drive and the generated mechanical torque to determine the minimum point of diesel engine (DE) fuel consumption. The authors in [11] also apply a similar method, where a speed vs. power curve of the diesel engine (DE) is exploited to achieve minimum fuel consumption conditions. Nonetheless, these methods have two major drawbacks: they are implemented specifically for DC ship systems, and the strong nonlinearity of the BSFC curve increases the complexity of determining the optimal operation. Reduction in fuel cost can also be achieved indirectly through economic dispatch (ED) optimization problems [12, 13]. In the ED problem, a polynomial objective function (generally in quadratic form) representing the cost of the generator dispatch is optimized to supply the demand most economically. However, since the objective function of the ED problem does not represent the fuel consumption rate, these formulations are unable to achieve the highest system efficiency.
They are also inconvenient for long-term planning of fuel usage and impractical to apply to systems where achieving the optimal fuel consumption rate is the main goal. Although the strategies discussed above for improving overall system performance and reducing fuel cost have their own merits, the key to minimizing fuel consumption lies in introducing a function that accurately represents the fuel consumption rate. By directly representing fuel consumption in the optimization process, researchers can effectively tackle the core challenge of reducing fuel usage and achieving greater energy efficiency in the system. Therefore, future studies may benefit from exploring methodologies considering this crucial aspect when addressing system efficiency optimization. This paper presents a novel approach by introducing a unique sum-of-ratios objective function, which directly represents the fuel consumption rate. Unlike conventional methods that optimize the polynomial generator efficiency function or cost of operation, this sum-of-ratios formulation offers a practical and more efficient solution. However, solving multiple ratio optimization problems, known as fractional programming (FP), has been proven to be NP-hard [14]. The convergence of these problems with established nonlinear optimization methods can take an extensive amount of time. In addition, when the sum-of-ratios problem involves more than 20 ratios, the current approaches struggle to find a solution within a reasonable timeframe [15, 16]. Due to these challenges, directly solving the sum-of-ratios optimization problem for energy management systems (EMS) is not feasible. In particular, this method is inapplicable to large-scale systems with a high number of variables. This necessitates the development of an efficient reformulation and solution technique to address multiple ratio problems effectively.
Finding innovative approaches to tackle these difficulties will be crucial in making the proposed sum-of-ratios method a practical and scalable solution for optimizing fuel consumption in real-world energy systems. A substantial body of literature exists on Fractional Programming (FP), but the emphasis has primarily been on single-ratio problems. Among the renowned reformulation techniques, the _Charnes-Cooper Transform_, [17][18] is notable for proposing an algorithm to solve single ratio linear FP problems by introducing two new variables and converting the fractional problem into a linear problem. Another classical technique, _Dinkelbach's Transform_[19], reformulates the single ratio problem using a new auxiliary variable updated iteratively until convergence is achieved. [20] presents a formulation of a linear problem equivalent to a single ratio linear FP problem where some duality properties are used to prove the equivalence. For quadratic FP problems, where both the numerator and denominator are quadratic functions, a new method called the _decomposition fractional separable method_ is proposed in [21] using linear programming techniques. An alternative approach to solving single-ratio quadratic FP is outlined in [22], employing Taylor series expansion for effective reformulation. The literature discussed in references [17]-[22] primarily focused on solving single-ratio FP problems and cannot be directly extended to handle multi-ratio problems. Addressing multiple ratio problems, as encountered in the sum-of-ratios function, remains a challenge that requires innovative and efficient reformulation and solution techniques. However, in [23], the authors proposed an extension of Dinkelbach's Transform specifically tailored to address multi-ratio FP problems. Nonetheless, this method was later refuted by Falk and Palocsay [24], who demonstrated its limitations through a numerical example. 
To find the globally optimal solution for the sum-of-ratios problem, [25] introduced a practical method that involves solving a sequence of convex programming problems. In [26], a convexification strategy was employed to decompose fractional terms into convex and concave components. Then, a piecewise linearization technique was applied to approximate the concave terms effectively. Additionally, [27] proposed a quadratic transform to tackle concave-convex multiple ratio minimization problems. In the case of generalized convex multiplicative functions, a reformulation technique was presented in [28], where the main problem was reformulated as a concave minimization problem with 2p variables. This reformulation technique could also be applied to sum-of-ratios FP problems if the multiplicative terms were replaced with a convex over a concave function [29]. In our study, the objective function is defined as a sum-of-ratios minimization problem with non-negative, convex numerator terms and positive, concave denominator terms. Due to this specific form of the problem, an appropriate algorithm should be selected to solve the formulated multiple-ratio FP problem effectively. As a result, the reformulation technique presented in [28] is adopted in this paper. By leveraging this reformulation technique, the complexities of the sum-of-ratios problem can be effectively addressed, and an optimized solution can be found with a feasible convergence time. The contributions of this paper are the following:

* A novel fractional objective function is introduced in this paper, which directly represents the fuel consumption rate of the generators. Unlike typical system efficiency optimization problems that use the efficiency function or the operating cost function as the objective, this unique formulation directly accounts for fuel consumption.
This approach proves to be more efficient and practical compared to previous studies since it directly targets the core issue of minimizing fuel usage and improving overall system efficiency.

* To address the optimization problem in this study, the sum-of-ratios fractional programming (FP) algorithm is employed. To the best of our knowledge, this paper represents the first application of the FP method to solve the optimization problem for EMS efficiently. The reformulation technique with FP can also be applied to different power or communication system research where sum-of-ratios functions are used.

* The successful application of the FP algorithm, combined with the convex relaxation of nonlinear constraints, demonstrates that the proposed model is suitable for handling large-scale systems. This capability is exemplified through the model's effective implementation on the IEEE 118-bus system. By demonstrating its applicability to such a complex and extensive system, the paper establishes the scalability of the proposed approach for real-world energy management scenarios.

The remainder of the paper is organized as follows: the fuel efficiency problem formulation with the sum-of-ratios objective function and its convex reformulation are presented in Section II. In Section III, the solution algorithm for the reformulated problem is described. The results for the notional MVAC ship system, IEEE 30-bus, and IEEE 118-bus systems, accompanied by the performance comparisons, are demonstrated in Section IV. Finally, Section V presents the conclusion and future work.

## II Problem Formulation

### _Optimization Model for System Efficiency_

This section presents a unique sum-of-ratios objective function for the optimization problem that directly represents the fuel consumption of the generators. The objective of the minimization problem is to minimize the fuel consumption rate of the generators over the planning horizon, which will maximize the system efficiency.
The fuel consumed by a generator can be expressed by taking into account the generator's efficiency and the output power it produces. The objective function is the following: \[\text{minimize }f=\frac{1}{\alpha}\sum_{t=0}^{T}\sum_{i\in N_{g}}\frac{P_{i,g}^{t}}{\eta_{i,g}}\Delta t. \tag{1}\] where the generator efficiency is \(\eta_{i,g}=a_{i}p_{i,g}^{2}+b_{i}p_{i,g}+c_{i}\), with \(a_{i}\), \(b_{i}\), and \(c_{i}\) generator-specific constants; \(\alpha\) is the fuel energy density (MWh/L); \(p_{i,g}\) is the per-unit output of the \(i\)-th generator, \(p_{i,g}=P_{i,g}/P_{i,b}\); \(P_{i,g}^{t}\) is the generator output power at time \(t\); and \(P_{i,b}\) is the base power of the \(i\)-th generator. \(N_{g}\) is the total number of generators, \(T\) is the planning horizon, and \(\triangle t\) is the length of each time period. The following active and reactive power balance constraints are associated with the system: \[P_{i,inj}=P_{i,g}-P_{i,l} \tag{2}\] \[Q_{i,inj}=Q_{i,g}-Q_{i,l} \tag{3}\] where \[P_{i,inj}=\sum_{k\in B}V_{i}V_{k}(G_{ik}cos\theta_{ik}+B_{ik}sin\theta_{ik}) \tag{4}\] \[Q_{i,inj}=\sum_{k\in B}V_{i}V_{k}(G_{ik}sin\theta_{ik}-B_{ik}cos\theta_{ik}) \tag{5}\] Here, (4) and (5) are the AC power flow equations for the system. The following constraints should be included in the problem formulation to maintain the operational limits of the system: \[P_{i,g}^{min}\leq P_{i,g}^{t}\leq P_{i,g}^{max}, \tag{6}\] \[Q_{i,g}^{min}\leq Q_{i,g}^{t}\leq Q_{i,g}^{max}, \tag{7}\] \[V_{i}^{min}\leq V_{i}^{t}\leq V_{i}^{max}, \tag{8}\] \[\theta_{i}^{min}\leq\theta_{i}^{t}\leq\theta_{i}^{max}, \tag{9}\] \[-P_{ik}\leq P_{ik}^{t}\leq P_{ik}, \tag{10}\] \[-Q_{ik}\leq Q_{ik}^{t}\leq Q_{ik}, \tag{11}\] \[-DR_{i}\leq P_{i,g}^{t+1}-P_{i,g}^{t}\leq UR_{i}. \tag{12}\] where (6) and (7) represent the generators' real and reactive power generation limits, (8) and (9) are the voltage and voltage angle limits, (10) and (11) are the line limits for real and reactive power, and (12) is the ramp rate limit.
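The objective (1) is straightforward to evaluate for a given candidate dispatch; a minimal sketch (the vectorized helper and its array conventions are ours, not the paper's):

```python
import numpy as np

def fuel_consumed(P, P_base, coeffs, alpha, dt=1.0):
    """Evaluate the sum-of-ratios objective (1): total fuel (L) over the horizon.
    P      : (T, Ng) generator outputs P_{i,g}^t in MW
    P_base : (Ng,) base power of each generator in MW
    coeffs : (Ng, 3) efficiency-curve coefficients (a_i, b_i, c_i)
    alpha  : fuel energy density in MWh/L
    dt     : time-step length in hours
    """
    P = np.asarray(P, dtype=float)
    p = P / np.asarray(P_base, dtype=float)   # per-unit output p_{i,g}
    a, b, c = np.asarray(coeffs, dtype=float).T
    eta = a * p**2 + b * p + c                # efficiency eta_{i,g}
    return float(np.sum(P / eta) * dt / alpha)

# One generator at 1 MW with constant efficiency 0.5 and alpha = 0.01 MWh/L
# burns (1 / 0.5) / 0.01 = 200 L in a single 1-hour step.
fuel = fuel_consumed([[1.0]], [1.0], [[0.0, 0.0, 0.5]], alpha=0.01)
```

The optimization problem itself, of course, treats \(P_{i,g}^{t}\) as decision variables rather than fixed inputs; this helper only scores a dispatch.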
The energy storage system (ESS) plays a vital role in minimizing the fuel consumed by the generators. The following ESS constraints are included in the optimization problem: \[E_{i,b}^{t}=E_{i,b}^{t-1}-\eta_{b}P_{i,b}^{r,t}\triangle t \tag{13}\] \[-P_{i}^{C,max}\leq P_{i,b}^{r,t}\leq P_{i}^{D,max}, \tag{14}\] \[\sum_{t=0}^{T}P_{i,b}^{r,t}=0, \tag{15}\] \[SOC_{i}^{min}\leq SOC_{i}^{t}\leq SOC_{i}^{max}, \tag{16}\] where (13) indicates the energy conservation constraint, (14) is the limit for the charging or discharging rate, and (16) is the state of charge (SOC) limit of the ESS. For the ESS, although the SOC can vary from 0 to 1 (0% to 100%), fully discharging can damage the battery permanently and shorten its life cycle [30]. In this paper, the minimum SOC is selected as 0.2 (20%). Eq. (15) ensures that the sum of the total charging and discharging power over a planning period is zero, which helps the system recharge the battery before the next planning cycle. ### _Reformulation of the Objective Function_ The objective function (1) of the formulated optimization problem is a highly nonlinear sum-of-ratios case. In this section, the objective function is reformulated to a convex function using a technique developed for solving generalized convex multiplicative problems. Later, the convex minimization method is utilized to solve the reformulated problem iteratively. The general convex multiplicative minimization problem has the following structure: \[\text{minimize }h(x)+\sum_{i=1}^{p}f_{i}(x)g_{i}(x) \tag{17}\] \[\text{subject to }\ x\in X\] where \(h(x)\), \(f_{i}(x)\), and \(g_{i}(x)\) for all \(i\) are convex functions and \(X\subset R^{n}\) is a convex set.
If \(h(x)=0\), \(f_{i}(x)=A_{i}(x)\), and \(g_{i}(x)=1/B_{i}(x)\), (17) will be in the following form: \[\text{minimize }H(x)=\sum_{i=1}^{m}\frac{A_{i}(x)}{B_{i}(x)} \tag{18}\] \[\text{subject to }\ x\in X\] which is a sum-of-ratios problem, where the \(A_{i}(x)\) are non-negative, convex functions and the \(B_{i}(x)\) are positive, concave functions for all \(i\). The authors in [28] defined the following problem by introducing 2m auxiliary variables \(\zeta_{i}\) and \(\beta_{i}\), where \(i=1,2,\ldots,m\): \[\text{minimize }F(x,\zeta,\beta)=\frac{1}{2}\sum_{i=1}^{m}[\zeta_{i}(A_{i}(x))^{2}+\beta_{i}(B_{i}(x))^{2}] \tag{19}\] \[\begin{array}{c}\text{subject to }\ x\in X\\ \zeta_{i}\beta_{i}\geq 1\\ (\zeta,\beta)>0\end{array}\] where \(\zeta=(\zeta_{1},\zeta_{2},\ldots,\zeta_{m})\) and \(\beta=(\beta_{1},\beta_{2},\ldots,\beta_{m})\). It can be proved that if \((x^{*},\zeta^{*},\beta^{*})\) is an optimal solution of (19), then \(x^{*}\) will be an optimal solution of (18) and \(H(x^{*})=F(x^{*},\zeta^{*},\beta^{*})\) [28]. As a result, the optimization problem in Section II(A) can be written as the following problem: \[\text{minimize }f(P,\zeta,\beta)=\frac{1}{2}[\sum_{i=1}^{m}(\zeta_{i}(P_{i,g}^{t})^{2}+\beta_{i}\eta_{i,g}^{2})]\Delta t \tag{20}\] subject to \[\text{(2)-(16)}\,\] \[\zeta_{i}\beta_{i}\geq 1, \tag{21}\] \[(\zeta,\beta)>0, \tag{22}\] ### _Convex Relaxation Technique_ Convex optimization methods can only be applied to problems where the objective function and all constraints are finite and convex. Although the reformulated problem in Section II(B) has a convex objective function for a fixed set of \(\zeta_{i}\) and \(\beta_{i}\), several constraints still contain nonlinearity.
In this section, the nonlinear power flow constraints (2), (3) and the line flow constraints (10), (11) are replaced with the following linear and quadratic constraints: \[P_{i,\text{inj}}=\sqrt{2}u_{i}G_{ii}+\sum_{k\in B}\left(G_{ik}W_{R_{ik}}+B_{ik}W_{I_{ik}}\right), \tag{23}\] \[Q_{i,\text{inj}}=-\sqrt{2}u_{i}B_{ii}+\sum_{k\in B}\left(G_{ik}W_{I_{ik}}-B_{ik}W_{R_{ik}}\right), \tag{24}\] \[P_{ik}=\sqrt{2}u_{i}G_{ik}-\left(G_{ik}W_{R_{ik}}+B_{ik}W_{I_{ik}}\right), \tag{25}\] \[Q_{ik}=-\sqrt{2}u_{i}B_{ik}+\left(B_{ik}W_{R_{ik}}-G_{ik}W_{I_{ik}}\right). \tag{26}\] \[W_{R_{ik}}^{2}+W_{I_{ik}}^{2}\leq 2u_{i}u_{k}. \tag{27}\] \[\theta_{ik}=\tan^{-1}\left(\frac{W_{I_{ik}}}{W_{R_{ik}}}\right). \tag{28}\] Here, (23) and (24) are the linear real and reactive power flow equations, and (25) and (26) are the linear line flow equations. The relationship between the convex variables \(u_{i}\), \(W_{I_{ik}}\), and \(W_{R_{ik}}\) is defined by equations (27) and (28). Since (28) is still nonlinear, a Taylor series expansion can be used to linearize the equation: \[\tan^{-1}\frac{W_{I_{ik}}^{(q)}}{W_{R_{ik}}^{(q)}}=\theta_{ik}+\frac{W_{I_{ik}}^{(q)}}{W_{R_{ik}}^{(q)2}+W_{I_{ik}}^{(q)2}}W_{R_{ik}}-\frac{W_{R_{ik}}^{(q)}}{W_{R_{ik}}^{(q)2}+W_{I_{ik}}^{(q)2}}W_{I_{ik}}. \tag{29}\] where the higher-order terms are neglected and \((W_{I_{ik}}^{(q)},W_{R_{ik}}^{(q)})\) is the initial estimate. A detailed description of this technique can be found in [31]. The final problem formulation will be as follows: \[\text{minimize }f(x,\zeta,\beta)\] subject to \[\text{(2), (3), (6)-(16), (21), (22), (27) and (29)}\] where \(P_{i,inj}\), \(Q_{i,inj}\), \(P_{ik}\), and \(Q_{ik}\) are defined by (23), (24), (25), and (26), respectively. ## III Solution Technique In this section, an iterative method is described to solve the reformulated problem.
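Before turning to the iteration, the first-order expansion in (29) can be checked numerically; a small sketch (the expansion point \((W_{R}^{(q)},W_{I}^{(q)})\) below is an arbitrary illustrative choice of ours):

```python
import numpy as np

def theta_approx(WR, WI, WRq, WIq):
    """First-order Taylor expansion of theta = atan(W_I/W_R) about the
    estimate (WRq, WIq), i.e. Eq. (29) solved for theta_ik; the cross
    terms of the expansion cancel, leaving only the two linear terms."""
    n2 = WRq**2 + WIq**2
    return np.arctan2(WIq, WRq) - (WIq / n2) * WR + (WRq / n2) * WI

WRq, WIq = 1.0, 0.2                  # illustrative operating point
# Exact at the expansion point, first-order accurate nearby:
err_at_point = abs(theta_approx(WRq, WIq, WRq, WIq) - np.arctan2(WIq, WRq))
err_nearby = abs(theta_approx(1.01, 0.21, WRq, WIq) - np.arctan2(0.21, 1.01))
```

The nearby error is second order in the perturbation, which is why the linearization is refreshed each iteration from the latest \((W_{R}^{(q)},W_{I}^{(q)})\).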
For a fixed set of \((\zeta,\beta)\), let us consider the following sub-problem of (19): \[\text{minimize }F(x;\zeta,\beta)=\frac{1}{2}\sum_{i=1}^{m}[\zeta_{i}(A_{i}(x))^{2}+\beta_{i}(B_{i}(x))^{2}] \tag{30}\] Equation (30) can be solved using any standard convex optimization technique. If the optimal solution of (30) is \(x^{*}(\zeta,\beta)\), then for this fixed \(x^{*}\), (19) reduces to the following problem in the 2m variables \((\zeta,\beta)\): \[\text{minimize }F_{aux}(\zeta,\beta)=\frac{1}{2}\sum_{i=1}^{m}[\zeta_{i}(A_{i}(x^{*}))^{2}+\beta_{i}(B_{i}(x^{*}))^{2}] \tag{31}\] subject to, \[\zeta_{i}\beta_{i}\geq 1\] \[(\zeta,\beta)>0\] where \(F_{aux}\) denotes the auxiliary problem of \(F\). Equations (30) and (31) are solved iteratively until convergence is achieved. The following algorithm is used to solve the fractional optimization problem:

**Step 0:** Set \(i=0\).

**Step 1:** Find an optimal solution \(x^{*}\) of (30) for fixed \((\zeta,\beta)\).

**Step 2:** Find an optimal solution \((\zeta_{i},\beta_{i})\) of (31) using a convex minimization method.

**Step 3:** Update the feasible region of (31) by including the cutting function \(l_{i}\), where \(l_{i}=2-\beta_{i}\sqrt{\frac{\zeta_{i}}{\beta_{i}}}-\zeta_{i}\sqrt{\frac{\beta_{i}}{\zeta_{i}}}\leq 0\).

**Step 4:** Let \(i=i+1\) and return to Step 1. Continue until convergence.

In this paper, the MOSEK optimization toolbox is used to solve the formulated problem [32]. The optimization problem is transformed into the conic quadratic format to fit into MOSEK. MOSEK supports two types of quadratic cones: * General quadratic cones: \[Q^{n}=[x\in R^{n}:x_{0}\geq\sqrt{\sum_{j=1}^{n-1}x_{j}^{2}}]\] * Rotated quadratic cones: \[Q^{n}_{r}=[x\in R^{n}:2x_{0}x_{1}\geq\sum_{j=2}^{n-1}x_{j}^{2}],\ x_{0}\geq 0,\ x_{1}\geq 0\] All the quadratic parts of the reformulated minimization problem in Section II(C) are replaced with rotated quadratic cones and corresponding linear equations.
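As an illustration of this conic rewriting (our sketch, not code from the paper): in sub-problem (30) the \((\zeta_{i},\beta_{i})\) are fixed constants, so an epigraph bound \(r\geq\zeta A(x)^{2}\) is the rotated-cone membership \((r/2,\,1,\,\sqrt{\zeta}A)\in Q_{r}^{3}\), while the coupling constraint (21), \(\zeta_{i}\beta_{i}\geq 1\), is \((\zeta_{i},\,\beta_{i},\,\sqrt{2})\in Q_{r}^{3}\). A numeric membership check:

```python
import numpy as np

def in_rotated_cone(x0, x1, rest):
    # Membership in Q_r^n: 2*x0*x1 >= sum_j x_j^2, with x0 >= 0 and x1 >= 0.
    return bool(x0 >= 0 and x1 >= 0 and 2 * x0 * x1 >= np.sum(np.square(rest)))

# Epigraph bound r >= zeta * A^2 with zeta held fixed, as in sub-problem (30):
# (r/2, 1, sqrt(zeta)*A) must lie in Q_r^3.  Pick a feasible (loose) r.
zeta, A = 2.0, 3.0
r = 20.0                              # any r >= zeta*A^2 = 18 is feasible
ok_epigraph = in_rotated_cone(r / 2, 1.0, [np.sqrt(zeta) * A])

# Coupling constraint (21), zeta*beta >= 1, as (zeta, beta, sqrt(2)) in Q_r^3;
# a strictly feasible point:
ok_coupling = in_rotated_cone(0.5, 2.5, [np.sqrt(2)])

# An infeasible point (zeta*beta = 0.4 < 1) must fail the cone test:
bad = in_rotated_cone(0.4, 1.0, [np.sqrt(2)])
```

In the actual solve these memberships are posed as cone constraints to MOSEK rather than checked pointwise; the check only makes the encoding concrete.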
As a result, the transformed problem can be solved using the MOSEK solver. ## IV Case Studies In this paper, the proposed system efficiency model is tested with a notional 12-bus MVAC ship system, a modified IEEE 30-bus system, and an IEEE 118-bus system. Each system consists of multiple generators, distribution lines, ESS, and loads at different buses. The load data are generated using the demand pattern of the NYISO Real-Time Dashboard [33]. The load profiles of the different systems can be observed in Fig. 1. This paper considers a 24-hour time horizon while solving the optimization problem, where each time step is 1 hour. However, any time horizon and length of time step can be selected based on the system requirements. ### _Notional 12-Bus MVAC Ship System_ The notional four-zone 12-bus MVAC ship system [34] (shown in Fig. 2) consists of two main gas turbine generators (MTG) and two auxiliary gas turbine generators (ATG). The generator parameters can be observed in TABLE I. The system also has 8 ESS (each zone has 2 ESS) and multiple loads, including 2 propulsion motor modules (PMM) at buses 6 and 7 and AC load centers (ACLC) at buses 1, 2, 3, 4, 9, 10, 11, and 12. The ESS data are shown in TABLE II. The proposed optimization model was run for the notional MVAC ship system; the simulation time was 34.414s, taking 178 iterations to converge. The output power generation can be observed in Fig. 3. The system is likely to dispatch the ATGs more than the MTGs to improve the overall efficiency since the capacities of the ATGs are lower than those of the MTGs.
\begin{table} \begin{tabular}{c c c c} \hline \hline Types & Capacity (MW) & Efficiency curve coefficients & Number of Units \\ \hline ATG & 4.7 & \(a_{i}=-.133,b_{i}=.31,c_{i}=.174\) & 2 \\ MTG & 35 & \(a_{i}=-.133,b_{i}=.31,c_{i}=.204\) & 2 \\ \hline \hline \end{tabular} \end{table} TABLE I: Generator Data for Notional MVAC Ship System

Fig. 1: 24-hour load profile of different systems

The ESS charging and discharging schedule and the SOC of the ESS are shown in Figs. 4 and 5, respectively. ### _IEEE 30-bus System_ The IEEE 30-bus system [35] has 6 generator buses, 16 load buses, and 42 transmission lines. In addition, the system is modified by including six energy storage systems (ESS) at buses 5, 11, 15, 19, 23, and 27. The proposed model was then tested on the IEEE 30-bus system, and the observed convergence time was 91.728s for 476 iterations. The generator output schedule is shown in Fig. 6. Although the system has six generators, only three produced output during the simulation. Generator graphs with zero output are not included in the figure. The state of charge (SOC) of the ESS is shown in Fig. 7. ### _IEEE 118-bus System_ The IEEE 118-bus system [36] has 21 generator buses, 113 load buses, and 179 transmission lines. In addition, the system is modified by including 16 energy storage systems (ESS). In this study, the IEEE 118-bus system is the largest system to which the system efficiency model is applied. The model was run successfully with a reasonable convergence time of 316.22s. Fig. 8 shows the output power of the generators. For the IEEE 118-bus system, only 9 generators generated power. Only the generators with output are shown in the figure. The summary of the results for all systems is shown in TABLE III.
The simulations were run on an Intel Core i7-10700 CPU, 2.90 GHz processor with 32.0 GB RAM. ### _Performance Comparison_ In this section, a comparative analysis of the performance of the proposed fuel efficiency model is conducted in two distinct domains: * Comparison in terms of convergence time: This comparison indicates that the model proposed in this paper can be solved efficiently within a reasonable convergence time. As a result, the model can be applied to large-scale systems where the nonlinear programming model takes excessive time to converge. * Comparison in terms of fuel consumption: The proposed model consumes a significantly smaller amount of fuel compared to the other models with different generator dispatches. The results indicate that the proposed model is the most efficient and optimal. #### Iv-D1 Comparison in terms of convergence time The convergence time of the NLP and FP models is compared in this subsection. The nonlinear optimization model from section II(A) is solved with the MATLAB NLP solver (_fmincon_ function) for the notional MVAC ship system. The solution took an extensive time (more than 8 hours) to converge for the notional MVAC ship system with 24 time steps, whereas the convergence time of the FP model was only 34.414s for the same system. As a result, the NLP makes the solution procedure impractical to apply to extensive systems. Moreover, the fuel consumed during the operation was \(6.53\times 10^{4}\) L, higher than the fuel consumed with the proposed FP model. The FP model is clearly more advantageous than the nonlinear programming optimization, even with a higher number of variables. The performance comparison between FP and NLP is shown in TABLE IV. Fig. 6: Generator output for IEEE 30-bus system at (a) bus 5, (b) bus 11, (c) bus 13. Fig. 7: State of charge (SOC) for IEEE 30-bus system Fig. 8: Generator output for IEEE 118-bus system Fig. 9: State of charge (SOC) for IEEE 118-bus system
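For intuition on why the FP route converges so much faster than generic NLP, a single-ratio fractional program can be solved by Dinkelbach's classic iteration, which replaces the ratio by a sequence of easy parametric subproblems. This is a generic sketch of the fractional-programming idea, not the paper's sum-of-ratios transform; the example function is made up.

```python
# Dinkelbach's iteration for min f(x)/g(x) with g(x) > 0 on an interval.
# Generic fractional-programming sketch; not this paper's exact reformulation.

def argmin_scalar(h, lo, hi, iters=100):
    """Ternary search for the minimizer of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if h(m1) < h(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

def dinkelbach(f, g, lo, hi, tol=1e-10, max_iter=100):
    """Iterate: x_k = argmin [f - lam*g], then update lam = f(x_k)/g(x_k)."""
    lam = f(lo) / g(lo)                     # any feasible starting ratio
    for _ in range(max_iter):
        x = argmin_scalar(lambda t: f(t) - lam * g(t), lo, hi)
        new_lam = f(x) / g(x)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return x, lam

# Toy example: min (x^2 + 1)/x on [0.1, 10] has minimum value 2 at x = 1
x_opt, ratio = dinkelbach(lambda x: x * x + 1.0, lambda x: x, 0.1, 10.0)
```

Each subproblem is a smooth minimization without the ratio, which is the same structural advantage the FP reformulation exploits at a much larger scale.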
#### Iv-D2 Comparison in terms of fuel consumption The proposed system efficiency model is compared with three other models (as listed in TABLE V) to demonstrate the fuel efficiency. The difference between the models is in the generator dispatch allowed during the simulation. The number and type of generators allowed for each model during the operation are indicated by the '_Dispatch_' column. The comparison results are shown in Fig. 11. It can be observed that the proposed model has the lowest fuel consumption among all models. Initially, all models consume almost similar amounts of fuel. However, the difference in fuel consumption increases with the number of time steps. This observation highlights the superior efficiency of the proposed model in comparison to the models examined in this section. ## V Conclusion This study has addressed the challenge of the fuel consumption minimization problem to enhance the system efficiency and reduce the operating cost of the power generation units. The traditional approaches typically focus on maximizing the efficiency function or minimizing the generator cost function to achieve optimal fuel consumption for the system. However, these approaches do not account for the fuel consumption rate directly and are impractical to implement in real-world systems where optimizing fuel use is the objective. In addition, existing studies that have used the fuel consumption curve to formulate the optimization problem have numerous limitations, including incompatibility to apply to large AC systems. As a result, it is crucial to incorporate a function that directly represents fuel consumption to enhance the system's fuel efficiency. This study introduced a novel objective function based on a sum-of-ratios approach, providing a straightforward representation of the fuel consumption rate. 
The sum-of-ratios problem was effectively solved by leveraging a fractional programming (FP) reformulation technique, resulting in successful fuel consumption minimization. Moreover, the low convergence time of the solution makes the model suitable for large-scale systems. While the model stands out in its uniqueness and effectiveness compared to other approaches, future research will concentrate on implementing a distributed algorithm to enhance scalability for larger and more complex systems. ## VI Acknowledgement The information, data, or work presented herein was partly funded by the U.S. Office of Naval Research under the award numbers N000142212239 and N000142112124.
2308.07792
Structural transformations in Cu, Ag, and Au metal nanoclusters
Finite-temperature structures of Cu, Ag, and Au metal nanoclusters are calculated in the entire temperature range from 0 K to melting using a computational methodology that we proposed recently [Settem \emph{et al.}, Nanoscale, 2022, 14, 939]. In this method, Harmonic Superposition Approximation (HSA) and Parallel Tempering Molecular Dynamics (PTMD) are combined in a complementary manner. HSA is accurate at low temperatures and fails at higher temperatures. PTMD, on the other hand, effectively samples the high temperature region and melting. This method is used to study the size- and system-dependent competition between various structural motifs of Cu, Ag, and Au nanoclusters in the size range 1 to 2 nm. Results show that there are mainly three types of structural changes in metal nanoclusters depending on whether a solid-solid transformation occurs. In the first type, global minimum is the dominant motif in the entire temperature range. In contrast, when a solid-solid transformation occurs, the global minimum transforms either completely to a different motif or partially resulting in a co-existence of multiple motifs. Finally, nanocluster structures are analyzed to highlight the system-specific differences across the three metals.
Manoj Settem, Cesare Roncaglia, Riccardo Ferrando, Alberto Giacomello
2023-08-15T14:17:23Z
http://arxiv.org/abs/2308.07792v1
# Structural transformations in Cu, Ag, and Au metal nanoclusters ###### Abstract Finite-temperature structures of Cu, Ag, and Au metal nanoclusters are calculated in the entire temperature range from 0 K to melting using a computational methodology that we proposed recently [Settem _et al._, Nanoscale, 2022, 14, 939]. In this method, Harmonic Superposition Approximation (HSA) and Parallel Tempering Molecular Dynamics (PTMD) are combined in a complementary manner. HSA is accurate at low temperatures and fails at higher temperatures. PTMD, on the other hand, effectively samples the high temperature region and melting. This method is used to study the size- and system-dependent competition between various structural motifs of Cu, Ag, and Au nanoclusters in the size range 1 to 2 nm. Results show that there are mainly three types of structural changes in metal nanoclusters depending on whether a solid-solid transformation occurs. In the first type, global minimum is the dominant motif in the entire temperature range. In contrast, when a solid-solid transformation occurs, the global minimum transforms either completely to a different motif or partially resulting in a co-existence of multiple motifs. Finally, nanocluster structures are analyzed to highlight the system-specific differences across the three metals. ## I Introduction Metal nanoclusters constitute an important branch of nanotechnology which exhibit size- and shape-dependent properties. Typically, metal nanoclusters adopt [1] either the non-crystalline icosahedron (Ih) and decahedron (Dh) motifs or the crystalline octahedron (fcc) motif; with the non-crystalline structures being dominant at smaller sizes, but becoming unfavorable at large sizes due to the stress contribution to the energy that is proportional to the volume. [2; 3; 4] Since properties of technological interest (catalytic, optical, etc.)
depend on the cluster structure, it is crucial to understand the equilibrium structures of metal nanoclusters. For this purpose, computer simulations can be very useful. Most of the studies available in the literature focus on finding the global energy minimum at a given size. [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17] Although this information is important, it is limited in the sense that global minima refers to the structures at 0 K. However, metal nanoclusters are expected to be produced and observed at finite temperatures. In addition, various structural motifs coexist [18; 19] at a specific size and temperature. Hence, a method to reliably calculate the equilibrium distribution of various structural motifs in the entire temperature range is essential. One possible approach is the Harmonic Superposition Approximation (HSA) [20; 21] which has been used to study Lennard Jones, [22; 23; 24; 25] metal, [18; 26] and alloy nanoclusters. [27; 28] Briefly, in this method, a large number (> 10\({}^{3}\)) of low-lying minima are sampled from the potential energy surface (PES) to construct an approximation of the partition function. Subsequently, the temperature-dependent probability of an isomer is calculated based on the partition function. HSA captures the structural distribution at low temperatures fairly accurately. However, at higher temperatures, HSA becomes progressively erroneous. This stems mainly from the failure to accommodate the anharmonic effects which become significant at larger temperatures. Another issue is the difficulty in capturing the melting region. In order to reconstruct the melting region, it is necessary to sample the high energy region of the PES which would require one to collect a prohibitively large number of minima. Due to these constraints melting cannot be reliably captured using HSA. 
Alternatively, to sample the phase space effectively, one can simulate several _replicas_[29] of the system that are at different temperatures and are allowed to exchange configurations at specific intervals according to a Metropolis-like criterion. This method is referred to as _replica exchange_ or _parallel tempering_. At higher temperatures, the barriers between various structures are easily overcome ensuring a good sampling at these temperatures. On the other hand, exchange of configurations allows the high temperature configurations to cascade to lower temperatures and, in the process, to improve the phase space exploration at lower temperatures as well. Both Monte Carlo [30; 31; 32; 33] and molecular dynamics [34; 35] can be carried out in conjunction with parallel tempering. In parallel tempering Monte Carlo (PTMC), random displacement moves are generally employed to sample configurations, which reduces the likelihood of inter-motif transitions with increasing cluster size. [36] Also, inter-motif transitions in metallic clusters involve collective atomic rearrangements, [37] which might not be straightforward to incorporate into Monte Carlo sampling. As a result, in this work, we carry out parallel tempering with molecular dynamics. Recently, we have proposed a method [19] that combines HSA and parallel tempering leveraging the advantages offered by these two methods to capture the structural distribution in the entire temperature range (0 K to melting). First, we carry out parallel tempering molecular dynamics (PTMD) with several replicas at temperatures ranging from room temperature to beyond melting. A large collection of local minima are sampled during the PTMD simulations which are then fed into the HSA calculations. This combined method offers several advantages where HSA and PTMD act in a complementary fashion. The conventional HSA calculations require collection of a large number of local minima which are obtained using structure optimization methods.
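The Metropolis-like exchange criterion used in parallel tempering can be written down compactly: a swap between a colder and a hotter replica is accepted with probability \(\min\{1,\exp[(\beta_{\mathrm{cold}}-\beta_{\mathrm{hot}})(E_{\mathrm{cold}}-E_{\mathrm{hot}})]\}\). A minimal sketch (the standard replica-exchange rule, not code from the cited works):

```python
import math
import random

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def swap_probability(e_cold, t_cold, e_hot, t_hot, k_b=K_B):
    """Metropolis-like acceptance probability for exchanging the
    configurations of a cold and a hot replica (energies in eV, T in K)."""
    delta = (1.0 / (k_b * t_cold) - 1.0 / (k_b * t_hot)) * (e_cold - e_hot)
    return 1.0 if delta >= 0.0 else math.exp(delta)

def attempt_swap(e_cold, t_cold, e_hot, t_hot, rng=random.random):
    """Return True if the exchange is accepted."""
    return rng() < swap_probability(e_cold, t_cold, e_hot, t_hot)

# If the hot replica holds the lower-energy configuration, the swap is
# always accepted, letting good structures cascade to low temperature.
p_accept = swap_probability(-4.00, 300.0, -4.10, 600.0)   # -> 1.0
```

Note that swaps that move a higher-energy configuration to the cold replica are still accepted occasionally, which preserves the canonical distribution at every temperature.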
[18; 27] In our case, the minima are directly obtained from PTMD simulations without the need to explicitly search for them. HSA can capture the low temperature solid-solid transitions which might prove to be elusive for PTMD. On the other hand, PTMD captures the high temperature and the melting regions accurately where HSA calculations fail. As a result, the low temperature and the high temperature regions are accurately captured by HSA and PTMD respectively. In the intermediate temperatures, HSA and PTMD have a good agreement. In this work, we apply this method to study the size- and system-dependent structural changes with temperature in Cu, Ag, and Au metal nanoclusters. This is crucial information given their strong influence on the properties of metal nanoclusters. For example, catalytic activity of metal nanoclusters depends on the structure type and size [38; 39; 40; 41] due to the wide variety of catalytic sites. [42] In addition, the catalytic activity can be enhanced by an ensemble of different geometrical structures in comparison to homogeneously shaped structures. [43] Hence, it is essential to gather knowledge on the equilibrium structural distribution where various geometrical motifs coexist. Several theoretical works have calculated the global minimum structures of Cu, Ag, and Au nanoclusters. Grigoryan _et al._[9] calculated the global minima of Cu clusters up to 150 atoms using the embedded atom method (EAM), [44] and up to 60 atoms using Gupta [45] and Sutton-Chen [46] potentials. Highly stable structures occur at the sizes 13, 19, 55, 92, and 147 with all of them having high symmetry icosahedral structures except 92 which is a chiral structure having \(T\) point group symmetry. Most of the structures are icosahedra with the sizes 4, 17, 26, 28, 29, 91\(-\)95 having tetrahedral geometry and 75, 78, 81, 101\(-\)103 being decahedra. 
In the case of Ag nanoclusters of sizes larger than 60 atoms, decahedron is found to be the dominant motif.[7; 10; 11; 15] There are few exceptions where truncated octahedron (fcc) and icosahedron (Ih) are the global minima. Due to the strong relativistic effects,[47] Au nanoclusters exhibit peculiar structures. At sizes smaller than 40 atoms, Au nanoclusters adopt either planar or hollow cage-like geometries.[48; 49; 50; 51; 52; 53; 54] In comparison to Cu and Ag, Au disfavors icosahedral structures. At the magic sizes of 55, 147, and 309 the icosahedron is not the global minimum.[14; 18; 37] This is also evident over larger size range (up to 1000 atoms).[3] However, when the icosahedral structures are observed in Au nanoclusters, for example, at higher temperatures,[55; 56] they typically have "rosette"[57; 58] defects on the surface. A "rosette" defect appears when a vertex atom is pushed out to form a six-atom ring with the five neighboring surface atoms leaving behind a vacancy at the vertex position. Cu, Ag, and Au clusters have also been studied using density functional theory (DFT) calculations. Generally, ideal structures are considered since global minimum search becomes prohibitive at the DFT level for clusters larger than \(\sim\) 50 atoms.[59] Roldan _et al.[60]_ carried out structural analysis of several "magic" sized octahedral Cu, Ag, and Au clusters in the range 38 \(-\) 225 atoms and identified a correlation to estimate cohesive energies in a large size range. Similarly, Kiss _et al.[61]_ studied octahedral and icosahedral Ag clusters (consisting of 6 \(-\) 600 atoms) and observed that the cohesive energy is linear with inverse of cluster size. Oliveira _et al.[62]_ showed that Ag icosahedra are energetically stable compared to cuboctahedra through density functional tight binding (DFTB) calculations of "magic" clusters in the range 55 \(-\) 561 atoms. 
The picture arising from experiments is more complex, since in experiments it is often difficult to disentangle kinetic effects from equilibrium ones.[1] Electron microscopy has been used to study the structure of metal nanoclusters with varying size and temperature. Langlois _et al.[63]_ prepared Cu nanoparticles in a broad size range of 1 nm to 12 nm using thermal evaporation. They observed a significant overlap between icosahedra and decahedra at sizes less than 8 nm beyond which _fcc_ structures were observed. Volk _et al.[64]_ analyzed Ag clusters with size < 7 nm grown in superfluid He droplets. The smallest particles were fcc, with decahedra at intermediate sizes and icosahedra at large sizes. However, theoretical predictions[2; 3] show that icosahedra are energetically favored at smaller sizes while fcc are favored at larger sizes, while large icosahedra are likely to be due to kinetically trapped growth on top of smaller decahedra.[65; 66] Recently, the structural distribution of size-selected Ag clusters centered around 309 atoms was measured,[67] finding an abundance of fcc structures with very little icosahedra (2%). This is in contrast to the prediction that icosahedra is the dominant motif around the size 309.[3] Wells _et al._[68] calculated the proportion of various motifs of Au\({}_{561}\), Au\({}_{742}\), and Au\({}_{923}\). At these sizes, fcc and decahedra making up 70% of the structures while icosahedra contribute less than 5%. Finite-temperature distribution of Au\({}_{561}\) was calculated by Foster _et al._[69] in the temperature range 20 \({}^{\circ}\)C to 500 \({}^{\circ}\)C. Again, icosahedra were almost non-existent beyond 100 \({}^{\circ}\)C with less than 3%. At temperatures greater than 125 \({}^{\circ}\)C, there is an increase in the proportion of decahedra at the expense of fcc structures. 
The experiments establish a lack of preference for the icosahedral motif in Au nanoclusters, in agreement with the findings of Gupta potential and DFT calculations.[70] From a theoretical and experimental viewpoint, it is essential to have a knowledge of the equilibrium proportion of various structural motifs as a function of temperature. In this work we calculate the structural distribution of Cu, Ag, and Au metal nanoclusters at the sizes 90, 147, and 201 which fall in the size range of 1 nm to 2 nm. These were chosen to highlight the size- and system-dependent structural changes. 147 and 201 are "magic" sizes corresponding to perfect icosahedron (147) and regular truncated octahedron (201). It is generally assumed that "magic" sized structures have energetic stability. Our results show that this assumption is not always true. Finally, we chose 90 to look at non-magic sized structures. ## II Methods We use the tight binding model within the second moment approximation (TBSMA)[71] which is also referred to as Gupta[45] potential or Rosato-Guillope-Legrand (RGL)[72] potential to model the atom-atom interactions in Cu, Ag, and Au nanoclusters. The parameters of the Gupta potential have been taken from Ref.[2]. The interaction potential of Au gives an accurate description of the experimental cluster structures in gas phase[68] and on MgO substrates.[73] In addition, this potential agrees well with DFT calculations in the prediction of surface "rosette" defects in icosahedra[57] and the tendency to disfavor icosahedra.[70] Coming to Ag and Cu, the Gupta potentials correctly predict the stability of Mackay stacking over anti-Mackay stacking in icosahedral clusters in line with the DFT calculations (see Supporting Information in ref.[74]). 
In Ag\({}_{586}\), fcc structure is energetically preferred in comparison to icosahedron which is also the case according to DFT.[75] Gupta potential predicts correctly that Ag icosahedra are energetically stable compared to cuboctahedra which agrees with the DFTB calculations[62] (see the plot of energy difference between cuboctahedron and icosahedron in supplementary figure S1). At the size 147, icosahedra and decahedra are the prominent motifs. In order to assess the competition between these motifs, we have carried out DFT calculations for Cu\({}_{147}\) and Ag\({}_{147}\). For Au\({}_{147}\) clusters, we refer to the calculations done previously.[70] DFT calculations were carried out using Quantum ESPRESSO[76] code. Projected augmented wave (PAW)[77] pseudopotentials were used with Perdew-Burke-Ernzerhof (PBE)[78] exchange-correlation functional. An energy cutoff of 45 Ry was used for both Ag, Cu; while the charge density cutoff of 181 Ry, 236 Ry were used for Ag, Cu respectively. The calculations were considered to be converged with energy and force tolerance of \(1\times 10^{-4}\) Ry and \(1\times 10^{-3}\) Ry/a.u. respectively. The energy difference between decahedron (Dh) and icosahedron (Ih) defined as, \(E_{Dh}-E_{Ih}\), at the DFT/PBE level are +3.87 eV, +2.55 eV, and -2.56 eV for Cu, Ag, and Au respectively. The corresponding values according to Gupta potential are +1.57 eV, +0.46 eV, and -1.86 eV. Both DFT/PBE and Gupta show therefore the same trend: Ih is energetically preferred in Cu and Ag while Dh is favored in Au. Based on these results, we believe that Gupta potentials are reliable for analyzing structural trends between Cu, Ag, and Au metal nanoclusters. The use of this model will allow a thorough sampling of the energy landscape which would be hardly feasible by DFT. A detailed comparison of Gupta potential with DFT calculations is provided in the _Results and Discussion_ section which allows us to assess its performance and limitations. 
Before the PTMD simulations, we calculate the global minimum at each size using basin hopping Monte Carlo (BHMC)[19; 37; 79] optimization search. For each size, we run five independent search simulations with at least 2.5\(\times 10^{5}\) basin hopping steps. The detailed procedure of the combined method of PTMD+HSA is described in a previous work[19]. Here we only recapitulate it briefly. In the PTMD simulations, there are two fundamental parameters: the number of replicas (\(M\)) and the temperature, T\({}_{\rm m}\) (\(m=1,2,3,...,M\)) of each replica. All the replicas are in a canonical ensemble (\(NVT\)) and exchange of configurations between a pair of replicas is attempted at specific intervals. The number of replicas is chosen such that we have at least 20\(-\)30% acceptance of the replica swaps. This is achieved by calculating an approximate caloric curve to identify the melting range and then adjusting the number of replicas and their temperatures to achieve the desired swap acceptance rate. All the PTMD simulations have been carried out in LAMMPS.[80] We use a time step of 5 fs for the molecular dynamics evolution and replica swaps are attempted every 250 ps. They are either accepted or rejected according to a Metropolis-like criterion. We begin the PTMD simulations with all the replicas having the same structure, either global minimum or a low energy structure. After discarding the initial phase of PTMD (\(\sim\) 0.5 \(\mu\)s), we sample configuration at 125 ps after a swap attempt for a total time of about 1 \(\mu\)s to 2 \(\mu\)s. The configurations sampled from PTMD simulations are also fed into the HSA analysis. In the HSA[27; 18; 28] method, the partition function is given by, \[Z=\sum_{i}\frac{e^{-\beta E_{i}^{0}}Z_{i}^{tr}Z_{i}^{rot}Z_{i}^{ vib}}{g_{i}} \tag{1}\] where \(\beta=1/(k_{B}T)\). The summation is over all the local minima, \(i\), considered for the HSA. \(E_{i}^{0}\) is the energy of the local minimum, \(i\). 
\(Z^{tr}\), \(Z^{rot}\), and \(Z^{vib}\) are the translational, rotational, and vibrational contributions to the partition function, respectively. It has been shown that only the vibrational contribution is sufficient to calculate the probability of the local minima.[27] The denominator, \(g_{i}\), is the order of the symmetry group of the local minimum \(i\). The vibrational contribution due to a single minimum is given by \[Z^{vib}=\prod_{n=1}^{3N-6}\frac{e^{-\beta\hbar\omega_{n}/2}}{1-e^{-\beta\hbar\omega_{n}}} \tag{2}\] where \(\omega_{n}\) are the \(3N-6\) (\(N\) is the number of atoms in the cluster) frequencies of the normal modes. The probability of a local minimum as a function of temperature is now given by \[p_{i}=\frac{e^{-\beta E_{i}^{0}}Z_{i}^{vib}/g_{i}}{\sum_{j}e^{-\beta E_{j}^{0}}Z_{j}^{vib}/g_{j}} \tag{3}\] We define the probability of a specific structure type (\(p^{struct}\)) by summing up the probabilities of all the minima belonging to that structure type. \[p^{struct}=\sum_{k}p_{k} \tag{4}\] where \(k\) represents all the minima having the same structure. Local minima for the HSA analysis were collected from PTMD simulations up to an energy cutoff of 1 eV to 1.5 eV with the exception of Cu\({}_{147}\) and Ag\({}_{147}\) where 2.5 eV was used. Two minima were considered to be different if they belonged to different structure types and were separated by at least 0.05 meV in energy. For identifying the geometrical motif of a given configuration, we use common neighbor analysis (CNA)[81] signatures. The structures are classified using the same scheme that we employed for Au nanoclusters previously[19; 82] and categorize them into decahedron (Dh), icosahedron (Ih), twin, fcc, and amorphous structure classes. A structure that does not fall into any of these categories is classified as a _mix_ structure. Typically, these structures are not well defined or contain structural features of more than one geometrical motif.
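Equations (1)–(4) translate directly into a short numerical routine; the sketch below evaluates the harmonic weights in log-space to avoid under/overflow. The energies, mode frequencies (given as vibrational temperatures \(\hbar\omega_{n}/k_{B}\)), and symmetry orders in the example are invented illustrative values, not data from this work.

```python
import math

K_B = 8.617333262e-5  # eV/K

def log_weight(e0, thetas, g, T):
    """log of exp(-E0/kT) * Z_vib / g, with Z_vib as in Eq. (2);
    thetas are vibrational temperatures hbar*omega_n/k_B in K."""
    lw = -e0 / (K_B * T) - math.log(g)
    for th in thetas:
        lw += -th / (2.0 * T) - math.log(1.0 - math.exp(-th / T))
    return lw

def motif_probabilities(minima, T):
    """minima: list of (motif, E0_eV, thetas_K, g). Returns Eq. (3)/(4)
    occupation probabilities aggregated per structural motif."""
    logws = [log_weight(e0, th, g, T) for _m, e0, th, g in minima]
    shift = max(logws)
    ws = [math.exp(lw - shift) for lw in logws]   # shift to avoid overflow
    z = sum(ws)
    probs = {}
    for (motif, *_), w in zip(minima, ws):
        probs[motif] = probs.get(motif, 0.0) + w / z
    return probs

# Toy example: a stiff global-minimum icosahedron vs a softer decahedron
minima = [("Ih", 0.00, [200.0] * 6, 1),
          ("Dh", 0.05, [150.0] * 6, 2)]
p_low = motif_probabilities(minima, 100.0)    # Ih dominates at low T
p_high = motif_probabilities(minima, 800.0)   # the softer Dh gains weight
```

The example illustrates the entropic competition discussed in the text: a higher-energy minimum with softer vibrational modes can overtake the global minimum as temperature grows.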
These structures will be described in more detail while presenting the results. Further details about the parameters used for HSA and PTMD are provided in the Supplementary Information. ## III Results and Discussion We will present the results of Cu and Ag nanoclusters. We note that structural distribution of Au nanoclusters has been previously reported by us[19] and we use it here to make a comparison with Cu and Ag. Also, we compare in detail the structures of Au, Cu, and Ag which was not reported previously. To begin with, we discuss the finite-temperature structural distributions and then make a comparison to highlight the differences and similarities between Cu, Ag, and Au clusters. The melting point of all the metal nanoclusters in the current work are reported in Table 1. We identify the melting point by first constructing the heat capacity (C\({}_{V}\)) curve from PTMD simulations. Melting point is then calculated as the peak of C\({}_{V}\) curve. ### Cu Cu has a strong preference for icosahedral motif as compared to Ag and Au.[2; 3] The global minimum of Cu\({}_{90}\), Cu\({}_{147}\), and Cu\({}_{201}\) are shown in Fig. 1. The global minimum of Cu\({}_{90}\) and Cu\({}_{147}\) are both icosahedra with Cu\({}_{90}\) having \(C_{2v}\) point group symmetry. However, with the EAM potential, the global minimum of Cu\({}_{90}\) was predicted to be an icosahedron with \(C_{s}\) symmetry.[9] The best structure of Cu\({}_{201}\) is a decahedron with \(C_{s}\) point group symmetry. In the case of Cu\({}_{90}\), icosahedron (Ih) is the dominant motif at room temperature with very small amount of twins, decahedra (Dh) and _mix_ structures (Fig. 1a). The _mix_ structures comprise several different geometric types. 
Predominantly, the _mix_ structures consist of icosahedral-based geometries that either have amorphous regions or the entire structure adopts a configuration similar to the 92-atom chiral structure [9; 83] with two missing atoms. The remaining _mix_ structures consist of polydecahedra (p-Dh) which have more than one local fivefold axis. [19; 37; 84] With increasing temperature, the proportion of _mix_ structures increases at the expense of Ih and peaks before melting at \(\sim 600\) K. Qualitatively, HSA predicts similar structural changes in Cu\({}_{90}\). The agreement between HSA and PTMD is good at room temperature and thereafter there are quantitative discrepancies. The increase in _mix_ structures is rather slow according to HSA. For example, at 600 K, PTMD predicts 71.2% _mix_ structures, while HSA predicts only 20.6%. At size 147 (Fig. 1b), the icosahedron, which is the global minimum, dominates in the entire temperature range according to both PTMD and HSA. This indicates a high thermal stability of the icosahedral motif at this size. Moving on to Cu\({}_{201}\) (Fig. 1c), again, the global minimum structure, a decahedron, dominates at room temperature and its proportion decreases steadily with temperature. Icosahedra compete with decahedra at higher temperatures with maximum proportion of Ih observed at 700 K just before melting. HSA, on the other hand, predicts a significantly higher amount of Ih at this temperature (77.2% vs 33.7%). Interestingly, fcc and twin structures are almost absent in Cu\({}_{90}\), Cu\({}_{147}\), and Cu\({}_{201}\) clusters in the entire temperature range. \begin{table} \begin{tabular}{c c c|c c c|c c c} \hline Cu\({}_{90}\) & Cu\({}_{147}\) & Cu\({}_{201}\) & Ag\({}_{90}\) & Ag\({}_{147}\) & Ag\({}_{201}\) & Au\({}_{90}\) & Au\({}_{147}\) & Au\({}_{201}\) \\ \hline 609 & 779 & 745 & 510 & 651 & 654 & 420 & 505 & 550 \\ \hline \end{tabular} \end{table} Table 1: Melting point (in K) of Cu, Ag, and Au nanoclusters.
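The melting points in Table 1 come from locating the peak of the heat-capacity curve. With canonical sampling, \(C_{V}\) at each replica temperature can be estimated from energy fluctuations, \(C_{V}=(\langle E^{2}\rangle-\langle E\rangle^{2})/(k_{B}T^{2})\), and the melting point read off as the peak position. A minimal sketch with synthetic data (the Gaussian-shaped curve below is invented, not actual PTMD output):

```python
import math

K_B = 8.617333262e-5  # eV/K

def heat_capacity(energies, T, k_b=K_B):
    """Canonical heat capacity from energy fluctuations at temperature T."""
    n = len(energies)
    mean = sum(energies) / n
    var = sum((e - mean) ** 2 for e in energies) / n
    return var / (k_b * T * T)

def melting_point(temps, cvs):
    """Melting point estimated as the temperature of the C_V peak."""
    return max(zip(cvs, temps))[1]

# Synthetic example: a C_V curve peaked near 779 K (Cu147 in Table 1)
temps = list(range(300, 1001))
cvs = [math.exp(-((t - 779) / 60.0) ** 2) for t in temps]
tm = melting_point(temps, cvs)   # -> 779
```

In practice the peak is broadened by finite-size effects, so the curve is evaluated on the replica temperature grid and the maximum taken over that grid.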
Figure 1: Structural distribution of (a) Cu\({}_{90}\), (b) Cu\({}_{147}\), and (c) Cu\({}_{201}\). PTMD, HSA results are shown in the top and middle rows. Global minimum structures are shown in the bottom row. In the HSA results, for comparison, we report with vertical lines the range of PTMD temperatures and with a dashed line the fraction of amorphous structures calculated from PTMD simulations. ### Ag The global minimum structures of Ag\({}_{90}\) and Ag\({}_{201}\) (Fig. 2) are decahedra with both structures having \(C_{s}\) point group symmetry. The ideal icosahedron is the global minimum of Ag\({}_{147}\). These results are consistent with the previously reported global minima at these sizes for Ag clusters.[10; 11; 15] Ag\({}_{90}\) exhibits interesting structural changes (Fig. 2a). From the HSA results, it is evident that the global minimum decahedron undergoes a partial transition to twin and _mix_ structures with increasing temperature, leading to a combination of Dh \(+\) twin \(+\)_mix_ structures at 250 K. Considering the PTMD results, the proportion of Dh, twins, and _mix_ structures remains constant up to \(\sim\) 450 K. This is a case of one-to-many solid-solid transition[27] where one geometrical motif, the Dh, is replaced by a coexistence of Dh, twins, and _mix_ structures. Above 450 K, the proportion of _mix_ structures increases at the expense of Dh and twins. The _mix_ structures are a combination of polydecahedra[37] and distorted icosahedra having amorphous regions. The structural changes in Ag\({}_{147}\) and Ag\({}_{201}\) (Figs. 2b, c) are fairly straightforward. In both cases, the global minimum motif (Ih for 147 and Dh for 201) dominates in the entire temperature range, with other motifs nonexistent or in extremely small proportions. Figure 2: Structural distribution of (a) Ag\({}_{90}\), (b) Ag\({}_{147}\), and (c) Ag\({}_{201}\). PTMD, HSA results are shown in the top and middle rows. Global minimum structures are shown in the bottom row. In the HSA results, for comparison, we report with vertical lines the range of PTMD temperatures and with a dashed line the fraction of amorphous structures calculated from PTMD simulations.
### Au We have recently[19] reported the structural changes in Au nanoclusters and hence, we will only summarize them briefly here (see supplementary figure S2). The global minimum structures of Au\({}_{90}\), Au\({}_{147}\), and Au\({}_{201}\) are fcc, decahedron, and fcc (ideal truncated octahedron), respectively. At size 90, the global minimum motif, fcc, is dominant at lower temperatures and competes with twin and _mix_ structures. With increasing temperature, fcc structures decrease along with an increase in _mix_ structures. In the case of Au\({}_{147}\), the decahedron (global minimum) remains dominant up to melting along with small amounts of twin and fcc structures. Above 400 K, Ih and _mix_ structures begin to appear with _mix_ structures dominating close to melting. In Au\({}_{201}\), there is a solid-solid transition from the fcc global minimum (a truncated octahedron) to a Dh at low temperature around 200 K. Thereafter, the Dh dominates up to melting along with a small amount of twins (\(\sim 10\%\)). ### Cu, Ag, and Au all together: combined HSA+PTMD We stitch together HSA and PTMD results in order to get the structural changes in the entire temperature range in a single plot. Data for Au are taken from ref.[19]. Figure 3 compares the available results for Cu, Ag, and Au at all temperatures and sizes. HSA and PTMD are stitched together at 300 K. At the temperature where HSA and PTMD are joined, their structural distributions have an excellent agreement except for small jumps in the case of Ag\({}_{90}\) and Au\({}_{90}\), where the trends are anyway consistent.
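The stitching of HSA and PTMD at 300 K amounts to keeping the HSA curve below the junction temperature and the PTMD curve from the junction upward. A minimal sketch (the probability values below are invented for illustration):

```python
def stitch(hsa, ptmd, t_join=300.0):
    """Combine HSA (low-T) and PTMD (high-T) probability curves:
    keep HSA points below t_join and PTMD points from t_join upward.
    hsa, ptmd: lists of (T, probability) pairs sorted by T."""
    return ([(t, p) for t, p in hsa if t < t_join]
            + [(t, p) for t, p in ptmd if t >= t_join])

# Invented example values for one structural motif
hsa = [(100, 0.95), (200, 0.90), (300, 0.85)]
ptmd = [(300, 0.86), (450, 0.60), (600, 0.30)]
curve = stitch(hsa, ptmd)
```

Any mismatch between the two estimates at the junction shows up as a small jump in the combined curve, which is exactly the behavior noted above for Ag\({}_{90}\) and Au\({}_{90}\).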
This shows that our approach of combining HSA and PTMD is fairly robust and validated across various metal systems. There are broadly three categories of structural changes that can be observed: type-(i) the global minimum remains the dominant motif up to melting, where amorphous takes over; type-(ii) solid-solid transitions occur, either completely or partially, well below melting temperature, resulting in an entirely different dominant motif; type-(iii) solid-solid transitions gradually occur leading to a co-existence of multiple motifs. The cases Cu\({}_{147}\), Ag\({}_{147}\), and Ag\({}_{201}\) fall into the first category, while Au\({}_{201}\) falls into the second category. All other cases fall into the third category, but with some differences. In Au\({}_{147}\) and Cu\({}_{201}\), the coexistence between motifs is present in a relatively narrow temperature range close to melting, whereas in all clusters of size 90 coexistence is already found at low temperatures. The results show that ideal geometries corresponding to the "magic" sizes are not necessarily energetically preferred. Here we considered two "magic" sizes, 147 and 201. At size 201, the truncated octahedron has the perfect geometry. However, only Au has the truncated octahedron as the global minimum, while the decahedron prevails for Cu and Ag. Even in Au, the global minimum transforms to Dh, which remains the dominant structure at finite temperatures. On the other hand, at size 147, which corresponds to a perfect icosahedron, both Cu and Ag have this structure as the global minimum. However, the decahedron is the global minimum of Au\({}_{147}\), with some icosahedra appearing only above 400 K. At size 90, all three systems have a different geometrical motif as the global minimum - Ih for Cu\({}_{90}\), Dh for Ag\({}_{90}\), and fcc for Au\({}_{90}\) - which remains dominant (Cu, Au) or competes with other motifs (Ag).
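In the HSA step, the equilibrium fraction of each motif follows from Boltzmann-weighting the sampled minima, with an entropic prefactor per basin. A minimal sketch in which the energies and prefactors are invented (the actual HSA uses the full vibrational spectrum of every minimum):

```python
import math

KB = 8.617333e-5  # Boltzmann constant, eV/K

def hsa_fractions(minima, T):
    """Equilibrium motif fractions in a simplified harmonic superposition:
    each minimum contributes weight prefactor * exp(-E / (KB*T)), where the
    prefactor stands in for the vibrational/entropic factor of its basin."""
    weights = {}
    for motif, energy, prefactor in minima:
        w = prefactor * math.exp(-energy / (KB * T))
        weights[motif] = weights.get(motif, 0.0) + w
    total = sum(weights.values())
    return {motif: w / total for motif, w in weights.items()}

# A stiff low-energy Ih competing with a softer (higher-entropy) Dh 0.1 eV above.
minima = [("Ih", 0.00, 1.0), ("Dh", 0.10, 5.0)]
cold = hsa_fractions(minima, T=100.0)  # energy dominates
hot = hsa_fractions(minima, T=600.0)   # entropy catches up
```

This reproduces the generic pattern discussed above: the global-minimum motif dominates at low temperature, while higher-energy, higher-entropy motifs gain weight on heating.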
The structural distribution of Cu reinforces the strong preference for the icosahedral motif in Cu clusters. Figure 3: Structural changes in the entire temperature range by combining HSA and PTMD in Cu, Ag, and Au nanoclusters. The vertical line in each plot indicates the temperature at which HSA and PTMD are stitched together. The type of structural transformation is also indicated. A description of the various types of structural transformations is provided in the text. ### Structural characterization We have, so far, discussed how the various geometrical motifs compete as the temperature varies. In this section, we characterize the structural features of the various motifs. Typical structures of Cu\({}_{90}\) are shown in Fig. 4 along with their energies relative to the global minimum. The icosahedron is the dominant motif of Cu\({}_{90}\), along with minor amounts of twin and Dh. The twin structures of Cu\({}_{90}\) typically have stacking faults (second structure in Fig. 4a). At higher temperatures, icosahedra resembling the 92-atom incomplete Mackay icosahedron having \(C_{3v}\) point group symmetry are observed. These structures have two surface vacancies at various positions on the 92-atom cluster, resulting in Cu\({}_{90}\) icosahedra. An example is shown in the third structure in Fig. 4a. As the temperature increases further, some of these icosahedra undergo a twist and transform to _mix_ structures resembling the 92-atom chiral geometry with tetrahedral \(T\) symmetry (fourth structure in Fig. 4a). The 92-atom chiral structure is the global minimum [9; 83] of Cu\({}_{92}\) and has also been experimentally confirmed to have \(T\) symmetry from a comparison of the photoelectron spectra of Na and Cu clusters [85; 86; 87]. Again, the chiral-like Cu\({}_{90}\) clusters have two surface vacancies. In the case of Ag\({}_{90}\) (Fig.
4b), along with the conventional decahedron (first structure), we find decahedra with either one (third structure) or two (second structure) hcp islands. When the two hcp islands are adjacent to each other, a local decahedral axis is formed at the intersection, which can be considered as a polydecahedron (p-Dh) [84] having more than one decahedral axis. The twin motif, which competes with Dh, consists of either a single hcp plane (fourth structure) or stacking faults (fifth structure). Moving on to Au\({}_{90}\) (Fig. 4c), the twins predominantly have a single hcp plane, unlike Cu\({}_{90}\) and Ag\({}_{90}\). Also, Au\({}_{90}\) decahedra have deeper reentrant grooves (see the arrow in Fig. 4c) compared to decahedra of Cu and Ag. This is consistent with the general trend found in Ref. [2]. The decahedron can undergo surface restructuring resulting in a _mix_ structure (see fourth structure in Fig. 4c). Consider the four atoms (1, 2, 3, and 4) shown before (top) and after (bottom) restructuring. The atoms 2, 4 are pushed apart and the atoms 1, 3 come closer, leading to a {100}-like arrangement. At size 147, we observe a gradual change in the nature of icosahedra from Cu to Ag to Au. With increasing temperature, the perfect icosahedron becomes defective with, initially, a single vertex vacancy (second structure in Fig. 5a) and, at still higher temperatures, multiple vacancies (third structure). These same vertex vacancies are also observed in Ag\({}_{147}\) icosahedra (second and fifth structures in Fig. 5b). However, along with the vertex vacancies, we also observe "rosette"[57; 58] defects where the vertex atom protrudes to join the five nearest neighbors on the surface to form a six-atom ring. These are highlighted in blue for Ag\({}_{147}\) in Fig. 5b, where either two or three "rosette" defects occur together. Icosahedra in Au\({}_{147}\), which appear mainly above 400 K, almost always have "rosette" defects as shown in Fig. 5c.
Au\({}_{147}\) decahedra at higher temperatures exhibit deep reentrant grooves compared to the global minimum (second and third structures in Fig. 5c). The twins in Au\({}_{147}\) predominantly have single hcp planes, as shown in Fig. 5c. Finally, at size 201, all three systems have Dh as the dominant motif at room temperature. In Cu\({}_{201}\) and Ag\({}_{201}\), the various decahedra that are observed are all obtained by differing arrangements of nine additional atoms on the magic-size 192-atom Marks decahedron. The nine additional atoms are indicated in red (see Figs. 6a, b). The twins in Cu\({}_{201}\) have a significant amount of hcp regions and are either completely hcp or consist of stacking faults. At higher temperatures, we observe icosahedra which are incomplete 309-atom icosahedra. In Au\({}_{201}\), Dh is the dominant motif. In this case, the best Dh (second structure in Fig. 6c) is different from the typical decahedra observed in Cu and Ag, which are formed by adding nine atoms to the 192-atom decahedron. Instead, the best Dh of Au\({}_{201}\) is highly asymmetrical with deep reentrant grooves. However, at higher temperatures, we do observe Dh structures similar to those of Cu and Ag (third structure in Fig. 6c). In addition to the structures discussed above, we also observe structures that are not straightforward to categorize. We refer to these as _mix_ structures, which occur in greater proportions at the smallest size of 90. The typical _mix_ structures at the size 90 are shown in Fig. 7. Figure 4: Structures of (a) Cu\({}_{90}\), (b) Ag\({}_{90}\), and (c) Au\({}_{90}\). The energy of each structure is relative to the global minimum (0 eV). The arrow in (c) shows the relatively deeper reentrant groove in Au compared to Cu and Ag. Atoms marked 1, 2, 3, and 4 show the surface restructuring in the Au\({}_{90}\) decahedron. Figure 5: Structures of (a) Cu\({}_{147}\), (b) Ag\({}_{147}\), and (c) Au\({}_{147}\). The energy of each structure is relative to the global minimum (0 eV). "Rosette" defects in (b) and (c) are highlighted in blue. In a polydecahedron (p-Dh),[84] more than one decahedral axis is present within the same nanocluster. Examples of Cu\({}_{90}\) and Ag\({}_{90}\) p-Dh consisting of three decahedral axes are shown in the first image of Figs. 7a, b. On the other hand, p-Dh are highly uncommon in Au\({}_{90}\). Another type of _mix_ structure has an icosahedral region along with a disordered region. All three systems exhibit these structures (second image in Figs. 7a, b and first image in Fig. 7c). A third type of _mix_ structure occurs when local icosahedral features are observed within fcc/twin (final image in Fig. 7a) or decahedron (final image in Figs. 7b, c). This type of structure is mainly observed in Au and is less common in Cu and Ag clusters. The proportion of _mix_ structures is significantly lower at the larger sizes of 147 and 201. We observe structures similar to those at the size 90, with icosahedra mixed with a disordered region being more dominant. A detailed analysis of _mix_ structures in Au clusters has been discussed previously.[19] Figure 6: Structures of (a) Cu\({}_{201}\), (b) Ag\({}_{201}\), and (c) Au\({}_{201}\). The energy of each structure is relative to the global minimum (0 eV). The atoms in red indicate the additional nine atoms that are arranged on the 192-atom Marks' decahedron to form various 201-atom decahedra. ### Comparison with DFT The structural distributions of Cu, Ag, and Au presented so far correspond to the Gupta potential, which does not account for the electronic interaction between atoms. In order to assess the performance of the Gupta potential, we make a comparison with DFT calculations.
We used PAW pseudopotentials with three types of exchange-correlation functionals \(-\) Perdew-Burke-Ernzerhof (PBE),[78] local-density approximation (LDA),[88] and PBE revised for solids (PBEsol).[89] We chose highly probable motifs (two or more structures per metal per size) depending on the structural distribution. For instance, Ih and _mix_ are the most dominant motifs of Cu\({}_{90}\). In the case of Ag\({}_{90}\), three motifs coexist \(-\) Dh, twin, and _mix_. Hence, we chose the lowest energy Ih, _mix_ for Cu\({}_{90}\) and Dh, twin, _mix_ for Ag\({}_{90}\). All the Cu and Ag structures used for DFT calculations are shown in Fig. 8. For a given combination of metal and size, we measure the energy difference of each structure with respect to the global minimum predicted by the Gupta potential. These values are reported in Table 2 for the Gupta potential, DFT/PBE, DFT/LDA, and DFT/PBEsol. In the case of Cu\({}_{90}\) and Cu\({}_{147}\), Ih has lower energy according to both Gupta and DFT. However, for Cu\({}_{90}\), Ih wins by only \(\sim 0.08\) eV in comparison to \(>1\) eV for all three DFT calculations. On the other hand, Cu\({}_{147}\) has a very good quantitative agreement with DFT. For Cu\({}_{201}\), the Gupta potential predicts Dh to have lower energy than Ih, in contrast to DFT. In the case of Ag\({}_{90}\), DFT favours twin in comparison to _mix_ and Dh. According to DFT, the energetic ordering is E\({}_{\text{twin}}<\) E\({}_{\text{Dh}}<\) E\({}_{\text{mix}}\). Figure 7: Mixed structures of (a) Cu\({}_{90}\), (b) Ag\({}_{90}\), and (c) Au\({}_{90}\). The large red atoms in (a) and (b) indicate the various decahedral axes. In (a) and (b), the first image is a polydecahedron (p-Dh), and the second image is an icosahedron with a disordered region. The third image in (a) consists of a twin region and an icosahedral region. The final images in (b) and (c) are mixed structures with decahedral and icosahedral regions coexisting.
The Gupta potential, on the other hand, predicts Dh to have the lowest energy among the three. There is a good agreement between the Gupta potential and DFT for Ag\({}_{147}\) and Ag\({}_{201}\). Moving on to Au, the various structures used for DFT calculations are shown in Fig. 9. In the case of Au\({}_{90}\), we observe a lack of consistency among the various exchange-correlation functionals. There is a good agreement between the Gupta potential, DFT/LDA, and DFT/PBEsol, with all three predicting a lower energy for fcc vs. twin. However, DFT/PBE predicts twin to be the lowest energy structure. For Au\({}_{147}\), we considered all the motifs (other than amorphous) given their co-existence before the melting region. Au icosahedra typically have "Rosette" defects. Hence, we also considered the regular closed-shell 147-atom icosahedron and refer to it as _Ih-reg_ in order to assess the competition between them. The energetic ordering according to the Gupta potential is E\({}_{\text{Ih-reg}}>\) E\({}_{\text{Ih}}>\) E\({}_{\text{mix}}>\) E\({}_{\text{fcc}}>\) E\({}_{\text{twin}}>\) E\({}_{\text{Dh}}\). Firstly, Ih-reg has higher energy than Ih according to both the Gupta potential and DFT calculations, confirming that Au favors defective icosahedra consisting of "Rosette" defects. Figure 8: Cu and Ag structures used for DFT calculations. DFT/PBE predicts Ih to have lower energy than Dh, while the Gupta potential, DFT/LDA, and DFT/PBEsol predict the opposite. When it comes to _mix_ vs. Dh, the Gupta potential disagrees with DFT calculations, which predict _mix_ to have lower energy than Dh. However, it is interesting to note that the _mix_ structure is indeed a Dh with local rearrangement of a few atoms near one of the reentrant grooves (see bottom of Fig. 9). Hence, we believe that the Dh motif will dominate also at the DFT level, in agreement with Gupta results.
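The energetic orderings quoted in this comparison reduce to simple bookkeeping over the sampled structures: keep the lowest-energy representative of each motif and measure energies from the global minimum. A minimal sketch with invented total energies (not actual Gupta or DFT values):

```python
# Hypothetical (motif, total energy in eV) pairs from a structure database.
structures = [("Ih", -310.20), ("Ih", -310.05), ("mix", -310.12),
              ("Dh", -309.90), ("twin", -310.00)]

# Lowest-energy representative per motif.
best = {}
for motif, energy in structures:
    if motif not in best or energy < best[motif]:
        best[motif] = energy

# Energy differences with respect to the global minimum, and the ordering.
gm = min(best.values())
dE = {motif: energy - gm for motif, energy in best.items()}
ordering = sorted(dE, key=dE.get)  # lowest-energy motif first
```

Running the same bookkeeping with energies from two different models (e.g. Gupta vs. DFT) and comparing the resulting orderings is exactly the comparison performed in this section.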
Finally, for Au\({}_{201}\), both the Gupta potential and DFT predict the same energetic ordering: E\({}_{\text{twin}}>\) E\({}_{\text{Dh}}>\) E\({}_{\text{fcc}}\). However, the Gupta potential gives lower energy differences compared to DFT. As a result, we anticipate that the solid-solid transformation from fcc \(\rightarrow\) Dh will be delayed to occur at a higher temperature than predicted by the Gupta potential. \begin{table} \begin{tabular}{l l l l l l l} \hline \hline **System** & \(\Delta\)**E** & **Gupta** & **DFT/PBE** & **DFT/LDA** & **DFT/PBEsol** & **EAM** \\ \hline Cu\({}_{90}\) & E\({}_{\text{mix}}\)-E\({}_{\text{Ih}}\) & 0.0828 & 1.09 & 1.10 & 1.19 & 0.2472 \\ Cu\({}_{147}\) & E\({}_{\text{mix}}\)-E\({}_{\text{Ih}}\) & 1.5815 & 2.14 & \(-\) & \(-\) & 1.8238 \\ Cu\({}_{201}\) & E\({}_{\text{Ih}}\)-E\({}_{\text{Dh}}\) & 0.9286 & -0.507 & -0.252 & \(-\) & 0.9003 \\ \hline Ag\({}_{90}\) & E\({}_{\text{mix}}\)-E\({}_{\text{Dh}}\) & 0.0252 & 0.159 & 0.139 & 0.149 & -0.0043 \\ Ag\({}_{90}\) & E\({}_{\text{twin}}\)-E\({}_{\text{Dh}}\) & 0.0319 & -0.231 & -0.422 & -0.325 & -0.1193 \\ Ag\({}_{147}\) & E\({}_{\text{mix}}\)-E\({}_{\text{Ih}}\) & 1.0019 & 1.51 & \(-\) & \(-\) & 1.3051 \\ Ag\({}_{201}\) & E\({}_{\text{twin}}\)-E\({}_{\text{Dh}}\) & 0.1193 & 0.609 & \(-\) & \(-\) & 0.0659 \\ \hline Au\({}_{90}\) & E\({}_{\text{twin}}\)-E\({}_{\text{fcc}}\) & 0.0522 & -0.106 & 0.0761 & 0.0641 & 0.1666 \\ Au\({}_{147}\) & E\({}_{\text{twin}}\)-E\({}_{\text{Dh}}\) & 0.0470 & 0.114 & \(-\) & \(-\) & 0.4785 \\ Au\({}_{147}\) & E\({}_{\text{fcc}}\)-E\({}_{\text{Dh}}\) & 0.1089 & 0.616 & \(-\) & \(-\) & 0.0819 \\ Au\({}_{147}\) & E\({}_{\text{mix}}\)-E\({}_{\text{Dh}}\) & 0.6411 & -0.348 & -0.330 & -0.209 & 0.4746 \\ Au\({}_{147}\) & E\({}_{\text{Ih}}\)-E\({}_{\text{Dh}}\) & 0.9104 & -0.176 & 0.189 & 0.175 & -0.3893 \\ Au\({}_{147}\) & E\({}_{\text{Ih-reg}}\)-E\({}_{\text{Dh}}\) & 1.8649 & 2.22 & 2.07 & 1.66 & 0.1919 \\ Au\({}_{201}\) & E\({}_{\text{Dh}}\)-E\({}_{\text{fcc}}\) & 0.0524 & 0.237 & 1.01 & 0.798 & 0.7491 \\ Au\({}_{201}\) & E\({}_{\text{twin}}\)-E\({}_{\text{fcc}}\) & 0.0677 & 0.575 & \(-\) & \(-\) & 0.3595 \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of energy differences (\(\Delta\)E in eV) of various motifs for the Gupta potential and DFT with different exchange-correlation functionals. The values corresponding to embedded atom method (EAM) potentials are provided in the final column. Overall, we observe the following trends. At size 147, the Gupta potential performs fairly well, the more so for Cu\({}_{147}\) and Ag\({}_{147}\), which exhibit excellent quantitative agreement between the Gupta potential and DFT. In the case of Au\({}_{147}\), the Gupta potential does a good job. Firstly, it predicts that defective icosahedra are preferred with surface "rosettes". Secondly, Ih has higher energy than Dh and _mix_ according to both Gupta and DFT. The only difference is that _mix_, which is a distorted Dh with local rearrangement near the reentrant groove, is energetically preferred over Dh at the DFT level. At size 90, there is a qualitative agreement between the Gupta potential and DFT for Cu, but not for Ag and Au. In the case of Ag\({}_{90}\), twin is preferred at the DFT level, while Dh is preferred according to the Gupta potential. In Au\({}_{90}\), there is internal disagreement among DFT exchange-correlation functionals. However, given the very small energy difference (absolute values are about 0.1 eV or lower), we expect a similar competition between fcc and twin as observed with the Gupta potential. Finally, at size 201, both Ag and Au exhibit a qualitative agreement with DFT (although they underestimate the energy differences), which was not the case for Au\({}_{90}\). In the case of Cu\({}_{201}\), Ih is preferred at the DFT level as opposed to Dh according to the Gupta potential. Figure 9: Au structures used for DFT calculations.
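Qualitative agreement between two energy models, as used throughout this comparison, amounts to the \(\Delta\)E of a motif pair carrying the same sign in both. A quick check over a few entries transcribed from Table 2 (Gupta vs. DFT/PBE):

```python
# (label, Gupta dE, DFT/PBE dE) in eV, transcribed from Table 2.
rows = [
    ("Cu201: Ih-Dh",   0.9286, -0.507),
    ("Ag90: twin-Dh",  0.0319, -0.231),
    ("Ag201: twin-Dh", 0.1193,  0.609),
    ("Au201: Dh-fcc",  0.0524,  0.237),
]

def same_ordering(gupta, dft):
    # Same sign of dE means the two models agree on which motif is lower.
    return gupta * dft > 0

agreement = {label: same_ordering(g, d) for label, g, d in rows}
# Where the models agree, the Gupta differences are smaller in magnitude.
underestimated = [label for label, g, d in rows
                  if same_ordering(g, d) and abs(g) < abs(d)]
```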
Finally, in order to understand how the embedded atom method (EAM) pair potential model performs in comparison to the Gupta potential, we calculated the energy differences using EAM potentials for Cu,[90] Ag,[90] and Au.[91] The results are reported in the final column of Table 2. In the case of Cu, Gupta and EAM exhibit similar performance. On the other hand, EAM seems to perform marginally better in the case of Ag. According to EAM, twin has lower energy than _mix_ and Dh in Ag\({}_{90}\), similar to DFT. In the case of Au, EAM performs poorly in comparison to Gupta. The major drawback with EAM is that it predicts Ih to be the lowest energy structure for Au\({}_{147}\). Based on these results, we find that model potentials are still a good guide for selecting the main structural motifs and for discussing general trends between metals, but in some cases they fail to select the lowest-energy motifs in agreement with DFT. We note, however, that there is a case, Au\({}_{90}\), where there is qualitative disagreement even between different types of DFT calculations. Moreover, in general there are quantitative discrepancies between the different exchange-correlation functionals, which would make it difficult to assign precise temperature-dependent isomer probabilities even at the DFT level. ## IV Conclusions In this work, we have applied a computational framework that we proposed recently[19] to study the size- and system-dependent structural distributions of Cu, Ag, and Au nanoclusters. In this method, we combine the harmonic superposition approximation (HSA) and parallel tempering molecular dynamics (PTMD) in a complementary manner and calculate the structures of metal nanoclusters in the entire temperature range from 0 K to melting. We considered three cluster sizes \(-\) 90, 147, and 201 in the range 1 to 2 nm, of which 147 and 201 are "magic" sizes.
To begin with, "magic" sizes are not necessarily "magic" in that the global minimum is not always the ideal geometrical motif at that size. The perfect icosahedron and the truncated octahedron are the ideal geometries at the sizes 147 and 201, respectively. However, in only three out of six cases (Cu\({}_{147}\), Ag\({}_{147}\), Au\({}_{201}\)) does the global minimum correspond to the ideal geometrical structure. The global minima of Au\({}_{147}\), Cu\({}_{201}\), and Ag\({}_{201}\) are all Marks decahedra. At size 90, all three systems have a different global minimum: icosahedron for Cu\({}_{90}\), decahedron for Ag\({}_{90}\), and fcc for Au\({}_{90}\). The structural changes in these systems can be categorised broadly into three groups: type-(i) the global minimum is also the dominant motif at finite temperatures up to melting; type-(ii) solid-solid transformations lead to a completely different motif; type-(iii) solid-solid transitions lead to a co-existence of two or more motifs. The majority of the cases belong to the second and third groups, which include Cu\({}_{90}\), Cu\({}_{201}\), Ag\({}_{90}\), Au\({}_{90}\), Au\({}_{147}\), and Au\({}_{201}\). Icosahedra are extremely dominant, with almost 100% abundance in Cu\({}_{147}\) and Ag\({}_{147}\) right up to melting. Similarly, decahedra are the dominant motif in Au\({}_{201}\) up to melting. In the cases of Cu\({}_{90}\) and Au\({}_{90}\), we find a significant proportion of _mix_ structures close to melting. Although decahedra are dominant in Cu\({}_{201}\), we find a significant amount of icosahedra beyond 400 K. Finally, in Au\({}_{147}\), the proportion of Dh decreases gradually and we find small amounts of twin, fcc, Ih and _mix_ structures co-existing at higher temperatures. In contrast, Ag\({}_{90}\) and Au\({}_{201}\) undergo solid-solid transformations. Ag\({}_{90}\) exhibits a partial transformation Dh \(\rightarrow\) Dh \(+\) twin \(+\)_mix_ between 100 K and 150 K.
Beyond 150 K, the proportion of Dh, twin, and _mix_ structures remains approximately constant up to 450 K, indicating a co-existence of multiple motifs. In the case of Au\({}_{201}\), fcc transforms to Dh below 200 K, resulting in almost 100% Dh at room temperature, which remains dominant up to melting. In both instances, the solid-solid transformation occurs well below room temperature (\(<\) 200 K). As a result, it is non-trivial to predict the finite-temperature structures from the global minimum alone. We also observed system-specific differences across the three metals. Cu has a stronger preference for icosahedral structures. This is evident from the almost 100% abundance at sizes 90 and 147, while at size 201 a significant amount of icosahedra is observed above 400 K, peaking around 700 K at \(\sim\) 33%. In the case of Ag, icosahedra are mainly observed at the "magic" size of 147, where they occur with almost 100% abundance. Au, on the other hand, disfavors icosahedra, with icosahedra observed mainly at the size 147 in small proportions beyond 400 K. Another interesting feature is the gradual change in the nature of "rosette" defects in icosahedra at the size 147 from Cu to Au. "Rosette" defects are completely absent in Cu, but appear at higher temperatures in Ag. However, typically, almost all the icosahedra in Au have "rosette" defects. In contrast to Cu and Ag, decahedra in Au have deeper reentrant grooves. Finally, a comparison of the performance of the Gupta potential with DFT reveals a few limitations of interatomic pair potentials. We observe a good agreement between the Gupta potential and DFT at the size 147. In other cases (Cu\({}_{90}\), Ag\({}_{90}\), Ag\({}_{201}\), and Au\({}_{201}\)), the energetic ordering of the considered motifs is the same according to both the Gupta potential and DFT, with Gupta energy differences being underestimated. In the case of Au\({}_{90}\), the Gupta potential agrees with DFT/LDA and DFT/PBEsol, but not with DFT/PBE.
Finally, the Gupta potential fares poorly in the case of Cu\({}_{201}\), since it predicts Dh to prevail over Ih. However, according to DFT, Ih should prevail over Dh. Notwithstanding these limitations, interatomic pair potentials remain indispensable, since the wide exploration of the energy landscape of metal nanoclusters at the DFT level is simply not feasible. It is instructive to first obtain the structural distributions using interatomic pair potentials, e.g., Gupta as done in the current work, followed by DFT calculations to understand the limitations of the structural distributions. For instance, we observed that the Gupta potential predicts a complete solid-solid transformation from fcc \(\rightarrow\) Dh below room temperature for Au\({}_{201}\). However, the energy difference between Dh and fcc is lower than predicted by DFT (0.0524 eV for the Gupta potential vs. \(>\) 0.2 eV for DFT). Based on this information, it can be inferred that the transformation from fcc \(\rightarrow\) Dh may occur at a higher temperature than predicted by the Gupta potential. A further check of another model potential, EAM, shows an overall performance of the same quality as the Gupta potential, with a better agreement with DFT for Ag\({}_{90}\) and a poorer performance for Au clusters. Our method can be easily applied to any size and system for which reasonable models for atomic interactions are available. As a result, this method enables one to estimate the equilibrium proportion of various geometrical motifs as a function of temperature, which can then be compared with the experimentally obtained structural distribution.[67; 68; 69] This allows one to verify if the experimentally observed structures are in equilibrium or kinetically trapped metastable structures. ## Supplementary Material Supplementary material contains: parameters of HSA and PTMD; structural distribution of Au nanoclusters.
## Acknowledgments This work has been supported by the project "Understanding and Tuning FRiction through nanOstructure Manipulation (UTFROM)" funded by MIUR Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Bando 2017 - grant 20178PZCB5. M.S. and A.G. acknowledge financial support from the MIUR "Framework per l'Attrazione e il Rafforzamento delle Eccellenze per la Ricerca in Italia (FARE)" scheme, grant SERENA n. R18XYKRW7J. R.F. acknowledges the Progetto di Eccellenza of the Physics Department of the University of Genoa for financial support and the International Research Network Nanoalloys of CNRS for networking support. The authors acknowledge PRACE for awarding us access to Marconi100 at CINECA, Italy. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request. ## Author Declarations The authors have no conflicts to disclose.
2310.03966
Numerical Radius Bounds via the Euclidean Operator Radius and Norm
In this paper, we begin by showing a new generalization of the celebrated Cauchy-Schwarz inequality for the inner product. Then, this generalization is used to present some bounds for the Euclidean operator radius and the Euclidean operator norm. These bounds will then be used to obtain some bounds for the numerical radius in a way that extends many well-known results in many cases. The obtained results will be compared with the existing literature through numerical examples and rigorous approaches, whichever is applicable. In this context, more than 15 numerical examples will be given to support the advantage of our findings. Among many consequences, we will show that if $T$ is an accretive-dissipative bounded linear operator on a Hilbert space, then ${{\left\| \left( \Re T,\Im T \right) \right\|}_{e}}=\omega \left( T \right)$, where $\omega(\cdot), \|(\cdot,\cdot)\|_e, \Re T$ and $\Im T$ denote, respectively, the numerical radius, the Euclidean norm, the real part and the imaginary part.
Mohammad Sababheh, Hamid Reza Moradi
2023-10-06T01:48:14Z
http://arxiv.org/abs/2310.03966v1
# Numerical radius bounds via the Euclidean operator radius and norm ###### Abstract. In this paper, we begin by showing a new generalization of the celebrated Cauchy-Schwarz inequality for the inner product. Then, this generalization is used to present some bounds for the Euclidean operator radius and the Euclidean operator norm. These bounds will then be used to obtain some bounds for the numerical radius in a way that extends many well-known results in many cases. The obtained results will be compared with the existing literature through numerical examples and rigorous approaches, whichever is applicable. In this context, more than 15 numerical examples will be given to support the advantage of our findings. Among many consequences, we will show that if \(T\) is an accretive-dissipative bounded linear operator on a Hilbert space, then \(\|(\Re T,\Im T)\|_{e}=\omega\left(T\right)\), where \(\omega(\cdot),\|(\cdot,\cdot)\|_{e},\Re T\) and \(\Im T\) denote, respectively, the numerical radius, the Euclidean norm, the real part and the imaginary part. Key words and phrases: Euclidean operator radius, numerical radius, norm inequality, operator matrix 2010 Mathematics Subject Classification: Primary 47A12; Secondary 47A30, 47A63, 47B15 ## 1. Introduction Let \(\mathbb{B}(\mathbb{H})\) be the \(C^{*}\)-algebra of all bounded linear operators on a complex Hilbert space \(\mathbb{H}\), and let \(\|\cdot\|\) be the operator norm defined by \(\|T\|=\sup\limits_{\|x\|=1}\|Tx\|.\) In this context, if \(x\in\mathbb{H}\), the quantity \(\|x\|\) is defined by \(\left\langle x,x\right\rangle^{\frac{1}{2}},\) where \(\left\langle\cdot,\cdot\right\rangle\) is the inner product defined on \(\mathbb{H}\).
An equivalent definition of the operator norm can be stated as \(\|T\|=\sup\limits_{\|x\|=\|y\|=1}\left|\left\langle Tx,y\right\rangle\right|.\) If, in this latter definition, \(y=x\), a smaller quantity known as the numerical radius and denoted \(\omega(T)\) is obtained. Thus, for \(T\in\mathbb{B}(\mathbb{H})\), the numerical radius of \(T\) is the scalar quantity \(\omega(T)=\sup\limits_{\|x\|=1}\left|\left\langle Tx,x\right\rangle\right|.\) It is easily verified that \(\omega(\cdot)\) defines a norm on \(\mathbb{B}(\mathbb{H})\) as well. However, there are major differences between the norm properties of \(\omega(\cdot)\) and \(\|\cdot\|\). More precisely, the numerical radius is neither sub-multiplicative nor unitarily invariant, unlike the operator norm. Although the definition of \(\omega(\cdot)\) seems simpler than that of \(\|\cdot\|\), the calculation of \(\omega(\cdot)\) turns out to be much more complicated. Thus, it has been a key interest in the literature to find approximate values of \(\omega(\cdot)\) in terms of \(\|\cdot\|.\) This is usually done through sharp upper and lower bounds. In this direction, the relation [6, Theorem 1.3-1] \[\frac{1}{2}\|T\|\leq\omega(T)\leq\|T\| \tag{1.1}\] is a basic relation that furnishes the equivalence of the two norms \(\omega(\cdot)\) and \(\|\cdot\|.\) As the difference between the left and right sides of (1.1) can be very large, researchers have devoted considerable effort to finding tighter bounds that can be used for approximation purposes or give new insight into such relations. Among the most useful and simple upper bounds in this context, we have [9, Ineq. (8)] \[\omega(T)\leq\frac{1}{2}\left\|\,\left|T\right|+\left|T^{*}\right|\,\right\|, \tag{1.2}\] and [10, Theorem 1] \[\omega^{2}(T)\leq\frac{1}{2}\left\|\left|T\right|^{2}+\left|T^{*}\right|^{2} \right\|. \tag{1.3}\] In fact, using the convexity of the function \(f(t)=t^{2}\), it can be seen that (1.2) is sharper than (1.3).
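For matrices, these bounds are easy to test numerically using the standard identity \(\omega(T)=\sup_{\theta\in\mathbb{R}}\lambda_{\max}\left(\Re(e^{\mathrm{i}\theta}T)\right)\) evaluated on an angular grid; a sketch for the \(2\times 2\) shift matrix, for which \(\omega=1/2\) and the bound (1.2) is attained with equality:

```python
import numpy as np

def numerical_radius(T, n_theta=1440):
    """omega(T) = sup over theta of lambda_max(Re(e^{i theta} T)),
    approximated on a dense angular grid."""
    best = -np.inf
    for theta in np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False):
        H = (np.exp(1j * theta) * T + np.exp(-1j * theta) * T.conj().T) / 2
        best = max(best, np.linalg.eigvalsh(H)[-1])
    return best

def mat_abs(X):
    """|X| = (X* X)^{1/2}, computed via the SVD."""
    _, s, Vh = np.linalg.svd(X)
    return Vh.conj().T @ np.diag(s) @ Vh

T = np.array([[0.0, 1.0], [0.0, 0.0]])  # shift matrix: omega = 1/2, norm = 1
w = numerical_radius(T)
opnorm = np.linalg.norm(T, 2)
bound_12 = 0.5 * np.linalg.norm(mat_abs(T) + mat_abs(T.conj().T), 2)
```

Here `w` equals \(1/2\), `opnorm` equals \(1\), so both sides of (1.1) are attained up to the factor \(1/2\), and `bound_12` coincides with `w`, showing that (1.2) can be sharp.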
We refer the reader to [13, 18] where this concern is discussed. In [16], the notions of the numerical radius and the operator norm were extended for \(n\)-tuple of operators \((T_{1},T_{2},\ldots,T_{n})\) by the Euclidean operator radius, defined by \[\omega_{e}\left(T_{1},T_{2},\ldots,T_{n}\right)=\sup_{\left\|x\right\|=1} \left(\sum_{i=1}^{n}\left|\langle T_{i}x,x\rangle\right|^{2}\right)^{\frac{1 }{2}},\] and the Euclidean operator norm, defined by \[\left\|(T_{1},T_{2},\ldots,T_{n})\right\|_{e}=\sup_{(\lambda_{1},\lambda_{2}, \ldots,\lambda_{n})\in\mathbb{B}_{n}}\left\|\lambda_{1}T_{1}+\lambda_{2}T_{2 }+\cdots+\lambda_{n}T_{n}\right\|\] where \(\mathbb{B}_{n}=\left\{(\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\in \mathbb{C}^{n}:\ \sum_{i=1}^{n}\left|\lambda_{i}\right|^{2}\leq 1\right\}\). It has been shown in [3, Theorem 6.1] that \[\left\|(T_{1},T_{2},\ldots,T_{n})\right\|_{e}=\sup_{\left\|x\right\|=\left\|y \right\|=1}\left(\sum_{i=1}^{n}\left|\langle T_{i}x,y\rangle\right|^{2} \right)^{\frac{1}{2}},\] as an alternative formula for \(\|\cdot\|_{e}\). Both \(\omega_{e}(\cdot)\) and \(\|\cdot\|_{e}\) have been investigated in the literature, as one can see in [1, 4, 15]. One of the most basic yet useful tools for obtaining possible bounds for the numerical radius and the operator norm is the celebrated Cauchy-Schwarz inequality, which states \(\left|\langle x,y\rangle\right|\leq\left\|x\right\|\left\|y\right\|\), for \(x,y\in\mathbb{H}\). In this paper, as a first contribution, we prove a new generalized form of this inequality that allows obtaining some new and simple relations among \(\omega(\cdot),\|\cdot\|,\omega_{e}(\cdot)\) and \(\|\cdot\|_{e}\). 
More precisely, we will show that if \(x,y,z\in\mathbb{H}\), then \[\left|\langle x,y\rangle\right|^{2}+\left|\langle x,z\rangle\right|^{2}\leq\left\|x\right\|\sqrt{\left|\langle x,y\rangle\right|^{2}\left\|y\right\|^{2}+\left|\langle x,z\rangle\right|^{2}\left\|z\right\|^{2}+2\left|\langle x,y\rangle\right|\left|\langle x,z\rangle\right|\left|\langle y,z\rangle\right|}.\] Many other bounds and forms will be shown and compared with the existing literature. Then, the above inequality will be utilized to obtain some bounds on the numerical radius and the Euclidean operator radius. In particular, we find an upper bound of \(\omega_{e}(\cdot,\cdot)\) in terms of \(\omega(\cdot)\) (see Theorem 2.1), and we compare this bound with the existing bounds. Then, this bound is used to obtain a known bound for \(\omega(\cdot)\) (see Corollary 2.2). A more elaborate bound for \(\omega_{e}(\cdot)\) will be found in Theorem 2.3, with an application to \(\omega(\cdot)\) in Corollary 2.3. This latter corollary will be utilized to find an upper bound of \(\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\) that can be better than previously known bounds, as we see in Corollary 2.4 and Remark 2.11. A similar discussion of \(\|(\cdot,\cdot)\|_{e}\) will also be presented. Among many other results, the following are of interest for \(A,B,T\in\mathbb{B}(\mathbb{H})\).
\[\omega_{e}^{2}\left(A,B\right)\leq\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+\omega^{2}\left(B\right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B\right)\omega\left(AB^{*}\right)},\] \[\left\|\left(A,B\right)\right\|_{e}^{2}\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)},\] \[\omega\left(T\right)\leq\frac{1}{2}\sqrt{\sqrt{2}\omega\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega\left(T^{2}\right)},\] and \[\frac{1}{2}\left\|T\right\| \leq\omega\left(\begin{bmatrix}O&\Re T\\ \Im T&O\end{bmatrix}\right)\] \[\leq\frac{\sqrt{2}}{2}\|(\Re T,\Im T)\|_{e}\] \[\leq\frac{\sqrt{2}}{2}\omega\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right)\] \[\leq\frac{\sqrt{2}}{2}\Big{\|}(\Re T)^{2}+\left(\Im T\right)^{2}\Big{\|}^{\frac{1}{2}}\] \[\leq\frac{\sqrt{2}}{2}\sqrt{\left\|\Re T\right\|^{2}+\left\|\Im T\right\|^{2}}\] \[\leq\omega\left(T\right).\] As mentioned earlier, the significance of the results will be explained through a sequence of remarks with numerous numerical examples. Before proceeding to the main results, we draw the reader's attention to some lemmas. The first lemma treats operator matrices, which are operators defined on \(\mathbb{H}\oplus\mathbb{H}\). Such operators are usually described by writing \(\begin{bmatrix}A&B\\ C&D\end{bmatrix}\), where \(A,B,C,D\in\mathbb{B}(\mathbb{H})\). **Lemma 1.1**.: _[_8_, (4.6)]_ _Let \(A,B\in\mathbb{B}(\mathbb{H})\). Then_ \[\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)=\frac{1}{2}\sup_{\theta\in\mathbb{R}}\|A+e^{\mathrm{i}\theta}B\|,\] _where \(O\) is the zero operator in \(\mathbb{B}(\mathbb{H})\).
In particular (see [8, Theorem 2.3]),_ \[\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\frac{\|A\|+\|B\|}{2}.\] **Lemma 1.2**.: _(mixed Schwarz inequality [7, pp. 75-76]) Let \(T\in\mathbb{B}(\mathbb{H})\). Then for any \(x,y\in\mathbb{H}\),_ \[\left|\left\langle Tx,y\right\rangle\right|\leq\sqrt{\left\langle\left|T\right|x,x\right\rangle\left\langle\left|T^{*}\right|y,y\right\rangle}.\] We also have the following simple observation. For any \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\), \[\omega_{e}\left(A,B\right)\leq\sqrt{\omega\left(A\right)\left\|A\right\|+\omega\left(B\right)\left\|B\right\|}, \tag{1.4}\] holds. Indeed, \[\sqrt{\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}} \leq\sqrt{\omega^{2}\left(A\right)+\omega^{2}\left(B\right)}\] \[\leq\sqrt{\omega\left(A\right)\left\|A\right\|+\omega\left(B\right)\left\|B\right\|}.\] Now, by taking the supremum over \(x\in\mathbb{H}\) with \(\left\|x\right\|=1\), we get (1.4). From [9, Corollary 1], we know that if \(X^{2}=O\), then \(\omega\left(X\right)=\frac{1}{2}\left\|X\right\|\). So, if \(A^{2}=B^{2}=O\), then from (1.4), we infer that \[\omega_{e}\left(A,B\right)\leq\sqrt{\frac{\left\|A\right\|^{2}+\left\|B\right\|^{2}}{2}}.\]

## 2. Main Results

In this section, we present our main results. For organizational purposes, we present our results in four subsections. In the first subsection, we prove the generalized form of the Cauchy-Schwarz inequality with some simple consequences; then, we discuss possible relations for \(\omega_{e}(\cdot,\cdot)\). After that, the quantity \(\|(\cdot,\cdot)\|_{e}\) is discussed, and applications towards \(\omega(\cdot)\) are presented in the end. Many numerical examples will be given to show the significance of the results compared to the existing literature.

### The generalized Cauchy-Schwarz inequality

Now, we show the following generalization of the Cauchy-Schwarz inequality. **Lemma 2.1**.: _Let \(x,y,z\in\mathbb{H}\).
Then_ \[|\langle x,y\rangle|^{2}+|\langle x,z\rangle|^{2}\leq\|x\|\,\sqrt{|\langle x,y\rangle|^{2}\|y\|^{2}+|\langle x,z\rangle|^{2}\|z\|^{2}+2\,|\langle x,y\rangle|\,|\langle x,z\rangle|\,|\langle y,z\rangle|}. \tag{2.1}\] _In particular,_ \[|\langle x,e\rangle|^{2}+|\langle y,e\rangle|^{2}\leq\sqrt{|\langle x,e\rangle|^{2}\|x\|^{2}+|\langle y,e\rangle|^{2}\|y\|^{2}+2\,|\langle x,e\rangle|\,|\langle y,e\rangle|\,|\langle x,y\rangle|}, \tag{2.2}\] _provided that \(e\in\mathbb{H}\) with \(\|e\|=1\)._ Proof.: We have \[\left(|\langle x,y\rangle|^{2}+|\langle x,z\rangle|^{2}\right)^{2}\] \[=\left(\langle x,y\rangle\,\langle y,x\rangle+\langle x,z\rangle\,\langle z,x\rangle\right)^{2}\] \[=\langle x,\langle x,y\rangle\,y+\langle x,z\rangle\,z\rangle^{2}\] \[\leq\|x\|^{2}\|\langle x,y\rangle\,y+\langle x,z\rangle\,z\|^{2}\] \[\quad\text{(by the Cauchy-Schwarz inequality)}\] \[=\|x\|^{2}\left(|\langle x,y\rangle|^{2}\|y\|^{2}+|\langle x,z\rangle|^{2}\|z\|^{2}+2\Re\left(\langle x,y\rangle\,\overline{\langle x,z\rangle}\,\langle y,z\rangle\right)\right)\] \[\leq\|x\|^{2}\left(|\langle x,y\rangle|^{2}\|y\|^{2}+|\langle x,z\rangle|^{2}\|z\|^{2}+2\,|\langle x,y\rangle|\,|\langle x,z\rangle|\,|\langle y,z\rangle|\right), \tag{2.3}\] where we have used the fact that \(\Re a\leq|a|\) for any \(a\in\mathbb{C}\) to obtain the last inequality. This proves the first desired inequality. The second inequality is observed from the first inequality by replacing \(x\) with \(e\) and substituting \(y\) and \(z\) with \(x\) and \(y\), respectively. In the following remark, we see how Lemma 2.1 generalizes the Cauchy-Schwarz inequality. **Remark 2.1**.: 1. _If_ \(y=z\)_, then_ \[\left|\left\langle x,y\right\rangle\right|\leq\left\|x\right\|\left\|y\right\|.\] 2.
_If_ \(y\bot z\)_, then_ \[\left|\left\langle x,y\right\rangle\right|^{2}+\left|\left\langle x,z\right\rangle\right|^{2}\leq\left\|x\right\|\sqrt{\left|\left\langle x,y\right\rangle\right|^{2}\left\|y\right\|^{2}+\left|\left\langle x,z\right\rangle\right|^{2}\left\|z\right\|^{2}}.\] **Remark 2.2**.: _It follows from the first inequality in (2.3) that_ \[\left|\left\langle x,y\right\rangle\right|^{2}+\left|\left\langle x,z\right\rangle\right|^{2}\leq\left\|x\right\|\left\|\left\langle x,y\right\rangle y+\left\langle x,z\right\rangle z\right\|. \tag{2.4}\] _In particular,_ \[\left|\left\langle x,e\right\rangle\right|^{2}+\left|\left\langle y,e\right\rangle\right|^{2}\leq\left\|\left\langle e,x\right\rangle x+\left\langle e,y\right\rangle y\right\|. \tag{2.5}\] For any \(0\neq a,b\in\mathbb{H}\), one can define the angle between \(a\) and \(b\) by the formula \[\cos\Psi_{ab}=\frac{\left|\left\langle a,b\right\rangle\right|}{\left\|a\right\|\left\|b\right\|},\;0\leq\Psi_{ab}\leq\frac{\pi}{2}. \tag{2.6}\] We refer the reader to [11, 12, 19, 20] for a discussion of this definition and another definition of the angle. In the following result, we prove an additional property. **Corollary 2.1**.: _Let \(x,y,z\in\mathbb{H}\) be nonzero.
Then_ \[\cos\Psi_{yx}\,\cos\Psi_{xz}\leq\frac{1}{2}\sqrt{\cos^{2}\Psi_{yx}+\cos^{2}\Psi_{xz}+2\cos\Psi_{yx}\,\cos\Psi_{xz}\,\cos\Psi_{zy}}.\] Proof.: If we utilize the arithmetic-geometric mean inequality on the left side of inequality (2.1), we conclude that \[2\left|\left\langle x,y\right\rangle\right|\left|\left\langle x,z\right\rangle\right|\leq\left\|x\right\|\sqrt{\left|\left\langle x,y\right\rangle\right|^{2}\left\|y\right\|^{2}+\left|\left\langle x,z\right\rangle\right|^{2}\left\|z\right\|^{2}+2\left|\left\langle x,y\right\rangle\right|\left|\left\langle x,z\right\rangle\right|\left|\left\langle y,z\right\rangle\right|}.\] Replacing \(x,y,z\) by \(\frac{x}{\left\|x\right\|},\frac{y}{\left\|y\right\|},\frac{z}{\left\|z\right\|}\), we have \[2\frac{\left|\left\langle x,y\right\rangle\right|\left|\left\langle x,z\right\rangle\right|}{\left\|x\right\|^{2}\left\|y\right\|\left\|z\right\|}\leq\sqrt{\frac{\left|\left\langle x,y\right\rangle\right|^{2}}{\left\|x\right\|^{2}\left\|y\right\|^{2}}+\frac{\left|\left\langle x,z\right\rangle\right|^{2}}{\left\|x\right\|^{2}\left\|z\right\|^{2}}+2\frac{\left|\left\langle x,y\right\rangle\right|\left|\left\langle x,z\right\rangle\right|\left|\left\langle y,z\right\rangle\right|}{\left\|x\right\|^{2}\left\|y\right\|^{2}\left\|z\right\|^{2}}},\] which is equivalent to the desired result, thanks to (2.6). **Remark 2.3**.: _Letting \(x=Tx\), \(y=T^{*}x\), and \(e=x\) with \(\left\|x\right\|=1\), in (2.2), we reach the following well-known inequality (see [2, Ineq. (3.15)])_ \[\left|\left\langle Tx,x\right\rangle\right|\leq\frac{1}{2}\sqrt{\left\langle\left(\left|T\right|^{2}+\left|T^{*}\right|^{2}\right)x,x\right\rangle+2\left|\left\langle T^{2}x,x\right\rangle\right|}.\] _In particular,_ \[\omega^{2}(T)\leq\frac{1}{4}\left\|\left|T\right|^{2}+\left|T^{*}\right|^{2}\right\|+\frac{1}{2}\omega(T^{2}).
\tag{2.7}\] _Indeed,_ \[\left|\left\langle Tx,x\right\rangle\right|^{2}\] \[\leq\frac{1}{2}\sqrt{\left|\left\langle Tx,x\right\rangle\right|^{2}\left(\left\|Tx\right\|^{2}+\left\|T^{*}x\right\|^{2}\right)+2\left|\left\langle Tx,x\right\rangle\right|^{2}\left|\left\langle T^{2}x,x\right\rangle\right|}\] \[=\frac{1}{2}\left|\left\langle Tx,x\right\rangle\right|\sqrt{\left\langle\left(\left|T\right|^{2}+\left|T^{*}\right|^{2}\right)x,x\right\rangle+2\left|\left\langle T^{2}x,x\right\rangle\right|}.\]

### Upper bounds for \(\omega_{e}(\cdot,\cdot)\)

In this subsection, we present some bounds for the Euclidean operator radius. The applications of these bounds towards the numerical radius will be given in Subsection 2.4. Although simpler bounds are known in the literature in terms of \(\left\|\cdot\right\|\), the following bound involves the smaller quantity \(\omega(\cdot)\). **Theorem 2.1**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega_{e}^{2}\left(A,B\right)\leq\sqrt{\omega^{2}\left(A\right)\left\|A\right\|^{2}+\omega^{2}\left(B\right)\left\|B\right\|^{2}+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}.\] Proof.: By substituting \(x=Ax\), \(y=Bx\), and \(e=x\), in (2.2), we obtain \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\leq\sqrt{\left|\left\langle Ax,x\right\rangle\right|^{2}\left\|Ax\right\|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\left\|Bx\right\|^{2}+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{*}Ax,x\right\rangle\right|},\] and hence \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\leq\sqrt{\omega^{2}\left(A\right)\left\|A\right\|^{2}+\omega^{2}\left(B\right)\left\|B\right\|^{2}+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}.\] We obtain the desired result by taking supremum over all unit vectors \(x\in\mathbb{H}\).
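Theorem 2.1 can be tested numerically with elementary tools: a Monte-Carlo sample of unit vectors gives a lower estimate of \(\omega_{e}(A,B)\), which must stay below the right-hand side of the theorem. The following Python sketch (an illustration only; the helper names `omega` and `omega_e_lower` are ours) performs this check on the matrices that appear in Remark 2.4 below.

```python
import numpy as np

rng = np.random.default_rng(1)

def omega(T, n=10000):
    """omega(T) = sup_theta lambda_max(Re(e^{i*theta} T)), approximated on a grid."""
    thetas = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * t) * T + np.exp(-1j * t) * T.conj().T) / 2)[-1]
               for t in thetas)

def omega_e_lower(A, B, samples=100000):
    """Monte-Carlo lower estimate of omega_e(A, B) over sampled unit vectors."""
    d = A.shape[0]
    X = rng.normal(size=(samples, d)) + 1j * rng.normal(size=(samples, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    qa = np.einsum('si,ij,sj->s', X.conj(), A, X)   # <Ax, x> for each sample
    qb = np.einsum('si,ij,sj->s', X.conj(), B, X)
    return float(np.sqrt(np.abs(qa) ** 2 + np.abs(qb) ** 2).max())

A = np.array([[2.0, 3.0], [1.0, 0.0]])
B = np.array([[2.0, 2.0], [5.0, 3.0]])

# Right-hand side of Theorem 2.1.
bound = np.sqrt(omega(A) ** 2 * np.linalg.norm(A, 2) ** 2
                + omega(B) ** 2 * np.linalg.norm(B, 2) ** 2
                + 2 * omega(A) * omega(B) * omega(B.conj().T @ A))
est = omega_e_lower(A, B)

assert est ** 2 <= bound + 1e-6   # the theorem's bound dominates the sampled omega_e^2
print(round(bound, 3))            # ~47.0, consistent with Remark 2.4
```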
**Remark 2.4**.: _It has been shown in [16] that_ \[\omega_{e}^{2}(A,B)\leq\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|. \tag{2.8}\] _This remark shows that Theorem 2.1 can be better than this latter bound. Indeed, if we let \(A=\left[\begin{array}{cc}2&3\\ 1&0\end{array}\right]\) and \(B=\left[\begin{array}{cc}2&2\\ 5&3\end{array}\right],\) then numerical calculations show that_ \[\sqrt{\omega^{2}\left(A\right)\left\|A\right\|^{2}+\omega^{2}\left(B\right) \left\|B\right\|^{2}+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{ *}A\right)}\approx 47.0005,\] _and_ \[\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\approx 53.7099.\] _This shows that, in this example, the bound we found in Theorem 2.1 is better than that in (2.8)._ _However, if we take \(A=\left[\begin{array}{cc}4&0\\ 1&3\end{array}\right]\) and \(B=\left[\begin{array}{cc}1&3\\ 0&5\end{array}\right],\) we find that_ \[\sqrt{\omega^{2}\left(A\right)\left\|A\right\|^{2}+\omega^{2}\left(B\right) \left\|B\right\|^{2}+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{ *}A\right)}\approx 47.5757,\] _and_ \[\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\approx 44.3654.\] _Thus, neither Theorem 2.1 nor (2.8) is uniformly better than the other._ Another application of (2.5) is the following tighter bound than the one given in Theorem 2.1. Since we have emphasized the significance of Theorem 2.1 in Remark 2.4, the significance of Theorem 2.2 is evident. **Theorem 2.2**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega_{e}^{2}\left(A,B\right)\leq\sqrt{\left\|\omega^{2}\left(A\right)\left|A ^{*}\right|^{2}+\omega^{2}\left(B\right)\left|B^{*}\right|^{2}\right\|+2 \omega\left(A\right)\omega\left(B\right)\omega\left(AB^{*}\right)}.\] Proof.: Let \(x\in\mathbb{H}\) be a unit vector. 
If we replace \(x\) by \(Ax\), \(y\) by \(Bx\) and \(e\) by \(x\) in (2.5), we get \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\] \[\leq\left\|\left(\left\langle A^{*}x,x\right\rangle A+\left\langle B^{*}x,x\right\rangle B\right)x\right\|\] \[\leq\left\|\left\langle A^{*}x,x\right\rangle A+\left\langle B^{*}x,x\right\rangle B\right\|\] \[=\left\|\left(\left\langle A^{*}x,x\right\rangle A+\left\langle B^{*}x,x\right\rangle B\right)\left(\left\langle Ax,x\right\rangle A^{*}+\left\langle Bx,x\right\rangle B^{*}\right)\right\|^{\frac{1}{2}}\] \[\quad\left(\text{since }\left\|XX^{*}\right\|=\left\|X\right\|^{2}\text{ for any }X\in\mathbb{B}\left(\mathbb{H}\right)\right)\] \[=\left\|\left|\left\langle Ax,x\right\rangle\right|^{2}\left|A^{*}\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\left|B^{*}\right|^{2}+2\Re\left(\left\langle A^{*}x,x\right\rangle\left\langle Bx,x\right\rangle AB^{*}\right)\right\|^{\frac{1}{2}}\] \[\leq\sqrt{\left\|\left|\left\langle Ax,x\right\rangle\right|^{2}\left|A^{*}\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\left|B^{*}\right|^{2}\right\|+2\left\|\Re\left(\left\langle A^{*}x,x\right\rangle\left\langle Bx,x\right\rangle AB^{*}\right)\right\|}\] \[\quad\left(\text{by the triangle inequality for the usual operator norm}\right)\] \[\leq\sqrt{\left\|\left|\left\langle Ax,x\right\rangle\right|^{2}\left|A^{*}\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\left|B^{*}\right|^{2}\right\|+2\left|\left\langle A^{*}x,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\omega\left(AB^{*}\right)}\] \[\quad\left(\text{since }\left\|\Re X\right\|\leq\omega\left(X\right)\text{ for any }X\in\mathbb{B}\left(\mathbb{H}\right)\right)\] \[\leq\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+\omega^{2}\left(B\right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B\right)\omega\left(AB^{*}\right)}.\] Consequently, \[\left|\left\langle
Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle \right|^{2}\leq\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+ \omega^{2}\left(B\right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right) \omega\left(B\right)\omega\left(AB^{*}\right)},\] which indicates the desired inequality after taking supremum over all unit vectors \(x\in\mathbb{H}\). We remark here that Theorem 2.2 recovers (2.7) by substituting \(A=T\) and \(B=T^{*}\). **Remark 2.5**.: _Since Theorem 2.2 is a refinement of Theorem 2.1, Remark 2.4 already explains the advantage of Theorem 2.2 over (2.8). Here, we give an example to show that (2.8) can also be better than Theorem 2.2. Indeed, if we take \(A=\left[\begin{array}{cc}4&3\\ 4&2\end{array}\right]\) and \(B=\left[\begin{array}{cc}0&4\\ 3&0\end{array}\right],\) we find that_ \[\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+\omega^{2}\left(B \right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B\right) \omega\left(AB^{*}\right)}\approx 56.1224,\] _and_ \[\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\approx 55.8806.\] _Thus, Theorem 2.2 and (2.8) are generally not comparable._ Next, a new form involving the Cartesian decomposition as an upper bound of \(\omega_{e}(\cdot,\cdot)\) is stated. **Theorem 2.3**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). 
Then_ \[\omega_{e}^{2}\left(A,B\right)\leq\sqrt{\sqrt{\left(\omega^{4}\left(A\right)+\omega^{4}\left(B\right)\right)}\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}.\] Proof.: By substituting \(x=Ax\), \(y=Bx\), and \(e=x\), in (2.2), we obtain \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\] \[\leq\sqrt{\left|\left\langle Ax,x\right\rangle\right|^{2}\left\|Ax\right\|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\left\|Bx\right\|^{2}+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{*}Ax,x\right\rangle\right|}\] \[\leq\sqrt{\sqrt{\left(\left|\left\langle Ax,x\right\rangle\right|^{4}+\left|\left\langle Bx,x\right\rangle\right|^{4}\right)\left(\left\|Ax\right\|^{4}+\left\|Bx\right\|^{4}\right)}+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{*}Ax,x\right\rangle\right|}\] \[\quad\text{(by the Cauchy-Schwarz inequality)}\] \[=\sqrt{\sqrt{\left(\left|\left\langle Ax,x\right\rangle\right|^{4}+\left|\left\langle Bx,x\right\rangle\right|^{4}\right)\left(\left\langle\left|A\right|^{2}x,x\right\rangle^{2}+\left\langle\left|B\right|^{2}x,x\right\rangle^{2}\right)}+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{*}Ax,x\right\rangle\right|}\] \[=\sqrt{\sqrt{\left|\left\langle Ax,x\right\rangle\right|^{4}+\left|\left\langle Bx,x\right\rangle\right|^{4}}\ \left|\left\langle\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)x,x\right\rangle\right|+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{*}Ax,x\right\rangle\right|}\] \[\quad\left(\text{since }\left|a+\mathrm{i}b\right|=\sqrt{a^{2}+b^{2}}\text{ for any }a,b\in\mathbb{R}\right)\]
\[\leq\sqrt{\sqrt{\omega^{4}\left(A\right)+\omega^{4}\left(B\right)}\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega \left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)},\] i.e., \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle \right|^{2}\leq\sqrt{\sqrt{\omega^{4}\left(A\right)+\omega^{4}\left(B\right) }\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega \left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}. \tag{2.9}\] Now, we reach the desired result by taking supremum over all unit vectors \(x\in\mathbb{H}\). **Remark 2.6**.: _In both Theorems 2.2 and 2.3, we have found some upper bounds for \(\omega_{e}(A,B)\). In this remark, we give examples showing that neither bound is uniformly better than the other. For this, let \(A=\left[\begin{array}{cc}-3.&3\\ 1&-1\end{array}\right]\) and \(B=\left[\begin{array}{cc}4&-5\\ 3&-5\end{array}\right].\) Then numerical calculations show that_ \[\sqrt{\sqrt{\left(\omega^{4}\left(A\right)+\omega^{4}\left(B\right)\right)}\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega \left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}\approx 57.1627,\] _and_ \[\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+\omega^{2}\left(B \right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B \right)\omega\left(AB^{*}\right)}\approx 57.3063,\] _showing that the bound in Theorem 2.3 is better than that in Theorem 2.2 for this example._ _On the other hand, if we let \(A=\left[\begin{array}{cc}0&-4\\ 1&2\end{array}\right]\) and \(B=\left[\begin{array}{cc}-3&3\\ 2&4\end{array}\right]\) we find that_ \[\sqrt{\sqrt{\left(\omega^{4}\left(A\right)+\omega^{4}\left(B\right)\right)} \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega \left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}\approx 33.1982,\] _and_ 
\[\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{\ast}\right|^{2}+\omega^{2}\left(B \right)\left|B^{\ast}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B \right)\omega\left(AB^{\ast}\right)}\approx 31.3455.\] _Consequently, Theorems 2.2 and 2.3 are generally not comparable._ An easier bound than that in Theorem 2.3 can be stated in the following form. **Theorem 2.4**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega_{e}^{2}\left(A,B\right)\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{ i}\left|B\right|^{2}\right)\omega\left(\left|A^{\ast}\right|^{2}+\mathrm{i} \left|B^{\ast}\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right) \omega\left(B^{\ast}A\right)}.\] Proof.: By substituting \(x=Ax\), \(y=Bx\), and \(e=x\), in (2.2), we obtain \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\] \[\leq\sqrt{\left|\left\langle Ax,x\right\rangle\right|^{2}\left\| Ax\right\|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\left\| Bx\right\|^{2}+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{\ast}Ax,x\right\rangle\right|}\] \[=\sqrt{\left|\left\langle x,A^{\ast}x\right\rangle\right|^{2} \left\|Ax\right\|^{2}+\left|\left\langle x,B^{\ast}x\right\rangle\right|^{2} \left\|Bx\right\|^{2}+2\left|\left\langle Ax,x\right\rangle\right|\left| \left\langle Bx,x\right\rangle\right|\left|\left\langle B^{\ast}Ax,x\right\rangle\right|}\] \[\leq\sqrt{\left\|A^{\ast}x\right\|^{2}\left\|Ax\right\|^{2}+\left\| B^{\ast}x\right\|^{2}\left\|Bx\right\|^{2}+2\left|\left\langle Ax,x\right\rangle \right|\left|\left\langle Bx,x\right\rangle\right|\left|\left\langle B^{\ast} Ax,x\right\rangle\right|}\] \[\text{(by the Cauchy-Schwarz inequality)}\] \[\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right| ^{2}\right)\omega\left(\left|A^{\ast}\right|^{2}+\mathrm{i}\left|B^{\ast} \right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{ 
\ast}A\right)},\] i.e., \[\left|\left\langle Ax,x\right\rangle\right|^{2}+\left|\left\langle Bx,x\right\rangle\right|^{2}\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}.\] Now, we obtain the desired result by taking supremum over all unit vectors \(x\in\mathbb{H}\). **Remark 2.7**.: _If \(A=\left[\begin{array}{cc}4&-2\\ -4&-5\end{array}\right]\) and \(B=\left[\begin{array}{cc}2&5\\ 2&4\end{array}\right]\), we see that_ \[\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}\approx 76.375,\] _and_ \[\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+\omega^{2}\left(B\right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B\right)\omega\left(AB^{*}\right)}\approx 76.389.\] _However, if we let \(A=\left[\begin{array}{cc}-5&1\\ 5&3\end{array}\right]\) and \(B=\left[\begin{array}{cc}-5&-2\\ 1&-4\end{array}\right],\) we see that_ \[\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}\approx 72.465,\] \[\sqrt{\sqrt{\left(\omega^{4}\left(A\right)+\omega^{4}\left(B\right)\right)}\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}\approx 67.9146,\] _and_ \[\sqrt{\left\|\omega^{2}\left(A\right)\left|A^{*}\right|^{2}+\omega^{2}\left(B\right)\left|B^{*}\right|^{2}\right\|+2\omega\left(A\right)\omega\left(B\right)\omega\left(AB^{*}\right)}\approx 66.9069.\] _These two examples show that Theorem 2.4 is not comparable, in general, with Theorems 2.2 and 2.3._

### Upper bounds for \(\left\|(\cdot,\cdot)\right\|_{e}\)

Now we discuss some possible bounds for \(\left\|(\cdot,\cdot)\right\|_{e}\). **Theorem 2.5**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\).
Then_ \[\left\|(A,B)\right\|_{e}^{2}\leq\sqrt{\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+\frac{1}{2}\left\|\left|A\right|+\left|B\right|\right\|\left\|\left|A^{*}\right|+\left|B^{*}\right|\right\|\omega\left(A^{*}B\right)}.\] Proof.: Letting \(y=A^{*}y\), \(z=B^{*}y\), with \(\|x\|=\|y\|=1\), in (2.4), we infer that \[\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\] \[\leq\left\|\left(\left\langle Ax,y\right\rangle A^{*}+\left\langle Bx,y\right\rangle B^{*}\right)y\right\|\] \[\leq\left\|\left\langle Ax,y\right\rangle A^{*}+\left\langle Bx,y\right\rangle B^{*}\right\|\] \[=\left\|\left(\left\langle Ax,y\right\rangle A^{*}+\left\langle Bx,y\right\rangle B^{*}\right)\left(\overline{\left\langle Ax,y\right\rangle}A+\overline{\left\langle Bx,y\right\rangle}B\right)\right\|^{\frac{1}{2}}\] \[\quad\left(\text{since }\left\|XX^{*}\right\|=\left\|X\right\|^{2}\text{ for any }X\in\mathbb{B}\left(\mathbb{H}\right)\right)\] \[=\left\|\left|\left\langle Ax,y\right\rangle\right|^{2}\left|A\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\left|B\right|^{2}+2\Re\left(\left\langle Ax,y\right\rangle\overline{\left\langle Bx,y\right\rangle}A^{*}B\right)\right\|^{\frac{1}{2}}\] \[\leq\sqrt{\left\|\left|\left\langle Ax,y\right\rangle\right|^{2}\left|A\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\left|B\right|^{2}\right\|+2\left\|\Re\left(\left\langle Ax,y\right\rangle\overline{\left\langle Bx,y\right\rangle}A^{*}B\right)\right\|}\] \[\quad\left(\text{by the triangle inequality for the usual operator norm}\right)\] \[\leq\sqrt{\left\|\left|\left\langle Ax,y\right\rangle\right|^{2}\left|A\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\left|B\right|^{2}\right\|+2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\omega\left(A^{*}B\right)}\] \[\quad\left(\text{since }\left\|\Re X\right\|\leq\omega\left(X\right)\text{ for any }X\in\mathbb{B}\left(\mathbb{H}\right)\right).\] On the other hand,
\[\left\|\left|\left\langle Ax,y\right\rangle\right|^{2}\left|A\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\left|B\right|^{2}\right\|+2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\omega\left(A^{*}B\right)\] \[\leq\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\omega\left(A^{*}B\right)\] \[\leq\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+2\sqrt{\left\langle\left|A\right|x,x\right\rangle\left\langle\left|A^{*}\right|y,y\right\rangle}\sqrt{\left\langle\left|B\right|x,x\right\rangle\left\langle\left|B^{*}\right|y,y\right\rangle}\,\omega\left(A^{*}B\right)\] \[\quad\text{(by the mixed Schwarz inequality)}\] \[\leq\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+\frac{1}{2}\left\langle\left(\left|A\right|+\left|B\right|\right)x,x\right\rangle\left\langle\left(\left|A^{*}\right|+\left|B^{*}\right|\right)y,y\right\rangle\omega\left(A^{*}B\right)\] \[\quad\text{(by the arithmetic-geometric mean inequality)}\] \[\leq\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+\frac{1}{2}\left\|\left|A\right|+\left|B\right|\right\|\left\|\left|A^{*}\right|+\left|B^{*}\right|\right\|\omega\left(A^{*}B\right).\] Accordingly, \[\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\leq\sqrt{\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+\frac{1}{2}\left\|\left|A\right|+\left|B\right|\right\|\left\|\left|A^{*}\right|+\left|B^{*}\right|\right\|\omega\left(A^{*}B\right)},\] which implies the desired inequality after taking supremum over all unit vectors \(x,y\in\mathbb{H}\). **Theorem 2.6**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\)._
Then_ \[\left\|\left(A,B\right)\right\|_{e}^{2}\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}.\] Proof.: Letting \(y=A^{*}y\) and \(z=B^{*}y\), with \(\left\|x\right\|=\left\|y\right\|=1\), in (2.1), we obtain \[\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\leq\sqrt{\left|\left\langle Ax,y\right\rangle\right|^{2}\left\|A^{*}y\right\|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\left\|B^{*}y\right\|^{2}+2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\left|\left\langle BA^{*}y,y\right\rangle\right|}.\] For the first two terms under the square root, we have \[\left|\left\langle Ax,y\right\rangle\right|^{2}\left\|A^{*}y\right\|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\left\|B^{*}y\right\|^{2}\] \[\leq\left\langle\left|A\right|^{2}x,x\right\rangle\left\langle\left|A^{*}\right|^{2}y,y\right\rangle+\left\langle\left|B\right|^{2}x,x\right\rangle\left\langle\left|B^{*}\right|^{2}y,y\right\rangle\] \[\quad\text{(by the Cauchy-Schwarz inequality)}\] \[\leq\sqrt{\left(\left\langle\left|A\right|^{2}x,x\right\rangle^{2}+\left\langle\left|B\right|^{2}x,x\right\rangle^{2}\right)\left(\left\langle\left|A^{*}\right|^{2}y,y\right\rangle^{2}+\left\langle\left|B^{*}\right|^{2}y,y\right\rangle^{2}\right)}\] \[\quad\text{(by the Cauchy-Schwarz inequality)}\] \[=\left|\left\langle\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)x,x\right\rangle\right|\left|\left\langle\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)y,y\right\rangle\right|\] \[\quad\left(\text{since }\left|a+\mathrm{i}b\right|=\sqrt{a^{2}+b^{2}}\text{ for any }a,b\in\mathbb{R}\right)\] \[\leq\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right).\] For the last term, the mixed Schwarz inequality and the arithmetic-geometric mean inequality yield \[2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\leq 2\sqrt{\left\langle\left|A\right|x,x\right\rangle\left\langle\left|A^{*}\right|y,y\right\rangle}\sqrt{\left\langle\left|B\right|x,x\right\rangle\left\langle\left|B^{*}\right|y,y\right\rangle}\leq\left|\left\langle\left(\left|A\right|+\mathrm{i}\left|B\right|\right)x,x\right\rangle\right|\left|\left\langle\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)y,y\right\rangle\right|\leq\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right),\] together with \(\left|\left\langle BA^{*}y,y\right\rangle\right|\leq\omega\left(BA^{*}\right)\). Combining these estimates gives \[\begin{split}&\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}\\ &\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}.\end{split} \tag{2.10}\] Now, we obtain the desired result by taking supremum over all unit vectors \(x,y\in\mathbb{H}\). **Remark 2.8**.: _In this remark, we compare the two bounds found in Theorems 2.5 and 2.6.
Indeed, letting \(A=\left[\begin{array}{cc}0&1\\ 1&0\end{array}\right]\) and \(B=\left[\begin{array}{cc}1&1\\ 0&1\end{array}\right]\), we find that_ \[\sqrt{\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+\frac{1}{2}\left\|\left|A\right|+\left|B\right|\right\|\left\|\left|A^{*}\right|+\left|B^{*}\right|\right\|\omega\left(A^{*}B\right)}\approx 3.02706,\] _and_ \[\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\approx 3.70246.\] _On the other hand, letting \(A=\left[\begin{array}{cc}1&1\\ 0&0\end{array}\right],\) and \(B=\left[\begin{array}{cc}0&1\\ 1&0\end{array}\right],\) we find_ \[\sqrt{\left\|\left\|A\right\|^{2}\left|A\right|^{2}+\left\|B\right\|^{2}\left|B\right|^{2}\right\|+\frac{1}{2}\left\|\left|A\right|+\left|B\right|\right\|\left\|\left|A^{*}\right|+\left|B^{*}\right|\right\|\omega\left(A^{*}B\right)}\approx 3.08509,\] _and_ \[\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\approx 2.93621.\] _These examples show that neither Theorem 2.5 nor Theorem 2.6 is uniformly better than the other._

### Applications towards the numerical radius

If we put \(A=T\) and \(B=T^{*}\), in Theorem 2.1, we reach the following result due to Dragomir [5, Theorem 1]. **Corollary 2.2**.: _Let \(T\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega^{2}\left(T\right)\leq\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right).
\tag{2.11}\] **Remark 2.9**.: _We know from (1.3) that_ \[\omega^{2}(T)\leq\frac{1}{2}\left\|\left|T\right|^{2}+\left|T^{*}\right|^{2}\right\|. \tag{2.12}\] _In this remark, we give two examples to show that neither this bound nor the bound found in Corollary 2.2 is uniformly better than the other._ (i) _If we take_ \(T=\left[\begin{array}{cc}-5&1\\ 4&4\end{array}\right],\) _then direct calculations illustrate that_ \[\frac{1}{2}\left\|\left|T\right|^{2}+\left|T^{*}\right|^{2}\right\|\approx 34.1478\text{ and }\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right)\approx 37.4633.\] _This shows, for this_ \(T\)_, that (_2.12_) is better than (_2.11_)._ _However, if we take \(T=\left[\begin{array}{cc}1&0\\ 1&0\end{array}\right],\) we find that_ \[\frac{1}{2}\left\|\left|T\right|^{2}+\left|T^{*}\right|^{2}\right\|\approx 1.70711\text{ and }\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right)\approx 1.60355,\] _showing that (2.11) is sharper than (2.12)._ (ii) _Now we provide an example to show that (2.11) can be sharper than (1.2). Taking \(T=\left[\begin{array}{cc}2&0\\ 1&5\end{array}\right]\) yields_ \[\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right)\approx 25.8742\text{ and }\left(\frac{1}{2}\|\ |T|+\left|T^{*}\right|\ \|\right)^{2}\approx 26.018.\] _However, letting \(T=\left[\begin{array}{cc}3&0\\ 4&1\end{array}\right]\) gives_ \[\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right)\approx 19.7967\text{ and }\left(\frac{1}{2}\|\ |T|+\left|T^{*}\right|\ \|\right)^{2}\approx 19.4443.\] If we set \(A=T\) and \(B=T^{*}\) in Theorem 2.3, we obtain the following bound for \(\omega(T).\) **Corollary 2.3**.: _Let \(T\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega\left(T\right)\leq\frac{1}{2}\sqrt{\sqrt{2}\omega\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega\left(T^{2}\right)}. \tag{2.13}\] **Remark 2.10**.: * _Notice that (_2.13_) is sharp.
Indeed, if we assume that_ \(T\) _is a normal operator, we get the same quantity_ \(\|T\|\) _on both sides of the inequality._ * _Of course, the inequality in Corollary_ 2.3 _is stronger than the inequality in Corollary_ 2.2_, since we have (see, e.g.,_ _[_14_, Proposition 1.4]__)_ \[\omega\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)\leq\left\|\left|T\right|^{4}+\left|T^{*}\right|^{4}\right\|^{\frac{1}{2}}.\] * _We explain the advantage of the bound in (_2.13_). If we let_ \(T=\left[\begin{array}{cc}1&0\\ 4&1\end{array}\right],\) _we can see that_ \(\omega(T)=3\)_. Moreover,_ \[\frac{1}{2}\|\ |T|+\left|T^{*}\right|\|\approx 3.1305,\text{ and }\frac{1}{2}\sqrt{\sqrt{2}\omega\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega\left(T^{2}\right)}\approx 3.00956,\] _which shows that (_2.13_) is sharper than (_1.2_), in this example. However, if we let_ \(T=\left[\begin{array}{cc}0&3\\ 0&0\end{array}\right]\)_, then (_1.2_) is sharper than (_2.13_), as we have_ \(\omega(T)\approx 1.5,\) \[\frac{1}{2}\|\ |T|+\left|T^{*}\right|\ \|=1.5,\text{ and }\frac{1}{2}\sqrt{\sqrt{2}\omega\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega\left(T^{2}\right)}\approx 1.78381.\] Letting \(T=\left[\begin{array}{cc}O&A\\ B^{*}&O\end{array}\right]\) in Corollary 2.3, we obtain the following upper bound for the numerical radius of the operator matrix \(\left[\begin{array}{cc}O&A\\ B^{*}&O\end{array}\right].\) **Corollary 2.4**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega^{2}\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\] \[\leq\frac{\sqrt{2}}{4}\max\left\{\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}|B^{*}|^{2}\right),\omega\left(\left|A\right|^{2}+\mathrm{i}|B|^{2}\right)\right\}+\frac{1}{2}\max\left\{\omega\left(AB^{*}\right),\omega\left(B^{*}A\right)\right\}.\] **Remark 2.11**.: * _The inequality in Corollary_ 2.4 _is sharp.
Indeed, if we let_ \(B^{*}=A\) _be a normal operator, we get_ \(\left\|A\right\|^{2}\) _on both sides of the inequality._ * _Among the sharpest upper bounds for_ \(\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\) _is_ \(\frac{\left\|A\right\|+\left\|B\right\|}{2},\) _as we see from Lemma_ 1.1_. In Corollary_ 2.4_, we have found a new upper bound. We give two examples to show that neither bound is uniformly better. For, let_ \[A=\left[\begin{array}{cc}5&0\\ 2&5\end{array}\right]\text{ and }B=\left[\begin{array}{cc}1&0\\ 1&3\end{array}\right].\] _Then_ \[\omega^{2}\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\approx 20.078,\left(\frac{\left\|A\right\|+\left\|B \right\|}{2}\right)^{2}\approx 21.5231,\] _and_ \[\frac{\sqrt{2}}{4}\max\left\{\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}|B ^{*}|^{2}\right),\omega\left(\left|A\right|^{2}+\mathrm{i}|B|^{2}\right) \right\}+\frac{1}{2}\max\left\{\omega\left(AB^{*}\right),\omega\left(B^{*}A \right)\right\}\approx 22.4192.\] _On the other hand, if_ \[A=\left[\begin{array}{cc}5&0\\ 1&2\end{array}\right]\text{ and }B=\left[\begin{array}{cc}5&4\\ 4&0\end{array}\right],\] _then_ \[\omega^{2}\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\approx 36.25,\left(\frac{\left\|A\right\|+\left\|B \right\|}{2}\right)^{2}\approx 38.0298,\] _and_ \[\frac{\sqrt{2}}{4}\max\left\{\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}|B ^{*}|^{2}\right),\omega\left(\left|A\right|^{2}+\mathrm{i}|B|^{2}\right) \right\}+\frac{1}{2}\max\left\{\omega\left(AB^{*}\right),\omega\left(B^{*}A \right)\right\}\approx 37.7279.\] **Proposition 2.1**.: _Let \(T\in\mathbb{B}\left(\mathbb{H}\right)\). 
Then_ \[\left\|\Re T\right\|^{2}\leq\frac{1}{4}\sqrt{\sqrt{2}\omega^{2}\left(T\right)\omega\left(\left|T\right|^{2}+\mathrm{i}|T^{*}|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}+\frac{1}{2}\omega^{2}\left(T\right).\] Proof.: It follows from the inequality (2.9) that \[\left|\left\langle\left(A+B\right)x,x\right\rangle\right|^{2}\leq\sqrt{\sqrt{\omega^{4}\left(A\right)+\omega^{4}\left(B\right)}\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}+2\left|\left\langle Ax,x\right\rangle\right|\left|\left\langle Bx,x\right\rangle\right|\] \[\leq\sqrt{\sqrt{\omega^{4}\left(A\right)+\omega^{4}\left(B\right)}\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}+2\omega\left(A\right)\omega\left(B\right),\] which implies \[\omega^{2}\left(A+B\right)\leq\sqrt{\sqrt{\omega^{4}\left(A\right)+\omega^{4}\left(B\right)}\ \omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)+2\omega\left(A\right)\omega\left(B\right)\omega\left(B^{*}A\right)}+2\omega\left(A\right)\omega\left(B\right). \tag{2.14}\] Setting \(A=T\) and \(B=T^{*}\) in (2.14), we get the desired result.
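The numerical examples in these remarks can be checked by direct computation. Below is a minimal sketch of our own (not part of the paper) that approximates the numerical radius via the identity \(\omega(T)=\sup_{\theta\in\mathbb{R}}\left\|\Re\left(\mathrm{e}^{\mathrm{i}\theta}T\right)\right\|\) and reproduces the two bounds (2.11) and (2.12) for the matrix \(T\) from Remark 2.9:

```python
import numpy as np

def op_norm(M):
    # usual operator norm = largest singular value
    return np.linalg.norm(M, 2)

def num_radius(T, steps=4000):
    # omega(T) = sup_theta || Re(e^{i theta} T) ||  (numerical approximation)
    thetas = np.linspace(0.0, 2 * np.pi, steps, endpoint=False)
    return max(
        op_norm((np.exp(1j * t) * T + (np.exp(1j * t) * T).conj().T) / 2)
        for t in thetas
    )

T = np.array([[-5.0, 1.0], [4.0, 4.0]])
bound_212 = 0.5 * op_norm(T.conj().T @ T + T @ T.conj().T)   # (2.12)
bound_211 = 0.5 * op_norm(T) ** 2 + 0.5 * num_radius(T @ T)  # (2.11)
print(round(bound_212, 4), round(bound_211, 4))  # 34.1478 37.4633
```

For this \(T\) the computation confirms the values quoted in Remark 2.9: here (2.12) gives the smaller, hence sharper, bound.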
**Remark 2.12**.: _Replacing \(T\) by \(\mathrm{e}^{\mathrm{i}\theta}T\) in Proposition 2.1, and using the fact that \(\omega(T)=\sup_{\theta\in\mathbb{R}}\left\|\mathfrak{R}\left(\mathrm{e}^{\mathrm{i}\theta}T\right)\right\|\) (see, e.g., [21, (2.3)]), we obtain Corollary 2.3._ Now, if we replace \(A\) by \(T\) and \(B\) by \(T^{*}\) in Theorem 2.4, and use the fact that \(\omega_{e}(T,T^{*})=\sqrt{2}\omega(T)\), we obtain the following, upon noting the equality \[\omega\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)=\omega\left(\left|T\right|^{2}-\mathrm{i}\left|T^{*}\right|^{2}\right).\] **Corollary 2.5**.: _Let \(T\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega^{2}\left(T\right)\leq\frac{1}{2}\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}.\] **Remark 2.13**.: * _The inequality in Corollary_ 2.5 _is sharp. Indeed, if we assume that_ \(T\) _is a normal operator, we get the same quantity_ \(\left\|T\right\|\) _on both sides of the inequality._ * _In Corollary_ 2.5_, we have found an upper bound for_ \(\omega^{2}(T)\)_. Here, we provide two examples to show that neither this new bound nor the celebrated bound from (_1.2_) is always better than the other. Indeed, if_ \(T=\left[\begin{array}{cc}1&2\\ 0&4\end{array}\right],\) _then_ \[\omega^{2}(T)\approx 18.5139,\left(\frac{1}{2}\left\|\,\left|T\right|+\left|T^{*}\right|\,\right\|\right)^{2}\approx 19.0656,\] _and_ \[\frac{1}{2}\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}\approx 18.7755.\] _This shows that, for this example, the bound found in Corollary_ 2.5 _is sharper than that in (_1.2_).
On the other hand, if we let_ \(T=\left[\begin{array}{cc}0&3\\ 4&2\end{array}\right],\) _then_ \[\omega^{2}(T)\approx 21.5301,\left(\frac{1}{2}\left\|\right.\left|T\right|+ \left|T^{*}\right|\right.\left\|\right)^{2}\approx 21.5301,\] _and_ \[\frac{1}{2}\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*} \right|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}\approx 21.5932.\] _._ 3. _In Corollary_ 2.2_, we have found another upper bound for_ \(\omega^{2}(T)\)_. If we let_ \(T=\left[\begin{array}{cc}1&1\\ 5&1\end{array}\right],\) _then it can be verified that_ \[\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right)\approx 19.7082,\text{ and }\frac{1}{2}\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*} \right|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}\approx 17.282,\] _showing that the bound we found in Corollary_ 2.5 _is sharper than that we found in Corollary_ 2.2_, for this example. However, letting_ \(T=\left[\begin{array}{cc}2&0\\ 1&5\end{array}\right]\) _implies_ \[\frac{1}{2}\|T\|^{2}+\frac{1}{2}\omega\left(T^{2}\right)\approx 25.8742,\text{ and }\frac{1}{2}\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*} \right|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}\approx 25.881.\] 4. _Letting_ \(T=\left[\begin{array}{cc}0&3\\ 0&0\end{array}\right]\) _verifies that_ \[\frac{1}{2}\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*} \right|^{2}\right)+2\omega^{2}\left(T\right)\omega\left(T^{2}\right)}\approx 4.5 \text{ and }\left(\frac{1}{2}\sqrt{\sqrt{2}\omega\left(\left|T\right|^{2}+ \mathrm{i}\left|T^{*}\right|^{2}\right)+2\omega\left(T^{2}\right)}\right)^{2} \approx 3.18198,\] _which shows that the bound we found in Corollary_ 2.3 _is sharper than the one found in Corollary_ 2.5_, in this case._ **Corollary 2.6**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). 
Then_ \[\omega^{2}\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\frac{1}{4}\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\\ +\frac{1}{4}\sqrt{\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\left\|\left|A^{*}\right|^{2}+\left|B^{*}\right|^{2}\right\|}.\] Proof.: From (2.10), we have \[\left|\left\langle\left(A+B\right)x,y\right\rangle\right|^{2}\] \[\leq\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}+2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\] \[\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\] \[\quad\quad+2\left|\left\langle Ax,y\right\rangle\right|\left|\left\langle Bx,y\right\rangle\right|\] \[\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\] \[\quad\quad+2\sqrt{\left\langle\left|A\right|x,x\right\rangle\left\langle\left|A^{*}\right|y,y\right\rangle}\sqrt{\left\langle\left|B\right|x,x\right\rangle\left\langle\left|B^{*}\right|y,y\right\rangle}\] (by the mixed Schwarz inequality) \[\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\] \[\quad\quad+\left\langle\left|A\right|x,x\right\rangle\left\langle\left|A^{*}\right|y,y\right\rangle+\left\langle\left|B\right|x,x\right\rangle\left\langle\left|B^{*}\right|y,y\right\rangle\] (by the
arithmetic-geometric mean inequality) \[\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\] \[\quad\quad+\sqrt{\left(\left\langle\left|A\right|x,x\right\rangle^{2}+\left\langle\left|B\right|x,x\right\rangle^{2}\right)\left(\left\langle\left|A^{*}\right|y,y\right\rangle^{2}+\left\langle\left|B^{*}\right|y,y\right\rangle^{2}\right)}\] (by the Cauchy-Schwarz inequality) \[\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}\] \[\quad\quad+\sqrt{\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\left\|\left|A^{*}\right|^{2}+\left|B^{*}\right|^{2}\right\|}.\] That is, \[\left|\left\langle\left(A+B\right)x,y\right\rangle\right|^{2}\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}+\sqrt{\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\left\|\left|A^{*}\right|^{2}+\left|B^{*}\right|^{2}\right\|},\] which implies \[\left\|A+B\right\|^{2}\leq\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}+\sqrt{\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\left\|\left|A^{*}\right|^{2}+\left|B^{*}\right|^{2}\right\|}.\] Now, if we replace \(B\) by \(e^{\mathrm{i}\theta}B\), we obtain \[\frac{1}{4}\left\|A+e^{\mathrm{i}\theta}B\right\|^{2}\leq\frac{1}{4}\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}+\frac{1}{4}\sqrt{\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\left\|\left|A^{*}\right|^{2}+\left|B^{*}\right|^{2}\right\|}.\] Now taking supremum over \(\theta\in\mathbb{R}\), we conclude \[\omega^{2}\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\frac{1}{4}\sqrt{\omega\left(\left|A\right|^{2}+\mathrm{i}\left|B\right|^{2}\right)\omega\left(\left|A^{*}\right|^{2}+\mathrm{i}\left|B^{*}\right|^{2}\right)+\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)\omega\left(BA^{*}\right)}+\frac{1}{4}\sqrt{\left\|\left|A\right|^{2}+\left|B\right|^{2}\right\|\left\|\left|A^{*}\right|^{2}+\left|B^{*}\right|^{2}\right\|},\] as required. If we set \(A=T\) and \(B=T^{*}\) in Corollary 2.6, we get the following. **Corollary 2.7**.: _Let \(T\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega^{2}\left(T\right)\leq\frac{1}{4}\left(\sqrt{\omega^{2}\left(\left|T\right|^{2}+\mathrm{i}\left|T^{*}\right|^{2}\right)+\omega^{2}\left(\left|T\right|+\mathrm{i}\left|T^{*}\right|\right)\omega\left(T^{2}\right)}+\left\|\left|T\right|^{2}+\left|T^{*}\right|^{2}\right\|\right).\] The inequality in Corollary 2.7 is sharp. Indeed, if we assume that \(T\) is a normal operator, we get the same quantity \(\left\|T\right\|\) on both sides of the inequality. **Corollary 2.8**.: _Let \(T\in\mathbb{B}\left(\mathbb{H}\right)\).
Then_ \[\omega^{2}\left(T\right)\leq\left\|\left(\Re T,\Im T\right)\right\|_{e}^{2}\leq\omega^{2}\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right)\leq\frac{1}{2}\left\|T^{*}T+TT^{*}\right\|.\] Proof.: Very recently, it has been shown in [17, Theorem 2.3] that if \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\), then \[\left\|\left(A,B\right)\right\|_{e}\leq\sqrt{\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)}. \tag{2.15}\] On the other hand, we know that \[\omega_{e}\left(A,B\right)\leq\left\|\left(A,B\right)\right\|_{e}.\] Thus, \[\omega_{e}\left(A,B\right)\leq\left\|\left(A,B\right)\right\|_{e}\leq\sqrt{\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)}. \tag{2.16}\] Assume that \(T=\Re T+\mathrm{i}\Im T\) is the Cartesian decomposition of \(T\). If we replace \(A\) and \(B\) by \(\Re T\) and \(\Im T\), and use the fact that \[\omega\left(T\right)=\omega_{e}\left(\Re T,\Im T\right),\] we infer that \[\omega^{2}\left(T\right) \leq\left\|\left(\Re T,\Im T\right)\right\|_{e}^{2}\] \[\leq\omega^{2}\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right)\] \[\leq\left\|\left|\Re T\right|^{2}+\left|\Im T\right|^{2}\right\|\] \[=\left\|\left(\Re T\right)^{2}+\left(\Im T\right)^{2}\right\|\] \[=\frac{1}{2}\left\|T^{*}T+TT^{*}\right\|,\] as required. As a direct consequence of the first and the second inequality in Corollary 2.8, we have the following interesting result. **Corollary 2.9**.: _If \(T\in\mathbb{B}\left(\mathbb{H}\right)\) is an accretive-dissipative operator, then \(\left\|\left(\Re T,\Im T\right)\right\|_{e}=\omega\left(T\right)\)._ **Corollary 2.10**.: _If \(T\in\mathbb{B}\left(\mathbb{H}\right)\) is a normal operator, then \(\left\|\left(\Re T,\Im T\right)\right\|_{e}=\omega\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right)=\left\|T\right\|\)._ **Remark 2.14**.: 1.
_From (_2.16_), we have_ \[\sqrt{2}\omega\left(T\right)=\omega_{e}\left(T,T^{*}\right)\leq\left\|\left(T,T^{*}\right)\right\|_{e}\leq\omega\left(\left|T\right|+\mathrm{i}\left|T^{*}\right|\right),\] _i.e.,_ \[\omega\left(T\right)\leq\frac{\sqrt{2}}{2}\|\left(T,T^{*}\right)\|_{e}\leq\frac{\sqrt{2}}{2}\omega\left(\left|T\right|+\mathrm{i}\left|T^{*}\right|\right).\] _Notice that in_ _[_14_, Corollary 2.2]__, it is proved that_ \[\omega\left(T\right)\leq\frac{\sqrt{2}}{2}\omega\left(\left|T\right|+\mathrm{i}\left|T^{*}\right|\right).\] _Thus, we have shown a refinement of this inequality in terms of_ \(\|\left(\cdot,\cdot\right)\|_{e}\)_._ 2. _We have the following chain of inequalities_ \[\frac{1}{2}\left\|T\right\| \leq\omega\left(\left[\begin{matrix}O&\Re T\\ \Im T&O\end{matrix}\right]\right)\] \[\leq\frac{\sqrt{2}}{2}\|\left(\Re T,\Im T\right)\|_{e}\] \[\leq\frac{\sqrt{2}}{2}\omega\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right)\] \[\leq\frac{\sqrt{2}}{2}\Big{\|}\left(\Re T\right)^{2}+\left(\Im T\right)^{2}\Big{\|}^{\frac{1}{2}}\] \[\leq\frac{\sqrt{2}}{2}\sqrt{\left\|\Re T\right\|^{2}+\left\|\Im T\right\|^{2}}\] \[\leq\omega\left(T\right).\] _To prove this, we recall the following result, which has been shown recently in_ _[_17_, Theorem 2.1]___ \[\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\frac{\sqrt{2}}{2}\|(A,B)\|_{e}.\] _So, from (_2.15_), we infer that_ \[\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\frac{\sqrt{2}}{2}\|(A,B)\|_{e}\leq\frac{\sqrt{2}}{2}\sqrt{\omega\left(\left|A\right|+\mathrm{i}\left|B\right|\right)\omega\left(\left|A^{*}\right|+\mathrm{i}\left|B^{*}\right|\right)}.\] _Assume that_ \(T=\Re T+\mathrm{i}\Im T\) _is the Cartesian decomposition of_ \(T\)_.
Replacing_ \(A\) _and_ \(B\) _by_ \(\Re T\) _and_ \(\Im T\) _in the above inequality, we get_ \[\frac{1}{2}\left\|T\right\| =\frac{1}{2}\left\|\Re T+\mathrm{i}\Im T\right\|\] \[\leq\omega\left(\begin{bmatrix}O&\Re T\\ \mathrm{i}\Im T&O\end{bmatrix}\right)\quad\text{(by Lemma 1.1)}\] \[=\omega\left(\begin{bmatrix}O&\Re T\\ \Im T&O\end{bmatrix}\right)\] \[\leq\frac{\sqrt{2}}{2}\|(\Re T,\Im T)\|_{e}\] \[\leq\frac{\sqrt{2}}{2}\omega\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right).\] _On the other hand,_ \[\omega\left(\left|\Re T\right|+\mathrm{i}\left|\Im T\right|\right) \leq\left\|\left(\Re T\right)^{2}+\left(\Im T\right)^{2}\right\|^{\frac{1}{2}}\quad\text{(by [14, Proposition 1.4])}\] \[\leq\sqrt{\left\|\Re T\right\|^{2}+\left\|\Im T\right\|^{2}}\] \[\quad\text{(by the triangle inequality for the usual operator norm)}\] \[\leq\sqrt{2}\omega\left(T\right).\] _Combining the last two relations implies the desired chain of inequalities._ 3. _Of course, if_ \(T\) _is an accretive-dissipative operator, one can write_ \[\frac{\sqrt{2}}{2}\left\|T\right\| \leq\sqrt{2}\omega\left(\begin{bmatrix}O&\Re T\\ \Im T&O\end{bmatrix}\right)\] \[\leq\left\|(\Re T,\Im T)\right\|_{e}\] \[\leq\omega\left(T\right).\] _This provides a refinement of the inequality_ \(\|T\|\leq\sqrt{2}\omega(T)\)_, valid for accretive-dissipative operators, as shown in_ _[_14_]__._ We conclude with the following bound of \(\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\) in terms of \(\left\|(A,B)\right\|_{e}\). **Theorem 2.7**.: _Let \(A,B\in\mathbb{B}\left(\mathbb{H}\right)\). Then_ \[\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)\leq\left\|(A,B)\right\|_{e}-\frac{\left|\left\|A\right\|-\left\|B\right\|\right|}{2}.\] Proof.: Let \(x,y\in\mathbb{H}\) be two unit vectors.
One can easily see that \[\sqrt{\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}}\geq\left|\left\langle Ax,y\right\rangle\right|\quad\text{and}\quad\sqrt{\left|\left\langle Ax,y\right\rangle\right|^{2}+\left|\left\langle Bx,y\right\rangle\right|^{2}}\geq\left|\left\langle Bx,y\right\rangle\right|,\] which implies \[\max\left\{\left\|A\right\|,\left\|B\right\|\right\}\leq\left\|\left(A,B\right)\right\|_{e}.\] Notice that \[\omega\left(\begin{bmatrix}O&A\\ B^{*}&O\end{bmatrix}\right)+\frac{\left|\left\|A\right\|-\left\|B\right\|\right|}{2} \leq\frac{1}{2}\left(\left\|A\right\|+\left\|B\right\|\right)+\frac{\left|\left\|A\right\|-\left\|B\right\|\right|}{2}\] \[=\max\left\{\left\|A\right\|,\left\|B\right\|\right\}.\] Combining the last two inequalities implies the desired result. **Declarations.** * **Availability of data and materials**: Not applicable. * **Conflict of interest**: The authors declare that they have no conflict of interest. * **Funding**: Not applicable. * **Authors' contributions**: Authors declare that they have contributed equally to this paper. All authors have read and approved this version. * **Acknowledgments**: Not applicable.
2305.16663
GDA: Generative Data Augmentation Techniques for Relation Extraction Tasks
Relation extraction (RE) tasks show promising performance in extracting relations from two entities mentioned in sentences, given sufficient annotations available during training. Such annotations would be labor-intensive to obtain in practice. Existing work adopts data augmentation techniques to generate pseudo-annotated sentences beyond limited annotations. These techniques neither preserve the semantic consistency of the original sentences when rule-based augmentations are adopted, nor preserve the syntax structure of sentences when expressing relations using seq2seq models, resulting in less diverse augmentations. In this work, we propose a dedicated augmentation technique for relational texts, named GDA, which uses two complementary modules to preserve both semantic consistency and syntax structures. We adopt a generative formulation and design a multi-tasking solution to achieve synergies. Furthermore, GDA adopts entity hints as the prior knowledge of the generative model to augment diverse sentences. Experimental results in three datasets under a low-resource setting showed that GDA could bring {\em 2.0\%} F1 improvements compared with no augmentation technique. Source code and data are available.
Xuming Hu, Aiwei Liu, Zeqi Tan, Xin Zhang, Chenwei Zhang, Irwin King, Philip S. Yu
2023-05-26T06:21:01Z
http://arxiv.org/abs/2305.16663v2
# GDA: Generative Data Augmentation Techniques for Relation Extraction Tasks

###### Abstract

Relation extraction (RE) tasks show promising performance in extracting relations from two entities mentioned in sentences, given sufficient annotations available during training. Such annotations would be labor-intensive to obtain in practice. Existing work adopts data augmentation techniques to generate pseudo-annotated sentences beyond limited annotations. These techniques neither preserve the semantic consistency of the original sentences when rule-based augmentations are adopted, nor preserve the syntax structure of sentences when expressing relations using seq2seq models, resulting in less diverse augmentations. In this work, we propose a dedicated augmentation technique for relational texts, named GDA, which uses two complementary modules to preserve both semantic consistency and syntax structures. We adopt a generative formulation and design a multi-tasking solution to achieve synergies. Furthermore, GDA adopts entity hints as the prior knowledge of the generative model to augment diverse sentences. Experimental results in three datasets under a low-resource setting showed that GDA could bring 2.0% F1 improvements compared with no augmentation technique. Source code and data are available1. Footnote 1: [https://github.com/THU-BPM/GDA](https://github.com/THU-BPM/GDA)

## 1 Introduction

Relation Extraction (RE) aims to extract semantic relations between two entities mentioned in sentences and transform massive corpora into triplets in the form of (subject, relation, object). Neural relation extraction models show promising performance when high-quality annotated data is available (Zeng et al., 2017; Zhang et al., 2017; Peng et al., 2020). In practice, however, human annotations would be labor-intensive and time-consuming to obtain and hard to scale up to a large number of relations (Hu et al., 2020, 2021, 2021, 2022).
This motivates us to solicit data augmentation techniques to generate pseudo annotations. A classical effort devoted to data augmentation in NLP is adopting rule-based techniques, such as synonym replacement (Zhang et al., 2015; Cai et al., 2020), random deletion (Kobayashi, 2018; Wei and Zou, 2019), random swap (Min et al., 2020) and dependency tree morphing (Sahin and Steedman, 2018). However, these methods generate synthetic sentences without considering their semantic consistencies with the original sentence, and may twist semantics due to the neglection of syntactic structures. Other successful attempts on keeping the semantic consistency of the sentences are model-based techniques. The popular back translation method (Dong et al., 2017; Yu et al., 2018) generates synthetic parallel sentences using a translation model to translate monolingual sentences from the target language to the source language. However, it works exclusively on sentence-level tasks like text classification and translation, which is not designed to handle fine-grained semantics in entity-level tasks like relation extraction. Bayer et al. (2022) design a specific method for RE tasks by fine-tuning GPT-2 to generate sentences for specific relation types. However, it cannot be used in practice because the model generates less diverse sentences - it includes similar entities and identical relational expressions under the same relation. To keep the generated sentences diverse while semantically consistent with original sentences, we propose a relational text augmentation technique named GDA. As illustrated in Figure 1, we adopt the multi-task learning framework with one shared encoder and two decoders that are complementary with each other: One decoder aims to predict the original sentence by restructuring words in the syntactic structure, which can maintain the semantics of the original sentence and ensure the model has the ability to generate semantically consistent target sentence. 
However, restructuring the syntactic structure of the original sentence inevitably breaks the coherence. Therefore, another decoder preserves and approximates the syntax patterns of the original sentence by generating the target sentence with a similar syntax structure drawn from the existing data. This decoder can not only keep the target sentences coherent but more importantly, ensure that the model could maintain the original syntax pattern when generating pseudo sentences. Therefore, different patterns under the same relation can be preserved, instead of predicting the same syntax pattern due to relational inductive biases Sun et al. (2021), thereby increasing the diversity of augmented sentences. We further adopt an entity in the target sentence as a hint to the input of that decoder, which can serve as prior knowledge to control the content of generated sentences. During inference, we could generate diverse sentences by taking a variety of different entity hints and origin sentences with various syntax patterns as input. To summarize, the main contributions of this work are as follows: * We study the task that focuses on the synergy between syntax and semantic preserving during data augmentation and propose a relational text augmentation technique GDA. * We adopt GDA which leverages the multi-task learning framework to generate semantically consistent, coherent, and diverse augmented sentences for RE task. Furthermore, entity hints from target sentences are served to guide the generation of diverse sentences. * We validate the effectiveness of GDA on three public RE datasets and low-resource RE settings compared to other competitive baselines. ## 2 Related Work Data augmentation techniques have been widely used to improve the performance of models in the NLP tasks. The existing methods could be divided mainly into three categories: Rule-based techniques, Example interpolation techniques, and Model-based techniques Feng et al. (2021). 
Figure 1: Overview of the proposed relational text augmentation technique with pattern approximation: GDA. We highlight the entities and pattern in the sentences. We define the pattern as the dependency parsing path between two entities.

**Rule-Based Techniques** Rule-based techniques adopt simple transform methods. Wei and Zou (2019) propose to manipulate some words in the original sentences, such as random swap, insertion, and deletion. Sahin and Steedman (2018) propose to swap or delete children of the same parent in the dependency tree, which could benefit the original sentence with case marking. Chen et al. (2020) construct a graph from the original sentence pair labels and augment sentences by directly inferring labels with the transitivity property. **Example Interpolation Techniques** Example interpolation techniques such as MixText Chen et al. (2020), Ex2 Lee et al. (2021), and BackMix Jin et al. (2022) aim to interpolate the embeddings and labels of two or more sentences. Guo et al. (2020) proposes SeqMix for sequence translation tasks in two forms: the hard selection method picks one of the two sequences at each binary mask position, while the soft selection method softly interpolates candidate sequences with a coefficient. The soft selection method also connects to existing techniques such as SwitchOut Wang et al. (2018) and word dropout Sennrich et al. (2016). **Model-Based Techniques** Model-based techniques include back translation Sennrich et al. (2016), which could be used to train a question answering model Yu et al. (2018) or transfer sequences from a high-resource language to a low-resource language Xia et al. (2019). Hou et al. (2018) introduce a sequence to sequence model to generate diversely augmented data, which could improve the dialogue language understanding task. Kobayashi (2018); Gao et al. (2019) propose the contextualized word replacement method to augment sentences. Anaby-Tavor et al. (2020); Li et al. (2022); Bayer et al.
(2022) adopt language models conditioned on sentence-level tags to modify original sentences exclusively for classification tasks. Some techniques combine several simple data augmentation methods Ratner et al. (2017); Ren et al. (2021) or add a human in the loop Kaushik et al. (2019, 2020). Other paraphrasing techniques Kumar et al. (2020); Huang and Chang (2021); Gangal et al. (2021) and rationale thinking methods Hu et al. (2023) also show the effectiveness of data augmentation.

Characteristics Comparison. We compare our GDA with other data augmentation techniques in terms of semantic consistency, coherence, and diversity in Table 1. Note that the example interpolation techniques do not generate specific sentences and only operate at the semantic embedding level. Therefore, we believe that they can only maintain semantic consistency. Compared with other SOTA data augmentation techniques, GDA uses a multi-task learning framework, which leverages two complementary seq2seq models to make the augmented sentences semantically consistent, coherent, and diverse simultaneously.

## 3 Proposed data augmentation technique

The proposed data augmentation technique GDA consists of two steps: 1) train a seq2seq generator; 2) generate pseudo sentences. As illustrated in Figure 1, the first step adopts T5 Raffel et al. (2020), consisting of encoder and decoder parts, as the seq2seq generator (\(\theta\)). The generator learns to convert between two sentences with the same relation label. Specifically, the encoder part takes a sentence \(X=(x_{1},x_{2},...,x_{T_{x}})\) as input, where named entities are recognized and marked in advance, and obtains the contextualized token embeddings \(H=(\textbf{h}_{1},\textbf{h}_{2},...,\textbf{h}_{T_{x}})\). The decoder part takes \(H\) as input and generates the target sentence \(Y=(y_{1},y_{2},...,y_{T_{y}})\) word by word by maximizing the conditional probability \(p(y_{i}|y_{<i},H,\theta)\).
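As a toy illustration of this training signal (a hypothetical sketch, not the paper's code): under teacher forcing, the decoder scores each gold target token given the prefix, and the loss is the negative sum of their log-probabilities.

```python
import math

def sequence_log_prob(step_probs):
    """Teacher-forced objective: sum of log p(y_i | y_<i, H) over the gold
    target tokens; training maximizes this quantity (equivalently, it
    minimizes the negative as a cross-entropy loss)."""
    return sum(math.log(p) for p in step_probs)

# Probabilities the decoder assigns to each gold token of a 3-token target.
loss = -sequence_log_prob([0.9, 0.5, 0.8])
```

Higher per-token probabilities on the gold sequence directly lower this loss, which is what drives the decoder toward the selected target sentence.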
The second step randomly selects an annotated sentence as input and leverages the trained generator to generate a pseudo sentence with entity markers and the same relation label. We now introduce the details of each step.

### Train a seq2seq generator

Training a seq2seq generator aims to obtain a generator that can augment annotated sentences into diverse, semantically consistent, and coherent pseudo sentences. In addition, the entities in the augmented pseudo sentences also need to be marked for the entity-level relation extraction task. To achieve this goal, the generator must convert between two sentences with the same relation label and emphasize contextualized relational signals at the entity level during the generation process. In practice, for each annotated sentence \(X=(x_{1},x_{2},...,x_{T_{x}})\), we adopt the labeling scheme introduced in Soares et al. (2019) and augment \(X\) with four reserved tokens: \([E_{sub}]\), \([/E_{sub}]\), \([E_{obj}]\), \([/E_{obj}]\), which mark the start and end positions of the subject and object named entities, respectively, and inject them into \(X\). For example, "A \([E_{sub}]\) surgeon \([/E_{sub}]\) carefully applies the \([E_{obj}]\) splints \([/E_{obj}]\) to the forearm.". Then we feed the updated \(X\) into the T5 encoder part to obtain contextualized token embeddings \(H\): \(H=\mathrm{Encoder}(X)\). A natural paradigm for the decoder part to generate the target sentence is to select another sentence in the training set that has the same relation as the input sentence. Bayer et al. (2022) fine-tuned GPT-2 to generate sentences for specific relation types. However, training multiple GPT-2s, one per relation type, is computationally expensive, and we observed no promising results.
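The marker-injection step above can be sketched as a small helper (hypothetical code for illustration; the helper name and span convention are assumptions, the four reserved tokens are those of the labeling scheme):

```python
def insert_entity_markers(tokens, subj_span, obj_span):
    """Wrap the subject and object token spans with the reserved marker
    tokens. Spans are (start, end) with `end` exclusive."""
    opens = {subj_span[0]: "[E_sub]", obj_span[0]: "[E_obj]"}
    closes = {subj_span[1]: "[/E_sub]", obj_span[1]: "[/E_obj]"}
    out = []
    for i, tok in enumerate(tokens):
        if i in closes:
            out.append(closes[i])
        if i in opens:
            out.append(opens[i])
        out.append(tok)
    if len(tokens) in closes:  # entity ending at the last token
        out.append(closes[len(tokens)])
    return " ".join(out)

sentence = "A surgeon carefully applies the splints to the forearm .".split()
marked = insert_entity_markers(sentence, subj_span=(1, 2), obj_span=(5, 6))
```

The marked sentence is what gets fed to the encoder; apart from the four extra tokens, the input is unchanged.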
We attribute the reason to two aspects: 1) the diversity of generated sentences is not emphasized, resulting in the generation of sentences with similar patterns, and 2) the entity-level relation extraction task is not considered, resulting in missing entity information. In this paper, we propose to leverage the multi-task learning framework to address the above shortcomings, which performs two tasks: original sentence restructuring and original sentence pattern approximation. In practice, our framework consists of two seq2seq models that share the same encoder part but employ two decoder parts to complete the two tasks, respectively.

\begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{Methods} & \multicolumn{3}{c}{Characteristics} \\ \cline{3-5} & & Seman. & Coher. & Diver. \\ \hline Rule & Wei and Zou (2019) & ✓ & – & ✓ \\ Based & Chen et al. (2020) & ✓ & – & ✓ \\ \hline \multirow{3}{*}{\begin{tabular}{c} Example \\ Interpolation \\ \end{tabular} } & Chen et al. (2020) & ✓ & – & – \\ \cline{2-5} & Lee et al. (2021) & ✓ & – & – \\ \cline{2-5} & Jin et al. (2022) & ✓ & – & – \\ \hline \multirow{3}{*}{ \begin{tabular}{c} Model \\ Based \\ \end{tabular} } & Gao et al. (2019) & ✓ & – & – \\ \cline{2-5} & Anaby-Tavor et al. (2020) & ✓ & ✓ & – \\ \cline{2-5} & Papanikolaou and Pierleoni (2020) & ✓ & ✓ & – \\ \cline{2-5} & Bayer et al. (2022) & ✓ & ✓ & – \\ \cline{2-5} & **GDA (Ours)** & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} Table 1: Characteristics comparison between different categories of techniques. "Seman." means "semantic consistency", "Coher." means "coherence", and "Diver." means "diversity".

Original sentence restructuring. The original sentence restructuring task aims to improve the ability of the model to generate semantically consistent sentences.
As illustrated in Figure 1, the target generated sentence is just the restructured original sentence \(X^{\prime}=(x_{1}^{\prime},x_{2}^{\prime},...,x_{T_{x}}^{\prime})\) that has the same length and words as the original sentence. We adopt the pre-ordering rules proposed by Wang et al. (2007) in machine translation. These rules modify the syntactic parse tree obtained from the original sentence and permute the words accordingly. The restructured sentence thus changes the surface word order without changing the semantics of the original sentence. Furthermore, since the entities are not changed, it is easy to mark the positions of the entities (e.g., the marked sentence "A \([E_{sub}]\) surgeon \([/E_{sub}]\) carefully applies the \([E_{obj}]\) splints \([/E_{obj}]\) to the forearm." and its restructured counterpart).

Original sentence pattern approximation. The pattern approximation task selects as targets annotated sentences with the same relation label whose pattern (the dependency parsing path between the two entities) lies within a Levenshtein (Lev) distance \(\lambda\) of the original sentence's pattern, where \(\lambda\) is a hyperparameter. In this way, the decoder network is employed to predict the pattern approximation target sentence \(Y=(y_{1},y_{2},...,y_{T_{y}})\) by maximizing \(p(Y|H,\theta_{P})\):

\[\mathcal{L}_{\theta_{P}}=\sum_{n=1}^{N}\sum_{i=1}^{T_{y}}\log p\left(y_{i}^{(n)}\mid y_{<i}^{(n)},H^{(n)},\theta_{P}\right), \tag{2}\]

where \(\theta_{P}\) denotes the parameters of the decoder part for the original sentence pattern approximation. \(N\) is the number of sentences for all outputs that satisfy the Lev distance less than \(3\).

Entity-level sentence generation. Furthermore, to generate more controllable entity-level sentences and help the generator better mark entities in the augmented sentences, we add a subject or object **Entity** \((E)\) from the target output sentence to the input embedding of the decoder as a hint. For example, in Figure 1, we add **winemaker** or **grapes** to the input of the decoder part \(\theta_{P}\), which helps derive entity-oriented controllable sentences and increases the diversity of generated sentences by adopting different entity hints.
Therefore, we finalize the loss function of Eq. 2 as:

\[\mathcal{L}_{\theta_{P}}=\sum_{n=1}^{N}\sum_{i=1}^{T_{y}}\log p\left(y_{i}^{(n)}\mid y_{<i}^{(n)},E^{(n)},H^{(n)},\theta_{P}\right). \tag{3}\]

The overall loss function of multi-task learning is the sum of the log probabilities of the original sentence restructuring and pattern approximation tasks:

\[\begin{split}\mathcal{L}_{\theta}=&\sum_{n=1}^{N}\sum_{i=1}^{T_{y}}\log p\left(Y_{i}^{(n)}\mid Y_{<i}^{(n)},E^{(n)},H^{(n)},\theta\right)\\ &+\sum_{m=1}^{M}\sum_{i=1}^{T_{x}}\log p\left(X_{i}^{\prime(m)}\mid X_{<i}^{\prime(m)},H^{(m)},\theta\right),\end{split} \tag{4}\]

where \(\theta=(\theta_{E},\theta_{R},\theta_{P})\) and \(\theta_{E}\) denotes the parameters of the encoder part. In practice, we adopt an iterative strategy to train the two complementary tasks. For each iteration, we first optimize the \((\theta_{E},\theta_{R})\) framework on the restructuring task for five epochs. The optimized \(\theta_{E}\) is then employed as the initial \(\theta_{E}\) of the pattern approximation task. Next, we optimize the \((\theta_{E},\theta_{P})\) framework for five epochs on the pattern approximation task, and the updated \(\theta_{E}\) is used in the next iteration. Finally, \(\theta_{E}\) and \(\theta_{P}\) are adopted to generate augmented sentences.

### Generate pseudo sentences

After obtaining the trained seq2seq generator T5 \((\theta_{E},\theta_{P})\), which focuses on the reconstruction of diverse, semantically consistent, and coherent relational signals, we leverage the generator to generate entity-oriented pseudo sentences as augmented data. In practice, we randomly select from the annotated data a sentence \(X\) and one marked subject or object entity \(E\) under the relation label to which \(X\) belongs. Then we obtain the augmented sentence from \((X,E,\theta_{E},\theta_{P})\), where the subject and object entities (one of them being \(E\)) have been marked during the generation process.
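The two stages around the trained generator can be sketched end to end (a hypothetical sketch: `levenshtein` and `pattern_targets` mirror the training-time target selection with the Lev-distance threshold of 3 mentioned above, while `generate` is a stand-in for the trained \((\theta_{E},\theta_{P})\) model):

```python
import random

def levenshtein(a, b):
    """Edit distance between two pattern token sequences (classic DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def pattern_targets(src_pattern, candidates, max_dist=3):
    """Training-time target selection: keep same-relation sentences whose
    dependency-path pattern is within max_dist edits of the source pattern."""
    return [s for s, p in candidates if levenshtein(src_pattern, p) < max_dist]

def augment(annotated, entity_hints, generate, k, seed=0):
    """Inference-time loop: sample (sentence, entity hint) pairs per relation
    and emit k pseudo sentences carrying the same relation label."""
    rng = random.Random(seed)
    out = []
    for _ in range(k):
        rel = rng.choice(sorted(annotated))
        src = rng.choice(annotated[rel])
        hint = rng.choice(entity_hints[rel])
        out.append((generate(src, hint), rel))
    return out
```

Because the relation label is copied from the source sentence, every pseudo sentence is usable as a labeled training example without further annotation.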
The augmented sentences have the same relation label as the original sentences and have sufficient diversity, with different sentences and entity hints randomly sampled from the annotated data.

## 4 Experiments

We conduct extensive experiments on three public datasets and the low-resource RE setting to show the effectiveness of GDA and give a detailed analysis to show its advantages.

### Base Models and Baseline Introduction

We adopt two SOTA base models: (1) **SURE** Lu et al. (2022) converts sentences and relations in a way that effectively fills the gap between the formulations of the summarization and RE tasks. (2) **RE-DMP** Tian et al. (2022) leverages syntactic information to improve relation extraction by training a syntax-induced encoder on auto-parsed data through dependency masking. We adopt three types of baseline models. (1) **Rule-Based Techniques**: **EDA** Wei and Zou (2019) adopts synonym replacement, random insertion, random swap, and random deletion to augment the original sentences. **Paraphrase Graph** Chen et al. (2020) constructs a graph from the annotated sentences and creates augmented sentences by inferring labels from the original sentences using a transitivity property. (2) **Example Interpolation Techniques**: inspired by Mixup Zhang et al. (2018), **MixText** Chen et al. (2020) and **Ex2** Lee et al. (2021) aim to interpolate the embeddings and labels of two or more sentences. **BackMix** Jin et al. (2022) proposes a back-translation based method which softly mixes the multilingual augmented samples. (3) **Model-Based Techniques**: **Soft DA** Gao et al. (2019) replaces the one-hot representation of a word by a distribution over the vocabulary and calculates it based on contextual information. **LAMBADA** Anaby-Tavor et al. (2020) and **DARE** Papanikolaou and Pierleoni (2020) fine-tune multiple generative models, one for each relation type, to generate augmentations. **Text Gen** Bayer et al. (2022) proposes a sophisticated generation-based method that generates augmented data by incorporating new linguistic patterns.
### Datasets and Experimental Settings

Three classical datasets are used to evaluate our technique: the SemEval 2010 Task 8 dataset (**SemEval**) Hendrickx et al. (2010), the TAC Relation Extraction Dataset (**TACRED**) Zhang et al. (2017), and the revisited TAC Relation Extraction Dataset (**TACREV**) Alt et al. (2020). SemEval is a classical benchmark dataset for the relation extraction task, consisting of 19 relation types, with 7199/800/1864 relation mentions in the training/validation/test sets, respectively. TACRED is a large-scale crowd-sourced relation extraction benchmark dataset collected from all the prior TAC KBP shared tasks. TACREV found that the TACRED dataset contains about 6.62% noisily-labeled relation mentions and relabeled the validation and test sets. TACRED and TACREV consist of 42 relation types, with 75049/25763/18659 relation mentions in the training/validation/test sets. We train the T5-base model Raffel et al. (2020), initialized from its pre-trained parameters, on the annotated data and utilize the default T5 tokenizer with a maximum length of 512 to preprocess the data. We use AdamW with a learning rate of \(5e{-5}\) to optimize the cross-entropy loss. Both GDA and all baseline augmentation methods augment the annotated data by **3x** for a fair comparison. For the low-resource RE setting, we randomly sample 10%, 25%, and 50% of the training data and use them for all data augmentation techniques. All augmentation techniques can only be trained and applied on these sampled data.

### Main Results

Table 2 shows the average micro F1 results over three runs on the three RE datasets.
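For reference, micro F1 over relation labels can be computed as below (a hypothetical sketch; excluding a negative class such as `no_relation` from the scored predictions follows the usual TACRED scoring convention and is an assumption here):

```python
def micro_f1(gold, pred, negative_label="no_relation"):
    """Micro-averaged F1: pool true positives, predicted positives, and gold
    positives over all relation classes except the negative class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p and g != negative_label)
    pred_pos = sum(1 for p in pred if p != negative_label)
    gold_pos = sum(1 for g in gold if g != negative_label)
    if tp == 0:
        return 0.0
    precision = tp / pred_pos
    recall = tp / gold_pos
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights every mention equally, so frequent relations dominate the score, which is why it is the standard choice for these imbalanced RE benchmarks.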
All base models could gain F1 performance improvements from the augmented data when compared with the models that only adopt original training data, which demonstrates the effectiveness of data augmentation tech \begin{table} \begin{tabular}{l l l r r r r r r r r r r r r r r} \hline \hline \multirow{2}{*}{Methods / Datasets} & \multirow{2}{*}{PLMs} & \multirow{2}{*}{Para.} & \multicolumn{5}{c}{SemEval} & \multicolumn{5}{c}{TACRED} & \multicolumn{5}{c}{TACREV} & \multicolumn{1}{c}{AVG.} & \multicolumn{1}{c}{\(\Delta\)} \\ \cline{5-16} & & & 10\% & 25\% & 50\% & 100\% & 10\% & 25\% & 50\% & 100\% & 10\% & 25\% & 50\% & 100\% & \\ \hline **Base (SURE)**\(\dagger\) & BART-Large & 406M & 77.2 & 81.5 & 83.9 & 86.3 & 67.9 & 70.4 & 71.9 & 73.3 & 72.3 & 75.1 & 77.4 & 79.2 & 76.4 & – \\ +EDA\(\dagger\) & \(-\) & \(-\) & 77.9 & 82.0 & 84.4 & 86.7 & 68.6 & 71.0 & 72.2 & 73.8 & 72.7 & 76.0 & 77.9 & 76.9 & 0.5 \(\dagger\) \\ +Paraphrase Graph\(\dagger\) & \(-\) & \(-\) & 77.8 & 81.8 & 84.4 & 86.7 & 68.4 & 70.9 & 72.3 & 73.7 & 72.9 & 75.7 & 77.7 & 79.4 & 76.8 & 0.4 \(\dagger\) \\ +MixText\(\dagger\) & BERT-Base & 110M & 78.6 & 82.6 & 85.0 & 87.2 & 69.0 & 71.7 & 72.9 & 74.1 & 73.6 & 76.4 & 78.4 & 80.0 & 77.5 & 1.1 \(\dagger\) \\ +Ex2\(\dagger\) & T5-Base & 220M & 79.1 & 83.0 & 85.5 & 87.5 & 69.6 & 72.1 & 73.2 & 74.3 & 72.2 & 76.7 & 78.8 & 80.3 & 77.8 & 1.4 \(\dagger\) \\ +BackMix\(\dagger\) & mHART-Base & 140M & 78.7 & 82.5 & 85.2 & 87.7 & 69.2 & 71.8 & 72.8 & 74.0 & 73.9 & 76.3 & 78.2 & 80.0 & 77.5 & 1.1 \(\dagger\) \\ +Soft DA\(\dagger\) & BERT-Base & 110M & 78.5 & 82.4 & 85.1 & 87.0 & 68.9 & 71.7 & 72.8 & 74.0 & 73.5 & 76.5 & 78.4 & 79.9 & 77.4 & 1.0 \(\dagger\) \\ +LAMBAD\(\dagger\) & GPT-2 & 117M & 78.4 & 84.4 & 85.0 & 87.1 & 69.1 & 71.6 & 72.9 & 73.8 & 73.6 & 76.5 & 78.3 & 79.8 & 77.4 & 1.0 \(\dagger\) \\ +DARE\(\dagger\) & GPT-2 & 117M & 78.7 & 82.6 & 85.3 & 87.7 & 69.2 & 71.7 & 79.4 & 71.3 & 78.5 & 78.4 & 80.1 & 77.5 & 1.1 \(\dagger\) \\ +Text Gen\(\dagger\) & GPT-2-Medium 
& 345M & 79.0 & 83.2 & 85.7 & 87.7 & 69.7 & 71.9 & 73.4 & 74.4 & 74.2 & 76.8 & 78.6 & 80.4 & 77.8 & 1.4 \(\dagger\) \\ \hline +**GDA (TS-Base)** & T5-Base & 220M & **79.7** & **83.6** & **85.9** & **85.0** & **70.4** & **72.6** & **73.8** & **74.9** & **74.8** & **77.2** & **79.1** & **80.8** & **78.3** & **1.9 \(\dagger\)** \\ _ww Approximation_ & T5-Base & 220M & 78.8 & 82.7 & 85.2 & 87.4 & 69.2 & 71.9 & 73.0 & 74.5 & 74.1 & 76.9 & 78.6 & 80.4 & 77.7 & – \\ _w/o Restructuring_ & T5-Base & 220M & 79.1 & 82.9 & 85.4 & 87.5 & 69.4 & 71.9 & 73.2 & 74.6 & 74.4 & 77.0 & 78.8 & 80.5 & 77.9 & – \\ _w/o Two Tabs_ & T5-Base & 220M & 78.3 & 82.3 & 84.7 & 86.9 & 68.7 & 71.2 & 72.5 & 74.0 & 73.2 & 76.2 & 78.1 & 79.6 & 77.1 & – \\ \hline +**GDA (BART-Base)** & BART-Base & 140M & 79.2 & 83.2 & 85.6 & 87.8 & 69.7 & 72.3 & 73.3 & 74.5 & 74.4 & 76.8 & 78.7 & 80.5 & 78.0 & 1.6 \(\dagger\) \\ +**GDA (TS-Small)** & T5-Small & 60M & 79.0 & 82.9 & 85.4 & 87.6 & 69.5 & 72.1 & 72.9 & 74.1 & 74.1 & 76.5 & 78.4 & 80.2 & 77.7 & 1.3 \(\dagger\) \\ \hline **Base (RE-DMP)** & BERT-Large & 340M & 76.4 & 81.1 & 83.4 & 85.9 & 67.3 & 70.0 & 71.1 & 72.4 & 71.5 & 74.7 & 77.0 & 78.5 & 75.8 & – \\ +EDA\(\dagger\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \(-\) \\ +Paraphrase Graph\(\dagger\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\ +MixText\(\dagger\) & BERT-Base & 1104M & 77.8 & 82.1 & 83.9 & 86.4 & 86.7 & 70.9 & 71.4 & 72.6 & 73.5 & 73.1 & 76.2 & 78.0 & 79.5 & 77.2 & 1.4 \(\dagger\) \\ +Ex2\(\dagger\) & T5-Base & 220M & 78.3 & 82.6 & 84.6 & 87.0 & 69.1 & 71.4 & 72.6 & 73.5 & 73.1 & 76.2 & 7 niques in the RE task. For three RE datasets, Text Gen is considered the previous SOTA data augmentation technique. 
The proposed GDA technique consistently outperforms all baseline data augmentation techniques in F1 performance (Student's t-test, \(p<0.05\)). More specifically, compared to the previous SOTA, Text Gen, GDA on average achieves 0.5% higher F1 on SemEval, 0.5% higher F1 on TACRED, and 0.4% higher F1 on TACREV across various amounts of annotated data and base models. Considering the low-resource relation extraction setting when annotated data are limited, e.g. 50% for SemEval, TACRED, and TACREV, GDA achieves an average boost of 0.5% F1 compared to Text Gen. When less labeled data is available, 10% for SemEval, TACRED, and TACREV, the average F1 improvement is consistent and surprisingly increases to 0.8%. We attribute the consistent improvement of GDA to the diverse and semantically consistent generated sentences that are exploited: we bootstrap the relational signals of the augmented data via multi-task learning, which helps generate entity-oriented sentences for relation extraction tasks. To demonstrate the impact of different pre-trained language models (PLMs) on the quality of augmented data, we present the PLMs adopted by GDA and the baseline augmentation techniques and their corresponding parameter counts in Table 2. An exciting conclusion is that, although we use a PLM with fewer parameters than Text Gen (220M vs. 345M), our augmentation effect still improves over Text Gen by an astonishing 0.6%, and a new SOTA for the RE task has been achieved. Even though we adopt T5-Small (60M) in GDA, which has fewer parameters than BERT-Base and GPT-2 (\(\approx\) 110M), the augmented data can still bring competitive F1 improvements. More specifically, GDA (T5-Small) achieves F1 improvements of 0.9% and 1.1% on SURE and RE-DMP, respectively, which illustrates the effectiveness of GDA for data augmentation in the RE task.

### Ablation Study

We conduct an ablation study to show the effectiveness of the different modules of GDA on the test set.
GDA _w/o Restructuring_ is the proposed technique without the decoder part \(\theta_{R}\); it only uses the original sentence pattern approximation task to train the T5. GDA _w/o Approximation_ is the proposed technique without the decoder part \(\theta_{P}\) and the entity hint from the target sentence, and we use \(\theta_{R}\) for generation during both training and inference. GDA _w/o Two Tasks_ directly fine-tunes T5 on the training data, only requiring that the input sentence be from the same relation as the target sentence. A general conclusion from the ablation results in Table 2 is that all modules contribute positively to GDA. More specifically, without the multi-task learning framework, GDA _w/o Two Tasks_ brings 1.3% less F1 performance averaged over the three datasets. Similarly, compared with the restructuring task, the pattern approximation task brings a larger average improvement in F1 performance (0.6% vs. 0.8%), which also means that we need to focus more on the pattern approximation task when training T5.

### Generative Model Ablation Study

We additionally study the effect of removing the generative model on the augmentation effect; that is, we directly use the restructured original sentences and the pattern approximation target sentences as augmented sentences. From Table 3, we find that directly using restructured sentences and pattern approximation sentences as augmented data results in a 1.3% drop in F1 performance compared to GDA, which indicates the necessity of using T5-Base to generate the augmented sentences. These two augmentation strategies are effectively rule-based techniques. Compared with the other rule-based data augmentation techniques (EDA and Paraphrase Graph), they bring an average F1 improvement of 0.4%, which additionally illustrates the effectiveness of our modifications of the original sentences on the RE tasks.
### Performance on Various Augmentation Multiples

We vary the multiple of augmented data from 2x to 10x of the 10% training set to study the influence of the data augmentation techniques on the base models under low-resource scenarios. We choose the 10% SemEval and 10% TACREV training datasets and the base models SURE and RE-DMP, then present the results on the test set in Figure 3. We observe that the two base models gain more performance with ever-increasing augmented data, and GDA achieves consistently better F1 performance, with a clear margin, when compared with the baseline data augmentation techniques under various multiples of the augmented data. Especially for 10% TACREV, GDA brings an incredible 3% improvement in F1 performance with only 4x augmented data, which is even 0.2% better than adopting 25% (2.5x) of the training data directly.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Methods / Datasets & PLMs & SemEval & TACRED & TACREV \\ \hline SURE & BART-Large & 86.3 & 73.3 & 79.2 \\ +EDA & – & 86.7 & 73.8 & 79.6 \\ +Paraphrase Graph & – & 86.6 & 73.7 & 79.4 \\ +Ex2 & T5-Base & 87.5 & 74.3 & 80.3 \\ +Text Gen & GPT-2 Medium & 87.7 & 74.4 & 80.4 \\ +Restructured & – & 87.0 & 74.0 & 79.8 \\ +Pattern & – & 87.2 & 74.1 & 80.0 \\ +GDA & T5-Base & **88.0** & **74.9** & **80.8** \\ \hline \hline \end{tabular} \end{table} Table 3: We adopt SURE as the base model and use 100% training data over three datasets. We report F1 results on the test sets. "Restructured" and "Pattern" mean directly using restructured original sentences and pattern approximation target sentences as augmentations.

### Diversity Evaluation

We measure the diversity of augmented sentences through automatic and manual metrics. For the automatic metric, we introduce the Type-Token Ratio (TTR) (Tweedie and Baayen, 1998) to measure the ratio of the number of different words to the total number of words in the dependency path between two entities for each relation type.
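A minimal version of this computation might look like the following (hypothetical sketch; pooling the dependency-path words of all sentences for one relation type is an assumption for illustration):

```python
def type_token_ratio(path_tokens):
    """Type-Token Ratio: distinct words / total words over the pooled
    dependency paths between entity pairs (higher = more lexical variety)."""
    if not path_tokens:
        return 0.0
    return len(set(path_tokens)) / len(path_tokens)

# Pool the dependency-path words of all sentences for one relation type.
paths = ["was opened by".split(), "was founded by".split(), "consists of".split()]
pooled = [tok for path in paths for tok in path]
ttr = type_token_ratio(pooled)  # 6 distinct words out of 8 tokens -> 0.75
```

An augmentation method that keeps emitting the same high-frequency pattern drives this ratio down, which is exactly the failure mode the pattern approximation task is meant to avoid.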
Higher TTR (%) indicates more diversity in sentences. Besides that, we ask 5 annotators to give a score for the degree of diversity of the 30 generated sentences for each relation type, with a score range of 1-5. According to the annotation guideline in Appendix C, a higher score indicates the method can generate more diverse and grammatically correct sentences. We present the average scores for all relation types on three datasets in Table 5. Since the example interpolation techniques do not generate specific sentences, they are ignored. As a model-based augmentation technique, GDA obtains an 11.4% TTR and a 0.4 diversity performance boost on average compared to Text Gen, and can even reach a diversity capability similar to that of the rule-based methods. Furthermore, we give a detailed hyperparameter analysis in the Appendix.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Methods / Datasets} & \multicolumn{2}{c}{SemEval} & \multicolumn{2}{c}{TACRED} & \multicolumn{2}{c}{TACREV} \\ \cline{2-7} & TTR & Diver. & TTR & Diver. & TTR & Diver. \\ \hline EDA & 82.4 & 3.1 & **84.7** & 3.4 & 83.2 & 3.3 \\ Paraphrase Graph & 85.9 & 3.6 & 84.1 & 3.9 & 84.6 & 3.5 \\ LAMBADA & 72.6 & 2.3 & 76.2 & 2.2 & 78.4 & 2.2 \\ DARE & 75.3 & 2.4 & 75.8 & 2.7 & 74.7 & 2.5 \\ Text Gen & 74.8 & 3.5 & 72.1 & 3.7 & 76.3 & **3.6** \\ GDA (**T5-Base**) & **86.4** & **4.0** & 84.1 & **4.1** & **86.9** & **3.9** \\ GDA w/o _Approximation_ & 74.7 & 3.2 & 75.2 & 3.2 & 72.8 & 3.1 \\ GDA w/o Restructuring & 80.3 & 3.8 & 81.3 & 3.7 & 82.0 & 3.8 \\ \hline \hline \end{tabular} \end{table} Table 5: Diversity Evaluation on three datasets.

Figure 3: F1 results of the base model: SURE and RE-DMP with various multiples of the augmented data.

### Case Study

We give two cases in Table 4. GDA adopts the entity hint "program" and the input sentence to generate a controllable target sentence, while retaining the original pattern "was opened by" without changing the semantics. GDA _w/o Pattern Approximation_ converts the rare pattern "was opened by" to the high-frequency pattern "consists of" due to the inductive bias, which affects the diversity of augmented sentences. GDA _w/o Entity Hint_ generates uncontrollable entities, resulting in the same sentence being generated for a given relation, which affects the diversity of generated sentences.

Table 4: Case study. We highlight the entities and pattern in the original and generated sentences.

### Coherence Analysis

Compared to rule-based augmentation techniques, GDA conditionally generates pseudo sentences with entity hints, providing more coherent and reasonable sentences. We analyze the coherence of the augmented sentences through perplexity based on GPT-2 (Radford et al., 2019). Note that the example interpolation techniques interpolate the embeddings and labels of two or more sentences without generating specific sentences, so we did not compare these methods. From Table 6, GDA obtains the lowest average perplexity. Although Text Gen is also based on generative models, its augmented sentences are still not coherent enough due to the neglect of entity-level relational signals (entity hints) during the training process.
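The perplexity metric behind Table 6 can be illustrated with a toy computation (hypothetical sketch; the paper scores sentences with GPT-2, while here we only spell out the formula from per-token probabilities):

```python
import math

def perplexity(token_probs):
    """Exponential of the mean negative log-probability the language model
    assigns to each token of a sentence; lower means more fluent."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

For instance, a model that spreads probability uniformly over 8 candidate tokens at every step yields a perplexity of exactly 8, while sharper (more coherent) predictions push the score toward 1.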
Therefore, Text Gen is less natural in generating augmented sentences with entity annotations.

### Semantic Consistency Analysis

Unlike rule-based data augmentation techniques, which may change the semantics of the original sentence, GDA can better exploit relational signals: the target sentence during the training process comes from the restructured original sentence with the same relation label, so GDA can generate semantically consistent augmented sentences. In practice, we first train SURE on the 100% training datasets and then apply GDA to the test set to obtain augmented sentences. We feed 100 original sentences and 100 augmented sentences with the same relation labels into the trained SURE and obtain the output representations from the last dense layer. We apply t-SNE Van Der Maaten (2014) to these embeddings and plot the visualization of the 2D latent space. From Figure 4, we observe that the latent space representations of the augmented sentences closely surround those of the original sentences with the same relation labels, indicating that GDA augments sentences in a semantically consistent way. Conversely, sentences augmented with the rule-based method EDA appear as outliers, indicating semantic changes.

## 5 Conclusions and Future Works

In this paper, we propose a relational text augmentation technique, GDA, for RE tasks. Unlike conventional data augmentation techniques, our technique adopts a multi-task learning framework to generate diverse, coherent, and semantically consistent augmented sentences. We further adopt entity hints as prior knowledge for diverse generation. Experiments on three public datasets and low-resource settings show the effectiveness of GDA. For future research directions, we can explore more efficient pre-ordering and parsing methods, and apply our data augmentation methods to more NLP applications, such as semantic parsing Liu et al. (2022, 2023) and natural language inference Li et al. (2023, 2022).
## 6 Limitations

We would like to discuss our limitations from two perspectives: application-wise and technical-wise. Application-wise: GDA needs annotations to fine-tune T5, which requires more computing resources and manual labeling cost than the rule-based techniques. Technical-wise: our "original sentence restructuring" and "original sentence pattern approximation" tasks rely on the efficiency and accuracy of the pre-ordering rules Wang et al. (2007) and parsing methods Chen and Manning (2014). Although the current GDA shows effectiveness, we still need to find more efficient pre-ordering and parsing methods.

## 7 Acknowledgement

We thank the reviewers for their valuable comments. The work described here was partially supported by grants from the National Key Research and Development Program of China (No. 2018AAA0100204), from the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14222922, RGC GRF, No. 2151185), and by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.

\begin{table} \begin{tabular}{l c c c} \hline \hline Methods / Datasets & SemEval & TACRED & TACREV \\ \hline EDA & 8.24 & 9.18 & 8.33 \\ Paraphrase Graph & 7.44 & 7.88 & 7.01 \\ LAMBADA & 4.21 & 4.38 & 4.11 \\ DARE & 4.28 & 4.46 & 4.22 \\ Text Gen & 4.02 & 4.24 & 4.11 \\ GDA (T5-Base) & **3.97** & **4.21** & **4.05** \\ \hline Original & 3.88 & 4.09 & 3.91 \\ \hline \hline \end{tabular} \end{table} Table 6: Perplexity of the augmented sentences in three datasets. Original means the original sentences. Lower perplexity is better.

Figure 4: Latent space visualization of original and augmented sentences in the SemEval (left) and TACRED (right). The same relation labels use the same color.
2304.01036
Constraints on the in-medium nuclear interaction from chiral symmetry and Lattice-QCD
In this paper we discuss the combined effects on nuclear matter properties of the quark confinement mechanism in nucleon and of the chiral effective potential resulting from the spontaneous breaking of the chiral symmetry in nuclear matter. Based on the Nambu-Jona-Lasinio predictions, it is shown that the chiral potential acquires a specific scalar field cubic dependence, which contributes to the three-body interaction. We also discuss the constraints induced by Lattice-QCD on the model parameters governing the saturation properties. We introduce the term "QCD-connected parameters" for these quantities. We demonstrate that chiral symmetry and Lattice-QCD provide coherent constraints on the in-medium nuclear interaction, suggesting a fundamental origin of the saturation mechanism.
G. Chanfray, H. Hansen, J. Margueron
2023-04-03T14:34:23Z
http://arxiv.org/abs/2304.01036v1
# Constraints on the in-medium nuclear interaction from chiral symmetry and Lattice-QCD ###### Abstract In this paper we discuss the combined effects on nuclear matter properties of the quark confinement mechanism in nucleon and of the chiral effective potential resulting from the spontaneous breaking of the chiral symmetry in nuclear matter. Based on the Nambu-Jona-Lasinio predictions, it is shown that the chiral potential acquires a specific scalar field cubic dependence, which contributes to the three-body interaction. We also discuss the constraints induced by Lattice-QCD on the model parameters governing the saturation properties. We introduce the term "QCD-connected parameters" for these quantities. We demonstrate that chiral symmetry and Lattice-QCD provide coherent constraints on the in-medium nuclear interaction, suggesting a fundamental origin of the saturation mechanism. pacs: 24.85.+p 11.30.Rd 12.40.Yx 13.75.Cs 21.30.-x ## I Introduction Relativistic theories of nuclear matter initiated by Walecka and collaborators [1; 2] attract a lot of interest for, at least, two reasons: i) this type of approach provides a very economical saturation mechanism and ii) a spectacular well-known success in predicting the correct magnitude of the spin-orbit potential since nucleons move in an attractive background scalar field and in a repulsive vector background field which contribute in an additive way (see a recent discussion for this specific point in Ref. [3]). If the origin of the repulsive vector field can be safely identified as associated with the omega vector-meson exchange, the real nature of the attractive Lorentz scalar field has been a controversial subject since there is no sharp scalar resonance with a mass of about 500-700 MeV, which would lead to a simple interaction based on a scalar particle exchange. 
More fundamentally the question of the very nature of these background fields has to be elucidated; in other words, it is highly desirable to clarify their relationship with the QCD condensates, in particular the chiral quark condensate \(\langle\overline{q}q\rangle\), and more generally with the low-energy realization of chiral symmetry, which is spontaneously broken in the QCD vacuum and is expected to be progressively restored when the density increases. Indeed the microscopic origin of low-energy nuclear interaction properties is related to fundamental properties of the theory of the strong interaction (QCD) and should be implemented in the modeling of nuclear matter. To bridge the gap between relativistic theories of the Walecka type and approaches insisting on chiral symmetry, it has been proposed in Ref. [4] to identify the "nuclear physics" scalar sigma meson of the Walecka model at the origin of the nuclear binding, let us call it \(\sigma_{W}\), with the chiral invariant \(s=S-F_{\pi}\) field associated with the radial fluctuation of the chiral condensate \(S\) around the "chiral radius" \(F_{\pi}\), identified with the pion decay constant. In the present approach we take the point of view that the effective theory has to be formulated, as a starting point, in terms of the field \(W\) associated with the fluctuations of the chiral quark condensate and parameterized as \[W = \sigma+i\vec{\tau}\cdot\vec{\pi}\equiv S\,U\equiv\left(s\,+\,F_{ \pi}\right)U\equiv\left(\sigma_{W}\,+\,F_{\pi}\right)U \tag{1}\] \[\mbox{with}\qquad U(x)=e^{i\vec{\tau}\cdot\vec{\phi}(x)/F_{\pi}}.\] The scalar field \(\sigma\) (\(S\)) and pseudoscalar fields \(\vec{\pi}\) (\(\vec{\phi}\)), written in cartesian (polar) coordinates, appear as the dynamical degrees of freedom and may deviate from the vacuum value, \(\left\langle\sigma\right\rangle_{\mbox{\tiny vac}}=\left\langle S\right\rangle _{\mbox{\tiny vac}}=F_{\pi}\propto\left\langle\overline{q}q\right\rangle_{ \mbox{\tiny vac}}\).
The sigma and the pion, associated with the amplitude \(s\equiv\sigma_{W}\) and phase fluctuations \(\vec{\phi}\) of this condensate, are considered in our approach to be effective degrees of freedom. Their dynamics are governed by an effective chiral potential, \(V\left(\sigma,\vec{\pi}\right)\), having a typical Mexican hat shape associated with a broken (chiral) symmetry of the QCD vacuum. There is, however, a well-identified problem concerning the nuclear saturation with usual chiral effective theories [5; 6; 7; 8]: independently of the particular chiral model, in the nuclear medium the value of \(S\) (\(\equiv S_{\mbox{\tiny medium}}\)) will be different from the one in vacuum (\(\equiv S_{\mbox{\tiny vacuum}}\), the minimum of the vacuum effective potential represented by a "Mexican hat" potential). At \(S_{\mbox{\tiny medium}}\) the chiral potential has a smaller curvature: \(V^{\prime\prime}(S_{\mbox{\tiny medium}})<V^{\prime\prime}(S_{\mbox{\tiny vacuum}})\). This single effect results in the lowering of the sigma mass and destroys the stability, which is a problem for the applicability of such effective theories in the nuclear context. The effect can be associated with an \(s^{3}\) tadpole diagram generating attractive three-body forces destroying saturation even if the repulsive three-body force from the Walecka mechanism is present. The origin of this problem is most probably related to the fact that nucleons are not point particles, but in reality composite systems made of quarks. Hence the nucleon will react against the presence of the nuclear scalar field. This effect can be taken into account by introducing the nucleon response to the scalar field \(s\), \(\kappa_{\rm NS}=d^{2}M_{N}^{*}(s)/ds^{2}\) with the nucleon mass \(M_{N}^{*}(s)\) defined in Eq. (7), which is the central ingredient of the quark-meson coupling model (QMC), introduced in the original pioneering work of P.
Guichon [9] and successfully applied to finite nuclei with an explicit connection to the Skyrme force [10]. This effect, associated with the polarization of the quark substructure in the presence of the nuclear scalar field, will unavoidably generate three-body forces which may bring the needed repulsion. In practice this response, or more precisely the nucleon scalar susceptibility \(\kappa_{\rm NS}\), generates a non-linear coupling of the scalar field to the nucleon or equivalently a decrease of the scalar coupling constant with increasing density. Hence to achieve saturation, in a set of successive works devoted to the study of ordinary nuclear matter and neutron stars [11; 12; 13; 14; 15], we have complemented the relativistic chiral approach in such a way that the effect of the nucleon response is able to counterbalance the attractive chiral tadpole diagram to get good saturation properties, especially the correct curvature coefficient - the incompressibility modulus, which is an empirical parameter defined at saturation density. All these aforementioned approaches were based on a chiral effective potential of the simplest linear sigma model with a Mexican hat shape of the following form \[V_{\chi,{\rm L}\sigma{\rm M}}(s)=\frac{1}{2}\,M_{\sigma}^{2}s^{2}\,+\,\frac{1 }{2}\frac{M_{\sigma}^{2}-M_{\pi}^{2}}{F_{\pi}}\,s^{3}\,+\,\frac{1}{8}\,\frac{ M_{\sigma}^{2}-M_{\pi}^{2}}{F_{\pi}^{2}}\,s^{4}\,, \tag{2}\] which displays a strong cubic term, also referred to as the tadpole diagram [16; 7; 17]. Indeed in order to get a correct description of the saturation properties it systematically requires a value of the dimensionless nucleonic response parameter, defined as (see also Eq. (8)), \[C\equiv\frac{\kappa_{\rm NS}\,F_{\pi}^{2}}{2M_{N}},\] to be larger than one [11; 12; 13; 14; 15].
Such values are also required by the analysis of Lattice-QCD (LQCD) data on the chiral properties of the nucleon, with mass \(M_{N}\), scalar charge \(Q_{S}=\partial M_{N}/\partial m\), and chiral susceptibility \(\chi_{N}=\partial^{2}M_{N}/\partial m^{2}\)[18; 19; 20; 21] (\(m\) is the current quark mass governing the explicit chiral symmetry breaking). Moreover in a recent work based on a Bayesian analysis with lattice data as an input [22], we found that the response parameter is strongly constrained to a value \(C\sim 1.4\), very close to the value where the scalar susceptibility changes its sign: \(C=1.5\). The problem associated with this large value of \(C\) is that it seems impossible to find a realistic confining model for the nucleon able to generate \(C\) larger than one. For instance in the MIT bag model used in the QMC scheme, one has \(C_{\rm MIT}\simeq 0.5\). One possible reason for this discrepancy between models and phenomenological values of \(C\) lies in the use of the L\(\sigma\)M, which is probably too naive. Hence one should certainly use an enriched chiral effective potential from a model able to give a correct description of the low-energy realization of chiral symmetry in the hadronic world. A good, easily tractable candidate is the Nambu-Jona-Lasinio (NJL) model. Indeed in Ref. [23], referred to as [NJLCONF] (NJL plus confinement) in the following, an explicit construction of the background scalar field was performed in the NJL model using a bosonization technique based on an improved derivative expansion valid at low (space-like) momenta [24]. Various confining interactions have been incorporated (quark-diquark string interaction, linear and quadratic confining interaction) on top of the NJL model, which seems to be sufficient to generate saturation, although the response parameter \(C\) remains relatively small, on the order of \(C\sim 0.5\).
The reason is that, for a given scalar mass, the NJL chiral effective potential generates a significantly smaller attractive tadpole diagram than the simplistic L\(\sigma\)M. We will discuss this point in more detail and demonstrate that the repulsive three-body force generating saturation is not only determined by the nucleon response \(C\) but also by the cubic term of the NJL potential, hereafter described by the new parameter \(C_{\chi}\). The parameters \(C\) and \(C_{\chi}\) combine together in the three-body interaction. We will also demonstrate how a particular combination of \(C\) and \(C_{\chi}\) is constrained by lattice data [18; 19; 20; 21], which constitutes one of the main results of this paper. In this paper we mainly discuss the effect of the chiral effective potential, i.e., the contribution of the \(C_{\chi}\) parameter, on the nuclear matter equation of state and on the saturation mechanism, without explicitly specifying the underlying nucleon confinement model. As mentioned above, very simple confining models have already been presented in [NJLCONF], and in a longer forthcoming paper, referred to as [NJLFCM][25], we will explicitly introduce an effective Hamiltonian inspired by the field correlator method (FCM) developed by Y. Simonov and collaborators [26; 27; 28; 29; 30]. Modulo some ansatz prescription this approach allows us to generate simultaneously, at a semi-quantitative level, a confining interaction with long distance (\(r\gg T_{g}\)) behaviour \(V(r)=\sigma_{g}\,r\), where the string tension \(\sigma_{g}=0.18\) GeV\({}^{2}\), together with an equivalent NJL model with scalar interaction strength \(G_{1}=120\pi\sigma_{g}T_{g}^{4}/(4N_{c}N_{F})\sim 10\) GeV\({}^{-2}\) and cutoff \(\Lambda\sim 1/T_{g}\sim 600\) MeV, where the gluon correlation length, \(T_{g}=0.25\) to \(0.3\) fm [31], is itself related to the gluon condensate, \({\cal G}_{2}\), according to \(T_{g}^{2}=9\sigma_{g}/(\pi^{3}{\cal G}_{2})\).
Note that the string tension \(\sigma_{g}\) and the gluon correlation length \(T_{g}\) are thus the basic parameters of this approach.

## II The NJL chiral confining model

The general picture underlying our approach has been sketched in our previous papers (see, e.g., [NJLCONF]) and will be made precise in our forthcoming work [NJLFCM]. It can be summarized as follows: nuclear matter is made of nucleons, themselves built from quarks and gluons which look like Y-shaped strings generated by a non-perturbative confining force, with constituent quarks at the ends. These quarks acquire a large mass from the quark condensate, which is the order parameter associated with the spontaneous breaking of chiral symmetry in the QCD vacuum. When the density \(n\) of nuclear matter increases, the QCD vacuum is modified by the presence of the nucleons: the value of the quark condensate decreases and the chiral symmetry is progressively restored. Hence what is usually called "the nuclear medium" can be seen as the original "vacuum shifted" by a lower value of the order parameter. The mass of the constituent quarks coincides with the in-medium expectation value, \(M=\overline{\mathcal{S}}(n)\), of the chiral invariant scalar field \(\mathcal{S}\), associated with the radial fluctuation mode of the chiral condensate. We define an "effective" or "nuclear physics" scalar field \(s\) by rescaling the chiral invariant scalar field \(\mathcal{S}\), according to: \[\mathcal{S}\equiv\frac{M_{0}}{F_{\pi}}\,S\equiv\frac{M_{0}}{F_{\pi}}\,\left(s +F_{\pi}\right)\quad\rightarrow\quad\frac{\partial}{\partial s}=\frac{M_{0}}{ F_{\pi}}\,\frac{\partial}{\partial\mathcal{S}} \tag{3}\] where \(M_{0}\sim 350\) MeV is the constituent quark mass in vacuum: \(\overline{\mathcal{S}}(s=0)=M_{0}\). The vacuum expectation value of the "effective" scalar field, \(\overline{S}=F_{\pi}\), coincides by construction with the value of the pion decay constant \(F_{\pi}\). The details of this construction are given in Ref. [23].
The important point is that its fluctuating piece, i.e., the \(s\) field, has to be identified with the usual "nuclear physics sigma meson" of relativistic Walecka theories, \(\sigma_{W}\). The nucleon is assumed to be described by an underlying model where constituent quarks (or diquarks) move in a confining interaction. In the previous [NJLCONF] work, ad-hoc confining potentials have been used on top of the NJL model generating the chirally broken vacuum. In the forthcoming longer paper [NJLFCM] the shape of this effective confining potential and the parameters of the equivalent NJL model will be obtained simultaneously in a way inspired by the field correlator method (FCM) [26; 27; 28; 29; 30]. The nucleon mass will thus naturally depend on the scalar field whose expectation value, \(M=\overline{\mathcal{S}}(n)\), is associated with the in-medium constituent quark mass, namely: \[M_{N}^{*}(\mathcal{S})=M_{N}+\,G_{S}\,\left(\mathcal{S}-M_{0}\right)+3\, \frac{C_{N}}{M_{0}}\,\left(\mathcal{S}-M_{0}\right)^{2}+.... \tag{4}\] In passing we can notice that this approach is identical in spirit to the approach of Bentz and Thomas [7] but with a different underlying picture of the nucleon; in this latter paper the nucleon was constructed from the same NJL model as a bound quark-diquark state and the effect of confinement was taken into account through the presence of an infrared cutoff in the NJL loop integrals. We also used in our previous [NJLCONF] paper [23] a simple quark-diquark NJL model but with confinement incorporated through a string interaction between the color antitriplet diquark state and the color triplet quark state as in a heavy \(Q\overline{Q}\) meson.
The two dimensionless response parameters, \(G_{S}\), which can be seen as the scalar number of quarks in the nucleon, and the susceptibility parameter \(C_{N}\), only depend on the constituent quark mass and on the confining force, i.e., the confinement mechanism: \[G_{S}=\left(\frac{\partial M_{N}^{*}(\mathcal{S})}{\partial\mathcal{S}}\right) _{\mathcal{S}=M_{0}},\qquad C_{N}=\frac{M_{0}}{6}\left(\frac{\partial^{2}M_{N}^ {*}(\mathcal{S})}{\partial\mathcal{S}^{2}}\right)_{\mathcal{S}=M_{0}}. \tag{5}\] One important purpose of the present paper is to obtain phenomenological constraints on these two fundamental parameters that we will call "QCD-connected parameters", whereas our forthcoming paper [25] will provide a model calculation of these parameters in terms of \(\sigma_{g}\) and \(T_{g}\) within the FCM approach.

### The NJL chiral effective potential

In the following, we connect the expansion (4) of the nucleon mass to the previously published expansion in terms of the effective "nuclear physics" scalar field \(s\)[4; 11; 12; 13; 14; 15; 22], defined as: \[s=\frac{F_{\pi}}{M_{0}}\left(\mathcal{S}-M_{0}\right)\,. \tag{6}\] We have the following expansion of the nucleon mass: \[M_{N}^{*}(s) = M_{N}+g_{S}s+\frac{1}{2}\kappa_{\rm NS}s^{2}+{\cal O}(s^{3})\,=\,M _{N}\left(1+\frac{g_{S}F_{\pi}}{M_{N}}\frac{s}{F_{\pi}}+C\left(\frac{s}{F_{\pi}} \right)^{2}+\ldots\right)\,, \tag{7}\] \[{\rm with:}\qquad g_{S}=\frac{M_{0}}{F_{\pi}}G_{S}\,,\qquad C\equiv \frac{\kappa_{\rm NS}F_{\pi}^{2}}{2M_{N}}=\frac{3M_{0}}{M_{N}}C_{N}\,. \tag{8}\] Consequently the in-medium nucleon mass mainly depends on two effective dimensionless QCD-connected parameters, the scalar nucleon coupling constant, \(g_{S}\), and the dimensionless scalar nucleon susceptibility, \(C\equiv\kappa_{\rm NS}\,F_{\pi}^{2}/2M_{N}\), which embeds the influence of the internal nucleon structure or, said differently, the response of the nucleon to the nuclear scalar field.
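The mapping between the two sets of parameters can be verified numerically: substituting \(\mathcal{S}=M_{0}+(M_{0}/F_{\pi})\,s\) from Eq. (6) into the expansion (4) and differentiating reproduces the relations (8). A minimal Python sketch, where the values chosen for \(G_{S}\) and \(C_{N}\) are illustrative placeholders (not fits):

```python
# Numerical check of Eq. (8): g_S = (M0/F_pi) G_S and C = 3 (M0/M_N) C_N,
# obtained by inserting S = M0 + (M0/F_pi) s [Eq. (6)] into the expansion (4).
# G_S and C_N below are hypothetical round numbers, for illustration only.
M_N, M0, F_pi = 0.939, 0.3567, 0.0919   # GeV
G_S, C_N = 10.0, 0.4                    # illustrative response parameters

def MN_of_S(S):
    # Eq. (4), truncated at quadratic order in (S - M0)
    return M_N + G_S*(S - M0) + 3.0*C_N/M0*(S - M0)**2

def MN_of_s(s):
    # change of variable, Eq. (6): S = M0 + (M0/F_pi) s
    return MN_of_S(M0 + M0/F_pi*s)

h = 1e-3  # the truncated mass is quadratic, so finite differences are exact
g_S = (MN_of_s(h) - MN_of_s(-h)) / (2*h)                     # dM*/ds at s=0
kappa_NS = (MN_of_s(h) - 2*MN_of_s(0.0) + MN_of_s(-h)) / h**2  # d2M*/ds2
C = kappa_NS * F_pi**2 / (2*M_N)

assert abs(g_S - M0/F_pi*G_S) < 1e-6*abs(g_S)   # first relation of Eq. (8)
assert abs(C - 3*M0/M_N*C_N) < 1e-6*abs(C)      # second relation of Eq. (8)
print(g_S, C)
```

This also makes explicit why \(C\) and \(C_{N}\) are numerically close: they differ only by the factor \(3M_{0}/M_{N}\simeq 1.1\).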
Notice that the response parameter \(C\) used in our previous work is numerically close to the QCD-connected susceptibility parameter \(C_{N}\). Its presence generates a decreasing density dependence of the in-medium scalar coupling constant, \(g_{S}^{*}(s)=\partial M_{N}^{*}/\partial s=g_{S}+\kappa_{\rm NS}\,s+\ldots\), corresponding to a progressive decoupling of the nucleon from the chiral condensate, which is an essential ingredient of the saturation mechanism (recall that \(s\) is a negative quantity varying from zero in the vacuum to \(-F_{\pi}\) at full chiral restoration). The nuclear matter energy density as a functional of the scalar field \({\cal S}\) or the \(s\) field is given by \[\varepsilon_{0}=\int\,\frac{4\,d^{3}k}{(2\pi)^{3}}\,\Theta(p_{F}-k)\,\left( \sqrt{k^{2}+M_{N}^{*2}(s)}\,-\,M_{N,{\rm vac}}\right)\,+\,V_{\chi}(s)\,+\, \varepsilon_{\omega+\rho}\,+\,\varepsilon_{\rm Fock}\,+\,\varepsilon_{\rm pion -nucleon\,loops}, \tag{9}\] where only the scalar field contribution at the Hartree level together with the kinetic energy are explicitly written, while omega and rho meson exchanges, Fock terms and pion-nucleon loops (or correlation energy in the terminology of Ref. [12]) can be incorporated as well according to Refs. [12; 13; 14]. Note that \(V_{\chi}(s)\) is the chiral effective potential which is expressed in the L\(\sigma\)M by Eq. (2).
Let us now consider the case of the NJL model defined by the Lagrangian: \[{\cal L} = \overline{\psi}\left(i\,\gamma^{\mu}\partial_{\mu}\,-\,m\right) \,\psi\,+\,\frac{G_{1}}{2}\,\left[\left(\overline{\psi}\psi\right)^{2}\,+\, \left(\overline{\psi}\,i\gamma_{5}\vec{\tau}\,\psi\right)^{2}\right] \tag{10}\] \[{}-\frac{G_{2}}{2}\,\left[\left(\overline{\psi}\,\gamma^{\mu} \vec{\tau}\,\psi\right)^{2}\,+\,\left(\overline{\psi}\,\gamma^{\mu}\gamma_{5} \vec{\tau}\,\psi\right)^{2}\,+\,\left(\overline{\psi}\,\gamma^{\mu}\,\psi \right)^{2}\right].\] It depends on four parameters: the coupling constants \(G_{1}\) (scalar), \(G_{2}\) (vector), the current quark mass \(m\) and a (noncovariant) cutoff parameter \(\Lambda\). Three of these parameters (\(G_{1}\), \(m\), and \(\Lambda\)) are adjusted to reproduce the pion mass, the pion decay constant and the quark condensate. For \(G_{2}\) we consider different scenarios: \(G_{1}=G_{2}\) and \(G_{2}=0\). We refer the reader to [NJLCONF] and [NJLFCM] for more details. Using path integral techniques and after a chiral rotation of the quark field, it can be equivalently written in a semi-bosonized form involving a pion field \(\vec{\phi}\) embedded in the unitary operator \(U=\xi^{2}=exp(i\,\vec{\tau}\cdot\vec{\phi}(x)/F_{\pi})\), a scalar field, \({\cal S}\), a vector field, \(V^{\mu}\), and an axial-vector field, \(A^{\mu}\). It has the explicit form given in Eqs. (2, 7-11) of Ref. [23]. Subtracting the vacuum expectation values, the chiral effective potential can be expressed as: \[V_{\chi,{\rm NJL}}(s)=-2N_{c}N_{f}\left(I_{0}({\cal S})\,-\,I_{0}(M_{0})\right) \,+\,\frac{({\cal S}-m)^{2}-(M_{0}-m)^{2}}{2\,G_{1}}. \tag{11}\] The quantity, \(-2N_{c}N_{f}\,I_{0}({\cal S})\), is nothing but the total (in-medium) energy of the Dirac sea of constituent quarks with the NJL loop integral \(I_{0}({\cal S})\) given hereafter.
The vacuum constituent quark mass \(M_{0}\) corresponds to the minimum of the chiral effective potential, i.e., \(V_{\chi,{\rm NJL}}^{\prime}(s=0)=0\), where \(V^{\prime}\) is the derivative with respect to the scalar field \(s\). It is consequently the solution of the gap equation \[M_{0}=m\,+\,4N_{c}N_{f}M_{0}\,G_{1}\,I_{1}(M_{0}), \tag{12}\] where \(I_{1}(M_{0})\) is another NJL loop integral given in the set of equations below \[I_{0}({\cal S})=\int_{0}^{\Lambda}\frac{d{\bf p}}{(2\pi)^{3}}\,E _{p}({\cal S}),\quad I_{1}({\cal S})=\int_{0}^{\Lambda}\frac{d{\bf p}}{(2\pi)^ {3}}\,\frac{1}{2\,E_{p}({\cal S})},\] \[I_{2}({\cal S})=\int_{0}^{\Lambda}\frac{d{\bf p}}{(2\pi)^{3}}\, \frac{1}{4\,E_{p}^{3}({\cal S})},\quad J_{3}({\cal S})=\int_{0}^{\Lambda} \frac{d{\bf p}}{(2\pi)^{3}}\,\frac{3}{8\,E_{p}^{5}({\cal S})}\,, \tag{13}\] where \(E_{p}({\cal S})=\sqrt{{\cal S}^{2}+p^{2}}\).

### Effective chiral potential expanded in the \(s\) field

For a comparison with usual RMF models using the L\(\sigma\)M chiral effective potential of Eq. (2) or, equivalently, non-linear sigma couplings, we expand the effective potential to third order in \(s\) as: \[V_{\chi,{\rm NJL}}(s)=V_{\chi}(0)+V^{\prime}_{\chi}(0)\,s+\frac{1}{2}V^{\prime \prime}_{\chi}(0)\,s^{2}+\frac{1}{6}V^{\prime\prime\prime}_{\chi}(0)\,s^{3}+.... \tag{14}\] An explicit calculation of the derivatives of the potential yields \[V_{\chi,{\rm NJL}}(s)=\frac{1}{2}\,M_{\sigma}^{2}\,s^{2}\,+\,\frac{1}{2}\, \frac{M_{\sigma}^{2}-M_{\pi}^{2}}{F_{\pi}}\,s^{3}\left(1\,-\,C_{\chi,{\rm NJL} }\right)+..., \tag{15}\] where \(F_{\pi}\) is the pion decay constant and \(M_{\pi}=\sqrt{mM_{0}/G_{1}F_{\pi}^{2}}\), the canonical pion mass calculated in the bosonized NJL model.
The effective sigma mass \(M_{\sigma}\) (considering the axial-pion mixing) is defined as \[M_{\sigma}^{2}=4\,M_{0}^{2}\,\frac{f_{\pi}^{2}}{F_{\pi}^{2}}+\,M_{\pi}^{2}, \qquad\mbox{with:}\quad\ f_{\pi}^{2}=\frac{F_{\pi}^{2}}{1-4G_{2}F_{\pi}^{2}} \tag{16}\] (where the second relation is obtained in the NJL model [23]) and \(C_{\chi,{\rm NJL}}\) is a specific NJL parameter: \[C_{\chi,{\rm NJL}}=\frac{2}{3}\,\frac{M_{0}^{2}\,J_{3}(M_{0})}{I_{2}(M_{0})}. \tag{17}\] This form of the NJL chiral effective potential deviates from the original L\(\sigma\)M, see Eq. (2), through the presence of the model-dependent parameter \(C_{\chi,{\rm NJL}}\) whose net effect is to decrease the attractive cubic tadpole term of the L\(\sigma\)M. The use of this \(C_{\chi,{\rm NJL}}\) parameter is particularly convenient, since taking \(C_{\chi,{\rm NJL}}=1\) is equivalent to the absence of the tadpole diagram as in the case of the QMC model [9; 10]. In the absence of vector interaction (\(G_{2}=0\)), for typical values of the FCM parameters, \(\sigma_{g}=0.18\) GeV\({}^{2}\), \(T_{g}=0.286\) fm, one obtains \(G_{1}=12.514\) GeV\({}^{-2}\). The NJL cutoff behaves necessarily as \(\Lambda\sim 1/T_{g}\) but there is a certain arbitrariness in setting its precise value: we take \(\Lambda=0.604\) GeV. Taking \(m=5.8\) MeV, this enables us to obtain reasonable values for the pion decay constant, \(F_{\pi}=91.9\) MeV, the pion mass \(M_{\pi}=140\) MeV, and the quark condensate \(\langle\bar{q}q\rangle=-(241.1\,{\rm MeV})^{3}\). The resulting vacuum constituent quark mass, effective sigma mass and \(C_{\chi}\) parameter are \(M_{0}=356.7\) MeV, \(M_{\sigma}=716.4\) MeV and \(C_{\chi,{\rm NJL}}=0.488\). Fig. 1 shows that the approximate expansion (15) reproduces very well the exact NJL potential. Comparing the L\(\sigma\)M with the NJL scalar potential in Fig. 1, one sees that the attractive tadpole term is larger in the case of the L\(\sigma\)M.
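The numbers quoted above can be reproduced directly from the gap equation (12) and the loop integrals (13), which all admit closed forms for a sharp three-momentum cutoff. A minimal Python sketch using the quoted set \(G_{1}=12.514\) GeV\({}^{-2}\), \(\Lambda=0.604\) GeV, \(m=5.8\) MeV (variable names are ours); it recovers \(M_{0}\simeq 356.7\) MeV and \(C_{\chi}\simeq 0.488\) to within the rounding of the input parameters:

```python
import math

# Quoted NJL parameter set (G2 = 0): G1 in GeV^-2, Lambda and m in GeV
G1, LAM, m = 12.514, 0.604, 0.0058
NC, NF = 3, 2

def I1(M):
    # I1(M) = int_0^Lam d3p/(2pi)^3 1/(2 E_p), closed form, sharp cutoff
    E = math.sqrt(LAM**2 + M**2)
    return (LAM*E - M**2*math.log((LAM + E)/M)) / (8*math.pi**2)

def I2(M):
    # I2(M) = int_0^Lam d3p/(2pi)^3 1/(4 E_p^3)
    E = math.sqrt(LAM**2 + M**2)
    return (math.log((LAM + E)/M) - LAM/E) / (8*math.pi**2)

def J3(M):
    # J3(M) = int_0^Lam d3p/(2pi)^3 3/(8 E_p^5)
    E = math.sqrt(LAM**2 + M**2)
    return LAM**3 / (16*math.pi**2 * M**2 * E**3)

# Gap equation (12): M0 = m + 4 Nc Nf M0 G1 I1(M0), solved by fixed point
M0 = 0.3
for _ in range(200):
    M0 = m + 4*NC*NF*G1*M0*I1(M0)

# Eq. (17): C_chi = (2/3) M0^2 J3(M0) / I2(M0)
C_chi = (2.0/3.0) * M0**2 * J3(M0) / I2(M0)
print(f"M0 = {1e3*M0:.1f} MeV, C_chi = {C_chi:.3f}")
```

The fixed-point iteration converges because the broken-phase solution is the only attractive root for \(m\neq 0\) with these couplings.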
The effect of the parameter \(C_{\chi,{\rm NJL}}\) is then to reduce the attractive tadpole diagram and make the scalar potential more repulsive. Using another parameter set, \(G_{1}=7.705\) GeV\({}^{-2}\), \(\Lambda=0.740\) GeV and \(m=3.5\) MeV, compatible with the \(\pi-a_{1}\) mixing with \(G_{2}=G_{1}\) as suggested by the FCM [26; 27; 28; 29; 30], one obtains \(M_{0}=365.3\) MeV and a smaller value of \(C_{\chi,{\rm NJL}}=0.43\) but the reduction of the tadpole diagram is still significant. In the following, we set \(C_{\chi}\equiv C_{\chi,{\rm NJL}}\) and \(V_{\chi}\equiv V_{\chi,{\rm NJL}}\) for simplicity.

### Impact on nuclear matter properties

At the Hartree approximation (RMF), the scalar field minimizing the total energy is the solution of the following self-consistent equation of motion: \[V^{\prime}_{\chi}(s)=-g^{*}_{S}(s)n_{s}\qquad\mbox{with}\qquad n_{s}=4\int_{0} ^{k_{F}}\frac{d{\bf k}}{(2\pi)^{3}}\frac{M_{N}^{*}(s)}{\sqrt{M_{N}^{*2}(s)+k^{ 2}}}, \tag{18}\] where \(V^{\prime}_{\chi}(s)\) is the derivative of the Mexican hat chiral effective potential with respect to the scalar field \(s\). This equation constitutes an in-medium modified gap equation whose solution is controlled by the nucleonic scalar density \(n_{s}\).
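To make Eq. (18) concrete, it can be solved at cubic order using \(V^{\prime}_{\chi}(s)\) from Eq. (15) and \(g_{S}^{*}(s)=g_{S}+\kappa_{\rm NS}\,s\) from Eq. (7). In the sketch below the quoted values \(F_{\pi}=91.9\) MeV, \(M_{\pi}=140\) MeV, \(M_{\sigma}=716.4\) MeV, \(C_{\chi}=0.488\) and \(C\sim 1.4\) are taken from the text, while the coupling \(g_{S}=10\) and the scalar density \(n_{s}\simeq 0.15\) fm\(^{-3}\) are hypothetical round numbers chosen for illustration only:

```python
import math

# Values quoted in the text (GeV units)
F_pi, M_pi, M_sig = 0.0919, 0.140, 0.7164
C_chi, M_N, C = 0.488, 0.939, 1.4
# Hypothetical illustrative inputs (NOT fitted values from the paper):
g_S = 10.0                     # scalar coupling, assumed
n_s = 0.15 * 0.1973**3         # ~0.15 fm^-3 converted to GeV^3, assumed

kappa_NS = 2*M_N*C/F_pi**2     # Eq. (8)

def h(s):
    # residual of Eq. (18) at cubic order: V'(s) + g_S*(s) n_s = 0
    Vp = M_sig**2*s + 1.5*(M_sig**2 - M_pi**2)/F_pi*(1 - C_chi)*s**2
    return Vp + (g_S + kappa_NS*s)*n_s

# bracket the broken-phase root (h(0) > 0, h(-0.05) < 0) and bisect
a, b = -0.05, 0.0
for _ in range(100):
    mid = 0.5*(a + b)
    if h(a)*h(mid) <= 0:
        b = mid
    else:
        a = mid
s_bar = 0.5*(a + b)
print(f"s_bar = {1e3*s_bar:.1f} MeV, |s_bar|/F_pi = {abs(s_bar)/F_pi:.2f}")
```

With these assumed inputs one finds \(\bar{s}\) of a few tens of MeV in magnitude, i.e., \(|\bar{s}|/F_{\pi}\) well below one, consistent with a partial restoration of chiral symmetry at ordinary densities.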
To second order in \(s/F_{\pi}\) or equivalently to second order in the scalar density \(n_{s}\), the in-medium gap equation can be formally solved with the result: \[\overline{s} = -\frac{g_{S}}{M_{\sigma}^{2}}\,n_{s}\,+\,\frac{g_{S}}{M_{\sigma}^{4} }\,\left(\kappa_{\rm NS}\,-\,\frac{g_{S}\,V_{\chi}^{\prime\prime\prime}(0)}{2 \,M_{\sigma}^{2}}\right)\,n_{s}^{2} \tag{19}\] \[= -\,\frac{g_{S}}{M_{\sigma}^{2}}\,n_{s}\,+\,\frac{g_{S}}{M_{\sigma }^{4}}\,\left(\frac{2\,M_{N}}{F_{\pi}^{2}}\,C\,-\,\frac{3\,g_{S}}{2\,F_{\pi}} \,\frac{M_{\sigma}^{2}\,-\,M_{\pi}^{2}}{M_{\sigma}^{2}}\,(1\,-\,C_{\chi}) \right)\,n_{s}^{2}\] \[= -\,\frac{g_{S}}{M_{\sigma}^{2}}\,n_{s}\,+\,\frac{g_{S}^{2}}{M_{ \sigma}^{4}\,F_{\pi}}\,\left(2\,\tilde{C}_{s}\,-\,\frac{3}{2}\right)\,n_{s}^{2 }\qquad{\rm with}\qquad\tilde{C}_{s}\simeq\frac{M_{N}}{g_{S}\,F_{\pi}}\,C+\, \frac{3}{4}\,C_{\chi}.\] For a qualitative discussion, we have supposed \(M_{\pi}\ll M_{\sigma}\) to get the approximate expression \(\tilde{C}_{s}\). The scalar field contribution to the energy per nucleon is defined as \(E_{s}/A=V_{\chi}(s)/n+M_{N}^{*}(s)-M_{N}\). To leading order in density, its contribution is defined as \(E^{(2b)}\), which reads \[\frac{E^{(2b)}}{A}=-\,\frac{g_{S}^{2}}{M_{\sigma}^{2}}\,n_{s}\,+\,\frac{1}{2} \,\frac{g_{S}^{2}}{M_{\sigma}^{2}}\,\frac{n_{s}^{2}}{n}=-\,\frac{1}{2}\,\frac {g_{S}^{2}}{M_{\sigma}^{2}}\,n\,+\,\frac{1}{2}\,\frac{g_{S}^{2}}{M_{\sigma}^{ 2}}\,\frac{(n_{s}-n)^{2}}{n}. \tag{20}\] In the first expression of Eq. (20), we have separated the effect of the scalar self-energy of the nucleon and the contribution of the effective potential at leading order in the densities \(n\) and \(n_{s}\). In the second form, we display explicitly the term proportional to \((n_{s}-n)^{2}\), corresponding to an effective repulsive three-body force, which is exactly the Walecka saturation mechanism when the omega is added. 
This contribution, which survives for a point-like nucleon, is proportional to the square of the nucleon momentum. This is the so-called Z graph associated with the excitation of \(N\overline{N}\) pairs [32; 33]. To second order in density \(E_{s}/A\) provides an effective three-body contribution to the energy per nucleon: \[\frac{E^{(3b)}}{A}\simeq\frac{g_{S}^{2}}{2\,M_{\sigma}^{4}}\,\left(\kappa_{ \rm NS}\,-\,\frac{g_{S}\,V_{\chi}^{\prime\prime\prime}(0)}{3\,M_{\sigma}^{2}} \right)\,n_{s}^{2}\,=\,\frac{g_{S}^{3}}{2\,M_{\sigma}^{4}\,F_{\pi}}\,\left(2\, \tilde{C}_{3}\,-\,1\right)\,n_{s}^{2}\qquad{\rm with}\qquad\tilde{C}_{3} \simeq\frac{M_{N}}{g_{S}\,F_{\pi}}\,C\,+\,\frac{1}{2}\,C_{\chi}. \tag{21}\] We can recover Eq. (44) of Ref. [16] with \(C_{\chi}=0\).

Figure 1: Effective potential (in units of the string tension squared, \(\sigma_{g}^{2}\), with \(\sigma_{g}=0.18\) GeV\({}^{2}\)) plotted against \(|s|/F_{\pi}\) for the NJL model (full line), the L\(\sigma\)M (dashed line) and the original Walecka model that is limited to the quadratic term (dotted line), for a given effective sigma mass \(M_{\sigma}=716.4\) MeV. Also shown is the approximate form of the NJL potential when limited to the cubic term in the scalar field \(s\) expansion (15) (dot-dashed line). The effect of the \(s^{3}\) term is well seen when comparing to the Walecka model. Note that the approximate expansion (15) is almost identical to the exact NJL potential.

We now give a qualitative discussion of the influence of the three parameters \(g_{S}\), \(\kappa_{\rm NS}\) and \(V_{\chi}^{\prime\prime\prime}(0)\), or equivalently \(g_{S}\), \(C\) and \(C_{\chi}\), taking various works as illustrative examples.
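The rewriting of the three-body bracket in terms of \(\tilde{C}_{3}\) is a pure algebraic identity in the limit \(M_{\pi}\ll M_{\sigma}\): with \(\kappa_{\rm NS}=2M_{N}C/F_{\pi}^{2}\) from Eq. (8) and \(V_{\chi}^{\prime\prime\prime}(0)=3M_{\sigma}^{2}(1-C_{\chi})/F_{\pi}\) from Eq. (15), the bracket equals \((g_{S}/F_{\pi})(2\tilde{C}_{3}-1)\). A short numerical check, where all parameter values are random placeholders:

```python
import random

# Check: kappa_NS - g_S V'''(0)/(3 M_sig^2) = (g_S/F_pi)(2 C3t - 1),
# with C3t = M_N C/(g_S F_pi) + C_chi/2, in the M_pi -> 0 limit of Eq. (15).
# All values are random placeholders; the identity holds for any of them.
random.seed(0)
for _ in range(100):
    M_N = random.uniform(0.8, 1.0)
    F_pi = random.uniform(0.08, 0.10)
    M_sig = random.uniform(0.6, 0.8)
    g_S = random.uniform(5.0, 15.0)
    C = random.uniform(0.0, 2.0)
    C_chi = random.uniform(0.0, 1.5)

    kappa_NS = 2*M_N*C/F_pi**2                  # Eq. (8)
    Vppp = 3*M_sig**2*(1 - C_chi)/F_pi          # Eq. (15), M_pi -> 0
    lhs = kappa_NS - g_S*Vppp/(3*M_sig**2)
    C3t = M_N*C/(g_S*F_pi) + 0.5*C_chi          # Eq. (21)
    rhs = g_S/F_pi*(2*C3t - 1)
    assert abs(lhs - rhs) <= 1e-8*max(1.0, abs(lhs), abs(rhs))
print("identity verified")
```

This makes transparent how \(C\) (nucleon response) and \(C_{\chi}\) (chiral potential) enter the repulsive three-body force only through the combination \(\tilde{C}_{3}\).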
If we ignore both the response of the nucleon, i.e., \(\kappa_{\rm NS}=0\) (or \(C=0\)), and the contribution of the tadpole diagram to the chiral potential, i.e., \(V_{\chi}^{\prime\prime\prime}(0)=0\) (or \(C_{\chi}=1\)), we recover the original Walecka model since the three-body contribution (21) is absent and the saturation mechanism is associated with the Z graph alone, see Eq. (20). It is known that in this case saturation requires a large \(g_{S}/M_{\sigma}\) value, which implies a large repulsion induced by \(g_{\omega}/m_{\omega}\) in order to obtain the empirical value of the binding energy. As a consequence one gets a much too large incompressibility modulus \(K_{\rm sat}\). One possibility to cure this problem is to introduce density-dependent coupling constants [34; 35]. In the QMC model originally proposed in Ref. [9] and providing a successful phenomenology [10], the response of the nucleon is incorporated, but without explicit connection with the chiral status of the scalar field. Hence no tadpole diagram is considered, i.e., \(V_{\chi}^{\prime\prime\prime}(0)=0\) or \(C_{\chi}=1\). The original QMC model is formulated in the MIT bag model, yielding \(C\sim 0.5\) and \(E^{(3b)}\propto 2\tilde{C}_{3}-1=2C\sim 1\), which turns out to be sufficient to bring the needed repulsion to get nuclear saturation with a correct incompressibility modulus, although this approach does not satisfy chiral symmetry requirements. Soon after the first version of the relativistic Walecka model, it was realized [5; 6; 7; 8] that in relativistic theory with a Mexican-hat-like effective potential, the contribution of the Walecka \(Z\) graph is not large enough to stabilize nuclear matter against the effect of the attractive tadpole diagram.
This is the typical situation of the original L\(\sigma\)M where \(V_{\chi}^{\prime\prime\prime}(0)\) is large and positive (\(C_{\chi}\sim 0\)) and even of the NJL model (\(C_{\chi}<1\)) where the response of the nucleon is ignored, i.e., \(C=0\). Some phenomenological approaches, such as the so-called NL3 model [36], have introduced self-interactions of the scalar field in the form of an effective potential but without connection to chiral symmetry. In particular a repulsive cubic term, i.e., \(V_{\chi}^{\prime\prime\prime}(0)<0\), is introduced in this model. From Table II of Ref. [36], one can obtain the equivalent \(C_{\chi}\sim 1.47\) parameter, which corresponds to \(\tilde{C}_{3}\sim 0.74\). One can thus re-interpret the original NL3 model with a negative value of the \(c_{2}\) parameter (see Table II of Ref. [36]) as a way to simulate in an effective way the nucleon response with \(C\sim 0.74\). The way the non-linear potential has been introduced in the NL3 model was pragmatic, but it can now be understood in a more fundamental approach.

## III Constraining the chiral confining potential by Lattice-QCD

In this section, we connect the in-medium properties of the nucleon mass defined by Eq. (7) with the Lattice-QCD calculations performed in vacuum (\(s=0\)). For this reason, the nucleon mass will be denoted \(M_{N}(s)\) (without the \({}^{*}\)) in the following. The derivatives of the nucleon mass can however be obtained, on the one hand, from the expansion (7) taken at \(s=0\), providing \(g_{S}\) and \(\kappa_{\rm NS}\), and, on the other hand, from the Lattice-QCD calculations.
Within an underlying microscopic confining model for the nucleon, i.e., [NJLCONF] and [NJLFCM], generating the quark core wave functions, the axial charge, the \(\pi NN\) coupling constant and the \(\pi NN\) form factor can be obtained, allowing the calculation of the pion cloud contribution (pion self-energy) to the in-medium nucleon (and Delta resonance) mass, as in the Cloudy Bag model [37] or similar approaches using an alternative confinement potential [38]. The pion contribution to the nucleon mass is expressed as \[\Sigma^{(\pi)}(M;m)=-\frac{3}{2}\left(\frac{g_{A}(M)}{2F_{\pi}(M)}\right)^{2} \int\frac{d{\bf q}}{(2\pi)^{3}}{\bf q}^{2}v^{2}({\bf q};M)\left(\frac{1}{ \omega_{q}}\frac{1}{\omega_{q}+\epsilon_{N{\bf q}}}+\frac{32}{25}\frac{1}{ \omega_{q}}\frac{1}{\omega_{q}+\epsilon_{\Delta{\bf q}}}\right)\,, \tag{22}\] with \(\omega_{q}=\sqrt{q^{2}+M_{\pi}^{2}(M)}\) and \(M_{\pi}^{2}(M)=mM/G_{1}F_{\pi}^{2}(M)\), the other quantities being defined in Eq. (22) of Ref. [12]. Here the various quantities such as \(M_{\pi}^{2}(M)\) are in-medium quantities where the vacuum constituent quark mass \(M_{0}\) is replaced by \(M=\overline{\cal S}\) (see Eq. (34) of Ref. [23] and the text just before). Thus in this framework, the nucleon mass is split according to: \[M_{N}(s)\equiv M_{N}(M;m)=M_{N}^{\rm core}(M)+\Sigma^{(\pi)}(M;m)\,. \tag{23}\]

### Nucleon response and its chiral properties

The derivatives of the nucleon mass with respect to the constituent quark mass give the response parameters, which are defined in Eq. (5), i.e., \(G_{S}=\partial M_{N}(M;m)/\partial M\) and \(C_{N}=(1/6)\partial^{2}M_{N}(M;m)/\partial M^{2}\), where the derivatives are taken at \(M=M_{0}(m)\), i.e. \(s=0\). To benefit from the lattice data, we can relate them to two chiral properties of the nucleon, the scalar charge, \(Q_{S}=\partial M_{N}(M=M_{0}(m);m)/\partial m\), and the chiral susceptibility, \(\chi_{N}=\partial^{2}M_{N}(M=M_{0}(m);m)/\partial m^{2}\).
All we need for this calculation are the derivatives of the constituent quark mass with respect to the current quark mass. These derivatives are obtained from the NJL model and read ([NJLFCM]): \[\left(\frac{\partial M_{0}}{\partial m}\right)=\left(\frac{M_{\pi}^{2}}{m}\right)\,\frac{M_{0}}{M_{\sigma}^{2}},\qquad\left(\frac{\partial^{2}M_{0}}{\partial m^{2}}\right)\approx-\,\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\,\frac{3M_{0}}{M_{\sigma}^{4}}\,\,\left(1-C_{\chi}\right)\!, \tag{24}\] where in the second expression a correction factor of order \(M_{\pi}^{2}/M_{\sigma}^{2}\) has been neglected. We now note that in Eq. (23) the current quark mass appears explicitly only in the pionic self-energy \(\Sigma^{(\pi)}\). It also appears implicitly through the dependence of the constituent quark mass upon the current quark mass. Hence the scalar charge, \(Q_{S}\)[16], receives two different contributions: \[Q_{S} = \frac{\partial M_{N}}{\partial m}=\frac{\partial M_{N}}{\partial M}\left(\frac{\partial M_{0}}{\partial m}\right)+\frac{d\Sigma^{(\pi)}(M_{0};m)}{dm}=\frac{F_{\pi}}{M_{0}}g_{S}\left(\frac{M_{\pi}^{2}}{m}\right)\frac{M_{0}}{M_{\sigma}^{2}}+\left(\frac{M_{\pi}^{2}}{m}\right)\frac{d\Sigma^{(\pi)}(M_{0};m)}{dM_{\pi}^{2}} \tag{25}\] \[= \left(\frac{M_{\pi}^{2}}{m}\right)\,\frac{F_{\pi}\,g_{S}}{M_{\sigma}^{2}}+\left(\frac{M_{\pi}^{2}}{m}\right)\frac{d\Sigma^{(\pi)}}{dM_{\pi}^{2}}\equiv Q_{S}^{(s)}+Q_{S}^{(\pi)},\] where we have employed the relation (3). The second term, \(Q_{S}^{(\pi)}\), is referred to as the pion cloud contribution. It is obtained by taking only the linear quark mass dependence appearing in \(M_{\pi}^{2}=M_{0}m/G_{1}F_{\pi}^{2}\), thus ignoring all the implicit \(m\) dependencies through the \(M\) dependence of \(M_{\pi}\), \(g_{A}\), \(F_{\pi}\) and the form factor. We refer to the first term, \(Q_{S}^{(s)}\), as the scalar field contribution even though it contains the implicit \(m\) dependence of the pionic self-energy.
In effect \(Q_{S}^{(s)}\) itself receives two separate contributions: \[Q_{S}^{(s)}=\frac{\partial M_{N}^{\rm core}}{\partial M}\left(\frac{\partial M_{0}}{\partial m}\right)\,+\,\frac{\partial\Sigma^{(\pi)}}{\partial M}\left(\frac{\partial M_{0}}{\partial m}\right)\equiv\left(\frac{M_{\pi}^{2}}{m}\right)\,\frac{F_{\pi}\,g_{S}}{M_{\sigma}^{2}}. \tag{26}\] The second contribution contains the implicit \(m\) dependence of the pion self-energy coming from the \(M\) dependence of the various quantities (\(F_{\pi}(M)\), \(M_{\pi}(M)\), \(g_{A}\), form factor) through the \(m\) dependence of the constituent quark mass taken at its vacuum value \(M_{0}\). Regarding this specific point it is generally assumed that the pion properties are protected by chiral symmetry, and this is what we find in the model developed in [NJLFCM], where the pion mass displays a remarkable stability for a large domain of the constituent quark mass or, equivalently, of the nuclear scalar field \(s\). As a consequence the induced effect on \(g_{S}\) is extremely small. However the combined effect of the modification of the nucleon size and of the pion decay constant might induce a more important correction to the \(\pi NN\) vertex \((g_{A}v(q)^{2}/2F_{\pi})^{2}\), but we do not consider this effect, which certainly requires a more detailed study. It follows from Eq. (25) that: \[m\,\frac{\partial M_{N}}{\partial m}=F_{\pi}\,g_{S}\,\frac{M_{\pi}^{2}}{M_{\sigma}^{2}}\,+\,M_{\pi}^{2}\,\frac{d\Sigma^{(\pi)}(M_{0};m)}{dM_{\pi}^{2}}\equiv\sigma_{N}^{(s)}\,+\,\sigma_{N}^{(\pi)}\equiv\sigma_{N}. \tag{27}\] Hence we recover the nucleon sigma term. This result is just the expression of the Feynman-Hellmann theorem. This light quark sigma term has been abundantly discussed in our previous papers [12; 13; 23].
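The bookkeeping behind Eqs. (25)-(27), splitting the total \(m\)-derivative into an implicit part acting through \(M_{0}(m)\) and an explicit part at fixed \(M\), can be checked numerically with finite differences. The functional forms below are arbitrary smooth stand-ins chosen only for this sketch, not the model's actual \(M_{N}(M;m)\).

```python
# Toy finite-difference check of the chain-rule split behind Eqs. (25)-(27):
# dM_N/dm = (dM_N/dM)(dM_0/dm) + (dSigma/dm at fixed M).  All functional forms
# and numbers are stand-in assumptions, not the paper's model.

def m0(m):                       # stand-in for the constituent mass M_0(m)
    return 0.35 + 2.0 * m

def mass(M, m):                  # stand-in for M_N(M; m) = core(M) + Sigma(M; m)
    core = 1.3 * M - 0.2 * M**2
    sigma = -0.4 + 3.0 * m * M   # explicit m-dependence (pionic piece)
    return core + sigma

def d(f, x, h=1e-6):             # central finite difference
    return (f(x + h) - f(x - h)) / (2.0 * h)

m_phys = 0.005                   # toy current quark mass [GeV]

total = d(lambda m: mass(m0(m), m), m_phys)                    # full dM_N/dm
implicit = d(lambda M: mass(M, m_phys), m0(m_phys)) * d(m0, m_phys)
explicit = d(lambda m: mass(m0(m_phys), m), m_phys)            # fixed M = M_0
```

For these quadratic stand-ins the central difference is exact up to rounding, so `total` agrees with `implicit + explicit` to machine precision, which is exactly the decomposition used to define \(\sigma_{N}^{(s)}\) and \(\sigma_{N}^{(\pi)}\).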
Using a dipole \(\pi NN\) form factor with cutoff \(\Lambda=1\) GeV, the pionic contribution to the sigma term was found to be \(\sigma_{N}^{(\pi)}=21.5\) MeV [12] and a pionic self-energy \(\Sigma^{(\pi)}(M;m)=420\) MeV. The value of the non pionic contribution was found to be \(\sigma_{N}^{(s)}\sim 29\) MeV [12], giving a total sigma term \(\sigma_{N}=50.5\) MeV. Evidently the relative weight of the two contributions may be altered by the precise values of the parameters, but according to our model FCM calculation [25] and from the lattice data constraints discussed below, this modification of the relative weight should be rather moderate; the value of the sigma term and its repartition thus constitute a rather strong constraint on the nucleon modelling. For the scalar susceptibility one obtains from Eq. (25), again ignoring higher order corrections \(M_{\pi}^{n}/M_{\sigma}^{n}\), \[\chi_{N} = \frac{\partial^{2}M_{N}}{\partial m^{2}}=\,\frac{\partial M_{N}}{\partial M}\left(\frac{\partial^{2}M_{0}}{\partial m^{2}}\right)+\,\frac{\partial^{2}M_{N}}{\partial M^{2}}\left(\frac{\partial M_{0}}{\partial m}\right)^{2}+\frac{\partial}{\partial M}\left(\frac{d\Sigma^{(\pi)}}{dm}\right)\left(\frac{\partial M_{0}}{\partial m}\right)+\frac{d^{2}\Sigma^{(\pi)}}{dm^{2}} \tag{28}\] \[= -\,\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\,\frac{3\,g_{S}\,F_{\pi}}{M_{\sigma}^{4}}\,(1-C_{\chi})\,+\,\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\kappa_{\rm NS}\,\frac{F_{\pi}^{2}}{M_{\sigma}^{4}}+\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\frac{1}{M_{\sigma}^{2}}\frac{d}{dM_{\pi}^{2}}\left(M_{0}\,\frac{\partial\Sigma^{(\pi)}}{\partial M}\right)+\frac{d^{2}\Sigma^{(\pi)}}{dm^{2}}\] \[\equiv \chi_{N}^{(s)}+\chi_{N}^{(s\pi)}\,+\,\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\frac{d^{2}\Sigma^{(\pi)}(M_{0};m)}{d(M_{\pi}^{2})^{2}}\equiv\chi_{N}^{(s)}+\chi_{N}^{(s\pi)}\,+\,\chi_{N}^{(\pi)}\,,\] where we have used Eq. (24).
One can split the scalar susceptibility into a non pionic (\(\chi_{N}^{(s)}\)), a mixed scalar field-pionic (\(\chi_{N}^{(s\pi)}\)) and a purely pionic (\(\chi_{N}^{(\pi)}\)) piece. The first two contributions in the second line of Eq. (28) (considering small \(M_{\pi}\) and \(G_{2}\) in Eq. (16)) give \(\chi_{N}^{(s)}\) as: \[\chi_{N}^{(s)}=-\,\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\,\frac{F_{\pi}\,g_{S}}{M_{\sigma}^{4}}\,\left(3\,-\,2\,\tilde{C}_{L}\right)\quad\mbox{with}\quad\tilde{C}_{L}=\frac{M_{N}}{g_{S}\,F_{\pi}}\,C\,+\,\frac{3}{2}\,C_{\chi}. \tag{29}\] As for the case of \(g_{S}\), the nucleon susceptibility \(\kappa_{\rm NS}\) may receive a contribution from the pion self-energy; again the contribution to the dimensionless \(C\) parameter is very small if the vertex correction is omitted. The mixed scalar field-pionic susceptibility originating from the scalar field (i.e., the constituent quark mass) dependence of the pionic self-energy, \[\chi_{N}^{(s\pi)}=-\left(\frac{M_{\pi}^{2}}{m}\right)^{2}\,\frac{F_{\pi}}{M_{\sigma}^{2}}\,\frac{\partial}{\partial s}\left(\frac{\sigma_{N}^{(\pi)}}{M_{\pi}^{2}}\right), \tag{30}\] was ignored in our previous works. Using a sharp cutoff in the expression of the nucleon pionic self-energy, it can be shown analytically that this term is negligible compared to the other contributions to the susceptibility. In view of the comparison with lattice QCD results it is very important to notice that the chiral susceptibility is governed by the particular combination: \[\chi_{N}\sim\left(3\,-\,2\,\tilde{C}_{L}\right)\,, \tag{31}\] to be compared with the particular combination entering the expression of the three-body repulsive contribution (21) to the binding energy per nucleon: \[\frac{E^{(3b)}}{A}\sim\left(2\,\tilde{C}_{3}\,-\,1\right)\,n_{s}^{2}\qquad\mbox{with}\qquad\tilde{C}_{3}=\frac{M_{N}}{g_{S}\,F_{\pi}}\,C\,+\,\frac{1}{2}\,C_{\chi}.
\tag{32}\] Limiting ourselves to the pure L\(\sigma\)M case \(C_{\chi}=0\), inducing \(\tilde{C}_{L}=\tilde{C}_{3}\), the susceptibility \(\chi_{N}\) (31) and the three-body repulsive contribution (21) are directly related, as found in our previous works, e.g., Ref. [16]. This constitutes a very important result linking chiral properties of the nucleon to the saturation mechanism. In the general case where \(C_{\chi}\neq 0\), there is still a strong link between the susceptibility and the three-body repulsive contribution.

### Constraints from Lattice-QCD

Those chiral properties of the nucleon, associated with explicit chiral symmetry breaking, namely the first and second derivatives of the nucleon mass with respect to the current quark mass, are thus very sensitive to the modeling of the nucleon. We have also shown that the scalar coupling constant, \(g_{S}\), and the nucleon response parameter, \(C_{N}\) (or \(C\) or \(\kappa_{\rm NS}\)), depend on the quark substructure and the confinement mechanism as well as the effect of spontaneous chiral symmetry breaking. We will now show how they can be constrained by lattice data. The nucleon mass, as well as other intrinsic properties of the nucleon (sigma term, chiral susceptibilities), are QCD quantities which are in principle obtainable from lattice simulations. The problem is that lattice calculations of this kind are still difficult for small quark masses, or equivalently small pion mass \({\cal M}_{\pi}\). Here \({\cal M}_{\pi}\) represents the pion mass to leading order in the quark mass (i.e., ignoring the NLO chiral logarithm correction), \({\cal M}_{\pi}^{2}=2m\,B=-2m\,\langle\overline{q}\,q\rangle_{\chi L}/F^{2}\) (GOR relation). The quantities \(F\) (the pion decay constant in the chiral limit) and \(B\) are two low energy parameters appearing in chiral perturbation theory [39]. In practice \({\cal M}_{\pi}\) deviates numerically very little from the bosonised NJL pion mass \(M_{\pi}\).
Typically at the time of the publication of the pioneering work from the Adelaide group [19] (that we will hereafter call AD1), these LQCD limitations were \(m>50\) MeV and \({\cal M}_{\pi}^{2}>0.27\) GeV\({}^{2}\) (to be compared to the physical value, \(0.02\) GeV\({}^{2}\)). Hence a technique was needed to extrapolate the lattice data to the physical region. The difficulty of the extrapolation is linked to the non analytical behaviour of the nucleon mass as a function of \(m\) (or equivalently \({\cal M}_{\pi}^{2}\)), which comes from the pion cloud contribution. The idea of the Adelaide group [18; 19; 20; 21] (papers referred to hereafter as AD0, AD1, AD2 and AD3) was to separate the pion cloud self-energy, \(\Sigma_{\pi}({\cal M}_{\pi},\Lambda)\), from the rest of the nucleon mass and to calculate it with just one adjustable cutoff parameter \(\Lambda\) entering the form factor. Actually different cutoff forms for the pion loops (Gaussian, dipole, monopole, sharp) were used with the adjustable parameter \(\Lambda\). This formulation of Chiral Perturbation Theory (ChiPT) is thus called the Finite Range Regulator (FRR) method. The remaining non pionic part is expanded in terms of powers of \({\cal M}_{\pi}^{2}\) as follows: \[M_{N}({\cal M}_{\pi}^{2})=a_{0}\,+\,a_{2}\,{\cal M}_{\pi}^{2}\,+\,a_{4}\,{\cal M}_{\pi}^{4}\,+...+\,\Sigma_{\pi}({\cal M}_{\pi},\,\Lambda) \tag{33}\] where \(\Sigma_{\pi}({\cal M}_{\pi},\Lambda)=\Sigma^{(\pi)}({\cal M}_{\pi},\,\Lambda)+\,\Sigma_{\rm tad}^{(\pi)}({\cal M}_{\pi},\,\Lambda)\). In AD1, which incorporates in the analysis the effect of a tadpole contribution \(\Sigma_{\rm tad}^{(\pi)}\,({\cal M}_{\pi},\,\Lambda)\), the best-fit value for \(a_{2}\) shows little sensitivity to the shape of the form factor, with a value \(a_{2}\simeq 1.5\) GeV\({}^{-1}\), which corresponds to a non pionic piece of the light quark sigma commutator \(\sigma_{N}^{(s)}=30\) MeV.
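The mechanics of the FRR extrapolation of Eq. (33) can be sketched in a few lines: subtract a model self-energy from the lattice masses and fit the residual polynomial \(a_{0}+a_{2}{\cal M}_{\pi}^{2}+a_{4}{\cal M}_{\pi}^{4}\) by least squares. The self-energy form, the cutoff and the synthetic "lattice" points below are assumptions made purely to illustrate the fitting procedure.

```python
import math

# Sketch of the FRR fit of Eq. (33): subtract a model pion self-energy
# Sigma_pi from synthetic "lattice" nucleon masses and fit the residual with
# a0 + a2*Mpi^2 + a4*Mpi^4.  Self-energy form and data are assumptions.

def sigma_pi(mpi2, lam=1.0, c=-0.7):
    """Crude sharp-cutoff-like self-energy: non-analytic in Mpi (assumed)."""
    mpi = math.sqrt(mpi2)
    return c * (lam - mpi) ** 3 / lam**2 if mpi < lam else 0.0

A0, A2, A4 = 0.9, 1.3, -0.3                      # "true" coefficients (GeV units)
grid = [0.1 + 0.05 * i for i in range(10)]       # Mpi^2 values in GeV^2
data = [(x, A0 + A2 * x + A4 * x**2 + sigma_pi(x)) for x in grid]

def fit(data):
    """Least-squares fit of (a0, a2, a4) via the 3x3 normal equations."""
    rows = [(1.0, x, x * x, y - sigma_pi(x)) for x, y in data]
    N = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * r[3] for r in rows) for i in range(3)]
    for i in range(3):                           # Gauss-Jordan, fine for 3x3
        p = N[i][i]
        N[i] = [v / p for v in N[i]]
        b[i] /= p
        for k in range(3):
            if k != i:
                f = N[k][i]
                N[k] = [vk - f * vi for vk, vi in zip(N[k], N[i])]
                b[k] -= f * b[i]
    return b
```

Since the synthetic data contain no noise, the fit recovers the input coefficients exactly; on real lattice points the same procedure yields the \(a_{2}\), \(a_{4}\) values discussed in the text.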
In AD0 (which is actually the preprint version of AD1) and in the more recent paper, AD3, the contribution of the tadpole was not considered. Depending on the precise method used in the lattice simulation, the preferred values for \(a_{2}\) were smaller, in the range \(a_{2}\simeq 1.0\) to \(1.2\) GeV\({}^{-1}\). Notice that taking \(a_{2}\) in the range \(a_{2}\simeq 1.2\) to \(1.5\) GeV\({}^{-1}\) corresponds to a non pionic piece of the light quark sigma commutator \(\sigma_{N}^{(s)}=24\) to \(30\) MeV. In AD1 (which incorporates the effect of the pion tadpole), the best-fit value for \(a_{4}\) again shows little sensitivity to the shape of the form factor, with a value \(a_{4}\simeq-0.5\) GeV\({}^{-3}\). In AD0 and AD3, depending on the precise method used in the lattice simulation, the preferred values for \(a_{4}\) were even smaller, in the range \(a_{4}\simeq-0.2\) to \(-0.25\) GeV\({}^{-3}\). Ignoring the pion tadpole contribution to the nucleon mass, we assume that we can identify the pionic self-energy on the lattice with our model calculation described above. Consequently the first and second derivatives of the non pionic piece of the lattice expansion, \[Q_{S,L}^{(s)} = \frac{\partial M_{N}^{(\rm no\,pion)}}{\partial m}=\left(\frac{{\cal M}_{\pi}^{2}}{m}\right)(a_{2}\,+\,a_{4}\,{\cal M}_{\pi}^{2})\simeq\left(\frac{{\cal M}_{\pi}^{2}}{m}\right)\,a_{2} \tag{34}\] \[\chi_{N,L}^{(s)} = \frac{\partial^{2}M_{N}^{(\rm no\,pion)}}{\partial m^{2}}=\left(\frac{{\cal M}_{\pi}^{2}}{m}\right)^{2}\,(2\,a_{4}) \tag{35}\] can be identified with the non pionic piece of the scalar charge, see Eq. (26), and of the chiral susceptibility, see Eq. (29), derived above: \[Q_{S}^{(s)}\equiv Q_{S,L}^{(s)}\,\,\,{\rm and}\,\,\chi_{N}^{(s)}\equiv\chi_{N,L}^{(s)}\,.
\tag{36}\] One arrives at the important result: \[a_{2}=\frac{F_{\pi}\,g_{S}}{M_{\sigma}^{2}} \tag{37}\] \[a_{4}=-\frac{F_{\pi}\,g_{S}}{2M_{\sigma}^{4}}\,\left(3\,-\,2\,\tilde{C}_{L}\right)\quad{\rm with}\quad\tilde{C}_{L}=\frac{M_{N}}{g_{S}\,F_{\pi}}\,C\,+\,\frac{3}{2}\,C_{\chi}. \tag{38}\] Our previous works [12; 13] coincide with these relations in the specific case of the L\(\sigma\)M effective potential (\(C_{\chi}=0\)). They provide two constraints on the parameters of the confining model. Also notice that the model results on the rhs of the above equations should be rigorously understood with the various parameters calculated in the chiral limit, which are in practice very close to their values at the physical current quark mass. The very robust conclusion is that the lattice result is much smaller than the one obtained in the simplistic linear sigma model (\(C=C_{\chi}=0\)), for which \(a_{4}\simeq-3.5\) GeV\({}^{-3}\). Hence lattice data require a strong compensation from effects governing the three-body repulsive force needed for the saturation mechanism.

## IV Discussion

The above results demonstrate that the lattice data \(a_{2}\) and \(a_{4}\), themselves related to the chiral responses of the nucleon, bring severe constraints on the nuclear matter equation of state. This suggests entering these quantities as inputs of a Bayesian analysis to generate the probability distribution function for the nucleon response parameters \(g_{S}\) and \(C\). Such an analysis limited to the Hartree level has been performed in a recent work [22], but using the simplistic L\(\sigma\)M, with an output for \(C\) very close to \(C\sim 1.5\), the obvious reason being the very small input value for \(a_{4}\) (see Eq. (38)).
In a work in preparation [40], we will again perform the same kind of analysis but with the incorporation of the Fock terms (and in particular the pion and rho Fock terms in presence of short range correlation), first with the L\(\sigma\)M and second with the enriched NJL chiral effective potential. As already mentioned, the problem of the analysis using the L\(\sigma\)M chiral effective potential is a large value of the \(C\) response parameter, in strong disagreement with all the nucleon model calculations, which predict a value of \(C\) smaller, and most of the time significantly smaller, than one (recall the MIT bag value \(C\sim 0.5\)). Just to get an insight into the effect of an enriched chiral effective potential we return to our original paper [11]. In this paper where the L\(\sigma\)M was used we obtained correct saturation properties with \(C=1\) (see Fig. 1 of [11]). We can retrospectively calculate the \(a_{2}\) and \(a_{4}\) parameters: we find \(a_{2}=1.67\) GeV\({}^{-1}\) and \(a_{4}=-1.48\) GeV\({}^{-3}\). While the obtained \(a_{2}\) is not very far from the lattice values, \(a_{4}\) is in magnitude three times larger than the upper value compatible with lattice calculations. To see the effect of the NJL-like potential (via the parameter \(C_{\chi}\)), we simply incorporate the \((1-C_{\chi})\) correction in the cubic term of the L\(\sigma\)M chiral effective potential, fixing \(C_{\chi}=0.44\). Keeping all the other parameters at their original values, we take \(C=0.78\), so as to keep the same value of the repulsive three-body force, i.e., \(\tilde{C}_{3}=C+C_{\chi}/2=1\) (21). The saturation point is only slightly modified (see Fig. 2) but now \(\tilde{C}_{L}=C+3C_{\chi}/2=1.44\) (38) and the \(a_{4}\) parameter becomes very close to zero, \(a_{4}=-0.1\) GeV\({}^{-3}\), in much better agreement with lattice data.
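The near-cancellation of \(a_{4}\) once \(C_{\chi}\neq 0\) can be reproduced with a few lines evaluating Eqs. (37)-(38). The values of \(F_{\pi}\), \(M_{N}\), \(g_{S}\) and \(M_{\sigma}\) below are representative assumptions rather than the parameter set of Ref. [11], so the outputs only approximately match the numbers quoted in the text.

```python
# Evaluating the lattice-constraint relations, Eqs. (37)-(38), for the two
# parameter sets discussed in the text.  F_PI, M_N, G_S and M_SIG are
# representative assumed values, not the fit of Ref. [11].

F_PI, M_N = 0.0924, 0.939        # pion decay constant, nucleon mass [GeV]
G_S, M_SIG = 10.0, 0.75          # scalar coupling and sigma mass (assumed)

def a2():
    """Eq. (37): a_2 = F_pi g_S / M_sigma^2, in GeV^-1."""
    return F_PI * G_S / M_SIG**2

def a4(C, C_chi):
    """Eq. (38): a_4 = -(F_pi g_S / 2 M_sigma^4)(3 - 2 C_L), in GeV^-3."""
    C_L = (M_N / (G_S * F_PI)) * C + 1.5 * C_chi
    return -F_PI * G_S / (2.0 * M_SIG**4) * (3.0 - 2.0 * C_L)

a4_lsm = a4(C=1.0, C_chi=0.0)    # pure linear sigma model case
a4_njl = a4(C=0.78, C_chi=0.44)  # NJL-like potential, same three-body force
```

With these assumed inputs, `a2()` comes out near 1.6 GeV\({}^{-1}\), `a4_lsm` near \(-1.4\) GeV\({}^{-3}\) and `a4_njl` an order of magnitude smaller, reproducing the trend discussed above.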
## V Conclusions

The nuclear matter properties originate from the fundamental theory of the strong interaction, and the aim of this manuscript is to investigate how this microscopic origin can be implemented in the modeling of nuclear matter. Of particular importance on the QCD side are the quark confinement mechanism and the chiral potential associated with the chirally broken QCD vacuum. In this article we use an enriched chiral effective potential, based on the NJL model, in place of the L\(\sigma\)M employed in our previous phenomenological works. This significantly improves the agreement with LQCD data while keeping the nucleonic response parameter \(C\) close to expected model values. Note that this conclusion should be confirmed by a more thorough analysis (work in preparation [40]). Hence the fundamental QCD theory and nuclear matter modeling are linked by, on the one hand, the LQCD data \(a_{2}\) and \(a_{4}\) and, on the other hand, what we have called the "QCD connected parameters", namely the response parameters \(G_{S}\) and \(C_{N}\). Specifically we have shown that a particular combination of \(C\) and \(C_{\chi}\) (\(\tilde{C}_{L}\)) is constrained by LQCD, which constitutes one of the main results of this paper. In addition a closely related combination (\(\tilde{C}_{3}=C+C_{\chi}/2\)) governs the repulsive three-body force ensuring the saturation mechanism. Indeed these results provide a link between chiral properties of the nucleon and the saturation mechanism, already obtained in our previous works but limited to the pure L\(\sigma\)M case. Further investigations of these results shall be performed to understand more globally how they modify the properties of nuclear matter. Work in this direction is being performed.

Figure 2: Full line: original calculation of the EOS [11] with \(C=1,\,C_{\chi}=0\). Dotted line: new calculation with the same parameters but \(C=0.78\), \(C_{\chi}=0.44\). The density is scaled by the normal nuclear matter density.
2306.10742
BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming
In this paper, we introduce BNN-DP, an efficient algorithmic framework for analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a compact set of input points $T\subset \mathbb{R}^n$, BNN-DP computes lower and upper bounds on the BNN's predictions for all the points in $T$. The framework is based on an interpretation of BNNs as stochastic dynamical systems, which enables the use of Dynamic Programming (DP) algorithms to bound the prediction range along the layers of the network. Specifically, the method uses bound propagation techniques and convex relaxations to derive a backward recursion procedure to over-approximate the prediction range of the BNN with piecewise affine functions. The algorithm is general and can handle both regression and classification tasks. On a set of experiments on various regression and classification tasks and BNN architectures, we show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.
Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti
2023-06-19T07:19:15Z
http://arxiv.org/abs/2306.10742v1
# BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming

###### Abstract

In this paper, we introduce BNN-DP, an efficient algorithmic framework for analysis of adversarial robustness of Bayesian Neural Networks (BNNs). Given a compact set of input points \(T\subset\mathbb{R}^{n}\), BNN-DP computes lower and upper bounds on the BNN's predictions for all the points in \(T\). The framework is based on an interpretation of BNNs as stochastic dynamical systems, which enables the use of Dynamic Programming (DP) algorithms to bound the prediction range along the layers of the network. Specifically, the method uses bound propagation techniques and convex relaxations to derive a backward recursion procedure to over-approximate the prediction range of the BNN with piecewise affine functions. The algorithm is general and can handle both regression and classification tasks. On a set of experiments on various regression and classification tasks and BNN architectures, we show that BNN-DP outperforms state-of-the-art methods by up to four orders of magnitude in both tightness of the bounds and computational efficiency.

BNN-DP outperforms state-of-the-art methods in both precision and computational time. For instance, on the Fashion MNIST dataset, our approach achieves an average 93% improvement in certified lower bound compared to Berrada et al. (2021), while being around 3 orders of magnitude faster. In summary, this paper makes the following main contributions: * we introduce a framework based on stochastic dynamic programming and convex relaxation for the analysis of adversarial robustness of BNNs, * we implement an efficient algorithmic procedure of our framework for BNNs trained with Gaussian variational inference (VI) in both regression and classification settings,1 and Footnote 1: Our code is available at [https://github.com/sjladams/BNN_DP](https://github.com/sjladams/BNN_DP).
* we benchmark the robustness of a variety of BNN models on five datasets, empirically demonstrating how our method outperforms state-of-the-art approaches by orders of magnitude in both tightness and efficiency.

**Related Works.** Many algorithms have been developed for certification of deterministic (i.e., non Bayesian) neural networks (NNs) (Katz et al., 2017; Weng et al., 2018; Wong and Kolter, 2018; Bunel et al., 2020). However, these methods cannot be employed for BNNs because they all assume the weights of the network have a fixed value, whereas in the Bayesian setting they are distributed according to the BNN posterior. Methods for certification of BNNs have recently been presented in (Wicker et al., 2020; Berrada et al., 2021; Lechner et al., 2021). Wicker et al. (2020) consider a different notion of robustness than the one in this paper, not directly related to adversarial attacks on the BNN decision. Furthermore, that work considers a partitioning procedure in weight space that makes it applicable only to small networks and/or networks with small variance. The method proposed in (Berrada et al., 2021) is based on dual optimization. Hence, it is restricted to distributions with bounded support and needs to solve non-convex problems at large computational costs for classification tasks. Separately, Lechner et al. (2021) aims to build an intrinsically safe BNN by truncating the posterior in the weight space. Cardelli et al. (2019); Wicker et al. (2021) introduced statistical approaches to quantify the robustness of a BNN, which, however, do not return the formal guarantees that are necessary in safety-critical settings. Empirical methods that use the uncertainty of BNNs to flag adversarial examples are introduced in (Rawat et al., 2017; Smith and Gal, 2018). These, however, consider only point-wise uncertainty estimates, specific to a particular test point, and do not account for worst-case adversarial perturbations.
Various recent works have proposed formal methods to compute adversarial robustness for Gaussian Processes (GPs) (Cardelli et al., 2019; Smith and Gal, 2018; Patane et al., 2022; Smith et al., 2022). In BNNs, however, due to the non-linearity of activation functions, the distribution over the space of functions induced by a BNN is generally non-Gaussian, even if a Gaussian distribution in weight space is assumed. Hence, the techniques that are developed for GPs cannot be directly applied to BNNs.

## 2 Robust Certification of BNNs Problem

### Bayesian Neural Networks (BNNs)

For an input vector \(x\in\mathbb{R}^{n_{0}}\), we consider fully connected neural networks \(f^{w}:\mathbb{R}^{n_{0}}\rightarrow\mathbb{R}^{n_{K+1}}\) of the following form for \(k=0,\ldots,K\):2 Footnote 2: Note that the formulation of neural networks considered in Eqn (1) also includes convolutional neural networks (CNNs). In fact, the convolutional operation can be interpreted as a linear transformation into a larger space; see, e.g., Chapter 3.4.1 in (Gal and Ghahramani, 2016). This allows us to represent convolutional layers equivalently as fully connected layers, and do verification for CNNs as we show in Section 7. \[\begin{split} z_{0}&=x,\qquad\qquad\zeta_{k+1}=W_{k}(z_{k}^{T},1)^{T},\\ z_{k}&=\phi_{k}(\zeta_{k}),\quad f^{w}\left(x\right)=\zeta_{K+1},\end{split} \tag{1}\] where \(K\) is the number of hidden layers, \(n_{k}\) is the number of neurons of layer \(k\), \(\phi_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{n_{k}}\) is a vector of continuous activation functions (one for each neuron) in layer \(k\), and \(W_{k}\in\mathbb{R}^{n_{k+1}\times(n_{k}+1)}\) is the matrix of weights and biases that correspond to the \(k\)th layer of the network.
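The network of Eq. (1), with weight rows drawn from a Gaussian posterior and the predictive mean estimated by Monte Carlo, can be sketched in a few lines. The architecture, ReLU activations and all parameter values below are toy assumptions for illustration.

```python
import random

# Minimal sketch of the BNN of Eq. (1): ReLU layers whose weights are drawn
# from a factorized Gaussian posterior, with the predictive mean estimated by
# Monte Carlo.  The 2-2-1 network and its parameters are toy assumptions.
random.seed(0)

def matvec(W, v):
    return [sum(wij * vj for wij, vj in zip(row, v)) for row in W]

def relu(v):
    return [max(0.0, t) for t in v]

def sample_layer(mu, std):
    """Sample W_k element-wise from N(mu, std^2) (mean-field posterior)."""
    return [[random.gauss(m, s) for m, s in zip(mr, sr)]
            for mr, sr in zip(mu, std)]

def bnn_forward(x, layers):
    """One stochastic forward pass: zeta_{k+1} = W_k (z_k, 1)^T, z = phi(zeta)."""
    z = list(x)
    for i, (mu, std) in enumerate(layers):
        W = sample_layer(mu, std)
        zeta = matvec(W, z + [1.0])                 # append 1 for the bias column
        z = zeta if i == len(layers) - 1 else relu(zeta)  # no phi on the output
    return z

# toy network: (mean, std) per layer; the last column of each row is the bias
layers = [
    ([[1.0, -0.5, 0.1], [0.3, 0.8, -0.2]], [[0.05] * 3, [0.05] * 3]),
    ([[0.7, -1.1, 0.2]],                   [[0.05] * 3]),
]

pred_mean = [sum(col) / 4000 for col in
             zip(*(bnn_forward([0.5, -0.3], layers) for _ in range(4000)))]
```

For this toy posterior the weight variances are small, so the Monte Carlo predictive mean sits close to the forward pass through the mean weights; for larger variances the ReLU nonlinearity makes the two differ, which is precisely why certified bounds require more than mean-weight analysis.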
We denote the vector of parameters by \(w=(W_{1}^{T},\ldots,W_{K}^{T})^{T}\) and the mapping from \(\zeta_{k_{1}}\) to \(\zeta_{k_{2}}\) by \(f^{w}_{k_{1}:k_{2}}:\mathbb{R}^{n_{k_{1}}}\rightarrow\mathbb{R}^{n_{k_{2}}}\) for \(k_{1},k_{2}\in\{0,...,K\}\). \(\zeta_{K+1}\) is the final output of the network (or the logit in the case of classification problems). In the Bayesian setting, one starts by assuming a prior distribution \(p(w)\) over the parameters \(w\) and a likelihood function \(p(y|x,w)\). We adopt bold notation to denote random variables and write \(f^{w}\) to denote a BNN defined according to Eqns. (1). The likelihood is generally assumed to be Gaussian in case of regression and categorical for classification, where the probability for each class is given as the softmax of the neural network final logits (MacKay, 1992). Then, given a training dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N_{\mathcal{D}}}\), learning amounts to computing the posterior distribution \(p(w|\mathcal{D})\) via the Bayes rule (MacKay, 1992). The posterior predictive distribution over an input \(x^{*}\) is finally obtained by marginalising the posterior over the likelihood, i.e., \(p(y^{*}|x^{*},\mathcal{D})=\int p(y^{*}|x^{*},w)p(w|\mathcal{D})dw\). The final output (decision) of the BNN, \(\hat{y}(x^{*})\), is then computed by minimising a loss function \(\mathcal{L}\) averaged over the predictive distribution, i.e., \[\hat{y}(x^{*})=\arg\min_{y}\int\mathcal{L}(y,y^{*})p(y^{*}|x^{*},\mathcal{D})dy ^{*}.\] In this paper, we focus on both regression and classification problems. 
In regression, an \(l_{2}\) loss is generally used, which leads to an optimal decision \(\hat{y}(x^{*})\) given by the mean of the predictive posterior distribution (Neal, 2012), i.e., \(\hat{y}(x^{*})=\mathbb{E}_{y\sim p(y|x^{*},\mathcal{D})}\left[\mathbf{y}\right].\)3 For classification, \(\ell_{0-1}\) loss is typically employed, which results in Footnote 3: In the remainder, we may omit the probability measure of an expectation or probability when it is clear from the context. \[\hat{y}(x^{*})=\operatorname*{arg\,max}_{i\in\{1,...,n_{K+1}\}}\mathbb{E}_{ \boldsymbol{w}\sim p(w|\mathcal{D})}\left[\text{softmax}^{(i)}(f^{\boldsymbol {w}}\left(x^{*}\right))\right],\] where \(\text{softmax}^{(i)}\) is the \(i\)th component of the \(n_{K+1}\)-dimensional softmax function.4 Unfortunately, because of the non-linearity introduced by the neural network architecture, the computation of the posterior distribution and consequently of \(\hat{y}(x^{*})\) cannot be done analytically. Therefore, approximate inference methods are required. In what follows, we focus on mean-field Gaussian Variational Inference (VI) approximations (Blundell et al., 2015). Specifically, we fit an approximating multivariate Gaussian \(q(w)=\mathcal{N}\left(w\mid\mu_{w};\,\Sigma_{w}\right)\approx p(w\mid\mathcal{ D})\) with mean \(\mu_{w}\) and block diagonal covariance matrix \(\Sigma_{w}\) such that for \(k\in\{0,\ldots,K\}\) and \(i\in\{1,\ldots,n_{k}\}\), the approximating distribution of the parameters corresponding to the \(i\)th node of the \(k\)th layer is Footnote 4: Analogous formulas can be obtained for the weighted classification loss by factoring in misclassification weights in the argmax. \[q(W_{k}^{(i,:)})=\mathcal{N}\left(W_{k}^{(i,:)}\mid\mu_{w,k,i};\,\Sigma_{w,k,i }\right) \tag{2}\] with mean \(\mu_{w,k,i}\) and covariance matrix \(\Sigma_{w,k,i}\).5 Footnote 5: BNN-DP can be extended to Gaussian approximation distributions with inter-node or inter-layer correlations. 
In that case, to solve the backward iteration scheme of Theorem 1, the value functions need to be marginalized over partitions in weight space. **Remark 1**.: _While our primary focus is on VI, the techniques presented in this paper can be applied to other approximate inference methods, such as HMC (Neal, 2012) and Dropout (Gal & Ghahramani, 2016). In these cases, the prediction of the BNN is obtained by averaging over a finite ensemble of NNs. For this setting, the dynamic programming problem in Theorem 1 reduces to computing piecewise linear relaxations for each layer of a weighted sum, i.e., an average, of deterministic neural networks, and propagating the resulting relaxations backward._ ### Problem Statement Given a BNN \(f^{\boldsymbol{w}}\) trained on a dataset \(\mathcal{D}\), as common in the literature (Madry et al., 2017), for a generic test point \(x^{*}\), we represent the possible adversarial perturbations by defining a compact neighbourhood \(T\) around \(x^{*}\) and measure the changes in the BNN output caused by limiting the perturbations to lie within \(T\). **Definition 1** (Adversarial Robustness).: _Consider a BNN \(f^{\boldsymbol{w}}\), a compact set \(T\subset\mathbb{R}^{n_{0}}\), and input point \(x^{*}\in T\). For a given threshold \(\gamma>0\), \(f^{\boldsymbol{w}}\) is adversarially robust in \(x^{*}\) iff_ \[\forall x\in T,\quad\|\hat{y}(x)-\hat{y}(x^{*})\|_{p}\leq\gamma,\] _where \(\|\cdot\|_{p}\) is an \(\ell_{p}\) norm._ Definition 1 is analogous to the standard notion of adversarial robustness employed for deterministic neural networks (Katz et al., 2017) and Bayesian models (Patane et al., 2022). As discussed in Section 2.1, the particular form of a BNN's output depends on the specific application considered. Below, we focus on regression and classification problems. **Problem 1**.: _Let \(T\subset\mathbb{R}^{n_{0}}\) be a compact subset. 
Define functions \(I(y)=y\) and \(\text{softmax}(y)=\left[\text{softmax}^{(1)}(y),...,\text{softmax}^{(n_{K+1})}(y)\right]\). Then, for a BNN \(f^{\boldsymbol{w}}\), \(h\in\{I,\text{softmax}\}\), and \(i\in\{1,...,n_{K+1}\}\), compute:_ \[\begin{split}\pi_{\min}^{(i)}(T)&=\min_{x\in T}\mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h^{(i)}(f^{\boldsymbol{w}}\left(x\right))\right],\\ \pi_{\max}^{(i)}(T)&=\max_{x\in T}\mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h^{(i)}(f^{\boldsymbol{w}}\left(x\right))\right].\end{split} \tag{3}\] In the regression case (\(h=I\)), Problem 1 seeks to compute the ranges of the expectation of the BNN for all \(x\in T\). Similarly, in the classification case (\(h=\text{softmax}\)), Eqns. (3) define the ranges of the expectation of the softmax of each class for \(x\in T\). It is straightforward to see that these quantities are sufficient to check whether \(f^{\boldsymbol{w}}\) is adversarially robust for \(x\in T\); that is, if \(\sup_{x\in T}||\hat{y}(x)-\hat{y}(x^{*})||_{p}\leq\gamma\). **Remark 2**.: _Our method can be extended to other losses, i.e., other forms of \(h\) in Eqns. (3), as long as affine relaxations of \(h\) can be computed._

**Approach Outline.** Due to the non-convex nature of \(f^{\boldsymbol{w}}\) and possibly \(h\), the computation of \(\mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h(f^{\boldsymbol{w}}\left(x\right))\right]\) is analytically infeasible. To solve this problem, in Section 4, we view BNNs as stochastic dynamical systems evolving over the layers of the neural network. Through this, we show that adversarial robustness can be characterized as the solution of a dynamic programming (DP) problem. This allows us to break its computation into \(K\) simpler optimization problems, one for each layer. Each problem essentially queries a back-propagation of the uncertainty of the BNN through \(h\) and from one layer of the neural network to the next.
Due to the non-convex nature of the layers of the BNN, these problems still cannot be solved exactly. We overcome this difficulty by using convex relaxations. Specifically, in Section 5, we show that efficient PWA relaxations can be obtained by recursively bounding the DP problem. In Section 6, we combine the theoretical results into a general algorithm called BNN-DP that solves Problem 1 efficiently. ## 3 Preliminaries on Relaxations of Functions To propagate the uncertainty of the BNN from one layer to the other, we rely on upper and lower approximations of the corresponding Neural Network (NN), also known as _relaxations_. For vectors \(\check{x},\hat{x}\in\mathbb{R}^{n}\), we denote by \([\check{x},\hat{x}]\) the \(n\)-dimensional hyper-rectangle defined by \(\check{x}\) and \(\hat{x}\), i.e., \([\check{x},\hat{x}]=[\check{x}^{(1)},\hat{x}^{(1)}]\times[\check{x}^{(2)},\hat{x}^{(2)}]\times\ldots\times[\check{x}^{(n)},\hat{x}^{(n)}]\). We consider two types of relaxations, interval and affine. **Definition 2** (Interval Relaxation).: _An interval relaxation of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) over a set \(T\subseteq\mathbb{R}^{n}\) is a pair of vectors \(\check{b},\hat{b}\in\mathbb{R}^{m}\) such that \(f(x)\in[\check{b},\hat{b}]\) for all \(x\in T\)._ **Definition 3** (Affine Relaxation).: _An affine relaxation of a function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) over a set \(T\subseteq\mathbb{R}^{n}\) is a pair of affine functions \(\check{A}x+\check{b}\) and \(\hat{A}x+\hat{b}\) with \(\check{A},\hat{A}\in\mathbb{R}^{m\times n}\) and \(\check{b},\hat{b}\in\mathbb{R}^{m}\) such that \(f(x)\in[\check{A}x+\check{b},\hat{A}x+\hat{b}]\) for all \(x\in T\)._ Interval and symbolic arithmetic can be used to propagate relaxations through the layers of a NN. Let \([\alpha]_{+}\coloneqq\max\{\alpha,0\}\) and \([\alpha]_{-}\coloneqq\min\{\alpha,0\}\) represent the saturation operators on \(\alpha\).
For a vector or matrix, \([\cdot]_{+}\) and \([\cdot]_{-}\) represent the element-wise max and min, respectively. We adopt the notation of Liu et al. (2021) and write interval arithmetic w.r.t. a linear mapping \(M\) compactly as \(\otimes\), where \(M\otimes[\check{b},\hat{b}]\coloneqq[[M]_{+}\check{b}+[M]_{-}\hat{b},\,[M]_{+}\hat{b}+[M]_{-}\check{b}]\), and use similar notation for symbolic arithmetic. ## 4 BNN Certification via Dynamic Program As observed in Marchi et al. (2021), NNs and consequently BNNs can be viewed as dynamical systems evolving over the layers of the network. In particular, for \(k\in\{0,...,K\}\), Eqn. (1) can be rewritten as: \[z_{k+1}=\phi_{k+1}(\mathbf{W}_{k}(z_{k}^{T},1)^{T}) \tag{4}\] with initial condition \(z_{0}=x\). Since, in a BNN, weights and biases are random variables sampled from the approximate posterior \(q(\cdot)\), Eqn. (4) describes a non-linear stochastic process evolving over the layers of the NN. This observation leads to the following theorem, which shows that \(\mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h(f^{\boldsymbol{w}}\left(x\right))\right]\) can be characterized as the solution to a backward recursion DP problem. **Theorem 1**.: _Let \(f^{\boldsymbol{w}}\left(x\right)\) be a fully connected BNN with \(K\) hidden layers and \(h:\mathbb{R}^{n_{K+1}}\rightarrow\mathbb{R}^{l}\) be an integrable function. For \(k=0,...,K\), define functions \(V_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) backwards-recursively as:_ \[V_{K}(z)=\mathbb{E}_{\mathbf{W}_{K}\sim q(\cdot)}\left[h(\mathbf{W}_{K}( z^{T},1)^{T})\right], \tag{5a}\] \[V_{k-1}(z)=\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[V_{k}( \phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T}))\right].
\tag{5b}\] _Then, it holds that \(\mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h(f^{\boldsymbol{w}}\left(x\right))\right]=V_{0}(x)\)._ The proof of Theorem 1 is reported in Appendix A.1 and is obtained by induction over the layers of the NN, relying on the law of total expectation and on the independence of the parameter distributions at different layers.6 Figure 1 illustrates the backward-iteration scheme of Theorem 1 for a two-hidden-layer BNN. Starting from the last layer, value functions \(V_{k}\) are constructed according to Eqns. (5a) and (5b), describing how the output of layer \(k\) is transformed in the previous layers. Theorem 1 is a central piece of our framework as it allows one to break the computation of \(\mathbb{E}_{\boldsymbol{w}\sim q(\cdot)}\left[h(f^{\boldsymbol{w}}\left(x\right))\right]\) into \(K+1\) (simpler) sub-problems, one for each layer of the BNN. In fact, note that \(V_{k}\) is a deterministic function. Hence, all the uncertainty in \(V_{k-1}\) depends only on the weights of layer \(k-1\). This is a key point that we use to derive efficient methods to solve Problem 1. Nevertheless, we stress that since \(V_{k}(z)\) is obtained by propagating \(z\) over \(K-k\) layers of the BNN, it is still generally a non-convex function, whose exact optimisation is infeasible in practice. Consequently, we employ the following corollary, which guarantees that, to solve Problem 1, it suffices to recursively bound \(V_{k}\) following Eqns. (5a) and (5b). Footnote 6: While the vast majority of VI algorithms make this assumption, Theorem 1 can be generalized to the case where there is inter-layer correlation by marginalizing Eqns. (5a) and (5b) over partitions in correlated weight-spaces. **Corollary 1**.: _For \(k\in\{1,\ldots,K\}\), let functions \(\check{V}_{k},\hat{V}_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) be relaxations of \(V_{k}(z_{k})\), i.e., \(\forall z_{k}\in\mathbb{R}^{n_{k}},\check{V}_{k}(z_{k})\leq V_{k}(z_{k})\leq\hat{V}_{k}(z_{k})\).
Then_ \[\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[\check{V}_{k}(\phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T}))\right]\leq V_{k-1}(z)\leq\] \[\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[\hat{V}_{k}(\phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T}))\right].\] _Further, for \(i\in\{1,\ldots,l\}\), it holds that \(\pi^{(i)}_{\min}(T)\geq\min_{x\in T}\check{V}^{(i)}_{0}(x)\) and \(\pi^{(i)}_{\max}(T)\leq\max_{x\in T}\hat{V}^{(i)}_{0}(x)\)._

Figure 1: Illustration of the DP algorithm in Theorem 1 for a BNN with two hidden layers. Value functions \(V_{k}\) are mappings from the latent and input spaces of the BNN to the mean of the output distribution. For each mapping, the distribution and mapping of a single point is displayed in orange. Starting from the last hidden layer, we recursively compute PWA approximations of the mappings. The true mean of the BNN for all \(z_{2}\in\mathcal{Z}_{2}\) is in the green oval, which we over-approximate by the blue hexagon.

Corollary 1 allows us to recursively find relaxations of \(V_{k}\) via Theorem 1. In what follows, we focus on finding PWA relaxations \(\check{V}_{k}\) and \(\hat{V}_{k}\). To achieve that, there are two basic steps: (i) initialization of \(\check{V}_{k},\hat{V}_{k}\) via Eqn. (5a), and (ii) backward propagation of \(\check{V}_{k},\hat{V}_{k}\) through a hidden layer of the BNN via Eqn. (5b). In Section 5, we first show an efficient method to perform step (ii) and then focus on (i). ## 5 PWA Relaxation for Dynamic Programming Our goal in this section is to find PWA relaxations of \(V_{k}\). To do that, we first show how to propagate affine relaxations of the value function backwards through a single hidden layer of the BNN via Eqn. (5b), and then generalize this result to PWA relaxations. Note that, because the support of a BNN is generally unbounded, affine relaxations of Eqns. (5a) and (5b) lead to overly conservative results (an affine function would have to over-approximate a non-linear function over an unbounded set).
Thus, PWA relaxations are necessary to obtain tight approximations. Finally, in Subsection 5.3 we show how to compute relaxations for Eqn. (5a). ### Affine Value Functions For the sake of presentation, we focus on the upper bound \(\hat{V}_{k-1}\); the lower bound case follows similarly. Let \(\hat{V}_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) be an affine upper bound on \(V_{k}\). Then, by Corollary 1 and the linearity of expectation, it holds that \[\hat{V}_{k-1}(z)=\hat{V}_{k}(\mathbb{E}_{\mathbf{W}_{k-1}\sim q(\cdot)}\left[ \phi_{k}(\mathbf{W}_{k-1}(z^{T},1)^{T})\right]). \tag{6}\] Recall that here \(q\) is a Gaussian distribution (see Section 2.1). Hence, due to the closure of Gaussian random variables w.r.t. linear transformations, we can rewrite Eqn. (6) as: \[\hat{V}_{k-1}(z)=\hat{V}_{k}(\mathbb{E}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z); \text{ diag}(s_{k}(z)))}\left[\phi_{k}(\boldsymbol{\zeta})\right]), \tag{7}\] where \(m_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}\) and \(s_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}_{\geq 0}\) are defined component-wise as \[\begin{split}& m^{(i)}_{k}(z)=\mu_{w,k-1,i}(z^{T},1)^{T},\\ & s^{(i)}_{k}(z)=(z^{T},1)\Sigma_{w,k-1,i}(z^{T},1)^{T},\end{split} \tag{8}\] for all \(i\in\{1,\ldots,n_{k}\}\), with \(\mu_{w,k-1,i},\Sigma_{w,k-1,i}\) being the mean and covariance of the \(i\)th node of the \(k\)th layer. \(\text{diag}(s)\) denotes a diagonal matrix with the elements of \(s\) on the main diagonal. Note that Eqn. (7) reduces the propagation of the value function to the propagation of a Gaussian random variable (\(\boldsymbol{\zeta}\)) through an activation function (\(\phi_{k}\)). In Proposition 2, we show how this propagation can be achieved analytically for ReLU activation functions. Generalization to other activation functions is discussed in Remark 3. **Proposition 2**.: _For \(k\in\{1,\ldots,K\}\), let \(\hat{V}_{k}\) be an affine function and \(Z\subset\mathbb{R}^{n_{k-1}}\) be a compact set.
Define function \(r_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}_{\geq 0}\) as \(r_{k}(z)=\sqrt{s_{k}(z)}\), and let \(\check{r}_{k},\hat{r}_{k}:\mathbb{R}^{n_{k-1}}\rightarrow\mathbb{R}^{n_{k}}_{\geq 0}\) be an affine relaxation of \(r_{k}\) w.r.t. \(Z\). Further, define \(g:\mathbb{R}^{2}\rightarrow\mathbb{R}\) as_ \[g(\mu,\sigma)=\frac{\mu}{2}\left[1-\text{erf}\!\left(\frac{-\mu}{\sigma\sqrt{2}}\right)\right]+\frac{\sigma}{\sqrt{2\pi}}\,e^{-(\mu/\sigma\sqrt{2})^{2}},\] _and let \(\check{g}_{i},\hat{g}_{i}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be an affine relaxation of \(g\) w.r.t. \(\{(m^{(i)}_{k}(z),r^{(i)}_{k}(z))\mid z\in Z\}\). Then, for \(\check{A},\hat{A}\in\mathbb{R}^{n_{k}\times n_{k-1}}\) and \(\check{b},\hat{b}\in\mathbb{R}^{n_{k}}\) defined as, \(\forall i\in\{1,\ldots,n_{k}\}\),_ \[\check{A}^{(i,\cdot)}=[\nabla_{z}g(m^{(i)}_{k}(z),r^{(i)}_{k}(z))]_{z=z^{*}},\] \[\check{b}^{(i)}=g(m^{(i)}_{k}(z^{*}),r^{(i)}_{k}(z^{*}))-\check{A}^{(i,\cdot)}z^{*},\] \[[\,\cdot\,,\hat{A}^{(i,\cdot)}z+\hat{b}^{(i)}]=(m^{(i)}_{k},\check{r}^{(i)}_{k})^{T}\otimes[\check{g}_{i},\hat{g}_{i}],\] _with \(z^{*}\in Z\) and \(\nabla_{z}\) being the gradient w.r.t. \(z\), it holds that \(\forall z\in Z\), \(\mathbb{E}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z);\text{ diag}(s_{k}(z)))}\left[\hat{V}_{k}(\text{ReLU}(\boldsymbol{\zeta}))\right]\in\hat{V}_{k}\otimes[\check{A}z+\check{b},\hat{A}z+\hat{b}]\)._ The proof of Proposition 2 is based on the convexity of the expected value of a rectified Gaussian w.r.t. its mean and variance. The proof and detailed procedures for obtaining affine relaxations of \(g\) and \(r_{k}\) are reported in Appendix B.1. Next, we show how the result of Proposition 2 can be extended to PWA relaxations of the value functions. **Remark 3**.: _The results of Proposition 2 (as well as Propositions 4 and 5 below) extend to any continuous activation function \(\phi_{k}\).
That is, as shown in (Benussi et al., 2022), every continuous activation function can be under- and over-approximated by PWA functions \(\check{\phi}_{k},\hat{\phi}_{k}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{n_{k}}\) such that \(\check{\phi}_{k}\leq\phi_{k}\leq\hat{\phi}_{k}\). Consequently, \(\mathbb{E}\!\left[\check{\phi}_{k}(\boldsymbol{\zeta})\right]\leq\mathbb{E}\left[\phi_{k}(\boldsymbol{\zeta})\right]\leq\mathbb{E}\!\left[\hat{\phi}_{k}(\boldsymbol{\zeta})\right]\), which allows the extension of Proposition 2 from ReLU to general continuous \(\phi_{k}\)._ ### Piecewise Affine Value Functions For \(N\in\mathbb{N}\), let \(\mathcal{Z}_{k}=\{Z_{k,1},\ldots,Z_{k,N}\}\), with \(Z_{k,j}\subseteq\mathbb{R}^{n_{k}}\), be a partition of the support of \(f^{\boldsymbol{w}}_{0:k}\), and let \(\check{V}_{k,j},\hat{V}_{k,j}:\mathbb{R}^{n_{k}}\rightarrow\mathbb{R}^{l}\) be an affine relaxation of \(V_{k}(z_{k})\) w.r.t. \(Z_{k,j}\) for all \(j\in\{1,\ldots,N\}\), i.e., \(\forall z_{k}\in Z_{k,j}\), \(V_{k}(z_{k})\leq\hat{V}_{k,j}(z_{k})\) with \(\hat{V}_{k,j}(z_{k})\coloneqq\hat{A}_{k,j}z_{k}+\hat{b}_{k,j}\). Then, by Eqn. (5b) and the law of total expectation, we obtain an upper bound on \(V_{k-1}\): \[V_{k-1}(z)\leq\sum_{j=1}^{N}\hat{b}_{k,j}\underbrace{\mathbb{P}_{\boldsymbol{\zeta}\sim \mathcal{N}(m_{k}(z);\text{ diag}(s_{k}(z)))}\left[\boldsymbol{\zeta}\in Z_{k,j}\right]}_{9\text{a}}+ \tag{9}\] \[\hat{A}_{k,j}\underbrace{\bar{\mathbb{E}}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z);\text{ diag}(s_{k}(z)))}\left[\phi_{k}(\boldsymbol{\zeta})\mid\boldsymbol{\zeta}\in Z_{k,j}\right]}_{9\text{b}},\] where \(\bar{\mathbb{E}}_{\boldsymbol{\zeta}\sim p}\left[\boldsymbol{\zeta}\mid\boldsymbol{\zeta}\in Z\right]:=\mathbb{E}_{\boldsymbol{\zeta}\sim p}\left[\boldsymbol{\zeta}\mid\boldsymbol{\zeta}\in Z\right]\mathbb{P}_{\boldsymbol{\zeta}\sim p}\left[\boldsymbol{\zeta}\in Z\right]\). The lower bound on \(V_{k-1}\) follows similarly. Term 9a is simply the probability that a Gaussian random variable (\(\boldsymbol{\zeta}\)) is in a given set (partition element \(Z_{k,j}\)).
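Since the posterior yields a Gaussian with diagonal covariance at each layer, for an axis-aligned box this probability factorizes across coordinates into one-dimensional erf terms; a minimal sketch (the function name is ours):

```python
import math

def box_probability(m, s, lo, hi):
    """P[zeta in [lo, hi]] for zeta ~ N(m, diag(s)).

    m, s   : per-component mean and variance of the Gaussian layer output.
    lo, hi : corners of the hyper-rectangular partition element.
    """
    p = 1.0
    for mi, si, li, ui in zip(m, s, lo, hi):
        sd = math.sqrt(2.0 * si)
        # one-dimensional Gaussian mass of [li, ui] via the error function
        p *= 0.5 * (math.erf((ui - mi) / sd) - math.erf((li - mi) / sd))
    return p
```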
If the partition is hyperrectangular, in Lemma 3 we express Term 9a in closed form. **Lemma 3**.: _For \(k\in\{1,\ldots,K\}\) and \(\tilde{\zeta},\hat{\zeta}\in\mathbb{R}^{n_{k}}\), it holds that 7_ Footnote 7: A similar result holds for unbounded regions defined by a vector \(\tilde{z}\), that is, \(\boldsymbol{\zeta}\in[\tilde{z},\infty)\) or \(\boldsymbol{\zeta}\in(-\infty,\tilde{z}]\), as shown in Appendix B.2. \[\mathbb{P}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z);\;\text{diag}(s_{k}(z)))}\left[\boldsymbol{\zeta}\in[\tilde{\zeta},\hat{\zeta}]\right]= \tag{10}\] \[\frac{1}{2^{n_{k}}}\prod_{i=1}^{n_{k}}\left[\text{erf}\left(\frac{\hat{\zeta}^{(i)}-m_{k}^{(i)}(z)}{\sqrt{2s_{k}^{(i)}(z)}}\right)-\text{erf}\left(\frac{\tilde{\zeta}^{(i)}-m_{k}^{(i)}(z)}{\sqrt{2s_{k}^{(i)}(z)}}\right)\right]\] Term 9b is the conditional expectation of the random variable propagated through an activation function. The following proposition shows that we can decompose this term into expectations, which we can bound using the result of Proposition 2, and probabilities, for which Lemma 3 can be applied. **Proposition 4**.: _For \(k\in\{1,\ldots,K\}\), vectors \(\tilde{\zeta},\hat{\zeta}\in\mathbb{R}^{n_{k}}\), and \(\boldsymbol{\zeta}\sim\mathcal{N}\left(m_{k}(z);\;\text{diag}\;(s_{k}(z))\right)\), it holds that8_ Footnote 8: A similar relation can be obtained for \(\phi_{k}\) being the identity function, as shown in Appendix B.3.
\[\bar{\mathbb{E}}\left[\text{ReLU}\left(\boldsymbol{\zeta}\right)\mid\boldsymbol{\zeta}\in[\tilde{\zeta},\hat{\zeta}]\right]=\mathbb{E}\left[\text{ReLU}\left(\boldsymbol{\zeta}-[\tilde{\zeta}]_{+}\right)\right]-\mathbb{E}\left[\text{ReLU}\left(\boldsymbol{\zeta}-[\hat{\zeta}]_{+}\right)\right]+[\tilde{\zeta}]_{+}\,\mathbb{P}\left[\boldsymbol{\zeta}\in[[\tilde{\zeta}]_{+},\infty)\right]-[\hat{\zeta}]_{+}\,\mathbb{P}\left[\boldsymbol{\zeta}\in[[\hat{\zeta}]_{+},\infty)\right].\] Next, we show how these results can be extended to unbounded sets in the partition \(\mathcal{Z}_{k}\), i.e., to an unbounded support of \(f_{0:k}^{\boldsymbol{w}}\). **Unbounded Support.** If \(f_{0:k}^{\boldsymbol{w}}\) has an unbounded support, then there must necessarily be at least one unbounded region in the partition \(\mathcal{Z}_{k}\). While for this region we can still apply Lemma 3 to compute Term 9a, we cannot use Proposition 4 to compute a bound for Term 9b. Instead, we rely on Proposition 5 (below), which derives relaxations based on the fact that Gaussian distributions decay exponentially fast (thus, faster than a linear function grows). **Proposition 5**.: _For \(k\in\{1,\ldots,K\}\), \(i\in\{1,\ldots,n_{k}\}\), and vector \(\tilde{\zeta}\in\mathbb{R}^{n_{k}}\), it holds that9_ Footnote 9: A similar relation can be obtained for \(\phi_{k}\) being the identity function, as shown in Appendix B.4.
\[\frac{1}{2}[m_{k}^{(i)}(z)]_{-}\leq\bar{\mathbb{E}}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z);\;\text{diag}(s_{k}(z)))}\left[\text{ReLU}\left(\boldsymbol{\zeta}\right)^{(i)}\mid\boldsymbol{\zeta}\in[\tilde{\zeta},\infty)\right]\leq\frac{1}{2}[m_{k}^{(i)}(z)]_{+}+\sqrt{\frac{s_{k}^{(i)}(z)}{2\pi}}.\]

**Algorithm 1** Adversarial Robustness for Classification

### Relaxation of the Last Layer of BNN We show how to compute interval relaxations of Eqn. (5a). For the regression case (\(h=I\)), the process is simple since Eqn. (5a) becomes an affine function. That is, \(V_{K}(z)=m_{K}(z)\), where \(m_{K}(z)\) is as defined in Eqn. (8), and hence no relaxation is required. For classification, however, further relaxations are needed because \(h=\text{softmax}\), i.e., the output distribution of the BNN (the logit) is propagated through the softmax. The following proposition shows that an interval relaxation can be obtained by relaxing the distribution of \(\mathbf{W}_{K}(z^{T},1)^{T}\) by Dirac delta functions on the extremes of \(h\) for each set in the partition of the BNN's output. **Proposition 6**.: _For \(N\in\mathbb{N}\), let \(\{Z_{1},\ldots,Z_{N}\}\subseteq\mathbb{R}^{n_{K+1}}\) be a partition of \(\text{supp}(f^{\boldsymbol{w}}\left(x\right))\). Then, for \(i\in\{1,\ldots,n_{K+1}\}\) and \(\boldsymbol{w}\sim q(\cdot)\), it holds that_ \[\sum_{j=1}^{N}[\underset{\zeta\in Z_{j}}{\min}h^{(i)}(\zeta)]\mathbb{P}\left[f^{\boldsymbol{w}}\left(x\right)\in Z_{j}\right]\leq\mathbb{E}\left[h^{(i)}(f^{\boldsymbol{w}}\left(x\right))\right]\leq\] \[\sum_{j=1}^{N}[\underset{\zeta\in Z_{j}}{\max}h^{(i)}(\zeta)]\mathbb{P}\left[f^{\boldsymbol{w}}\left(x\right)\in Z_{j}\right].\] A particularly simple case is when there are only two sets in the partition of the BNN's output layer.
Then, the following corollary of Proposition 6 guarantees that, similarly to deterministic NNs (Zhang et al., 2018), we can determine adversarial robustness by simply looking at the logit. **Corollary 6**.: _Let \(\{[\tilde{\zeta},\hat{\zeta}],Z\}\) be a partition of \(\text{supp}(f^{\boldsymbol{w}})\subseteq\mathbb{R}^{n_{K+1}}\). Then, for \(i,j\in\{1,\ldots,n_{K+1}\}\) and \(\boldsymbol{w}\sim q(\cdot)\), it holds that_ \[e^{\hat{\zeta}^{(j)}}-e^{\tilde{\zeta}^{(i)}}+\left(\frac{1}{\mathbb{P}\left[f^{\boldsymbol{w}}(x)\in[\tilde{\zeta},\hat{\zeta}]\right]}-1\right)\sum_{l=1}^{n_{K+1}}e^{\hat{\zeta}^{(l)}}\leq 0\] \[\implies\mathbb{E}\left[\text{softmax}^{(j)}(f^{\boldsymbol{w}}(x))-\text{softmax}^{(i)}(f^{\boldsymbol{w}}(x))\right]\leq 0.\] ## 6 BNN-DP Algorithm We summarize our overall procedure to solve Problem 1 in an algorithm called BNN-DP. Algorithm 1 presents BNN-DP for the classification setting; the procedure for the regression setting follows similarly and is provided in Appendix D. Algorithm 1 consists of a forward pass to partition the latent space of the BNN (Lines 2-4), and a backward pass to recursively approximate the value functions via Eqns. (5a) and (5b) (Lines 7-10). The last layer of the BNN (Eqn. (5a)) is handled by the IBPSoftmax function in Line 7 using the results of Proposition 6. The BP function in Line 9 performs the back-propagation over the hidden layers of the BNN (Eqn. (5b)) using the results of Lemma 3 and Propositions 4 and 5. The detailed procedures of IBPSoftmax and BP can be found in Appendix D. In what follows, we describe how we partition the support of the latent space of the BNN, and discuss the computational complexity of BNN-DP. **Partitioning.** Recall that our results rely on hyperrectangular partitions. Hence, for each layer \(k\), we employ the following proposition to find a hyper-rectangular subset of the support of each layer that captures at least \(1-\epsilon\) of the probability mass of \(\text{supp}(f_{0:k}^{\boldsymbol{w}})\).
**Proposition 7**.: _For \(k\in\{1,\ldots,K\}\), let \(\epsilon\in[0,1]\) be a constant, and \(Z\subset\mathbb{R}^{n_{k-1}}\) be a compact set. Then, for vectors \(\tilde{\zeta}_{k},\hat{\zeta}_{k}\in\mathbb{R}^{n_{k}}\) defined such that \(\forall i\in\{1,\ldots,n_{k}\}\),_ \[\tilde{\zeta}_{k}^{(i)}=\min_{z\in Z}\left[\text{erf}^{-1}\left(-\eta\right)\sqrt{2s_{k}^{(i)}(z)}+m_{k}^{(i)}(z)\right], \tag{11}\] \[\hat{\zeta}_{k}^{(i)}=\max_{z\in Z}\left[\text{erf}^{-1}\left(\eta\right)\sqrt{2s_{k}^{(i)}(z)}+m_{k}^{(i)}(z)\right], \tag{12}\] _where \(\eta=(1-\epsilon)^{\frac{1}{n_{k}}}\), it holds that, \(\forall z\in Z\),_ \[\mathbb{P}_{\boldsymbol{\zeta}\sim\mathcal{N}(m_{k}(z);\text{ diag}(s_{k}(z)))}\left[\boldsymbol{\zeta}\in[\tilde{\zeta}_{k},\hat{\zeta}_{k}]\right]\geq 1-\epsilon.\] Here, the optimization problems in Eqns. (11) and (12) can be solved efficiently via, e.g., gradient-based methods. We denote the resulting region obtained via Proposition 7 as \(Z_{k,main}\subset\text{supp}(f_{0:k}^{\boldsymbol{w}})\). Then, \(Z_{k,main}\) can be further refined by interval splitting. **Computational Complexity.** As for linear bounding procedures for deterministic neural networks, see e.g. (Zhang et al., 2018), the cost of computing piecewise-affine relaxations of a BNN with \(K\) layers and \(n\) neurons per layer is polynomial in both \(K\) and \(n\). Refinement, which is not part of the main algorithm, has exponential cost in \(n\). In practice, however, in NNs and consequently in BNNs, only a few neurons are generally active, and those are the ones that most influence the posterior (Frankle & Carbin, 2018). Therefore, the refining procedure can focus only on these neurons. Because of this, in almost all the experiments in Section 7, only \(2\) regions in the partition per hidden layer were required to certify robustness, even in cases where the BNN had large posterior variance and thousands of neurons.
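For a single fixed \(z\) (i.e., \(Z=\{z\}\)), the main-region computation of Proposition 7 reduces to per-coordinate Gaussian quantiles, since \(\sqrt{2}\,\text{erf}^{-1}(\eta)\) equals the standard-normal quantile at \((1+\eta)/2\); a minimal sketch (names ours):

```python
from statistics import NormalDist

def main_region(m, s, eps):
    """Hyper-rectangle capturing at least 1 - eps of the mass of
    N(m, diag(s)), i.e. the construction of Proposition 7 specialized
    to a single point z with pre-activation means m and variances s."""
    n = len(m)
    eta = (1.0 - eps) ** (1.0 / n)               # per-coordinate mass
    q = NormalDist().inv_cdf((1.0 + eta) / 2.0)  # = sqrt(2) * erfinv(eta)
    lo = [mi - q * si ** 0.5 for mi, si in zip(m, s)]
    hi = [mi + q * si ** 0.5 for mi, si in zip(m, s)]
    return lo, hi
```

For a compact set \(Z\) of inputs, the same corners are additionally minimized/maximized over \(z\in Z\), as in Eqns. (11) and (12).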
## 7 Experimental Results We empirically evaluated BNN-DP on various regression and classification benchmarks. We ran our experiments on an AMD EPYC 7252 8-core CPU and trained the BNNs using Noisy Adam (Zhang et al., 2018) and variational online Gauss-Newton (Khan et al., 2018). We first validate the bounds obtained by BNN-DP for BNNs trained on samples from a 1D sine with additive noise (referred to as the 1D Noisy Sine). We then analyse a set of BNNs with various architectures trained on the 2D equivalent of 1D Noisy Sine and the Kin8nm dataset.10 The latter dataset contains state-space readings for the dynamics of an 8-link robot arm, and is commonly used as a regression task to benchmark BNNs (Hernandez-Lobato & Adams, 2015; Gal & Ghahramani, 2016). Last, we turn our attention to classification and evaluate BNNs trained on the MNIST, Fashion MNIST and CIFAR-10 datasets.11 Footnote 10: Available at [http://www.cs.toronto.edu/~delve](http://www.cs.toronto.edu/~delve). Footnote 11: Our code is available at [https://github.com/sjladams/BNN_DP](https://github.com/sjladams/BNN_DP).

Figure 2: Certified affine bounds on the mean of BNNs trained on the 1D Noisy Sine dataset w.r.t. the grey marked interval of input.

As a baseline for our experiments, we consider the state-of-the-art approach of Berrada et al. (2021), to which we refer as "FL". In fact, FL is the only existing method that can provide robustness certification for BNNs in similar settings as our BNN-DP. Nevertheless, we must remark that even FL is not fully formal; it works by truncating the Gaussian posterior distribution associated to each weight at a given multiple of its standard deviation (std), disregarding a large part of the posterior distribution. Hence, the returned bound is not sound over the full posterior but only over a subset of it. More importantly, the disregarded portion of the posterior grows exponentially with the number of weights of the network.
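This loss of coverage is easy to quantify: if each of \(d\) independent Gaussian weights is truncated at \(c\) standard deviations, the retained posterior mass is \(\text{erf}(c/\sqrt{2})^{d}\). A back-of-the-envelope sketch, where the layer sizes assumed below (a two-hidden-layer, 48-neuron network on an 8-dimensional input) are our own illustrative choice:

```python
import math

def truncated_mass(num_params, c):
    """Posterior mass retained when every independent Gaussian parameter
    is truncated to within +/- c standard deviations of its mean."""
    per_param = math.erf(c / math.sqrt(2.0))  # mass of [-c, c] under N(0, 1)
    return per_param ** num_params

# hypothetical parameter count for an 8 -> 48 -> 48 -> 1 network with biases
d = (8 * 48 + 48) + (48 * 48 + 48) + (48 * 1 + 1)
coverage = truncated_mass(d, 3.0)  # a small fraction of a percent
```

With these (assumed) layer sizes, truncation at 3 std covers well under 1% of the posterior, consistent with the sub-percent coverage discussed in the text.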
Already for a two-hidden-layer BNN with 48 neurons per layer, FL verifies only \(0.1\%\) of the BNN posterior when truncated at 3 std. Thus, the bounds computed by FL are optimistic and not mathematically guaranteed to hold. In contrast, not only does BNN-DP return formal bounds accounting for the whole posterior, but the benchmark results also show that the BNN-DP bounds are much tighter than the FL ones. ### Bound Validation We validate and qualitatively compare the bounds obtained by BNN-DP and FL on BNNs with 1 and 2 hidden layers trained on 1D Noisy Sine. The results of these analyses are reported in Figure 2. Visually, we see that BNN-DP is able to compute tight affine relaxations (blue lines) on the mean of the BNNs over the grey shaded intervals. In contrast, already in this simple scenario, and even when truncating the posterior distribution at just 1 std, FL returns as guaranteed output intervals \([-1.68,0.59]\) and \([0.23,0.71]\) for the 1 and 2 hidden layer BNN, respectively. Hence, even though FL disregards most of the BNN's posterior, BNN-DP still produces tighter bounds. When using 3 std, the FL interval bounds become even wider, that is \([-7.07,9.31]\) and \([-0.65,1.54]\) for the 1 and 2 hidden layer BNN, respectively. Intuitively, the major improvement of the bounds can be explained by the fact that, while BNN-DP directly averages the uncertainty of each layer by solving the DP in Theorem 1, FL solves an overall optimisation problem that at each layer considers the worst combination of parameters in the support of the (truncated) distribution, leading to conservative bounds. In fact, the bound computed by FL is looser in the one-hidden-layer case than in the two-hidden-layer one by an order of magnitude, precisely because of the higher variance of the former BNN compared to the latter. In what follows, we see that analogous observations apply to more complex deep learning benchmarks.
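A further sanity check is possible on the analytic core of the method: for a one-hidden-layer ReLU BNN, composing the rectified-Gaussian mean \(g\) of Proposition 2 with the output layer's mean gives the exact posterior mean, which plain Monte Carlo sampling should reproduce. A sketch with made-up parameters (all names and values are illustrative):

```python
import math
import numpy as np

def g(mu, sigma):
    """E[ReLU(z)] for z ~ N(mu, sigma^2), the function g of Proposition 2."""
    t = mu / (sigma * math.sqrt(2.0))
    return 0.5 * mu * (1.0 + math.erf(t)) \
        + sigma / math.sqrt(2.0 * math.pi) * math.exp(-t * t)

def exact_mean(x, mu0, var0, mu1):
    """Closed-form E[f^w(x)] for f^w(x) = W1 ReLU(W0 x), with independent
    Gaussian entries W0 ~ N(mu0, var0); the output is linear in W1, so
    only its mean mu1 matters."""
    m = mu0 @ x          # pre-activation means
    s = var0 @ (x ** 2)  # pre-activation variances
    return mu1 @ np.array([g(mi, math.sqrt(si)) for mi, si in zip(m, s)])

# Monte Carlo check of the same quantity
rng = np.random.default_rng(1)
x = np.array([0.5, -1.0])
mu0 = np.array([[1.0, 0.5], [-0.3, 0.8]])
var0 = np.full((2, 2), 0.2)
mu1 = np.array([2.0, -1.0])
W0 = rng.normal(mu0, np.sqrt(var0), size=(100_000, 2, 2))
mc = mu1 @ np.maximum(np.einsum('sij,j->si', W0, x), 0.0).mean(axis=0)
```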
### Regression Benchmarks We consider a set of BNNs with various architectures trained on the 2D Noisy Sine and Kin8nm regression datasets. To assess the certification performance of BNN-DP, we compute the difference between the upper and lower bounds on the expectation of the BNN, referred to as the \(\gamma\)-robustness, for its input in a \(\ell_{\infty}\)-norm ball of radius \(\epsilon\) centered at a sampled data point. Clearly, a smaller value of \(\gamma\) implies a tighter bound computation. Results, averaged over 100 randomly sampled test points, are reported in Table 1(a). For all experiments, BNN-DP greatly improves the value of \(\gamma\)-robustness provided by the FL baseline, by 1 to 4 orders of magnitude, with similar computation times. We also note that the larger the BNN is, the larger the improvement in the (tightness of the) bounds, which empirically demonstrates the superior scalability of BNN-DP. Figure 3 explicitly shows the impact of the model size and variance on the certified \(\gamma\)-robustness. For BNNs with 1 hidden layer, BNN-DP guarantees small \(\gamma\)-robustness (and hence tighter bounds) irrespective of the number of neurons as well as the amount of uncertainty. In contrast, as already observed for the 1D Noisy Sine case, FL is particularly impacted by the variance of the posterior distribution. For BNNs with two hidden layers, BNN-DP requires partitioning the latent space, which leads to a positive correlation between the value of \(\gamma\)-robustness and the number of hidden neurons. A similar, but more extreme, trend is also observed for FL.

Table 1: Comparison between BNN-DP and FL on various fully connected BNN architectures, with \(K\) being the number of hidden layers, and \(n_{hid}\) the number of neurons per layer. The results are the average over \(100\) test points, and the computation times are averaged over all architectures. The best values for each comparison are reported in bold.
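Before turning to classification, note that the class-invariance decision used there reduces to the two-set logit test of the corollary to Proposition 6; a sketch of the resulting decision rule (the function name is ours, and the inequality follows our reading of the corollary):

```python
import math

def certifies_dominance(lo, hi, p, i, j):
    """Sufficient test that E[softmax^(j)(f^w(x)) - softmax^(i)(f^w(x))] <= 0,
    given a logit box [lo, hi] that holds with posterior probability p.
    Conservative: returns True only when certification succeeds."""
    slack = (1.0 / p - 1.0) * sum(math.exp(h) for h in hi)
    return math.exp(hi[j]) - math.exp(lo[i]) + slack <= 0.0
```

The slack term accounts for the (at most \(1-p\)) posterior mass falling outside the certified logit box.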
### Classification Benchmarks We now evaluate BNN-DP on the MNIST, Fashion MNIST and CIFAR-10 classification benchmarks. In order to quantitatively measure the robustness of an input point \(x^{*}\), we consider the maximum radius \(\epsilon\) for which the decision on \(\ell_{\infty}\)-norm perturbations of \(x^{*}\) with radius \(\epsilon\) remains invariant. That is, any perturbation of \(x^{*}\) smaller than \(\epsilon\) does not change the classification output; hence, the larger \(\epsilon\) is in \(x^{*}\), the more robust the BNN is in that specific point. Results are reported in Tables 1(b) and 2. For the fully connected BNN architectures, BNN-DP is not only able to certify a substantially larger \(\epsilon\) than the baseline, but it also does so with orders-of-magnitude smaller computation times. This is because our approach uses interval relaxations (Proposition 6) to bound the softmax, whereas FL explicitly considers a non-convex optimization problem, which is computationally demanding. For the Bayesian CNN architectures, FL is able to certify a slightly larger \(\epsilon\), at the cost of orders-of-magnitude larger computation times. This can be explained by the decreasing support of the BNN posterior certified by FL for increasing network size, whereas the \(\epsilon\) certified by BNN-DP holds for the whole posterior. ## 8 Conclusion We introduced BNN-DP, an algorithmic framework to certify adversarial robustness of BNNs. BNN-DP is based on a reformulation of adversarial robustness for BNNs as the solution of a dynamic program, for which efficient relaxations can be derived. Our experiments on multiple datasets for both regression and classification tasks show that our approach greatly outperforms state-of-the-art competitive methods, thus paving the way for the use of BNNs in safety-critical applications. ## Acknowledgements This work was supported in part by the NSF grant 2039062.
2305.18896
Learning Off-Road Terrain Traversability with Self-Supervisions Only
Estimating the traversability of terrain should be reliable and accurate in diverse conditions for autonomous driving in off-road environments. However, learning-based approaches often yield unreliable results when confronted with unfamiliar contexts, and it is challenging to obtain manual annotations frequently for new circumstances. In this paper, we introduce a method for learning traversability from images that utilizes only self-supervision and no manual labels, enabling it to easily learn traversability in new circumstances. To this end, we first generate self-supervised traversability labels from past driving trajectories by labeling regions traversed by the vehicle as highly traversable. Using the self-supervised labels, we then train a neural network that identifies terrains that are safe to traverse from an image using a one-class classification algorithm. Additionally, we supplement the limitations of self-supervised labels by incorporating methods of self-supervised learning of visual representations. To conduct a comprehensive evaluation, we collect data in a variety of driving environments and perceptual conditions and show that our method produces reliable estimations in various environments. In addition, the experimental results validate that our method outperforms other self-supervised traversability estimation methods and achieves comparable performances with supervised learning methods trained on manually labeled data.
Junwon Seo, Sungdae Sim, Inwook Shim
2023-05-30T09:51:27Z
http://arxiv.org/abs/2305.18896v1
# Learning Off-Road Terrain Traversability with Self-Supervisions Only ###### Abstract Estimating the traversability of terrain should be reliable and accurate in diverse conditions for autonomous driving in off-road environments. However, learning-based approaches often yield unreliable results when confronted with unfamiliar contexts, and it is challenging to obtain manual annotations frequently for new circumstances. In this paper, we introduce a method for learning traversability from images that utilizes only self-supervision and no manual labels, enabling it to easily learn traversability in new circumstances. To this end, we first generate self-supervised traversability labels from past driving trajectories by labeling regions traversed by the vehicle as highly traversable. Using the self-supervised labels, we then train a neural network that identifies terrains that are safe to traverse from an image using a one-class classification algorithm. Additionally, we supplement the limitations of self-supervised labels by incorporating methods of self-supervised learning of visual representations. To conduct a comprehensive evaluation, we collect data in a variety of driving environments and perceptual conditions and show that our method produces reliable estimations in various environments. In addition, the experimental results validate that our method outperforms other self-supervised traversability estimation methods and achieves comparable performances with supervised learning methods trained on manually labeled data. Semantic scene understanding, deep learning for visual perception, vision-based navigation, autonomous vehicle navigation, field robots. ## I Introduction Recent advancements in visual perception enabled the success of fast-moving autonomous off-road vehicles. Estimating the traversability of the terrain with visual sensors is a crucial component of off-road driving. 
Numerous studies have made significant improvements in traversability estimation using large-scale datasets with human annotations and RGB images that provide semantically rich information about complex environments [1, 2]. However, the datasets only contain observations for a specific and limited context, resulting in unreliable estimates for unobserved conditions. To successfully adapt to new circumstances, frequent manual annotation is required, which is not only unsustainable but also erroneous. Due to the high cost of the data-labeling procedure, obtaining sufficient labeled data regarding the various environments would be challenging. The labels produced by human experts often provide inadequate information for learning traversability in complex environments, since the ground truth regarding traversable regions can not be clearly defined in off-road environments. In addition, the domain-specific annotations would lose their relevance in unfamiliar environments. For instance, various conditions, including places, seasons, weather, lighting, and camera settings, can significantly affect the visual appearance of an outdoor environment and the performance of estimations. Consequently, the vehicle cannot accurately predict traversability from images in a variety of situations if only static and constrained datasets are utilized. While it is impractical to manually annotate images of every single environment, labels on traversable regions can be automatically generated by exploiting the vehicle trajectories in a self-supervised fashion [3, 4, 5, 6, 7, 8]. Various works present the self-supervised approaches to learning traversability, which leverage self-supervised traversability labels instead of human-provided annotations [3, 4, 5, 6, 7, 8, 9]. However, they focus mostly on traversal cost analysis or terrain categorization in confined contexts, rather than identifying traversable regions reliably in diverse environments. 
In order for a vehicle to operate successfully and sustainably in a range of environments, it would be desirable that traversability is learned solely by utilizing the self-supervised traversability data. Nonetheless, learning traversability from self-supervised traversability data is challenging due to the following reasons. First, since the vehicle has no experience traversing non-traversable regions, no labels for non-traversable regions can be obtained. While supervised learning-based methods [10] learn to differentiate between regions with distinct labels, the self-supervised traversability data only contains labels for a single class. It lacks supervision for discriminating between traversable and non-traversable regions, leading to overconfident predictions [3]. Second, the self-supervised traversability label is incomplete. Only a small portion of the traversable regions are labeled by trajectories, leaving the remainder unlabeled, and some of the labels may be inaccurate due to occlusions of trajectories [8]. Lastly, while some methods leverage one-class classification methods for learning with the self-supervised label, they not only fail to produce a dependable prediction for off-road images but also do not conduct experiments in various environments. In this work, we propose a self-supervised traversability estimation method that learns traversability only from self-supervisions without explicit labels. We present an automated labeling process that can produce reliable self-supervised traversability labels on images by utilizing past vehicle trajectories.

Fig. 1: We present a traversability estimation method that can be trained without human annotation in various environments. _Top_: Self-supervised traversability data gathered on diverse environments. There exists a large variance in visual appearances. _Bottom_: Traversability estimation results of the model learned solely with the self-supervised traversability labels.
With the self-supervised labels, our algorithm learns traversability in off-road environments with complex distributions by leveraging the Positive-Unlabeled (PU) learning method [11] and the 2D normalizing flow [12]. Moreover, to compensate for the insufficient supervision of the self-supervised labels, we employ approaches for self-supervised learning of visual representations to obtain discriminative representations from images [13]. To demonstrate the efficacy of our method, we collect large-scale driving data under a variety of conditions, including terrain types, places, weather, seasons, and lighting conditions. We conduct extensive experiments using our dataset along with the public dataset, RELLIS-3D [2]. Our comprehensive quantitative and qualitative evaluations demonstrate that our method can effectively learn traversability in a wide range of unstructured and unknown environments.

## II Related Works

### _Traversability Estimation_

With developments in learning traversability, autonomous driving has made significant progress in urban and off-road environments [14]. Most early works on estimating traversability concentrate on analyzing simple geometric and visual features [15, 16]. With the development of deep neural networks, semantic segmentation is widely utilized to classify terrains into predefined terrain classes leveraging large datasets [17]. In addition, numerous approaches have been developed to identify traversable regions in unstructured environments [17, 18]. Fully convolutional networks for image segmentation [10] have significantly improved the off-road traversability estimation performance since images contain semantically rich and dense information about the off-road environments [19]. However, such methods heavily rely on training data, which leads to incorrect estimations when confronted with data from distributions not included in the training data [20].
The supervised learning methods may not generalize well to changing and unknown environmental circumstances [4]. For the widespread deployment of autonomous vehicles off-road, where the likelihood of encountering an unfamiliar context is considerable, the model should be capable of working reliably in various environments. ### _Self-Supervised Learning of Traversability_ For reliable traversability estimation in a wide range of environments, self-supervised approaches are proposed, which exploit a vehicle's driving experience to learn the traversability of a terrain [3, 4, 9, 21, 22]. These methods enable automated procedures to self-label visual data for learning traversability. For example, measurements from proprioceptive sensors are used to assess the traversal cost of terrain or to classify terrains. However, they either rely on manual labels or are oblivious to the fact that estimations in unseen environments can be unreliable. As labels on non-traversable regions cannot be acquired via self-labeling, the estimations are prone to over-confident predictions, which might lead to navigational failure [5]. Consequently, identifying traversable regions with reliability is a crucial problem. While our previous work [8] has shown that traversability can be learned using point clouds, the method for identifying the traversable region using images has not been exhaustively examined in a variety of environments. One-class classification algorithms can be employed to distinguish traversable and non-traversable regions [5, 6, 7, 23]. For example, normalizing flow [24] shows a great performance for traversability classification on multi-modal images [5]. However, it freezes the feature encoder after pretraining and encodes local patches, which do not incorporate global scene information. 
The features would simply capture low-level meanings, such as the color and texture of terrains, without their semantic information required to discriminate between traversable and non-traversable terrain [25]. In addition, an autoencoder is used to identify high-risk terrains based on the reconstruction error [6]. The simple autoencoder-based reconstruction focuses on low-frequency details, resulting in the estimation that high-frequency details are simply classified as non-traversable. The autoencoder is also known to generalize well to unseen data, resulting in large false-positive predictions in which well-reconstructed non-traversable regions are assigned a low anomaly score [26]. The one-class classification algorithm should be capable of extracting more discriminative features from images with self-supervised labels in order to reliably identify traversable regions in diverse off-road contexts. ### _Self-Supervised Learning of Visual Representations_ Instead of predicting human-annotated labels, approaches on self-supervised visual pre-training learn without labels by solving pretext tasks. The pretext tasks include the reconstruction of inputs, instance discriminations, and clustering with pseudo-labels [27, 28]. Most state-of-the-art methods are contrastive learning, in which the network is trained to attract positive sample pairs and repel negative sample pairs [13]. Due to the incomplete labeling of the self-supervised traversability data, the acquisition of highly discriminative features from the data is challenging. By complementing the short supervision of self-supervised traversability data with self-supervised learning of visual representations, the visual representation for learning traversability could be more discriminative. ## III Methods Our goal is to train a network that can successfully embed the complex data distribution of environments, allowing for precise and reliable estimation in a wide range of unstructured contexts. 
Given image **x**, we generate a self-supervised label \(\mathbf{\hat{y}}\) that does not require manual annotation. Then, we learn a model that estimates a pixel-wise traversability \(\mathbf{y}_{i}=f(\mathbf{x}_{i})\), where \(\mathbf{x}_{i}\) is an image pixel and \(\mathbf{y}_{i}\) is terrain traversability representing whether or not the terrain is traversable. ### _Self-Supervised Traversability Label_ From images gathered while driving, the self-supervised label \(\mathbf{\hat{y}}\) is generated by an automated procedure, as illustrated in Fig. 2. Since the regions traversed by a vehicle during data collection can be considered safe to traverse, we can designate such regions as traversable. The wheel-terrain contact points are calculated using the trajectories recovered by the SLAM [29]. The trajectories of horizon \(\alpha\) from time \(t_{i}\) are converted to contact points, denoted as \(\mathbf{T}(t_{i},t_{i+\alpha})\). Prior to the labeling, the contact points are filtered to eliminate false-positive labels. Since the past trajectory is projected onto the \(2D\) images, parts of the contact points can be occluded due to obstacles or rotations of the vehicle. Without filtering, the obstacles would be labeled as traversable, leading to a large number of false positives in estimations. Although these false-positive labels can be avoided by shortening the horizons, this shortens supervision for learning. The contact points are therefore filtered using LiDAR points captured simultaneously with the images. Similar to an occlusion filtering algorithm [6], a contact point is filtered as occluded if it has a longer radial distance than the nearest LiDAR points in spherical coordinates. However, numerous undesirable noises exist in LiDAR point clouds acquired under a variety of off-road conditions (e.g. dust, rain, snow). The noises can be regarded as an obstacle during filtering, which may hinder the effectiveness of filtering. 
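The radial-distance occlusion check described above can be sketched as follows. This is an illustrative NumPy version that bins LiDAR returns into spherical angular cells and keeps only the contact points that are no farther than the nearest return in their cell; the bin counts and `margin` tolerance are assumed values for illustration, not parameters taken from the paper:

```python
import numpy as np

def to_spherical(points):
    """Convert Nx3 Cartesian points to (radius, azimuth, elevation)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)
    el = np.arcsin(z / np.maximum(r, 1e-9))
    return r, az, el

def angular_bins(az, el, az_bins, el_bins):
    """Map angles to integer (azimuth, elevation) bin indices."""
    ai = np.clip(((az + np.pi) / (2 * np.pi) * az_bins).astype(int), 0, az_bins - 1)
    ei = np.clip(((el + np.pi / 2) / np.pi * el_bins).astype(int), 0, el_bins - 1)
    return ai, ei

def filter_occluded(contact_points, lidar_points, az_bins=360, el_bins=32, margin=0.5):
    """Keep contact points whose radial distance does not exceed the nearest
    LiDAR return in the same angular bin; farther points are treated as occluded."""
    r_l, az_l, el_l = to_spherical(lidar_points)
    ai, ei = angular_bins(az_l, el_l, az_bins, el_bins)
    nearest = np.full((az_bins, el_bins), np.inf)
    np.minimum.at(nearest, (ai, ei), r_l)  # nearest LiDAR range per cell
    r_c, az_c, el_c = to_spherical(contact_points)
    ai_c, ei_c = angular_bins(az_c, el_c, az_bins, el_bins)
    visible = r_c <= nearest[ai_c, ei_c] + margin
    return contact_points[visible]
```

A contact point hidden behind a closer LiDAR return in the same direction is dropped, which is the behavior the labeling pipeline relies on to avoid marking obstacles as traversable.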
For robust labeling in a variety of unstructured environments, unsupervised LiDAR denoising [30] is performed prior to occlusion filtering. With the denoised point cloud, the contact points are filtered, denoted as \(\mathbf{T}^{\prime}(t_{i},t_{i+\alpha})\). Finally, the contact points are projected into the camera coordinates to generate the self-supervised label of the PU type by the following equation: \[\mathbf{\hat{y}}=\mathbf{K}\cdot[\mathbf{R}|\mathbf{t}]\cdot\mathbf{T}^{\prime}(t_{i},t_{i+\alpha}) \tag{1}\] where \(\mathbf{K}\) and \([\mathbf{R}|\mathbf{t}]\) represent the intrinsic camera calibration matrix and the world-to-camera transformation matrix, respectively. On the image coordinates, pixels between the left and right wheel-terrain contact points are labeled as _positive_, while all other pixels are left _unlabeled_. Note that the label consists of a relatively small number of positive pixels, and the unlabeled pixels are a combination of traversable and non-traversable regions.

### _Learning Traversability_

In this section, we propose a method for learning traversability with self-supervised labels only. The overall architecture of our learning method is illustrated in Fig. 3.

#### III-B1 One-Class Classification

The image is forwarded to a feature encoder for image segmentation, denoted as \(f\), that maps images \(\mathbf{x}\in\mathbb{R}^{h\times w\times 3}\) into pixel-wise features \(\mathbf{z}\in\mathbb{R}^{h\times w\times d}\). Our backbone encoder is PSPNet [10], which can capture global context through the pyramid pooling module. The feature embedding space can be trained to minimize the volume of a positive-data-enclosing hypersphere. Then, the similarity metric between a feature and the center of the hypersphere, \(p(\mathbf{z}_{i})\in[0,1]\), can be used to determine the traversability of the \(i^{th}\) pixel. The simple one-class classification loss can be used for positive pixels: \[\mathcal{L}_{\text{OCC}}=1-p(\mathbf{z}_{i}).
\tag{2}\]

However, the representations of positive pixels have high intra-class variation, as there exist various representations of traversable regions in off-road environments. The loss function Eq. (2) assumes a single center and pushes dissimilar features towards it, thereby diminishing the discriminative power of the representations. Not only is the solution susceptible to hypersphere collapse, in which the majority of data can be trivially mapped to the hypersphere center, but it is also incapable of effectively capturing multimodal distributions. Normalizing flow [24] can be used to project complex distributions of features to a simple distribution while avoiding a trivial solution. The flow model, denoted as \(g\), transforms a pixel-wise feature \(\mathbf{z}_{i}\in\mathbb{R}^{d}\) into a flow feature \(\mathbf{z}_{i}^{F}\in\mathbb{R}^{d^{\prime}}\) with a tractable distribution using a bijective invertible mapping. We adopt the \(2D\) normalizing flow model with affine coupling layers, FastFlow [12], which produces more accurate features for segmentation. The likelihood of a flow feature can be simply defined as \(p(\mathbf{z}_{i}^{F})=\mathbf{z}_{i}^{F}\cdot\mathbf{C}_{p}\), which represents cosine similarity with the hypersphere center of positive data, \(\mathbf{C}_{p}\in\mathbb{R}^{d^{p}}\).

Fig. 2: Overview of our proposed automated procedure for self-supervised traversability label generation. (a) On captured data, the wheel-contact points are considered traversable. The occluded points are filtered before projection to image coordinates to eliminate false positive labels. (b) Noises in LiDAR points are regarded as obstacles, leading to erroneous filtering. (c) Unsupervised LiDAR denoising is performed to improve the efficacy of filtering in off-road conditions. (d) Trajectories are then projected to image coordinates and labeled as traversable.
Then, the traversability of an image pixel can be easily calculated using the change-of-variables formula: \[\log p(\mathbf{z}_{i})=\log p(\mathbf{z}_{i}^{F})+\log\left|\det\frac{\partial\mathbf{z}_{i}^{F}}{\partial\mathbf{z}_{i}}\right|, \tag{3}\] where the determinant of the Jacobian \(\frac{\partial\mathbf{z}_{i}^{F}}{\partial\mathbf{z}_{i}}\) can be calculated with affine coupling layers [24]. Intuitively, the Jacobian term penalizes trivial solutions that have constant mappings. However, the normalizing flow model cannot be trained end-to-end with a feature encoder, because doing so would produce a trivial backbone encoder while preventing a trivial flow model. In addition, the network does not utilize unlabeled data. The network would be overfitted to the distributions of traversable regions, limiting its ability to discriminate between traversable and non-traversable regions.

#### III-B2 Self-Supervised Clustering with Unlabeled Data

We use unlabeled data in a self-supervised manner to train the network end-to-end so that the network can learn better embeddings while avoiding trivial solutions. Motivated by the clustering-based self-supervised learning of visual representations [27, 28], our methodology solves the clustering pretext task. The flow features of unlabeled pixels are jointly self-labeled and clustered with a set of \(K\) learnable prototypes, \(\mathbf{P}\in\mathbb{R}^{K\times d^{p}}\), which function as cluster centers. By taking the softmax of the similarity between prototypes and unlabeled features, the posterior distribution of unlabeled pixels to prototypes, \(\mathbf{Q}\in\mathbb{R}^{K\times n_{u}}\), is computed, where \(n_{u}\) is the number of unlabeled pixels within a batch.
Then, the soft cluster assignment \(\mathbf{A}\in\mathbb{R}^{K\times n_{u}}\) from features to prototypes is computed by optimizing the following equation with an equipartition constraint: \[\max_{\mathbf{A}}\operatorname{Tr}(\mathbf{A}^{\intercal}\mathbf{Q})\quad\text{s.t.}\quad\mathbf{A}\cdot\mathbf{1}^{n_{u}}=\frac{n_{u}}{K}\cdot\mathbf{1}^{K} \tag{4}\] The constraint ensures that the prototypes equally partition the assignments, thereby preventing trivial solutions in which features are collapsed into equal representations [27]. This optimization problem can be efficiently solved with a few iterations of the _Sinkhorn-Knopp_ algorithm [31]. By minimizing the cross-entropy loss between the posterior distribution and the optimized cluster assignment, the features and prototypes are simultaneously updated: \[\mathcal{L}_{\text{CE}}=-\frac{1}{n_{u}}\sum_{k}^{K}\sum_{j}^{n_{u}}\mathbf{A}_{kj}\log(\mathbf{Q}_{kj}). \tag{5}\] However, the learned representations are still insufficient because the supervision of the self-supervised labels is restricted to a small portion of the entire traversable regions. In contrast to supervised learning methods, which are trained to explicitly distinguish traversable and non-traversable regions with full labels, the unsupervised clustering objective may attempt to learn simplistic features.

#### III-B3 Self-Supervised Contrastive Learning

To supplement the representation power of backbone features, the encoder \(f\) is simultaneously learned with the contrastive pretext task of self-supervised visual representation learning [13] alongside other objectives, in order to generate a more powerful visual representation for a given data distribution. Given \(N\) images in a minibatch, two random _views_ are generated for each image as a positive pair by random data augmentations, \(\tau\) and \(\tau^{\prime}\). The remaining \(2N-2\) augmented views of images within a minibatch are regarded as negative pairs.
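The equipartition-constrained assignment of Eq. (4) can be solved with a few Sinkhorn-Knopp iterations; the sketch below is an illustrative NumPy version whose iteration count and normalization order follow common SwAV-style practice, and is not necessarily the paper's exact implementation:

```python
import numpy as np

def sinkhorn_assign(Q, n_iters=3):
    """Soft cluster assignment under an equipartition constraint (cf. Eq. 4).

    Q: K x n matrix of exponentiated prototype-feature similarities.
    Returns A (K x n) whose columns sum to 1, with rows pushed towards
    equal total mass n/K by alternating row/column normalization."""
    A = Q / Q.sum()
    K, n = A.shape
    for _ in range(n_iters):
        A /= A.sum(axis=1, keepdims=True)  # each prototype row sums to 1
        A /= K                             # ...then to 1/K (equipartition)
        A /= A.sum(axis=0, keepdims=True)  # each sample column sums to 1
        A /= n                             # ...then to 1/n
    return A * n                           # rescale: columns sum to 1
```

The resulting `A` plays the role of the optimized assignment used as the target in the cross-entropy objective of Eq. (5).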
The augmentation comprises low-level image transformations. The contrastive feature of each view, \(\mathbf{z}^{C}\in\mathbb{R}^{d^{c}}\), is produced by forwarding the pixel-wise features into the contrastive projection head, denoted as \(c\). Then, we minimize the contrastive loss function for each data point with the cosine similarity as follows: \[\mathcal{L}_{\text{simclr}}=-\log\frac{\exp(\mathbf{z}^{C}\cdot\mathbf{z}_{+}^{C}/\lambda)}{\exp(\mathbf{z}^{C}\cdot\mathbf{z}_{+}^{C}/\lambda)+\sum_{\mathbf{z}_{-}^{C}}^{2N-2}\exp(\mathbf{z}^{C}\cdot\mathbf{z}_{-}^{C}/\lambda)} \tag{6}\] \(\lambda\) is a temperature hyperparameter, and \(\mathbf{z}_{+}^{C}\) and \(\mathbf{z}_{-}^{C}\) denote the features of positive and negative pairs, respectively. By minimizing the loss, the features of positive pairs are pulled together while those of negative pairs are pushed away. In a self-supervised manner, this regularizes the features to be more semantically meaningful and discriminative for traversability estimation.

Fig. 3: High-level structure of the proposed method. (a) Training. (b) Inference. (c) Illustration of the embedding space of flow features at training. The feature space is learned with the clustering pretext task, enhancing the discriminative power of features for the one-class classification.

## IV Experiments

In this section, we validate that our self-supervised traversability estimation method can effectively learn traversability in a wide range of environments. We first describe the dataset used for the evaluation, followed by the experimental setup as well as implementation details. Then, we present both quantitative and qualitative results of our traversability estimation method. Lastly, we present detailed ablation studies demonstrating that our self-supervised traversability estimation method is capable of learning traversability in a variety of environments and under appearance changes without human annotations.
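The contrastive objective of Eq. (6) above is an NT-Xent-style loss over \(2N\) view features. The sketch below is an illustrative NumPy version over a batch of paired, projection-head features; it is not the paper's training code:

```python
import numpy as np

def nt_xent_loss(views_a, views_b, temperature=0.1):
    """NT-Xent-style contrastive loss (cf. SimCLR / Eq. 6).

    views_a, views_b: N x d arrays; row i of each is one view of image i.
    Positive pairs are pulled together; the other 2N-2 views in the batch
    act as negatives."""
    z = np.concatenate([views_a, views_b], axis=0)           # 2N x d
    z = z / np.linalg.norm(z, axis=1, keepdims=True)         # l2-normalize
    sim = z @ z.T / temperature                              # scaled cosine sims
    n = views_a.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    mask = ~np.eye(2 * n, dtype=bool)                        # drop self-similarity
    logits = np.where(mask, sim, -np.inf)
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()
```

Aligned positive pairs with dissimilar negatives yield a small loss, while misaligned positives yield a large one, which is the behavior the regularization relies on.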
### _Datasets_

#### IV-A1 Driving Data Under Adverse Conditions

We collected driving data in a variety of environments using our platform [32] equipped with an RGB camera and a VLP-32 LiDAR. It comprises about \(20,000\) images gathered under a wide range of conditions, including varying places, seasons, weather, lighting, terrain types, obstacles, and lens conditions. According to the characteristics of the environments, we divide our data into five categories: _paved, unpaved, snowy, rainy_, and _night_. The _paved_ contains images of urban and rural areas with paved roads. The _unpaved_ includes images acquired while driving on unpaved off-roads, where obstacles, dust, and smoke are captured on camera and the drivable regions are less clear. The _snowy_ and _rainy_ categories consist of images taken in the context of snow and rain, with snowed surfaces and puddles, as well as frost and raindrops on the lens. The _night_ is composed of images obtained in dark areas with headlights on. For evaluation, 300 images per category are manually annotated by an expert. They are chosen from a subset of sequential images and excluded from training.

#### IV-A2 RELLIS-3D

We also present experimental results using the publicly available RELLIS-3D off-road dataset [2], which contains RGB camera images with pixel-level annotation. The self-supervised traversability labels are generated from the raw data using LiDAR-based SLAM [29]. The data is divided into a training set containing \(4,827\) images and a validation set with the remaining images. Although the annotation does not indicate which points are traversable, we define the _grass, dirt, asphalt, concrete_, and _mud_ classes as traversable and the _tree, pole, vehicle, object, person, fence, barrier, rubble_, and _bush_ classes as non-traversable.
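The traversable/non-traversable split above amounts to a per-pixel lookup over the semantic annotation. A minimal sketch follows; the class names come from the text, but the `id_to_name` mapping is hypothetical (the real dataset ships its own ontology), and pixels outside both sets are marked as ignored:

```python
import numpy as np

# Class-name split as stated in the text; integer IDs are illustrative only.
TRAVERSABLE = {"grass", "dirt", "asphalt", "concrete", "mud"}
NON_TRAVERSABLE = {"tree", "pole", "vehicle", "object", "person",
                   "fence", "barrier", "rubble", "bush"}

def to_binary_traversability(semantic_map, id_to_name):
    """Map a pixel-wise semantic annotation to binary traversability:
    1 = traversable, 0 = non-traversable, -1 = ignored at evaluation."""
    out = np.full(semantic_map.shape, -1, dtype=np.int8)
    for class_id, name in id_to_name.items():
        if name in TRAVERSABLE:
            out[semantic_map == class_id] = 1
        elif name in NON_TRAVERSABLE:
            out[semantic_map == class_id] = 0
    return out
```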
### _Experimental Setup_

First, we demonstrate that the model learned with our self-supervised traversability estimation methods is more effective than models trained in a fully supervised manner using datasets with human annotations. For the comparison, the PSPNet is trained in a supervised manner with the following datasets: _KITTI_ road detection, _RELLIS-3D_, and our labeled dataset of outdoor driving scenes (_Outdoor_). For the outdoor dataset, about 1K images of the paved and unpaved categories are randomly selected and manually labeled for supervised learning. The model is also trained with the aforementioned three datasets altogether (ALL). Then, we show that our method yields higher performances than other self-supervised traversability estimation algorithms. Our approach is compared with the method that uses normalizing flow [24] on top of the pre-trained backbone (_Real-NVP_) [5] and the method based on reconstruction-based anomaly detection with autoencoder (_AE Based_) [6].

### _Evaluation Metrics_

The Area Under the Receiver Operating Characteristic (AUROC) is used to quantitatively evaluate the methods. AUROC quantifies the likelihood of a positive sample having a higher normal score than a negative sample, and therefore evaluates the one-class classification algorithms regardless of the threshold. In addition, we evaluate our methods using the standard evaluation metrics of the KITTI road detection system. Maximum F1-measure (MaxF), average precision (AP), precision rate (PRE), recall rate (REC), false positive rate (FPR), and false negative rate (FNR) are the included metrics. Note that the four latter measures are obtained at the threshold of the maximum F1 measure.

### _Implementation Details_

We use PSPNet with ResNet50 [33] as a backbone embedding network for every method for fair comparisons. We use the flow model with eight transformation blocks composed of affine coupling layers.
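A stack of affine coupling blocks like the one just described evaluates the flow likelihood via the change-of-variables formula of Eq. (3). The sketch below is an illustrative NumPy version with fixed (untrained) parameters and a single coupling design, not the actual FastFlow implementation:

```python
import numpy as np

def affine_coupling_forward(z, scale_w, shift_w):
    """One affine coupling layer (cf. Real-NVP): the first half of z
    conditions an affine transform of the second half; the log-determinant
    of the Jacobian is the sum of the log-scales."""
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    s = np.tanh(z1 @ scale_w)           # log-scale, bounded for stability
    t = z1 @ shift_w                    # shift
    z2_f = z2 * np.exp(s) + t
    log_det = s.sum(axis=-1)
    return np.concatenate([z1, z2_f], axis=-1), log_det

def log_likelihood(z, layers, log_p_base):
    """Eq. (3): log p(z) = log p(z^F) + sum of per-layer log-dets."""
    total_log_det = 0.0
    for scale_w, shift_w in layers:
        z, ld = affine_coupling_forward(z, scale_w, shift_w)
        total_log_det = total_log_det + ld
    return log_p_base(z) + total_log_det
```

With eight `(scale_w, shift_w)` pairs in `layers`, this mirrors the eight-block configuration, while the base log-density stands in for the cosine-similarity likelihood of the paper.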
The contrastive head consists of adaptive average pooling and two MLP layers with a ReLU in the middle. Both the flow model and the contrastive head produce 128 dimensional vectors that are \(l_{2}\) normalized. Our models are trained for 60 epochs with a mini-batch size of 64, using the Adam optimizer with a learning rate of \(1e^{-3}\) and a polynomial learning rate decay. The sum of the means of the three losses (Eq. 2,5, and 6) is used as the objective of the optimization. The random data augmentation pipeline includes \(256\times 256\) pixel random cropping, flipping, random color jittering, random gray-scale conversion, gaussian blurring, rotation, and random perspective transformation. The number of learnable prototypes is set to 256 and they are randomly initialized by the normal distribution. We execute three iterations of the Sinkhorn-Knopp algorithm and set the temperature parameters \(\lambda\) as 0.1. For data labeling, we set horizons \(\alpha\) for self-supervised labels in Section III-A to 100, indicating that we utilized trajectories 10 seconds ahead of the image acquisition. ### _Experimental Results_ The Table I and Fig. 4 provide the quantitative and qualitative results for our dataset. In most of the categories, our method outperforms models trained with manual labels in a supervised way, implying that our methods can yield comparable or even greater results than a model trained with laborious manual annotations. In addition, combining multiple datasets for supervised learning does not seem to improve the traversability estimation performance on target distributions. These results demonstrate that distribution shifts have a significant impact on the performance of traversability estimation, suggesting the necessity of self-supervised traversability estimation methods for autonomous vehicles to operate effectively in widespread environments. 
Due to the fact that our method exploits self-supervised labels of the target distributions, the model can estimate traversability reliably in the presence of distributional shifts. Our method shows a significant margin compared to other self-supervised methods based on one-class classification. Note that the methods based on autoencoders produce a large number of false positives, meaning that the simple reconstruction-based anomaly detections fail to distinguish non-traversable regions. Ours produces fewer false negative occurrences than others, indicating that it obtains discriminative features for identifying traversable regions in complex off-road environments.

Fig. 4: Qualitative traversability estimation results for our dataset. The thresholds of the maximum F1 scores are used for the visualizations. The pixels where the estimated traversability exceeds the thresholds are colored green. More results are available in the multimedia material.

Table II and Fig. 5 illustrate the results for RELLIS-3D. Our method shows better performance than others and even yields comparable performance with models overfit with manual annotations of the RELLIS-3D.

### _Ablation Studies_

We present comprehensive ablation studies to examine the validity of each component of our methodology. We quantitatively verify the efficacy of the self-supervised traversability labeling, the PU learning algorithm with the normalizing flow, self-supervised clustering, and self-supervised contrastive learning. The ablations are trained using data from all categories of our dataset. The results are shown in Table III, and Fig. 6 shows the ROC curves of the results.

#### IV-F1 Self-Supervised Labels

First, to validate the efficacy of our self-supervised labeling algorithm, we compare the impacts of labels obtained without occlusion filtering, without LiDAR denoising, and with varying horizons.
The labels generated without occlusion filtering result in a high FPR, indicating that non-navigable regions, such as obstacles, are incorrectly estimated as traversable due to unreliable labels. The labels created without LiDAR denoising lead to a higher FNR, implying that the model is trained with less supervision because self-labels for traversable regions are misclassified as occluded due to LiDAR noise. Similarly, lowering the horizons of trajectories increases the FNR because it reduces the number of positive pixels in the labels.

#### IV-F2 PU Learning

Second, we replace the flow model with \(1D\) flow (Real-NVP) and simple MLP layers to evaluate the efficacy of flow models. We observe that \(2D\) normalizing flow produces better results compared to \(1D\) flow. It confirms that using \(2D\) normalizing flow is more effective for localizing traversable regions with pixel-wise features while avoiding a trivial solution. The model trained by replacing the flow model with MLPs results in low AUROC and high FPR, implying that the models produce a trivial solution without the normalizing flow. Also, the model is trained without the clustering loss in order to highlight the effectiveness of using unlabeled data. Without the loss for unlabeled pixels, Eq. (5), the performance severely diminishes as AUROC approaches 0.5. It indicates that the use of unlabeled data is essential for avoiding trivial solutions and learning more discriminative features about environments.

#### IV-F3 Self-Supervised Learning for Visual Representation

Then, we verify the efficacy of our self-supervised contrastive learning. The models are trained without contrastive learning and with the reconstruction pretext task. The model trained without contrastive learning exhibits a low REC, denoting that the image features are less relevant for learning traversability, as the self-labels cannot provide guidance regarding the distinction between traversable and non-traversable regions. The performance of the model learned concurrently with the reconstruction pretext task is improved, but still inferior to the model with contrastive loss. The reason for this is that the naive reconstruction objective tends to focus on texture and color rather than semantic meanings.

Fig. 5: Qualitative results for RELLIS-3D. Note that the ground truths are sometimes inaccurate and labeled for terrain classification rather than traversability. For example, some pixels on obstacles are labeled as traversable due to thin overhanging vegetation, and distant grass across fences is also labeled as traversable. Instead of merely assessing terrain type, our method identifies traversable regions that are contextually significant.

Fig. 6: ROC curves for (a) traversability estimation methods and (b) the ablation studies. The values indicate the AUROC of the results. Notably, our method yields low false positive rates compared to other methods, which is essential for safe navigation in unstructured environments.

## V Conclusions

This paper introduces a self-supervised traversability estimation method that can learn traversability in varied environments without manual annotations. Using the self-supervised labels only, the network is trained to predict traversable regions using a one-class classification method. Self-supervised learning of visual representations is incorporated into the learning in order to improve the network. Extensive experiments demonstrate that the proposed method is capable of learning traversability more effectively than others. We believe that our method can be leveraged for the wider deployment of autonomous vehicles since it is capable of easily adapting to a variety of contexts by precisely embedding the data distribution of target environments. Future work includes using labeled data with domain adaptation and semi-supervised learning.
Also, we are investigating incremental learning and online learning for a more general model that can be used in various environments.
2307.07117
When Conversations Turn Into Work: A Taxonomy of Converted Discussions and Issues in GitHub
Popular and large contemporary open-source projects now embrace a diverse set of documentation for communication channels. Examples include contribution guidelines (i.e., commit message guidelines, coding rules, submission guidelines), code of conduct (i.e., rules and behavior expectations), governance policies, and Q&A forum. In 2020, GitHub released Discussion to distinguish between communication and collaboration. However, it remains unclear how developers maintain these channels, how trivial it is, and whether deciding on conversion takes time. We conducted an empirical study on 259 NPM and 148 PyPI repositories, devising two taxonomies of reasons for converting discussions into issues and vice-versa. The most frequent conversion from a discussion to an issue is when developers request a contributor to clarify their idea into an issue (Reporting a Clarification Request -35.1% and 34.7%, respectively), while agreeing that having non actionable topic (QA, ideas, feature requests -55.0% and 42.0%, respectively) is the most frequent reason of converting an issue into a discussion. Furthermore, we show that not all reasons for conversion are trivial (e.g., not a bug), and raising a conversion intent potentially takes time (i.e., a median of 15.2 and 35.1 hours, respectively, taken from issues to discussions). Our work contributes to complementing the GitHub guidelines and helping developers effectively utilize the Issue and Discussion communication channels to maintain their collaboration.
Dong Wang, Masanari Kondo, Yasutaka Kamei, Raula Gaikovina Kula, Naoyasu Ubayashi
2023-07-14T01:46:43Z
http://arxiv.org/abs/2307.07117v1
# When Conversations Turn Into Work: A Taxonomy of Converted Discussions and Issues in GitHub

###### Abstract

Popular and large contemporary open-source projects now embrace a diverse set of documentation for communication channels. Examples include contribution guidelines (i.e., commit message guidelines, coding rules, submission guidelines), code of conduct (i.e., rules and behavior expectations), governance policies, and Q&A forum. In 2020, GitHub released Discussion to distinguish between communication and collaboration. However, it remains unclear how developers maintain these channels, how trivial it is, and whether deciding on conversion takes time. We conducted an empirical study on 259 NPM and 148 PyPI repositories, devising two taxonomies of reasons for converting discussions into issues and vice-versa. The most frequent conversion from a discussion to an issue is when developers request a contributor to clarify their idea into an issue (_Reporting a Clarification Request -35.1% and 34.7%, respectively_), while agreeing that having non actionable topic (_QA, ideas, feature requests -55.0% and 42.0%, respectively_) is the most frequent reason of converting an issue into a discussion. Furthermore, we show that not all reasons for conversion are trivial (e.g., not a bug), and raising a conversion intent potentially takes time (i.e., a median of 15.2 and 35.1 hours, respectively, taken from issues to discussions). Our work contributes to complementing the GitHub guidelines and helping developers effectively utilize the Issue and Discussion communication channels to maintain their collaboration.

Keywords: Communication Channels, GitHub Discussion, Empirical Study

## 1 Introduction

Contemporary open-source projects nowadays employ a plethora of communication channels to facilitate knowledge sharing and sustain the community around them (Storey et al., 2016).
Complementary studies have shown that GitHub projects tend to adopt multiple communication channels and they are used to both capture new knowledge and update existing knowledge (Tantisuwankul et al., 2019; Vale et al., 2020). As part of a community, the management of communication and collaboration channels has led to the increase in the sophistication of documentation standards such as contribution guidelines (e.g., commit message guidelines, coding rules, submission guidelines), code of conduct (e.g., rules and behavior expectations), governance policies, and Q&A forum. Launched in 2020, GitHub Discussion is a community forum that serves as an asynchronous communication channel. The intended usage is for open-source communities, developer teams, and companies to ask questions, share ideas, and build connections with each other, all within the same GitHub platform. The first exploratory study by Hata et al. (2022) showed that Discussion is considered useful by developers and plays a crucial role in advancing the development of projects, uncovering several reasons for using GitHub Discussion mentioned in initial discussions (e.g., question-answering, community engagement, etc.). Formally, Discussion is designed as a tool to share questions, ideas, conversations, requests for comment (RFC), resource planning, and community engagement. This is different from the more traditional bug and code review systems such as GitHub Issue, which tracks executable pieces of work with a defined start and end point, including new features, fixing bugs, general updates, and tracking for epics and sprints, among other things. Used together, GitHub claims that one benefit is that a developer can reference a Discussion in an issue as background and context for a piece of work, while converting an issue into a discussion could be due to the lack of information and decisions needed to complete a task.
Although GitHub provides guidelines2, it remains unclear how developers maintain this interplay between communication and actionable collaboration, how trivial the conversion is, and whether or not it takes time to decide on a conversion.

Footnote 1: [https://github.blog/2020-05-06-new-from-satellite-2020-github-codespaces-github-discussions-securing-code-in-private-repositories-and-more/](https://github.blog/2020-05-06-new-from-satellite-2020-github-codespaces-github-discussions-securing-code-in-private-repositories-and-more/)

Footnote 2: [https://resources.github.com/devops/process/planning/discussions/](https://resources.github.com/devops/process/planning/discussions/)

In this paper, we investigate the maintenance between communication and actionable collaboration by analyzing how developers decide between Discussion and Issue in real-world projects. Hence, we conduct an empirical study to mine the conversions between them from 259 NPM repositories and 148 PyPI repositories. Three research questions are formulated to guide this study:

* **RQ1: What is the reason of converting a discussion to an issue?** _Motivation:_ Although the prior work (Hata et al., 2022) explored the adoption of GitHub Discussion in terms of the usage and perception by developers, it is still unclear how contributors choose between the two communication channels. Specifically, Hata et al. studied the appearance of issue links and demonstrated that GitHub Discussions play a role in moving development forward by triggering new issues, but the intentions behind such triggers are not yet revealed. Answering this RQ would help the project to better maintain the communication channel and mentor newcomers to find a way to make contributions. _Results:_ Four reasons for suggesting a transition from discussions to issues are identified from a manual classification of a total of 331 samples, including Reporting a Bug, External Repository, Reporting a Clarification Request, and Reporting an Enhancement.
Reporting a Clarification Request is the most common reason, with 35.1% and 34.7% of instances being classified for the NPM and the PyPI, respectively.

* **RQ2: What is the reason of converting an issue to a discussion?** _Motivation:_ As one strategy for moderating discussions, developers are allowed to convert an issue to a discussion. Hata et al. (2022) shows that around 18% of discussions are converted from issues. However, the reason behind this conversion is not widely explored. Answering this RQ would help contributors further understand the proper position of the issue tracker system and avoid unnecessary burdens for engineering developers. _Results:_ Through a manual classification of a total of 433 samples, seven reasons are identified for converting issues to discussions. Non actionable topic is the most frequent reason, with 55.0% and 42.0% of instances being classified for the NPM and the PyPI, separately. Furthermore, Question-answering is the main non actionable topic.

* **RQ3: How long does it take to raise a conversion intent?** _Motivation:_ We would like to explore whether deciding on a conversion from discussions to issues and vice versa takes time or not (i.e., discussion length and spent time). Answering this RQ would provide a potential future venue for an automatic classifier proposal to identify the appropriate topics between GitHub Issue and Discussion. _Results:_ The quantitative analysis shows that few posts are involved until the conversion intent is raised, i.e., the median is one for both studied conversion kinds. However, raising the conversion intent could potentially take time. For instance, 15.2 hours and 35.1 hours (median) are taken until the conversion intent is raised from issues to discussions for the NPM and the PyPI, respectively.

Based on our empirical study results, we provide the following implications for the stakeholders.
For _maintainers and contributors_, we find that our devised taxonomy complements the GitHub guidelines and further helps them decide when to contribute to each channel and reduce unnecessary conversions. Meanwhile, Discussion could be considered as a means to attract and onboard potential new contributors. We recommend that project maintainers clearly state the submission rules in their README or contribution guidelines to avoid the inconsistent use of communication channels. We also suggest that contributors, especially newcomers, start from the Discussion for uncertain problems (e.g., reporting potential bugs). For _researchers_, we provide the future direction of automatic classifier needs, including the detection of duplication between Issue and Discussion, the identification of non actionable topics from Issue, and bug-related topics from Discussion.

The remainder of this paper is organized as follows: Section 2 illustrates an example to motivate this study. Section 3 describes the repository selection and the discussion extraction. Section 4 presents the approach to the proposed questions. Section 5 shows the research results. Section 6 discusses the insights from our findings. Section 7 discusses the threats to validity and Section 8 presents the related work. Finally, we conclude the paper in Section 9.

## 2 Motivating Example

As pointed out by the survey feedback in the work of Hata et al. (2022), developers face the problem of topic duplication between Discussions and Issues.

Figure 1: GitHub issue is converted into a Discussion thread (pixijs #7680).

Inspired by their work, we searched for some anecdotal evidence to understand the conversion process from Discussion to Issue and vice-versa. Figure 1 shows an example from pixijs #7680 where a GitHub issue was converted into the Discussion channel. As shown in the figure, the issue was initially entitled "process interactive bug on hover".
To confirm whether it was a bug or not, the author additionally provided an error message screenshot and the environment. Then, a conversation consisting of 22 posts took place among four developers. Midway through the conversation, three developers investigated the causes and claimed that they had found a way to resolve the raised issue. At the same time, they raised another potential problem around the original issue, "the issue seems to be bound to our codebase, not reproducible on a clean project", and they further discussed this regarding a conflict of package versions. At the end of this conversation, the maintainer suggested that this issue had turned more into support and raised the intention of converting it into a discussion thread. In total, around two days passed in this conversion process between the issue creation time (5 Aug 2021) and the conversion time (7 Aug 2021). Although the work of Hata et al. (2022) highlighted the challenge of choosing appropriate channels from the survey perspective, in-depth empirical analysis of how developers are using this new feature in real life is still lacking. Inspired by this example, we hypothesize that (i) _the conversion process between Issue and Discussion may not be trivial_, and (ii) _the conversion process may take a long period in terms of discussion length and time_.

## 3 Studied Datasets

In this section, we describe the process of preparing the studied datasets, including repository selection and discussion extraction.

Figure 2: The overview of dataset preparation.

Footnote 4: [https://graphql.org/](https://graphql.org/) Footnote 5: [https://github.com/sbaltes/github-retriever/](https://github.com/sbaltes/github-retriever/)

#### Repository Selection

To answer our proposed RQs, we perform an empirical study on the NPM and PyPI package ecosystems.
We selected the NPM and PyPI package ecosystems because (I) they are two of the largest package collections hosted on the GitHub platform and have been widely studied in recent work (Abdalkareem et al., 2017; Cogo et al., 2019; Chinthanet et al., 2021), and (II) inspired by the work of Hata et al. (2022), 38% of web libraries and frameworks (i.e., the highest proportion) have adopted the Discussion beta feature. Similar to previous work (Chinthanet et al., 2021), we referred to the listing of NPM packages from the NPM registry and then matched them to the projects that are available on GitHub. For the PyPI ecosystem, we relied on the open-source discovery service Libraries.io3 and the PyPI registry listing. We assume that more active and well-maintained package repositories are more likely to adopt the Discussion feature. Thus, we filtered and collected the repositories based on their contributor number (i.e., more than 100 contributors), resulting in 1,255 and 510 distinct repositories from the NPM and the PyPI, respectively. Then, we used the GraphQL API4 to determine whether or not these repositories had already adopted the Discussion feature. In the end, 263 NPM package repositories and 148 PyPI package repositories had introduced GitHub Discussion by the end of March 2022.

Footnote 3: [https://libraries.io/](https://libraries.io/)

#### Discussion Extraction

For the remaining 411 repositories, we then used the GraphQL API to retrieve all discussions, including metadata such as discussion title, body, author, created time, and whether the discussion has a selected answer or not. For each post inside a discussion, we collected its body text, author, timestamp, and whether the post is an answer or not. Moreover, for each post, we also collected its nested replies, including reply body, reply time, and reply author.
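The retrieval described above can be sketched as a query against the GitHub GraphQL API. This is a sketch, not the authors' script: the owner, repository, and token arguments are placeholders, the exact field selection in the paper is not given, and the field names (`discussions`, `comments`, `replies`, `isAnswer`) follow GitHub's public GraphQL schema.

```python
import json
import urllib.request

# Minimal GraphQL document mirroring the metadata collected in the paper:
# discussion title/body/author/created time, posts (comments), and nested replies.
QUERY = """
query($owner: String!, $name: String!, $cursor: String) {
  repository(owner: $owner, name: $name) {
    discussions(first: 100, after: $cursor) {
      pageInfo { hasNextPage endCursor }
      nodes {
        title
        body
        createdAt
        author { login }
        answer { id }  # non-null if the discussion has a selected answer
        comments(first: 100) {
          nodes {
            body
            createdAt
            author { login }
            isAnswer
            replies(first: 100) {
              nodes { body createdAt author { login } }
            }
          }
        }
      }
    }
  }
}
"""

def build_request(owner, name, token, cursor=None):
    """Build an HTTP request for the GitHub GraphQL endpoint (not sent here)."""
    payload = json.dumps(
        {"query": QUERY, "variables": {"owner": owner, "name": name, "cursor": cursor}}
    ).encode()
    return urllib.request.Request(
        "https://api.github.com/graphql",
        data=payload,
        headers={"Authorization": f"bearer {token}", "Content-Type": "application/json"},
    )
```

Paging via `pageInfo.endCursor` would be repeated until `hasNextPage` is false, which is how a full dump of all discussions per repository can be collected.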
After this, we were able to obtain 33,716 discussions from 259 NPM repositories (the remaining four repositories have the Discussion feature but no discussions) with 68,695 individual posts and 69,593 replies to these posts, by April 2022. For the PyPI ecosystem, 12,662 discussions were retrieved from 148 repositories with 28,753 individual posts and their 29,001 replies. We regard this dataset as Dataset I, as shown in Figure 2. Since we would like to understand the characteristics and reasons of those discussion threads that are converted from issues, we rely on the custom web scraper5 provided by Hata et al. (2022) to determine whether a discussion thread was converted from an existing Issue. Specifically, we provide the standard input as required (repository names and their owners), and the scraper (i.e., an HTML parser) automatically outputs the related attributes of discussion threads, including a binary value indicating whether the thread was converted. Table 1 shows the statistical summary of our studied datasets. As shown in the table, 5,689 discussion threads (16.8%) and 3,337 discussion threads (26.4%) are converted from existing issues for the NPM and the PyPI, separately, denoted as Dataset II.

## 4 Approach

In this section, we describe the approach for our proposed RQs.

### RQ1 Analysis

To answer RQ1: What is the reason of converting a discussion to an issue?, we analyze a discussion that is suggested to be converted into an issue by looking at the conversion reason, using content analysis, one of the most broadly used qualitative data analysis methods (Stemler, 2000).

Identifying discussions that are converted to issues. Since no datasets, tools, or classifiers are available in the literature for this purpose, we first have to identify discussions that are suggested to be converted to issues.
To do so, we applied a semi-automatic method to identify such discussions from 46,388 discussion threads (33,716 and 12,622 discussion threads for the NPM and PyPI, respectively) and their posts (Dataset I), using a list of keywords. To ensure that the keyword list is sufficient, we first manually inspected a group of 100 discussions that contain issue-related links and picked out potential indicator words. Based on these observations and our knowledge, we refined the keywords and came up with the following keyword list: <open, move, convert, create, please, transfer, submit, file, bug, issue>, taking case sensitivity into account. Then we used this keyword list to match the discussion posts, resulting in 15,126 discussion threads (7,751 and 7,375 discussion threads for the NPM and PyPI, respectively) that contained at least one keyword in their posts. Next, the first author manually validated these posts to identify the true positives where the discussion thread is suggested to be converted to an issue. Finally, 595 discussion threads (374 and 221 discussion threads for the NPM and the PyPI, respectively) were collected as shown in Table 2.

\begin{table} \begin{tabular}{l r r} \hline \hline & NPM & PyPI \\ \hline Studied period & \(\sim\) 2022.04 & \\ \# Package repositories & 259 & 148 \\ \# Discussion threads & 33,716 & 12,622 \\ \# Avg. Discussion threads per repository & 130 & 85 \\ \# Discussion posts & 68,695 & 28,753 \\ \# Discussion threads converted from Issues & 5,689 (16.8\%) & 3,337 (26.4\%) \\ \# Avg. Discussion threads converted from Issues & 22 & 23 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of Studied Datasets.

Representative dataset construction. Similar to the prior work (Wang et al., 2021; Chouchen et al., 2021), we then drew a statistically representative sample and the required sample size was calculated so that our conclusions
about the reasons of converted discussion threads would be generalized to all discussion threads in the same bucket with a confidence level of 95% and a confidence interval of 5. Thus, we randomly selected 190 and 141 discussion threads from the NPM and the PyPI that are suggested to be converted into issues to conduct the subsequent analysis, as shown in Table 2.

Footnote 6: [https://www.surveysystem.com/sscalc.htm](https://www.surveysystem.com/sscalc.htm)

Manually Coding Reasons. We performed a manual analysis to investigate the reasons behind those discussions that are suggested to be filed as issues. The manual analysis was conducted in multiple rounds, similar to prior work (Hata et al., 2019). In the first round, the first two authors opened a round-table discussion on classifying twenty randomly selected discussion threads from the sample, and constructed an initial coding guide. Note that to classify the reasons, we not only relied on the specific discussion comments, but also referred to the context of the whole discussion threads and sometimes tracked back to the opened issues, in order to obtain a comprehensive understanding. To validate the coding guide and check whether any codes were missing, in the second round, the first two authors independently coded another twenty discussion threads. After this round, we found that the constructed coding guide fit the sample and no new codes occurred. Similar to prior work (Wang et al., 2021; Chouchen et al., 2021), we then calculated the Kappa agreement of this iteration between the two authors across four codes for these twenty discussions. The score of the free-marginal Kappa agreement is 0.80, interpreted as almost perfect agreement (McHugh, 2012). Based on these encouraging results, the first author then coded the rest of the samples.
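The two quantitative ingredients of this step can be sketched as follows. The sample sizes follow the standard formula behind the calculator referenced in footnote 6 (Cochran's formula with a finite-population correction, assuming p = 0.5 and z = 1.96 for 95% confidence), and the agreement measure is Randolph's free-marginal kappa, whose chance agreement is 1/k for k categories. The 17-of-20 agreement count below is purely illustrative, not the authors' actual figure.

```python
def sample_size(population: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Required sample size at confidence z and margin of error e,
    with a finite-population correction (as in the calculator of footnote 6)."""
    ss = (z * z * p * (1 - p)) / (e * e)                 # Cochran's formula
    return round(ss / (1 + (ss - 1) / population))       # finite-population correction

def free_marginal_kappa(observed_agreement: float, num_categories: int) -> float:
    """Randolph's free-marginal kappa: chance agreement is 1/k when raters
    are free to assign any category."""
    chance = 1.0 / num_categories
    return (observed_agreement - chance) / (1.0 - chance)

# Reproduces the RQ1 representative sample sizes: 374 -> 190 (NPM), 221 -> 141 (PyPI).
print(sample_size(374), sample_size(221))  # 190 141
# Illustration only: agreement on 17 of 20 samples across 4 codes gives a kappa of 0.8.
print(round(free_marginal_kappa(17 / 20, 4), 2))  # 0.8
```

The same formula reproduces the RQ2 sample sizes reported later (562 → 228 and 438 → 205), which suggests these are the assumptions the authors used.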
### RQ2 Analysis

To answer RQ2: What is the reason of converting an issue to a discussion?, we analyze a discussion that is converted from an issue in terms of the reasons behind such a conversion, through the same content analysis as in RQ1.

\begin{table} \begin{tabular}{l r r r r} \hline \hline & \multicolumn{2}{c}{Dis. into issues (RQ1)} & \multicolumn{2}{c}{Dis. from issues (RQ2)} \\ \cline{2-5} & NPM & PyPI & NPM & PyPI \\ \hline \# validated Discussions & 7,751 & 7,375 & 1,156 & 523 \\ \# satisfied Discussions & 374 & 221 & 562 & 438 \\ \# representative samples & 190 & 141 & 228 & 205 \\ \hline Total & & 331 & & 433 \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of Representative Dataset. # of validated Dis. refers to the number of discussions that are retrieved with the keyword list. # of satisfied Dis. denotes the number of discussions that satisfy the criteria after the manual validation.

Representative dataset construction. To understand the reason for this kind of discussion conversion, similar to RQ1, we perform a manual analysis on a statistically representative sample of our discussion dataset (Dataset II), where the discussion threads are automatically identified as converted from issues or not by the crawler tool. Through an exploration of ten randomly selected samples, we observed that around 70% of these discussions do not imply explicit reasons in their post context. However, our study interest is to find the reasons, and we would like to avoid potential subjective bias. Thus, we then applied a filter to retrieve those discussion threads that could plausibly provide the reasons. To do so, we used the keyword "discussion" to narrow the data scope, since we assume that this keyword could indicate the Discussion feature, resulting in 1,156 and 523 discussion threads for the NPM and the PyPI, separately.
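The keyword screening used in both RQ1 and RQ2 can be sketched as a simple substring filter over discussion posts (a sketch: the paper does not publish its script, and the matching granularity is an assumption; the paper notes that case sensitivity was taken into account for the RQ1 keywords).

```python
# Keyword lists from the paper: RQ1 uses ten indicator words (Section 4.1);
# RQ2 narrows the scope with the single keyword "discussion".
RQ1_KEYWORDS = ["open", "move", "convert", "create", "please",
                "transfer", "submit", "file", "bug", "issue"]

def match_any(posts, keywords):
    """Return True if at least one post contains at least one keyword."""
    return any(kw in post for post in posts for kw in keywords)

# A thread whose post asks to "open an issue" passes the RQ1 filter;
# a thread with no indicator words does not.
print(match_any(["Could you please open an issue?"], RQ1_KEYWORDS))  # True
print(match_any(["Looks good to me!"], RQ1_KEYWORDS))                # False
```

Threads passing this filter were then manually validated, which is why the counts drop sharply from 15,126 keyword matches to 595 true positives in RQ1.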
Then, the first author manually inspected these discussion threads to validate whether or not their comments imply the conversion between discussions and issues. We noticed that existing old issues can also be converted into discussion threads. We argue that these existing old issues are out of our scope, as we are interested in the instances where issues are converted due to the adoption of the Discussion feature. Therefore, we further excluded the converted issues that were submitted before the GitHub Discussion feature was first introduced (i.e., May 2020). In the end, 562 and 438 discussion threads from the NPM and the PyPI satisfied the criteria. To be consistent with the manual analysis in RQ1, we drew a statistically representative sample. To reach a confidence level of 95% and a confidence interval of 5, we then randomly sampled 433 discussion threads (228 and 205 threads for the NPM and the PyPI, respectively) from the bucket. Table 2 shows the summary of the studied dataset in RQ2.

Manually Coding Reasons. We classify the reasons for converting issues to discussions, using the statistically representative samples (433 discussion threads). Our initial codes were informed by the taxonomy of reasons for using GitHub Discussions mentioned in initial discussions, such as question-answering, idea sharing, information resource building, and so on (Hata et al., 2022). We refer to their taxonomy since it is closely relevant to our study scope, and the taxonomy is validated by a systematic process. To test whether or not the existing taxonomy fits our cases, in the first iteration, we randomly selected twenty samples and classified them into the available codes between the first two authors. After the classification, an open discussion was conducted between the two authors, and we observed that the existing taxonomy does not fit well. We then refined the coding schema by modifying certain codes and adding new codes.
To validate our refined coding schema, another twenty samples were selected and the first two authors independently classified them. After the second iteration, we found that there existed new codes that were not covered in our coding schema. Then, an open discussion was held to discuss these new codes and further polish our coding schema by merging in the new codes. To ensure that no new codes emerged and to further evaluate the coding schema, in the third iteration, we selected another twenty samples, and the first two authors independently coded them again. After the third iteration, no new codes occurred and we found that the coding schema fit the samples well. In total, 60 samples were used to establish our coding schema of reasons for converting issues to discussions. Note that our annotation is based on the whole issue conversation and the guidelines do not allow for multiple categories. We then evaluated the inter-rater level of agreement between the two raters across nine reasons, relying on the Kappa agreement. The Kappa agreement score is 0.72 (i.e., substantial agreement). Encouraged by this result, the first author then manually coded the rest of the 374 samples. We classify those instances that do not fit the above codes into Others. The above codes were then merged into cohesive groups that can be represented by a similar subcategory (i.e., Question-answering, Idea sharing, and Feature request are merged into Non Actionable Topic), by conducting an open discussion among the four authors of this paper.

### RQ3 Analysis

To answer RQ3: How long does it take to raise a conversion intent?, we perform a quantitative analysis on the manually labeled representative samples that are constructed from RQ1 and RQ2.
We define two metrics to conduct our statistical analysis, in order to understand the process of raising a conversion intent from discussions to issues and vice versa, as shown below:

* _Raising time (# hours):_ The duration from the time when the discussion or issue is submitted to the time when a post first suggests that the discussion or issue should be converted.
* _Number of posts until the raising time:_ The number of posts that are submitted until the first conversion intent is raised. Note that it includes the post in which the conversion suggestion is provided. For instance, in the motivating example presented in Section 2, the twenty-second post suggests that the issue topic should be converted to a discussion. Thus, in this case, we count _Number of posts until the raising time_ as 22.

To ensure that the identified post is indeed the first to raise the conversion intent, the first author manually validated 1,243 posts (740 and 503 posts for the NPM and the PyPI, respectively) and 1,938 posts (892 and 1,046 posts for the NPM and the PyPI, respectively) from the studied discussions that are suggested to be converted into issues (i.e., 331 samples in RQ1) or the discussions that are converted from issues (i.e., 433 samples in RQ2). We then measure these two metrics for those labeled discussion threads that are either converted from issues or converted to issues. Meanwhile, to further understand the effect of different reasons, we test the hypothesis that _'Raising time and the number of posts until the conversion intent are significantly different among reasons of the conversion'_. To statistically confirm the significant differences, we use the Kruskal-Wallis H test (Kruskal and Wallis, 1952). This is a non-parametric statistical test to be used when comparing two or more categories. In our study, there are four and seven main categories for RQ1 and RQ2, separately.
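The two per-thread metrics defined above can be computed directly from a thread's timeline. The sketch below uses a hypothetical post structure (the field names are not the authors' data schema) and reproduces the motivating example, where the 22nd post raises the intent roughly two days after the issue was opened.

```python
from datetime import datetime

def conversion_metrics(thread_created: datetime, posts: list):
    """Return (raising time in hours, number of posts until the intent),
    where posts are ordered by time and flagged if they raise the intent."""
    for index, post in enumerate(posts, start=1):
        if post["raises_conversion_intent"]:
            hours = (post["created"] - thread_created).total_seconds() / 3600
            return hours, index
    return None  # no conversion intent raised in this thread

# Hypothetical timeline for pixijs #7680: 21 ordinary posts, then the
# intent-raising post two days after creation.
opened = datetime(2021, 8, 5, 12, 0)
posts = [{"created": datetime(2021, 8, 5, 13, 0), "raises_conversion_intent": False}] * 21
posts.append({"created": datetime(2021, 8, 7, 12, 0), "raises_conversion_intent": True})
print(conversion_metrics(opened, posts))  # (48.0, 22)
```

The medians reported in RQ3 are then taken over these per-thread values, grouped by conversion direction and reason category.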
The advantage of applying non-parametric statistical methods is that they make no assumptions about the distribution of the data (Hecke, 2012). In addition, we invoke a Mann-Whitney test (Mann and Whitney, 1947) to examine any significant difference for each category pair between the NPM and PyPI ecosystems.

## 5 Results

In this section, we present the results of the empirical study.

### Conversion from Discussions to Issues (RQ1)

To answer RQ1, we analyze (I) the reasons for converting discussions to issues, and (II) the frequency of these reasons across our studied samples. Below, we first provide representative examples for each reason type, and then we discuss the frequency of the reasons (Table 3 and Table 4).

**Taxonomy of Reasons.** Four reasons for converting discussions to issues are classified through our qualitative analysis, which is described in Section 4.1:

_(I) Reporting a Bug._ This category refers to the reason where the discussion post indicates that the discussion topic describes a bug. In this category, the keyword "bug" is usually left in the post, suggesting that the submitted post is explicitly related to a bug. For instance, in Ex 1, one collaborator pointed out that the discussion topic (entitled "prisma generate creates a package-lock.json") was indeed a bug and encouraged the author to open an issue in the repository, along with sharing the directory structure.

Footnote 7: [https://github.com/prisma/prisma/discussions/10488](https://github.com/prisma/prisma/discussions/10488) Footnote 8: [https://github.com/facebook/docusaurus/discussions/6099](https://github.com/facebook/docusaurus/discussions/6099)

Ex 1 **Collaborator:** This doesn't sound right and it's definitely a **bug**. Could you please **open an issue** and share your directory structure in the monorepo and which directory you are running prisma generate from? Ideally, even a Git repository with minimal example that mimics your monorepo layout and triggers this bug.
Thanks!

_(II) External Repository._ This category emerges by grouping discussion threads in which the post indicates that the discussion topic should not be in the current repository, but instead should be opened as an issue in another repository. We observe that in this category, the appropriate repository to refer to is always specifically provided by the collaborator. As shown in Ex 2, a collaborator from the docusaurus repository posted a comment to make the author aware that the openapi plugin faces incompatibility issues with the latest change, and suggested that the author open an issue in the plugin-related repository.

Ex 2 **Collaborator:** Hi, it seems the openapi plugin is using the docusaurus internals which is not compatible with the latest change. Please **open an issue** in **their repo** telling them that the constants have been moved to @docusaurus/utils.

_(III) Reporting an Enhancement._ This category denotes the reason where the post indicates that an idea, feature request, or warning should be highlighted in an issue for awareness. For example, in Ex 3, an author submitted a discussion to ask questions regarding the prefer-destructuring rule applied to undestructurable code. One collaborator suggested that the author use eslint-disable, and also encouraged the author to file an issue to enhance this (i.e., ignore these cases by default).

Ex 3 **Collaborator:** you can just use eslint-disable. :) it sounds a reasonable **enhancement** to ignore these cases by default (or behind an option). can you **file an issue**, thanks!
Footnote 9: [https://github.com/eslint/eslint/discussions/14669](https://github.com/eslint/eslint/discussions/14669)

_(IV) Reporting a Clarification Request._ We define this category as the post indicating that the topic stated in the discussion thread is not clear or lacks details, and that an issue is suggested in order to better understand the problem (e.g., add reproducible examples) or to better track how the problem evolves. We show the following two representative examples to describe this reason. In Ex 4 (Footnote 10), the collaborator was unsure about the proposed error message. To further clarify this, the collaborator suggested that the author create an issue with a small reproduction. In another example, Ex 5 (Footnote 11), to investigate whether the problem resulted from mixed versions, the collaborator encouraged the author to open an issue.

Footnote 10: [https://github.com/gatsbyjs/gatsby/discussions/32147](https://github.com/gatsbyjs/gatsby/discussions/32147)

Footnote 11: [https://github.com/Automattic/mongoose/discussions/10516](https://github.com/Automattic/mongoose/discussions/10516)

Ex 4 **Collaborator:** i'm **unsure** if we can get a better error message but could you maybe **create an issue** with a small reproduction?

Ex 5 **Collaborator:** please **open an issue** and follow the issue template. it looks like you **might** have mixed versions of mongoose and mongodb.

**Frequency of Reasons.** We now examine the common reasons for converting discussions to issues. Table 4 presents the distribution of the reason categories in our studied NPM and PyPI repositories. We observe that _Reporting a Clarification Request_ is the most common reason for both package ecosystems, with 66 (35.1%) and 49 (34.7%) discussion threads being classified, respectively.
Such a result indicates that the discussions that should be converted to issues are more likely to result from uncertain discussion topics (e.g., potential bugs), which require further clarification with issue tracker support. The two second most popular reasons for the NPM repositories are _Reporting a Bug_ and _External Repository_: 44 discussion threads were manually classified for each, accounting for 23.4% of instances. For the PyPI repositories, the next most frequent reason is _Reporting an Enhancement_, with 26.9% of instances being classified.

**RQ1 Summary.** Four reasons are classified behind converting discussions to issues, including Reporting a Bug, External Repository, Reporting a Clarification Request, and Reporting an Enhancement. Reporting a Clarification Request is the most common reason for the two studied ecosystems (NPM and PyPI), accounting for 35.1% and 34.7% of instances, respectively.

\begin{table} \begin{tabular}{l c c} \hline \hline & **NPM** & **PyPI** \\ \hline (I) Reporting a Bug & **44 (23.4\%)** & **32 (22.7\%)** \\ \hline (II) External Repository & **44 (23.4\%)** & **22 (15.6\%)** \\ \hline (III) Reporting an Enhancement & **34 (18.1\%)** & **38 (26.9\%)** \\ \hline (IV) Reporting a Clarification Request & **66 (35.1\%)** & **49 (34.7\%)** \\ \hline \hline \end{tabular} \end{table} Table 4: Frequency of reasons for converting discussions to issues.

\begin{table} \begin{tabular}{l l} \hline \hline \multicolumn{1}{c}{**Category**} & **Description** \\ \hline (I) Reporting a Bug & The comment indicates that the discussion topic describes a bug. \\ \hline (II) External Repository & The comment indicates that the discussion topic should not be reported in the current repository, but instead should be reported in another repository (e.g., dependent repository).
\\ \hline (III) Reporting an Enhancement & The comment indicates that an idea, a feature request, or a warning needs to be highlighted in the issue. \\ \hline (IV) Reporting a Clarification Request & The comment indicates that to trigger more details and further confirm the problems (e.g., potential bugs), an issue is needed to follow up the discussion. \\ \hline \hline \end{tabular} \end{table} Table 3: Descriptions of reasons for converting discussions to issues.

### Conversion from Issues to Discussions (RQ2)

To answer RQ2, we analyze (I) the reasons for converting issues to discussions, and (II) the popularity of the classified reasons. We now illustrate the reasons using representative instances and discuss the frequency results (Table 5 and Table 6).

**Taxonomy of Reasons.** Seven reasons for converting issues to discussions are classified through our qualitative analysis, which is described in Section 4.2:

_(I) Non Actionable Topic._ This category relates to the reason where the topic proposed in the issue is not actionable and does not fit the issue scope. The category includes three sub-codes: _Question-answering_, _Feature requests_, and _Idea sharing_. For example, in Ex 1, the author submitted an issue to ask _"DataStore: How to handle a partial sync up to AppSync?"_. However, the maintainer commented that this topic was more of a general question that should be in the discussion area. Thus, we classify its reason as _Question-answering_. In another example, Ex 2, the author proposed an issue to report a problem regarding query result sharing (i.e., _Unable to select Query X in pane_), attaching screenshots showing the unexpected results. However, a collaborator pointed out that this issue should be converted into a discussion as a feature request.
Footnote 11: [https://github.com/aws-amplify/amplify-js/discussions/8106](https://github.com/aws-amplify/amplify-js/discussions/8106)

Footnote 12: [https://github.com/grafana/grafana/discussions/46356](https://github.com/grafana/grafana/discussions/46356)

Footnote 13: [https://github.com/logaretm/vee-validate/discussions/3723](https://github.com/logaretm/vee-validate/discussions/3723)

Ex 1 **Maintainer:** Hey Thanks for raising this! After reading over this issue, it appears to be **more of a general question** or topic for discussion and will be labeled as such. This is to differentiate it from the other types of issues and make sure it receives the attention it deserves. ...

Ex 2 **Collaborator:** Hello @<username>, thanks for reporting this. The feature is working as intended as it allows to share all the results in a panel, also as described in the docs. I do agree however that it may be an interesting feature. I'm converting this to a discussion where we **track feature requests**.

_(II) Invalid Issues._ This category merges two cohesive codes (i.e., _Lack of description information_ and _Not follow the template_), referring to the reason where the reported issue lacks sufficient information for other developers to investigate. Ex 3 illustrates a scenario regarding _Lack of description information_. As we can see, the maintainer could not fully understand the issue from the provided code alone and suggested that the author create a minimal live example. Furthermore, this issue was then moved to a discussion due to its invalidity.

Ex 3 **Maintainer:** Please **create a minimal live example** on codesandbox, I can't guess the issue from this code alone. Also moved this to discussions since this doesn't satisfy an "issue" report.

_(III) Not a Bug._ This denotes the reason where the author proposes a bug report, while the developers do not agree that it is a bug.
For instance, in Ex 4 (Footnote 15), the author followed the issue template, including the unexpected behavior and reproduction steps, to report a potential bug related to password reset. However, the maintainer did not agree and argued that the behavior was by design rather than a bug, and changed the issue to a discussion.

Footnote 15: [https://github.com/keycloak/keycloak/discussions/8988](https://github.com/keycloak/keycloak/discussions/8988)

Ex 4 **Maintainer:** Changing this to a discussion as this is **not a bug** and is by design. Recover password is used in scenarios when a user has forgotten the password, as such should not invalidate other sessions. In fact that is pretty common practice. If a user suspect the account has been compromised they will update the credentials through the account console, which gives the option to logout existing sessions.

_(IV) Further Discussion._ This category refers to the reason where the issue requires further confirmation or additional feedback from GitHub Discussions. As shown in Ex 5 (Footnote 16), to further validate the cause of the problem _"SerialPort.write(): callback never called"_, the maintainer moved the issue to the discussion system.

Footnote 16: [https://github.com/serialport/node-serialport/discussions/2287](https://github.com/serialport/node-serialport/discussions/2287)

Ex 5 **Maintainer:** I'm going to convert this to our new discussion system until we **confirm** some sort of bug

_(V) Already Fixed._ This category denotes the reason where the comment indicates that the issue has already been raised in either the issue list or a discussion thread. As shown in Ex 6 (Footnote 17), the maintainer noted that the build-related pitfalls had been addressed by adding a doc page, and then converted the existing issue into a discussion thread.
Footnote 17: [https://github.com/gatsbyjs/gatsby/discussions/31283](https://github.com/gatsbyjs/gatsby/discussions/31283)

Ex 6 **Maintainer:** Since we **added** a doc page about these pitfalls and there's **nothing to "fix"** here I'll move this to discussion :)

_(VI) External Repository._ This reason refers to the case where the comments indicate that the issue is not raised in the appropriate place. For instance, in Ex 7, the author proposed an issue concerning a task tracker app in the create-react-app repository. However, the collaborator considered that the issue stemmed from the CRA tool, and the issue was subsequently converted.

\begin{table} \begin{tabular}{l r r} \hline \hline & **NPM** & **PyPI** \\ \hline (I) Non Actionable Topic & **121 (55.0\%)** & **86 (42.0\%)** \\ Question-answering & 76 (34.5\%) & 51 (24.8\%) \\ Idea sharing & 24 (10.9\%) & 20 (9.8\%) \\ Feature request & 21 (9.5\%) & 15 (7.3\%) \\ \hline (II) Invalid Issues & **19 (8.6\%)** & **34 (16.6\%)** \\ Lack of description & 17 (7.7\%) & 32 (15.5\%) \\ Not follow the template & 2 (0.9\%) & 2 (1.0\%) \\ \hline (III) Not a Bug & **45 (20.5\%)** & **38 (18.4\%)** \\ \hline (IV) Further Discussion & **10 (4.5\%)** & **17 (8.3\%)** \\ \hline (V) Already Fixed & **7 (3.1\%)** & **6 (2.9\%)** \\ \hline (VI) External Repository & **13 (5.9\%)** & **7 (3.4\%)** \\ \hline (VII) Information Storage & **5 (2.2\%)** & **17 (8.3\%)** \\ \hline \hline \end{tabular} \end{table} Table 6: Frequency of reasons for converting issues to discussions.

\begin{table} \begin{tabular}{l l} \hline \hline **Category** & **Description** \\ \hline (I) Non Actionable Topic & The comment indicates that the topic \\ Question-answering & proposed in the issue is not actionable \\ Idea sharing & and does not fit the issue scope.
\\ Feature request & \\ \hline (II) Invalid Issues & The comment indicates that the reported \\ Lack of description & issue lacks sufficient information for the \\ Not follow the template & developers to investigate. \\ \hline (III) Not a Bug & The comment indicates that the bug proposed by \\ & the author does not get recognized by developers. \\ \hline (IV) Further Discussion & The comment indicates that the issue requires \\ further confirmation or additional feedback from & GitHub discussion. \\ \hline (V) Already Fixed & The comment indicates that the issue has already \\ & been raised in either the issue lists or the discussion \\ thread. \\ \hline (VI) External Repository & The comment indicates that the discussion topic \\ & should not be reported in the current repository, \\ & but instead should be reported in another repository (e.g., dependent repository) \\ \hline (VII) Information Storage & The comment indicates that the issue contents \\ & should be kept as a reference for the community. \\ \hline \hline \end{tabular} \end{table} Table 5: Descriptions of reasons for converting issues to discussions.

_(VII) Information Storage._ This category refers to the reason where the issue contents should be kept as a reference for the community. For example, in Ex 8 (Footnote 19), the author raised a problem regarding react-native formatting and sought a solution. One collaborator provided a couple of solutions and later moved the issue into a discussion for other developers to find it easily.

Footnote 19: [https://github.com/date-fns/date-fns/discussions/2841](https://github.com/date-fns/date-fns/discussions/2841)

Ex 8 **Collaborator:** Going to move this to a discussion so it's **easier for people to find**.

**Frequency of Reasons.** Table 6 shows the frequency of reasons for converting issues to discussions. As shown in the table, we observe that _Non Actionable Topic_ is the most common reason category within the two studied package ecosystems, with 121 instances (55.0%) and 86 instances (42.0%) classified for the NPM and the PyPI, respectively.
Upon a closer look, _Question-answering_ is the main non actionable topic, accounting for 34.5% and 24.8% of instances for the NPM and the PyPI, respectively. The second most frequently occurring reason category is _Not a Bug_ (20.5% and 18.4% for the NPM and the PyPI, respectively), where the bug identified by the author is not recognized by other developers. The least frequently occurring category is _Already Fixed_, with 3.1% and 2.9% of instances classified for the two ecosystems.

### Analysis of the Conversion Process (RQ3)

To answer RQ3, we analyze the process of the conversion from discussions to issues and vice versa, in terms of the following two aspects: (I) the raising time of the conversion intent and (II) the number of posts until the raising time. Below, we present the results of the two metrics (Figure 3 and Figure 4) for the two conversion kinds.

**(I) Conversion from discussion to issue.** Figure 3 presents the results of the computed metrics for the discussions that are suggested to be converted into issues. As shown in Figure 3 (a), on the one hand, we find that the two ecosystems share a similar raising time for the categories _Reporting an Enhancement_ and _Reporting a Bug_. For example, we observe that the _Reporting an Enhancement_ category takes a relatively long time until the conversion intent is raised (i.e., median raising times of 74 hours and 67.8 hours for the NPM and the PyPI, respectively) compared to the other three reason categories. Such a result indicates that it may take time for developers to discuss and reach a consensus to propose an enhancement in the issue tracker. On the other hand, significantly large differences across the two ecosystems are observed for the categories _External Repository_ and _Reporting a Clarification Request_, validated by the Mann-Whitney test with _p-value_ \(<0.05\).
For instance, the _External Repository_ category takes the least time to be identified for the NPM, i.e., a median raising time of 3.9 hours. For the PyPI, in contrast, it takes a median of almost 69.5 hours to be notified of the external repository. One possible reason is that NPM packages are less likely to be isolated when compared to PyPI ones (Decan et al., 2016). For the metric related to the number of posts, as shown in Figure 3 (b), the median number of posts for the _Reporting an Enhancement_ category is two for both ecosystems, larger than for the other three reason categories, suggesting that more discussion is likely to be involved in these instances. However, the Mann-Whitney test suggests that there is no significant difference for any of the paired categories between the NPM and the PyPI.

Figure 3: Discussions that are converted into issues: (I) raising time and (II) number of posts until the raising time.

For the statistical test, Kruskal-Wallis H tests confirm that the hypothesis _'Raising time and the number of posts until the conversion intent are significantly different among reasons of the conversion'_ holds for the conversion from discussions to issues in the NPM repositories, with _p_-value \(<0.001\) for the _Raising time_ and _p_-value \(<0.05\) for the _Number of posts_. However, the hypothesis is not supported for either the raising time or the number of posts within the PyPI repositories.

**(II) Conversion from issue to discussion.** Figure 4 shows the results of the computed metrics for the issues that are suggested to be converted into discussions. Note that we only analyze the relatively frequent reasons for the two studied ecosystems (i.e., those categories whose frequencies are greater than 10 in Table 6).
As shown in Figure 4 (a), we observe that it takes a relatively long time to receive the conversion intent, i.e., a median of around 15.2 hours and 35.1 hours across all reason codes for the NPM and the PyPI, respectively. More specifically, medians of 17.5 hours and 24.5 hours are taken for _Non Actionable Topic_ for the NPM and the PyPI, respectively. At the same time, we observe via the Mann-Whitney test that there exists a significant difference for the category _Invalid Issues_ between the two ecosystems: it takes a much longer time for developers to raise this intent in the PyPI, i.e., a median of 79 hours. On the other hand, as shown in Figure 4 (b), few posts are involved until the conversion intent of _Non Actionable Topic_ is raised, with a median of one for both ecosystems. This result suggests that this conversion intent is likely to be raised in the first post of an issue. Unsurprisingly, the _Further Discussion_ category involves the most posts (i.e., a median of four for both ecosystems). Furthermore, the Mann-Whitney test confirms that there is a significant difference in the categories _Non Actionable Topic_ and _Invalid Issues_ between the NPM and the PyPI, suggesting that relatively more posts are submitted to decide these two conversions in the PyPI.

Figure 4: Discussions that are converted from issues: (I) raising time and (II) number of posts until the raising time.

For the statistical test, Kruskal-Wallis H tests confirm that the hypothesis _'Raising time and the number of posts until the conversion intent are significantly different among reasons of the conversion'_ holds for the NPM repositories in terms of the conversion from issues to discussions, with \(p\)-value \(<0.05\) for both the _Raising time_ and the _Number of posts_. For the PyPI repositories, a significant difference is observed for the _Number of posts_ but not for the _Raising time_.
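To make the pairwise comparisons above concrete, the following is a minimal pure-Python sketch of a two-sided Mann-Whitney U test using the normal approximation with continuity correction (and no tie correction). It is an illustrative stand-in, not the implementation used in the study, and the raising-time samples below are synthetic values invented for the example.

```python
import math

def _average_ranks(values):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    with continuity correction (no tie correction). Returns (U, p)."""
    n1, n2 = len(x), len(y)
    ranks = _average_ranks(list(x) + list(y))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2   # U statistic for sample x
    u = min(u1, n1 * n2 - u1)                  # smaller of the two U values
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu + 0.5) / sigma
    p = 2 * 0.5 * (1 + math.erf(z / math.sqrt(2)))  # two-sided tail
    return u, min(1.0, p)

# Synthetic raising times (hours) for one reason category in two ecosystems.
npm_hours = [2.1, 3.9, 4.4, 5.0, 6.2, 8.7]
pypi_hours = [40.5, 69.5, 70.1, 75.3, 80.0, 92.4]
u, p = mann_whitney_u(npm_hours, pypi_hours)  # p < 0.05: significant difference
```

In practice, `scipy.stats.mannwhitneyu` (and `scipy.stats.kruskal` for the Kruskal-Wallis H test) provides the same tests with exact methods and tie handling, and is the kind of off-the-shelf routine a study like this would typically rely on.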
## 6 Implications

Based on the results of our RQs, we now discuss the implications of the study and provide suggestions accordingly.

**Reduce unnecessary conversion in the first place.** Findings from the study illustrate a variety of reasons behind conversions, i.e., four reasons for converting discussions to issues (RQ1) and seven reasons for converting issues to discussions (RQ2). The GitHub guidelines hint that topics like questions and ideas should be raised in GitHub Discussions. However, we also find that other reasons exist, for instance, _Invalid Issues_ and _Not a Bug_. These findings could complement the GitHub Discussion guidelines to guide those repositories that intend to adopt the GitHub Discussion feature. To maintain these channels, our constructed taxonomy can help maintainers and contributors decide when to contribute to each channel. We further performed an additional analysis to investigate whether or not inconsistent use of these two channels exists. In the context of feature requests (between _Reporting an Enhancement_ and _Non Actionable Topic_), first, we observe that inconsistency does exist in specific repositories. For example, within the repository _next.js_, on the one hand, a discussion (Footnote 20) that requested adding custom attributes was suggested to be opened as an issue. On the other hand, an issue that requested a configuration setting was converted into a discussion, and the author expressed confusion: "Hey team, why did a feature request become a discussion?...". Similarly, another author, in a discussion (Footnote 21), doubted: "Is it normal that feature requests are moved to a discussion?". Second, we notice that different repositories tend to have their own rules.
For example, a collaborator from the _superset_ repository left a comment (Footnote 22), "Moving this feature request to Github Discussions so we can keep Issues focused on bugs!", while another repository, _ant-design_, allows feature requests to be proposed, as one maintainer suggested in a discussion (Footnote 23): "Since this not a bug or feature request. Move to discussion instead.". Such inconsistent use is also observed in the cases concerning whether issues should only be created once enough information is provided (between _Reporting a Clarification Request_ and _Invalid Issues_). For the PyPI ecosystem, we observe that 28 out of 32 instances classified into _Invalid Issues_ are from the _airflow_ project. This may indicate that this project could use discussions as a gatekeeping mechanism. At the same time, we also notice that inconsistent use of the two different channels did exist in the _airflow_ project. For example, in one instance (Footnote 24), the contributor suggested that the author open an issue from the discussion thread with some replication information (e.g., the DAG). Based on these insights, to relieve the confusion for contributors and improve the user experience, we suggest that project maintainers clearly state the submission rules in their project README or contribution guidelines.

Footnote 20: [https://github.com/vercel/next.js/discussions/12325](https://github.com/vercel/next.js/discussions/12325)

Footnote 21: [https://github.com/vercel/next.js/discussions/27756](https://github.com/vercel/next.js/discussions/27756)

Footnote 22: [https://github.com/apache/superset/discussions/19185](https://github.com/apache/superset/discussions/19185)

Footnote 23: [https://github.com/ant-design/ant-design/discussions/29818](https://github.com/ant-design/ant-design/discussions/29818)

Footnote 24: [https://github.com/apache/airflow/discussions/14315](https://github.com/apache/airflow/discussions/14315)

A side-effect of keeping separate channels is duplication.
Our empirical observations align with the survey insight that developers face the challenge of duplication (Hata et al., 2022). We find that duplication indeed exists, either among GitHub Discussions or between GitHub Discussions and Issues. In terms of duplication between GitHub Discussions and Issues, for example, in one discussion (Footnote 25), the maintainer left a post pointing out that _"Already tracking here #4283 - please check issues first"_. Duplicate software artifacts have been proven to cause unnecessary additional effort and reduce efficiency. Many techniques have been proposed to detect duplicate artifacts using information retrieval, such as issue reports (Nguyen et al., 2012; Hindle et al., 2016) and pull requests (Li et al., 2017; Wang et al., 2019) on GitHub. We notice that the latest work has started to explore related posts (duplicate or near duplicate) in GitHub Discussions and proposes an approach based on a SentenceBERT pre-trained model (Lima et al., 2022). Hence, another challenge would be the detection and removal of such duplication between these different channels. Furthermore, the study shows evidence that GitHub Discussion not only facilitates communication but can also lead to actionable contributions to the project. For instance, our manual classification in RQ1 shows that these contributions are significant, i.e., 76.6% of samples classified into the _Reporting a Clarification Request_, _Reporting a Bug_, and _Reporting an Enhancement_ reason categories for the NPM ecosystem (Table 4). We point out that maintainers may consider Discussions as a means to attract and onboard potential new contributors, as Discussions can act as an incubator for uncertain problems (e.g., potential bugs). After evaluation by the project developers, these initial discussions have the opportunity to be converted into actionable contributions.
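To illustrate the duplicate-detection challenge discussed above, here is a deliberately naive sketch based on bag-of-words cosine similarity over post titles. The cited work (Lima et al., 2022) uses a SentenceBERT pre-trained model rather than this word-overlap heuristic, and the threshold and example titles below are arbitrary assumptions made for illustration.

```python
import math
import re
from collections import Counter
from itertools import combinations

def _bag(text):
    """Lowercased bag-of-words vector for a post title."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_sim(a, b):
    """Cosine similarity between the bag-of-words vectors of two texts."""
    ta, tb = _bag(a), _bag(b)
    dot = sum(ta[w] * tb[w] for w in ta.keys() & tb.keys())
    na = math.sqrt(sum(c * c for c in ta.values()))
    nb = math.sqrt(sum(c * c for c in tb.values()))
    return dot / (na * nb) if na and nb else 0.0

def near_duplicates(posts, threshold=0.7):
    """Return index pairs of posts whose similarity exceeds the threshold."""
    return [(i, j) for i, j in combinations(range(len(posts)), 2)
            if cosine_sim(posts[i], posts[j]) >= threshold]

# Hypothetical titles that might appear across Discussions and Issues.
posts = [
    "SerialPort.write(): callback never called",
    "serialport write callback is never called",
    "How to handle a partial sync up to AppSync?",
]
pairs = near_duplicates(posts)  # flags the first two titles as near duplicates
```

A semantic model such as SentenceBERT would also catch duplicates that share meaning but not vocabulary, which this word-overlap sketch cannot.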
**Conversion is not trivial.** Through the manual analysis of RQ2, our results show that some conversions may not be trivial to decide. For instance, we observe that the _Not a Bug_ reason category accounts for up to 20.5% and 18.4% of instances for the two ecosystems (Table 6). This indicates that the bug proposed by the contributor is not a real bug from the developers' perspective, and thus it is converted into a discussion. We argue that such a type of conversion is not trivial and takes up developers' time and pipeline to investigate the real causes. For instance, in one example (Footnote 26), the contributor originally submitted an issue (including a bug description and bug-related logs). After confirmation, however, the maintainer commented, _"This is not a bug. It's working as intended. Please open a discussion first. Maybe we can allow this as a feature request."_. Based on this insight, we suggest that contributors, especially newcomers, start from GitHub Discussions for uncertain problems. At the same time, one potential future research direction would be a tendency analysis of non-trivial conversions. Some topics may be converted back and forth; thus it is valuable to know what kinds of discussions or issues are difficult to reach a consensus on, especially bug-related topics. In addition, invalid issues may simply be closed rather than converted to Discussions. Hence, another direction for researchers is to investigate how common this is and the reasons behind it, which would give a deeper insight into the maintenance of channels.

Footnote 26: [https://github.com/renovatebot/renovate/discussions/14457](https://github.com/renovatebot/renovate/discussions/14457)

**Raising conversion intent potentially takes time.** In RQ3, a quantitative analysis was conducted to examine the conversion process.
Although few posts are involved, the time to receive the conversion intent is relatively long (Figure 3 and Figure 4). On the one hand, for the non-trivial reasons (e.g., _Reporting an Enhancement_ and _Not a Bug_), we argue that such a long process may not be avoidable, since these discussion/issue topics may require time and conversations to confirm their significance and correctness, regardless of the channel they appear on. Thus, we recommend that developers accommodate the development pipeline of the projects they contribute to. On the other hand, for the trivial reasons (e.g., _Non Actionable Topic_ and _External Repository_), we find that it still takes median times of almost 17.5 hours and 24.5 hours for issues related to non actionable topics to be converted into discussions for the NPM and the PyPI, respectively. One potential reason could be that these topics are not being paid attention to by the community members. Hence, a future direction for researchers is to investigate whether the authorship of discussions or issues plays a role in the conversion process. Interestingly, our statistical tests indicate that raising the conversion intent within the PyPI likely takes a longer time and involves more posts than within the NPM for some specific categories, e.g., _External Repository_ and _Invalid Issues_. Therefore, we encourage researchers to further explore how the nature of the ecosystem affects the conversion process. Meanwhile, we argue that such a long process for trivial cases could be wasteful; hence, another direction is to call for a promising classifier that can identify these non actionable topics effectively, making the issue tracker lighter and saving developers' effort.

## 7 Threats to Validity

Below, we describe the threats to the validity of this study:

**External validity.** External validity concerns the ability to generalize based on our results.
In this study, we conduct a case study of NPM package repositories (web libraries) that adopt the Discussion feature. Thus, the observations based on this case study may not generalize to other kinds of repositories (e.g., system software). However, our goal is not to build a theory that fits all repositories, but rather to shed light on the challenge that developers face when choosing between GitHub Issue and Discussion. Nonetheless, additional replication studies would help to generalize our observations to other repository domains, such as software tools, application software, and non-web libraries and frameworks. Thus, to encourage future replication studies, we disclose our research materials and provide a replication package online, including the raw NPM repository discussion data, the manually labeled data, and the script to retrieve the discussions. The package is available at: [https://github.com/posl/GitHub_Discussion_Conversion](https://github.com/posl/GitHub_Discussion_Conversion).

**Construct validity.** Construct validity refers to the degree to which our measurements capture what we aim to study. During the data collection of those discussions that should be converted into issues, we rely on a heuristic approach using a list of keywords (e.g., open, transfer, convert, and so on) to filter the discussion posts. A threat may occur due to the incompleteness of the initial keyword list. To mitigate this threat, we manually explored a random sample of 100 discussion posts and inspected the potential keywords that were highly frequent. At the same time, the goal is not to retrieve all discussions of this kind; instead, we aim to gain insights from a sufficient sample size. Thus, we believe that our sample size is sufficient to provide insightful observations.

**Internal validity.** Internal validity denotes the approximate truth about inferences regarding cause-effect or causal relationships. Three threats are summarized.
First, to understand the reasons behind the conversion, we performed a manual analysis, which may be mislabeled due to its subjective nature. To relieve such a threat, we conducted the manual coding in multiple iterations with two authors and calculated Kappa agreement scores to ensure quality. Once the scores suggested nearly perfect or substantial agreement, the first author then coded the rest of the samples. Additionally, to separate the category _Reporting a Bug_ from the category _Reporting a Clarification Request_, we refer to whether a possibility-related word is given in a comment. This is likely to introduce a bias due to the natural usage of English words. The second threat occurs in the metric computation (RQ3). Since there is no automatic method to identify the exact time of the conversion, we rely on the post time when the conversion intent is raised. The third threat may exist in the selection of statistical tests. To test the significance of the studied metrics among different conversion reasons (RQ3), we use the Kruskal-Wallis H test. The cause-effect may differ with other statistical tests. We are, however, confident, as the selected test is widely used in prior studies (Chinthanet et al., 2021; Wang et al., 2021a).

## 8 Related Work

In this section, we position our work with respect to the literature on question and answer forums, developer communication in software development, and barriers for newcomers in OSS projects.

### Question & Answer Community

Developers often turn to programming question and answer (Q&A) communities to seek help with their code. Stack Overflow, one of the most popular Q&A communities (Wang et al., 2023), has become a gold mine for software engineering research and has been found useful for software development. In pioneering work, Treude et al. (2011) categorized the kinds of questions that are asked and explored which questions are answered well and which ones remain unanswered. Vasilescu et al.
(2012) provided a quantitative study of the phenomenon, in order to assess the representation and social impact of gender in Stack Overflow. Treude and Robillard (2017) conducted a survey-based study to understand developers' information needs as they relate to code fragments. A large body of studies targets knowledge extraction and the challenges of specific topics on Stack Overflow. To facilitate program repair, Liu and Zhong (2018) proposed an approach to extract code samples from Stack Overflow and mined repair patterns from the extracted code samples. Wan et al. (2021) used Stack Overflow to understand the challenges and needs among blockchain developers by applying Balanced LDA. Bangash et al. (2019) studied machine learning related posts and found that some machine learning topics are significantly more discussed than others, while others need more attention. To brainstorm feature ideas, help new users get their bearings, and further improve collaboration, GitHub released Discussion in 2020. Hata et al. (2022) took a first look at the early adoption of GitHub Discussion from several aspects. For instance, they found that errors, unexpected behavior, and code reviews are prevalent discussion categories. Specifically, their developer survey pointed out that developers consider GitHub Discussions useful but face the problem of topic duplication between Discussions and Issues. Motivated by their survey insight, we conduct an empirical study on NPM repositories to understand how developers maintain these channels, so as to fill the knowledge gap and provide empirical suggestions for practitioners on how to select the appropriate channel.

### Developer Communication in Software Development

Developer communication plays a significant role in software development, such as in the code review process (Wang et al., 2021).
Bacchelli and Bird (2013) stated that, when reviewing, developers need richer communication than comments annotating the changed code. Pascarella et al. (2018) found that during reviews, reviewers often request additional information about correct understanding and alternative solutions to improve patch quality. Recent work reported that reviewers suffer from confusion due to a lack of information about the intention of a patch (Ebert et al., 2019). Wang et al. (2021c) observed that developers are likely to share links during review discussions, with seven intentions, to fulfill information needs. Meanwhile, Hirao et al. (2019) reported that patch linkage (i.e., posting a link from one patch to another) is used to indicate patch dependency or competing solutions, or to provide broader context. In addition to code reviews, interactive communication channels are nowadays available to support development. Stray and Moe (2020) conducted a longitudinal study on coordination to understand the use of meetings and Slack, and found that collaboration tools increase awareness and informal communication. Parra et al. (2022) presented a comparative study of developer communications on Slack and Gitter. Raglianti et al. (2022) proposed a tool using Discord conversations to aid program comprehension. With the official release of GitHub Discussion, it may gradually become a popular centre for developers to communicate and share ideas during their software development. Hence, one potential benefit of our study is to improve the communication efficiency between GitHub Discussion and Issue. ### Barriers for Newcomers in OSS Projects Newcomers are significant to the survival, long-term success, and continuity of OSS projects (Kula and Robles, 2019). However, the literature points out that newcomers face many challenges in their initial activities in OSS projects (Steinmacher et al., 2014). For instance, Lee et al. 
(2017) reported that newcomers lack the necessary domain knowledge and programming skills. In addition, some non-technical factors also affect newcomers' onboarding process, such as communication and social interaction (Tan and Zhou, 2019; Rehman et al., 2022). In recent work, Mendez et al. (2018) studied newcomer barriers and gender from a new perspective, i.e., the usage of OSS tools and infrastructure. To facilitate the newcomers' onboarding process, a series of theories and strategies have been proposed. Steinmacher et al. (2018) provided guidelines for both OSS communities interested in receiving more external contributions, and newcomers who want to contribute to OSS projects. Tan et al. (2020) identified criteria for good first issues (GFIs) that may make them more likely to be solved by newcomers. In a follow-up study, Xiao et al. (2022) proposed RECGFI, an effective practical approach for the recommendation of good first issues to newcomers. With the introduction of a new feature (i.e., GitHub Discussion), newcomers may also face the barrier of choosing proper communication channels when asking questions or raising issues. Especially for uncertain problems (i.e., whether they are real issues or not), our results suggest that it would be appropriate for newcomers to start from the Discussion channel. ## 9 Conclusion With the adoption of the GitHub Discussion feature, it becomes challenging for developers to appropriately choose between and maintain GitHub Discussion and Issue. In this work, we conducted an empirical study on 259 NPM and 148 PyPI repositories to understand the reasons behind converting Discussion to Issue and vice versa, and to investigate whether or not the conversion requires additional effort. 
Our empirical results show that reporting a clarification request is the most common reason for converting discussions to issues (35.1% and 34.7%, respectively), while having a non-actionable topic is the most frequent reason for converting issues to discussions (55.0% and 42.0%, respectively). Moreover, we observe that it potentially takes time to raise a conversion intent. This study contributes to helping developers effectively utilize these different communication channels, and also points to future directions for automatic classifiers, such as duplication detection between GitHub Discussion and Issue and identification of non-actionable topics in issues. ## Acknowledgement This work is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grants (JP20K19774, JP20H05706, JP22K17874, JP21H04877, JP23K16864), and JSPS and SNSF for the project "SENSOR" (JPJSJRP20191502). ## Declarations ### Conflict of Interest The authors declare that Raula Gaikovina Kula and Yasutaka Kamei are members of the EMSE Editorial Board. All co-authors have seen and agree with the contents of the manuscript and there is no financial interest to report. ### Data Availability Statements The datasets generated during and/or analysed during the current study are available at [https://github.com/posl/GitHub_Discussion_Conversion](https://github.com/posl/GitHub_Discussion_Conversion).
2304.11225
First Detection of the Powerful Gamma Ray Burst GRB221009A by the THEMIS ESA and SST particle detectors on October 9, 2022
We present the first study of the effects of the powerful Gamma Ray Burst GRB 221009A that occurred on October 9, 2022, and was serendipitously recorded by electron and proton detectors aboard the four spacecraft of the NASA THEMIS mission. Long-duration gamma-ray bursts (GRBs) are powerful cosmic explosions, signaling the death of massive stars, and, among them, GRB 221009A is so far the brightest burst ever observed due to its enormous energy ($E_{\gamma iso}\sim10^{55}$ erg) and proximity (the redshift is $z\sim 0.1505$). The THEMIS mission launched in 2008 was designed to study the plasma processes in the Earth's magnetosphere and the solar wind. The particle flux measurements from the two inner magnetosphere THEMIS probes THA and THE and ARTEMIS spacecraft THB and THC orbiting the Moon captured the dynamics of GRB 221009A with a high time resolution of more than 20 measurements per second. This allowed us to resolve the fine structure of the gamma-ray burst and determine the temporal scales of the two main bursts' spiky structure, complementing the results from gamma-ray space telescopes and detectors.
O. V. Agapitov, M. Balikhin, A. J. Hull, Y. Hobara, V. Angelopoulos, F. S. Mozer
2023-04-21T19:18:18Z
http://arxiv.org/abs/2304.11225v1
First Detection of the Powerful Gamma Ray Burst GRB221009A by the THEMIS ESA and SST particle detectors on October 9, 2022. #### Abstract We present the first study of the effects of the powerful Gamma Ray Burst GRB 221009A that occurred on October 9, 2022, and was serendipitously recorded by electron and proton detectors aboard the four spacecraft of the NASA THEMIS mission. Long-duration gamma-ray bursts (GRBs) are powerful cosmic explosions, signaling the death of massive stars, and, among them, GRB 221009A is so far the brightest burst ever observed due to its enormous energy (\(E_{\gamma\rm\,iso}\approx\)10\({}^{55}\) erg) and proximity (the redshift is z\(\approx\)0.1505). The THEMIS mission launched in 2008 was designed to study the plasma processes in the Earth's magnetosphere and the solar wind. The particle flux measurements from the two inner magnetosphere THEMIS probes THA and THE and two outer probes (renamed ARTEMIS after 2010) THB and THC orbiting the Moon captured the dynamics of GRB 221009A with a high time resolution of 4 (up to 8) measurements per second. This allowed us to resolve the fine structure of the gamma-ray burst and determine the temporal scales of the two main bursts' spiky structure, complementing the results from gamma-ray space telescopes and detectors. 
#### Introduction GRB 221009A was a bright and long-lasting gamma-ray burst (GRB) detected by the Gamma-ray Burst Monitor (GBM) on board the Fermi Gamma-ray Space Telescope (FGST) (Veres et al., 2022; Bissardi et al., 2022; Lesage et al., 2022; Pillera et al., 2022) and, an hour later, by the Burst Alert Telescope (BAT) aboard the Swift satellite (Dichiara et al., 2022), as well as by other space observatories such as AGILE (Piano et al., 2022; Ursi et al., 2022), INTEGRAL (Gotz et al., 2022), Solar Orbiter (Xiao et al., 2022), SRG (Lapshov et al., 2022), Konus (Frederiks et al., 2022, 2023), GRBAlpha (Ripa et al., 2022), STPSat-6 (Mitchell et al., 2022), and the High Energy Burst Searcher (HEBS) on SATech-01 (Liu et al., 2022), and by the ground observatory LHAASO (The Large High Altitude Air Shower Observatory) with striking very-high-energy features (Huang et al., 2022). The GBM light curve consists of an initial \(\sim\)10 s long pulse at 13:16:59 UTC, followed by an extraordinarily bright episode roughly \(\sim\)180 s after the trigger time, lasting at least 100 seconds (Veres et al., 2022). The afterglow outburst outshone all other GRBs seen before (Sahu et al., 2023). The Large High Altitude Air Shower Observatory (LHAASO) with the water Cherenkov detector array (WCDA) and the larger air shower kilometer square area (KM2A) detector observed more than 5000 very high energy (VHE) photons in the 500 GeV-18 TeV energy range within 2000 s from the trigger, making them the most energetic photons ever observed from a GRB (Huang et al., 2022). The event was so long and intense that it caused sudden global ionospheric disturbances on Earth (both day and night) - a result of the increased ionization by X- and \(\gamma\)-ray emission (Hayes and Gallagher, 2022; Pal et al., 2023) - inferred from the dynamics of VLF/LF sub-ionospheric signals in the D-region of Earth's ionosphere (\(\sim\)60-100 km). 
The optical observations of this burst (located at around \(\mathrm{RA}=288.282\) and \(\mathrm{Dec}=19.495\) (Pillera et al. 2022)) show a relatively small redshift z = 0.1505 (Castro-Tirado et al. 2022; de Ugarte Postigo & Izzo 2022) compared to most other long bursts, which indicates that this is one of the closest observed long-duration GRBs (GRB 211211A had an even lower redshift of 0.076, which corresponded to a distance of \(\sim\)346 megaparsecs (Rastinejad et al., 2022; Troja et al., 2022; Mei et al., 2022; Gompertz et al. 2022)). The total emitted isotropic-equivalent gamma-ray energy from GRB 221009A is estimated to be (2-6) \(\times\) 10\({}^{54}\) erg (de Ugarte Postigo et al. 2022; Kann & Agui 2022). High-energy gamma photons produce secondary particles (protons, electrons, and probably secondary photons) in the material of the spacecraft, which can be detected by particle detectors on board (Pisacane 2008). Similar effects were reported by Schwartz et al. (2005) and Terasawa et al. (2005) after the gamma-ray giant flare of SGR 1806-20. These measurements from spacecraft particle detectors provide an alternative perspective that complements observations from gamma-ray telescopes by providing high sampling rates capable of resolving the fine structure of bursts (Schwartz et al. 2005; Terasawa et al. 2005). GRB 221009A was detected in the electron flux measurements made by the HEPP-L charged particle detector on board the low Earth orbit (LEO) China Seismo-Electromagnetic Satellite (Battiston et al. 2023). Battiston et al. (2023) showed that the recorded anomalous signal of GRB 221009A in the electron fluxes originated from secondary electrons produced via photon absorption in the passive material of the detector. The signal dynamics followed quite well the structure of the GRB 221009A signal recorded by HEBS (Liu et al. 2022). 
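As a rough cross-check of the quoted distance, for such small redshifts the Hubble-law approximation \(d \approx cz/H_0\) applies. A minimal sketch (the value \(H_0 = 67.7\) km/s/Mpc is an assumption; the text does not state a cosmology):

```python
# Hedged sanity check: low-redshift Hubble-law distance estimate.
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 67.7             # assumed Hubble constant, km/s/Mpc (not stated in the text)

def hubble_distance_mpc(z: float) -> float:
    """Approximate distance for z << 1: d ~ c*z/H0, in megaparsecs."""
    return C_KM_S * z / H0

d_211211a = hubble_distance_mpc(0.076)   # GRB 211211A, quoted as ~346 Mpc
d_221009a = hubble_distance_mpc(0.1505)  # GRB 221009A
```

With this choice of \(H_0\), the z = 0.076 estimate lands near the quoted \(\sim\)346 Mpc; the small offset reflects the assumed \(H_0\) and the neglect of cosmological corrections.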
Here we report observations of the fine structure of the GRB 221009A gamma burst signal recorded in electron and proton flux measurements (the bursts of secondary electrons produced through interactions of gamma photons with the material of the detectors (Battiston et al. 2023)) by the electrostatic analyzers ESA (McFadden et al., 2008) and solid state telescopes SST (Larson et al., 2010) on board four (of the five) THEMIS spacecraft (Angelopoulos et al., 2008) on October 9, 2022. The configuration of the spacecraft is shown in Figure 1. Specifically, we use data from THB and THC, orbiting the Moon as part of the ARTEMIS mission (Angelopoulos et al., 2015), and the two inner magnetosphere probes THA and THE. The THEMIS ESA and SST provide proton and electron distribution functions with a sampling rate equal to the spin period (3 s for THA, THD, THE and 4 s for THB and THC). Battiston et al. (2023) showed that the signal from secondary electrons provides full solid-angle coverage, which allows us to decompose the spin-resolution data into individual measurements and increase the temporal resolution to 0.25 s (up to 0.125 s) to resolve the fine structure of the burst. The data recorded by the SST and ESA detectors are unsaturated during the entire interval of observations and resolve the fine structure of the most intense bursts, supplementing high-resolution measurements of the most intense period of the burst activity (some of the gamma telescopes experienced saturation periods) made by HEBS (Liu et al. 2022, unsaturated data with subsecond timing resolution) and by HEPP-L (Battiston et al. 2023, unsaturated electron flux data with a one-second sampling rate). ### Data and Methods THEMIS (Time History of Events and Macroscale Interactions during Substorms) is NASA's (National Aeronautics and Space Administration) mission, which consists of five identically equipped satellites (probes THA, THB, THC, THD, and THE). 
The main goal of this mission is to carry out multipoint investigations of substorm phenomena in the tail of the terrestrial magnetosphere (Sibeck & Angelopoulos, 2008). The Acceleration, Reconnection, Turbulence, and Electrodynamics of the Moon's Interaction with the Sun (ARTEMIS) mission is a spin-off from THEMIS, created by repositioning two of the five THEMIS probes (THB and THC) into coordinated, lunar equatorial orbits at distances of \(\sim\)55-65 geocentric Earth radii, \(R_{\rm E}\) (\(\sim\)1.1-12 selenocentric lunar radii, \(R_{\rm L}\)). These two probes perform systematic, two-point observations of the distant magnetotail, the solar wind, and the lunar space and planetary environment (Angelopoulos et al., 2010). The three inner probes (THA, THD, and THE) continue to collect data in the inner magnetosphere (\(\sim\)1.2-9 \(R_{\rm E}\)). A pair of back-to-back top hat hemispherical electrostatic analyzers (ESA) measure the distribution functions of ions (0.005 to 25 keV) and electrons (0.005 to 30 keV) over 4\(\pi\) sr to produce 3 s time resolution plasma moments (McFadden et al. 2008). The instrument consists of a pair of "top hat" electrostatic analyzers with common 180\({}^{\circ}\)\(\times\)6\({}^{\circ}\) fields-of-view that sweep out 4\(\pi\) sr each 3 s spin period. The sensors generally sweep in energy (logarithmically) from \(\sim\)32 keV for electrons and \(\sim\)25 keV for ions, down to \(\sim\)6-7 eV. Nominal operations have 32 sweeps per spin, with 31 energy samples per sweep, plus one sample energy retrace, resulting in a typical measurement resolution of \(\Delta E/E\sim\)32%. Particle events are registered by microchannel plate (MCP) detectors. At low time resolution, THEMIS generally maintains the "full" 32 sampled energies and uses a solid-angle map composed of 88 bins. Combining the energy and solid-angle measurements can substantially improve time resolution. 
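The quoted \(\Delta E/E \sim 32\%\) follows directly from the logarithmic sweep: 31 samples spanning roughly 6.5 eV to 32 keV give a constant step ratio. A small sketch (the endpoint values are approximations taken from the text's "\(\sim\)6-7 eV" and "\(\sim\)32 keV"):

```python
import numpy as np

# Sketch of the ESA logarithmic energy sweep: 31 energy samples per sweep
# from ~6.5 eV up to ~32 keV for electrons (endpoints are approximate).
n_samples = 31
energies = np.logspace(np.log10(6.5), np.log10(32_000.0), n_samples)  # eV

# On a log-spaced grid the fractional step dE/E is constant:
ratio = energies[1] / energies[0]
dE_over_E = ratio - 1.0  # comes out near the quoted ~32%
```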
Solid-state telescopes (SST) measure the superthermal (0.02-1 MeV) part of the ion and electron distributions over 4\(\pi\) sr with similar measurement regimes. The configuration of the THEMIS and ARTEMIS spacecraft during the GRB 221009A event is shown in Figure 1. The two ARTEMIS probes, THB and THC, were located near the Moon in the magnetotail at geocentric distances of \(\sim\)58 \(R_{\rm E}\) (THB) and 61 \(R_{\rm E}\) (THC) from Earth. The inter-spacecraft distance was \(\sim\)3.5 \(R_{\rm E}\) (\(\sim\)13 \(R_{\rm L}\)). The inner probes were at geocentric distances of \(\sim\)8.5 \(R_{\rm E}\), with an inter-spacecraft distance of \(\sim\)1 \(R_{\rm E}\). Figure 1: The configuration of THEMIS and ARTEMIS spacecraft during the GRB 221009A event is shown in panel (a) with the traces during the interval of 1200-1400 UT shown in the ecliptic plane. The modeled locations of Earth’s magnetopause and bow shock are indicated by the dashed and dotted curves respectively. (b) – the inner THEMIS spacecraft configuration. (c) – the ARTEMIS spacecraft THB and THC in the selenocentric system (the ecliptic plane projection). The four spacecraft (THA, THB, THC, and THE) detected the effects of the gamma ray burst GRB 221009A, which appear as two very intense bursts at 13:20:36-13:21:30 and a subsequent less intense burst at around 13:25:30 (Figure 2). The fifth THEMIS spacecraft, THD, did not collect particle data during this time interval. The ARTEMIS spacecraft THB and THC carried out particle measurements in the same mode: spin-resolution ESA and spin-resolution SST (protons). The omnidirectional electron and proton flux dynamics recorded on board THA, THB, THC, and THE by the SST detectors are shown in Figure 2. The electron and proton apparent flux enhancements associated with GRB 221009A are observed at energies up to 1 MeV in the SST measurements, following the GRB 221009A dynamics reported by Xiao et al. 
(2022) from Solar Orbiter STIX measurements of five energy bands in the range 4-150 keV (with a temporal resolution of 4 s) and by Liu et al. (2022) from HEBS detection in the range 400-6000 keV. The decrease of the flux recorded by SST at energies above 1 MeV is in agreement with the results of HEPP-L (Battiston et al., 2023). The data available from THA and THE were at a spin-averaged sampling rate of 3 s, so the processing of sub-second data could be applied only to the ARTEMIS probes THB and THC. The distance between the inner probes (THA, THD, and THE) and the ARTEMIS spacecraft (THB and THC) corresponded to \(\sim\)1 light second (less than 0.25 seconds taking into account the geometry of the event). The spin-resolution data from THA and THE do not sufficiently resolve the time delay of the GRB signal between the spacecraft. In the following, we compare the timing of the signal recorded by THB and THC (available with the better time resolution) with the time of the signals from HEPP-L (Battiston et al., 2023) and HEBS (Liu et al., 2022) recorded on board the LEO spacecraft. Figure 2: The electron and proton flux recorded by the SST detectors on board the THA, THE, THB, and THC probes: (a) – the proton flux from the THA SST detector; (b) – the electron flux from the THA SST detector; (c) – the proton density in the SST energy range. Panels (d-f) present the same characteristics recorded on board THE, panels (g-i) - on board THB, and panels (j-l) - on board THC. The polar and azimuthal angular resolution of the ESA and SST detectors provides information on the directionality of particle fluxes. The THEMIS spacecraft THB and THC collected the spin-resolution ESA proton and electron data shown in Figure 3. Such flux enhancement dynamics demonstrate the effects of the GRB over the full energy range of the ESA (from 8 eV to \(\sim\)20 keV) and over all polar and azimuthal angles (Figure 3b, c). 
The particle fluxes given as functions of polar and azimuthal angle in Figure 3b,c indicate that the GRB-associated secondary particle flux distributions are similar at different polar and azimuthal angles, confirming the isotropy of the electron flux measurements during GRB 221009A reported by Battiston et al. (2023). The fluxes recorded during the spacecraft spin rotation, combined from THB and THC ESA and SST, are shown in Figure 4a and Figure 4c respectively with 0.125 second temporal resolution, which we decimate to 0.25 second resolution by accumulating 32 energy channels and data from the two spacecraft for higher significance of the results. The distance between THB and THC corresponded to \(\sim\)0.1 light second during GRB 221009A and, taking into account the geometry of the source (Pillera et al. 2022), the actual maximal time shift was \(\sim\)0.07 s, so we combined measurements from THB and THC into a merged time series. Figure 3: The apparent electron and proton fluxes recorded on board the THB and THC probes with the electrostatic analyzers (ESA): (a) – the omnidirectional proton flux from the THB ESA detector; (b) – the energy-integrated proton flux versus the azimuthal angle phi; (c) – the energy-integrated proton flux versus the polar angle theta; (d) – the omnidirectional electron flux from the THB ESA detector; (e) – the integrated proton (red) and electron (blue) fluxes. Panels (f-j) present the same characteristics from THC. The light curve of counts from the THB and THC ESA detectors is presented in Figure 4a, and the combined flux light curve recorded by the SST detectors is shown in Figure 4c. 
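The merging and rebinning step described above (combining THB and THC samples into a single series and accumulating them into coarser time bins) can be sketched as follows; the function and variable names are hypothetical, not from the authors' pipeline:

```python
import numpy as np

def merge_and_rebin(t_b, c_b, t_c, c_c, bin_s=0.25):
    """Merge two count time series (inter-spacecraft light-travel shift is
    much smaller than the bin width), then accumulate counts into
    fixed-width time bins of bin_s seconds."""
    t = np.concatenate([t_b, t_c])
    c = np.concatenate([c_b, c_c])
    order = np.argsort(t)                     # interleave by time
    t, c = t[order], c[order]
    edges = np.arange(t[0], t[-1] + bin_s, bin_s)
    binned, _ = np.histogram(t, bins=edges, weights=c)  # sum counts per bin
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, binned
```

Summing counts (rather than averaging) preserves the total number of detected events, which is what raises the statistical significance of the merged light curve.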
The triggering impulse at 13:16:59 UT (T0 in the following) is not seen above the background of the particle flux perturbation, and the three peaks are resolved continuously without any signs of saturation: the shoulder beginning of the first main peak is at T0+219 s (SST) and at T0+220 s (ESA); the first main peak is at T0+225 s; the second peak is at T0+256 s (SST and ESA); and the third peak is at T0+509 s (SST) and T0+511 s (ESA). The time marks of the GRB 221009A temporal profile reported by Battiston et al. (2023) from HEPP-L and by Liu et al. (2022) from the HEBS data collected on board the LEO spacecraft are indicated by the vertical dashed lines (T0+219 s for the shoulder, T0+225 s for the first peak, T0+256 s for the second peak, and T0+509 s for the third peak). The timing uncertainties of THEMIS spacecraft synchronization (\(\sim\)0.1 s) and the time resolution do not allow us to resolve the time shift between the GRB signal recorded on board THB and THC and the data recorded on board the LEO spacecraft presented in detail by Battiston et al. (2023). Thus, we present a 1-second resolution version of the ESA and SST data to compare with the 1-second light curves of HEPP-L (Battiston et al. 2023) and HEBS (Liu et al. 2022). The two main peaks are zoomed in Figure 4b (ESA) and Figure 4d (SST). The light curve of 1-second resolution ESA counts in Figure 4b is in very good agreement with the 1-second resolution light curve of the GRB 221009A profile presented by Liu et al. (2022) and Battiston et al. (2023): the first two sub-peaks of the first burst (T0+225-236 s) are similarly resolved by ESA and SST as well as by HEPP-L and HEBS; SST resolved two later sub-peaks (the later one is seen in the HEPP-L light curve), which can be seen in the 0.05 s resolution HEBS data; the second burst (T0+256-266 s) has 4 sub-peaks (3 from SST) confirming the measurements from HEPP-L and the 0.05 s resolution HEBS data. 
The good agreement with the direct observations from the gamma-ray detector HEBS indicates that charged particle detectors can indeed provide highly useful observations of intense gamma bursts, owing to their high time resolution, higher saturation thresholds, and long baselines, that are complementary to LEO spacecraft equipped with gamma-ray telescopes. Figure 4: (a) - the light curve of the combined THB and THC ESA proton flux counts in the energy range from 8 eV to 20 keV. (b) – zoomed-in plot of the two main bursts from (a) with the curve shown at 1-second resolution. (c) - the light curve of the combined THB and THC SST proton flux counts in the energy range from 30 keV to 1 MeV. (d) – zoomed-in plot of the two main bursts from (c) with the solid curve shown at 1-second resolution. From the 0.25 s sampling rate data, we note that the main peaks have a complicated spiky fine structure. The two main bursts are zoomed in Figure 5a. The wavelet decomposition (based on the Morlet wavelet) reveals the scales of the pulses, indicating the pulse widths and inter-pulse time intervals - presented in Figure 5b. Figure 5c shows the time-averaged wavelet spectra color-coded according to the corresponding averaging intervals highlighted in Figure 5a. The scales of the first main burst lie between 0.7 and 1.0 s, with a mean scale of 0.75\({}_{0.63}^{0.91}\) s. The second main burst has two groups of scales with mean values of 0.5\({}_{0.45}^{0.55}\) s and 0.75\({}_{0.7}^{0.8}\) s (the smaller scale of \(\sim\)0.5 s is close to double the spin frequency, so its significance is not clear). These scales are in the range of the pulse scales reported by Bhat et al. 
(2010), where it was shown that the mean pulse widths were 0.81 s for long bursts and 0.04 s for short bursts, respectively: the observed GRBs statistically group into longer- and shorter-duration events with a minimum around 2 s, suggesting that there are two separate populations of bursts (Bhat et al. 2010). We have searched for the effects of GRB 221009A in the observations of other magnetosphere and solar wind missions carrying charged particle detectors on board, and we found the effects of GRB 221009A recorded by the WIND and GOES-15 spacecraft. The processing of these data will be the subject of a future publication. ## Conclusions We present the first report of the effects of the powerful Gamma Ray Burst GRB 221009A recorded by the spacecraft particle detectors aboard the probes of the NASA THEMIS and ARTEMIS missions. The four spacecraft (two inner Earth's magnetosphere probes, THA and THE, and two spacecraft orbiting the Moon, THB and THC) detected the event through their electrostatic analyzer (ESA) and solid state telescope (SST) proton and electron flux measurements. Figure 5: **(a)** – the light curve of composite THB and THC ESA proton flux counts showing the fine structure of the two main bursts of GRB 221009A. (b) – wavelet decomposition of the light curve in (a). The dashed line indicates the spacecraft spin frequency. (c) – time-averaged wavelet spectra collected during the intervals indicated by the corresponding colors in panel (a). By combining the energy channels and the multiple-spacecraft data, the fine structure of the gamma flare has been resolved with a time sampling of 4 measurements per second, which makes particle detectors useful additional instruments for addressing the fine structure of intense GRBs. The obtained time scales of the fine-structure spikes of the two main bursts of GRB 221009A are consistent with the characteristic parameters of long gamma-ray bursts. 
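The Morlet-based scale estimation described above can be sketched on a synthetic spiky light curve. This is a minimal illustration, not the authors' actual pipeline: the pulse widths, scale grid, and the plain real-Morlet convolution are all assumptions for the sketch.

```python
import numpy as np

def morlet_power(signal, dt, scales, w0=5.0):
    """Minimal real-Morlet wavelet power: convolve with scaled wavelets."""
    n = len(signal)
    power = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        m = int(min(10 * s / dt, n))                 # wavelet support, samples
        tw = (np.arange(m) - m // 2) * dt / s        # dimensionless time axis
        psi = np.exp(-tw**2 / 2) * np.cos(w0 * tw) / np.sqrt(s)
        power[i] = np.convolve(signal, psi, mode="same") ** 2
    return power

# Synthetic light curve: Gaussian pulses ~0.75 s wide, 0.25 s sampling,
# mimicking the spiky structure discussed in the text.
dt = 0.25
t = np.arange(0, 60, dt)
lc = sum(np.exp(-(t - t0) ** 2 / (2 * 0.75**2)) for t0 in (10, 25, 40))
scales = np.array([0.25, 0.5, 0.75, 1.0, 1.5, 2.0, 3.0])  # seconds
best = scales[np.argmax(morlet_power(lc, dt, scales).mean(axis=1))]
```

Averaging the power over time and picking the peak scale mirrors the time-averaged spectra used to read off the mean pulse scales.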
## Acknowledgements We acknowledge NASA contract NAS5-02099 for use of data from the THEMIS Mission, D. Larson for use of SST data and J. P. McFadden for use of ESA data. O.V.A is grateful to J. P. McFadden for the helpful discussion. O.V.A was supported by NSF grant number 1914670, NASA's Living with a Star (LWS) program (contract 80NSSC20K0218), and NASA grants contracts 80NNSC19K0848, 80NSSC22K0433, 80NSSC22K0522, 80NSSC20K0697, and 80NSSC20K0697.
2303.02598
On Probability Shaping for 5G MIMO Wireless Channel with Realistic LDPC Codes
Probability Shaping (PS) is a method to improve a Modulation and Coding Scheme (MCS) in order to increase the reliability of data transmission. It is already implemented in some modern radio broadcasting and optical systems, but not yet in wireless communication systems. Here we adapt PS for the 5G wireless protocol, namely, for a relatively small transport block size, strict complexity requirements, and actual low-density parity-check (LDPC) codes. We support our proposal with numerical experiment results in the Sionna simulator, showing a 0.6 dB gain of the PS-based MCS versus a commonly used MCS.
Evgeny Bobrov, Adyan Dordzhiev
2023-03-05T07:50:51Z
http://arxiv.org/abs/2303.02598v3
# On Probability Shaping for 5G MIMO Wireless Channel with Realistic LDPC Codes ###### Abstract Probability Shaping (PS) is a method to improve a Modulation and Coding Scheme (MCS) in order to increase the reliability of data transmission. It is already implemented in some modern radio broadcasting and optical systems, but not yet in wireless communication systems. Here we adapt PS for the 5G wireless protocol, namely, for a relatively small transport block size, strict complexity requirements, and actual low-density parity-check (LDPC) codes. We support our proposal with numerical experiment results in the Sionna simulator, showing a 0.6 dB gain of the PS-based MCS versus a commonly used MCS. Keywords: QAM MCS OFDM 5G PS FEC BICM LDPC ## 1 Introduction In the 5G New Radio downlink procedure, the user equipment proposes to the serving base station the optimal modulation and coding scheme (MCS) [1], based on quadrature amplitude modulation (QAM), for use in the next signal transmission. In order to achieve the capacity of the additive white Gaussian noise (AWGN) channel, the transmit signal must be Gaussian distributed. The use of uniformly distributed QAM symbols with optimal coded modulation (CM) leads to a shaping loss of up to 1.53 dB for high-order constellations [4]. Bit-interleaved coded modulation (BICM) with parallel bit-wise demapping, as currently employed in LTE, leads to an additional loss. Non-uniform constellations (NUC) and geometric shaping (GS) have recently been adopted for the next-generation terrestrial broadcast standard [12]. The QAM constellations are optimized for each target signal-to-noise ratio (SNR) by maximizing the BICM capacity for uniformly distributed bits. Note that, in contrast to standard Gray-labeled QAM (Figs. 3, 4, 5), the non-uniform constellations do not allow for a simple independent demapping of the real and imaginary parts. Therefore, one-dimensional NUCs for each real dimension were also studied in [12], which provide a reduced shaping gain. 
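The 1.53 dB figure quoted above is the asymptotic shaping loss of a uniform input relative to a Gaussian input, \(10\log_{10}(\pi e/6)\); a minimal computation:

```python
import math

# Asymptotic (high-SNR, high-order) shaping loss of uniformly distributed
# QAM relative to a Gaussian transmit signal: 10*log10(pi*e/6) dB.
shaping_loss_db = 10 * math.log10(math.pi * math.e / 6)
```

Evaluating this expression gives roughly 1.53 dB, matching the loss figure cited from [4].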
The performance of BICM can be improved by using non-uniform constellations (NUC), but there remains a gap to the capacity with a Gaussian transmit signal. In the traditional approach to data transmission, each point in a particular constellation has an equal chance of being transmitted. While this technique gives the highest bit rate for a given constellation size, it ignores the energy cost of the individual constellation points. So, as an alternative to GS, it is also possible to adjust the probabilities of the constellation points such that they follow an approximate discrete Gaussian distribution, using the probability shaping (PS) method [11]. Probabilistically shaped coded modulation (PSCM) enables the BICM system to close the gap to the capacity with a Gaussian transmit signal. PS is a CM strategy that combines constellation shaping and channel coding. In the literature, Gallager's error exponent approach has been used to study the achievable information rates of PS [5, Ch. 5]. In particular, it was shown that the PS method is capacity-achieving for additive white Gaussian noise channels [2]. In [6], the authors revisit the capacity-achieving property of PS. The concept of selecting constellation points using a nonuniform Maxwell-Boltzmann PS is investigated in [11]. A nonuniform PS signaling scheme reduces the entropy of the transmitter output and, as a result, the average bit rate. However, if low-energy points are picked more frequently than high-energy points, the energy savings may (more than) compensate for the bit rate reduction. The authors of [16] proposed a new PS distribution that outperforms Maxwell-Boltzmann for the nonlinear fiber channel. In [9], the authors successfully tested the suitability of PS constellations in a German nationwide fiber ring of Deutsche Telekom's R&D field test network. In [3], the PS method is implemented in a 64-QAM coherent optical transmission system. 
In [10], the authors propose extending the 5G New Radio polar coding chain by introducing a shaping encoder in front of the polar encoder, which improves the performance with higher-order modulation using this PS scheme. The main objectives of the study are: * In this paper, we investigate the PS Enumerative Sphere Shaping (ESS) [7] method known in the literature with respect to a realistic MIMO OFDM wireless channel with LDPC at a given code rate. * We provide numerical experiments on the modern Sionna [8] simulation platform and find locally optimal parameters for the ESS method, minimizing BLER and providing a gain of up to 0.6 dB over the QAM-16 baseline. * This study could be interesting from a scientific point of view, since there are almost no published papers on PS that consider such realistic and contemporary scenarios; existing works consider only theoretical distributions [15]. The basic principle of the PS method is presented in Fig. 1. We change the probabilities of the constellation points, which allows us to scale their coordinates while preserving the expected constellation power. ## 2 System Model A block diagram of the proposed PSCM transmitter and receiver is shown in Fig. 2. The main difference from conventional BICM is the distribution matcher that maps the uniformly distributed data bits to bit streams with a desired distribution, which determine the amplitudes of the transmitted QAM symbols. The forward error correction (FEC) encoder generates additional parity bits, which are uniformly distributed and determine the signs of the transmitted QAM symbols. This results in an approximately Gaussian distributed transmit signal using the same constellation mapping as in LTE. At the receiver side, the QAM demapper calculates the bit-wise log-likelihood ratios (LLRs) based on the observed receive signal, taking the non-uniform transmit symbol distribution into account. 
These LLRs are fed to the FEC decoder as in conventional BICM, and the decoder output is finally mapped back to data bits by the distribution deshaper. Note that both the distribution matcher and the deshaper correspond to simple one-to-one mappings, which can be efficiently implemented. In the numerical experiments, Coded BLER is the average error rate of a transmitted block of bits before the LDPC encoder and after decoding in Fig. 2, which takes the realistic encoding-decoding procedure into account.

Figure 1: The probability shaping method increases system performance by scaling the constellation points, which is possible while preserving the expected constellation power.

## 3 Optimal Distribution for Probability Shaping

Let \(X\) be a random variable taking values on some finite alphabet \(\chi\), and \(P_{X}\) be the distribution function of \(X\). Further, we will call the set \(\chi\) a constellation. We consider an AWGN channel \[Y=X+N\] where \(N\) is a Gaussian random variable with zero mean and standard deviation \(\sigma\) such that \(0<\sigma<\infty\). The energy of the constellation is equal to the expectation of \(|X|^{2}\), i.e. \[\mathbb{E}[|X|^{2}]=\sum_{i=1}^{m}p_{i}|x_{i}|^{2},\] where \(p_{i}\) denotes the probability of the constellation point \(x_{i}\) and \(m\) the constellation size. Our goal is to minimize the energy of the constellation in order to reduce symbol errors. In this paper, we study the case when the random variable \(X\) is distributed on the QAM constellation.

**Example.** Let the initial distribution of \(X\) be uniform. For instance, the energy of QAM-16 is equal to 10 since \[\mathbb{E}[|X|^{2}]=\frac{1}{16}\cdot(4\cdot 2+8\cdot 10+4\cdot 18)=10.\] Now suppose we change the distribution of \(X\) in such a way that

* the four points with coordinates \((\pm 1,\pm 1)\) have probability 0.125,
* the eight points with coordinates \((\pm 1,\pm 3),(\pm 3,\pm 1)\) have probability 0.0375,
* the four points with coordinates \((\pm 3,\pm 3)\) have probability 0.05.
In this case, the energy will be equal to 7.6 since \[\mathbb{E}[|X|^{2}]=0.125\cdot 4\cdot 2+0.0375\cdot 8\cdot 10+0.05\cdot 4 \cdot 18=7.6,\] and if we shift the points by multiplying them by the square root of the ratio of the constellation energies, \(\sqrt{\frac{10}{7.6}}\), then the energy becomes equal to 10 again.

Figure 2: Block diagram of the probability shaping transmitter and receiver.

It follows that the constellation points are further apart while the noise variance is unchanged, so the probability of error is lower. Thus, we solve the following problem. Let \(P_{X}=(p_{1},\ldots,p_{m})\) be the vector of probabilities of the constellation points, where \(m\) is the size of the constellation: \[\left\{\begin{aligned} &\mathbb{E}[|X|^{2}]=\sum_{i=1}^{m}p_{i} \cdot|x_{i}|^{2}\rightarrow\min_{P_{X}}\\ &\sum_{i=1}^{m}p_{i}=1\\ & H(X)=-\sum_{i=1}^{m}p_{i}\cdot\log_{2}p_{i}=const\end{aligned}\right. \tag{1}\] The physical meaning of problem (1) is to minimize the constellation energy at a fixed constellation entropy. The entropy \(H(X)\) represents the amount of information transmitted by the constellation, and the energy \(\mathbb{E}[|X|^{2}]\) represents the power the transmitter has to expend to transmit the data. For this problem, there is no analytical expression for the optimal distribution. Instead, the constellation points are assumed to follow a Maxwell-Boltzmann distribution: \[\widehat{p}_{i}=\frac{e^{-\mu|x_{i}|^{2}}}{\sum_{j=1}^{m}e^{-\mu|x_{j}|^{2}}}, \quad i=1,\ldots,m \tag{2}\] since it is close to the optimal distribution [11] and maximizes the entropy of the constellation under a constraint on its energy. However, this approach is not feasible since it requires an infinite block length, so we need to implement the _ESS_ method [7], which we describe in the following sections.
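The arithmetic of this example is easy to reproduce; the following sketch (the point coordinates and probabilities are exactly those listed above) checks the two energies and the rescaling step:

```python
# QAM-16 point coordinates grouped by energy |x|^2: 2, 10 and 18
points = ([(a, b) for a in (-1, 1) for b in (-1, 1)]       # |x|^2 = 2
          + [(a, b) for a in (-1, 1) for b in (-3, 3)]     # |x|^2 = 10
          + [(a, b) for a in (-3, 3) for b in (-1, 1)]     # |x|^2 = 10
          + [(a, b) for a in (-3, 3) for b in (-3, 3)])    # |x|^2 = 18

def energy(probs):
    """Constellation energy E[|X|^2] under the given point probabilities."""
    return sum(p * (re * re + im * im) for p, (re, im) in zip(probs, points))

uniform = [1 / 16] * 16
shaped = [0.125] * 4 + [0.0375] * 8 + [0.05] * 4  # the distribution above

e_uniform = energy(uniform)             # 10, the uniform QAM-16 energy
e_shaped = energy(shaped)               # 7.6, the shaped energy
scale = (e_uniform / e_shaped) ** 0.5   # sqrt(10 / 7.6)

# scaling every coordinate by `scale` restores the original energy
scaled_points = [(scale * re, scale * im) for re, im in points]
e_rescaled = sum(p * (re * re + im * im)
                 for p, (re, im) in zip(shaped, scaled_points))
assert abs(e_rescaled - e_uniform) < 1e-9
```

The shaped-then-rescaled constellation thus spends the same power as the uniform one while spreading the points further apart, which is exactly the mechanism illustrated in Fig. 1.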
## 4 Coded Modulation Design for QAM-16

According to the labelling procedure, we can notice that the first two bits in the binary representation of the constellation points are responsible for the symmetry about the coordinate axes, and the last two bits are responsible for the absolute value (Fig. 4). In what follows, we will refer to the first two bits as sign bits and to the last two bits as amplitude bits. Thus, amplitude bit zero corresponds to points with coordinates \(\pm 1\), and amplitude bit one corresponds to points with coordinates \(\pm 3\).

### Constellation Energy Minimisation

The energy minimization process is fairly straightforward. We take the constellation points with the smallest absolute value with a higher probability, and the points with the largest absolute value with a lower probability. Thus, we are more interested in constellation points that have more zeros than ones at the amplitude bit positions of the binary representation, and it is then sufficient to maximize the probability of zero at the amplitude bit positions. We also assume that the sign bits are uniformly distributed, i.e. the probability of zero and one of the first two bits in the binary representation of each constellation point is equal to \(\frac{1}{2}\). As noted above, amplitude bit one corresponds to points more distant from the origin, and amplitude bit zero corresponds to closer points. Thus, we can define the _energy_ of a sequence of \(n\) amplitude bits consisting of \(k\) ones and \(n-k\) zeros to be \[\underbrace{1^{2}+\ldots+1^{2}}_{n-k}+\underbrace{3^{2}+\ldots+3^{2}}_{k}.\] It can be seen that the points nearest to the origin have the lowest energy. For a given number of input amplitude bits \(k\) and block length \(n\), the most efficient way to change the probabilities is to map all possible \(2^{k}\) realisations to the \(2^{k}\) sequences of \(n\) amplitude bits with minimal energy.
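This selection rule can be sketched in a few lines: enumerate all length-\(n\) amplitude-bit sequences, keep the \(2^{k}\) of lowest energy, and read off the induced bit-level probabilities. The values \(n=8\), \(k=5\) below are illustrative, not the parameters of the later experiments:

```python
import itertools

def amplitude_bit_distribution(n, k):
    """Map 2^k uniform inputs to the 2^k lowest-energy length-n
    amplitude-bit sequences; amplitude bit 0 ~ coordinate +-1 (energy 1),
    amplitude bit 1 ~ coordinate +-3 (energy 9)."""
    seqs = sorted(itertools.product((0, 1), repeat=n),
                  key=lambda s: sum(9 if b else 1 for b in s))
    chosen = seqs[:2 ** k]
    ones = sum(sum(s) for s in chosen)
    p1 = ones / (len(chosen) * n)   # empirical P(amplitude bit = 1)
    return 1 - p1, p1

p0, p1 = amplitude_bit_distribution(n=8, k=5)
assert p0 > 0.5 > p1   # zeros (low-energy amplitudes) dominate
```

Ties between sequences with the same number of ones do not affect the resulting bit distribution, since all such sequences contribute the same number of ones. The exhaustive enumeration here is only for illustration; the ESS method of [7] realizes this mapping without listing all sequences.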
After that, we can calculate the probability of one, \(p_{a}(1)\), and the probability of zero, \(p_{a}(0)\), in a set of blocks of length \(n\). Once we know the distribution of the amplitude bits, we can find the probabilities of the constellation points. Each constellation point contains two sign bits and two amplitude bits, so the probability of a point is equal to \[\widehat{p}_{i}=(\frac{1}{2})^{2}\cdot p_{a}(0)^{k}\cdot(1-p_{a}(0))^{2-k}, \quad i=1,\ldots,16 \tag{3}\] where \(k\in\{0,1,2\}\) is the number of zero amplitude bits in the bit representation of the constellation point. After we have changed the distribution of the constellation points, we can calculate the scaling parameter \(\mu\) as the ratio of the initial energy to the new energy: \[\mu^{2}=\frac{\mathbb{E}[|X|^{2}]}{\mathbb{E}[|\widehat{X}|^{2}]}=\frac{10}{ \sum_{i=1}^{16}\widehat{p}_{i}\cdot|x_{i}|^{2}}, \tag{4}\] where \(\widehat{X}\) is a new random variable with the distribution (3). Finally, we shift the points of the constellation by multiplying them by the parameter \(\mu\), thereby reducing the probability of error.

### Amplitude Shaper and Sign Delay

In this subsection we describe the model shown in Fig. 6. Initially, the input is a block of uniformly distributed information bits. These bits are divided into two groups, one of which will become the amplitude bits, while the other group will be part of the sign bits. The amplitude bits are transmitted through the shaper block, which works according to the algorithm described above. The shaper output is a block of a different length, in which the amplitude bits are already distributed according to the algorithm. We will denote the number of bits in the first group by \(k_{sh}\), the number of bits in the second group by \(k_{sign}\) and the number of bits at the shaper output by \(n_{sh}\). After that, the \(k_{sign}\) bits and the \(n_{sh}\) amplitude bits are concatenated and encoded using the LDPC procedure.
The LDPC procedure, in turn, generates additional \(\Delta n\) check bits, which are also considered to be uniformly distributed. We will denote the number of bits at the encoder output by \(n_{FEC}=k_{sign}+n_{sh}+\Delta n\). Note that the \(k_{sign}+\Delta n\) bits are sign bits, which have a uniform distribution, while the \(n_{sh}\) amplitude bits are distributed according to the algorithm. We also note that the number of sign bits is equal to the number of amplitude bits, i.e. \(n_{sh}=k_{sign}+\Delta n\). For this procedure, we fix the system code rate \(R\), the shaper input size \(k_{sh}\) and the shaper output size \(n_{sh}\). The values of \(k_{sign}\) and \(n_{FEC}\) can be calculated using the code rate formulas.

Figure 6: Data flow through the amplitude probability shaper and encoder to modulation.

The values of \(R,R_{sh},R_{FEC}\) are defined as

1. system rate: \(R=\frac{k_{sh}+k_{sign}}{n_{FEC}}\),
2. shaping rate: \(R_{sh}=\frac{k_{sh}}{n_{sh}}\),
3. FEC rate: \(R_{FEC}=\frac{n_{sh}+k_{sign}}{n_{FEC}}\).

The system rate is expressed in terms of the shaping rate and the FEC rate as \(R=\frac{1}{2}(R_{sh}+2R_{FEC}-1)\).

### Probability Shaping Mapping

For mapping purposes, we form a special PS matrix (Fig. 7) with uniform sign bits and non-uniform amplitude bits, following the data flow scheme (Fig. 6). We generate \(k_{sign}\) sign bits with equal probabilities of zeros and ones, \(p_{s}=\frac{1}{2}\), and \(n_{sh}\) amplitude bits with unequal probabilities of zeros and ones: \(p_{a}\neq\frac{1}{2}\) such that \(p_{a}(0)>p_{a}(1)\). The probability of the amplitude bits can be determined from the proper values of \(k_{sh}\) and \(n_{sh}\) using the ESS method [7]. Finally, after the PS matrix is constructed, the mapping to the QAM constellation is performed. In the mapping procedure, bits are converted to constellation points (or symbols) using the mapping table, which assigns a specific coordinate on the complex plane to each unique sequence of bits.
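The rate bookkeeping above can be checked with a short sketch. For QAM-16 the number of sign bits equals the number of amplitude bits, so \(n_{FEC}=2n_{sh}\), and the remaining sizes follow from the rate definitions; the numbers below are illustrative, not the experimental settings:

```python
def rate_split(k_sh, n_sh, R):
    """Derive the remaining QAM-16 block sizes from the shaper sizes and
    the target system rate R (one sign bit per amplitude bit)."""
    n_fec = 2 * n_sh                   # n_sh amplitude + n_sh sign bits
    k_sign = round(R * n_fec) - k_sh   # from R = (k_sh + k_sign) / n_fec
    delta_n = n_sh - k_sign            # parity bits: n_sh = k_sign + delta_n
    R_sh = k_sh / n_sh
    R_fec = (n_sh + k_sign) / n_fec
    # consistency with R = (R_sh + 2*R_FEC - 1) / 2
    assert abs(R - (R_sh + 2 * R_fec - 1) / 2) < 1e-12
    return k_sign, delta_n, R_sh, R_fec

# illustrative sizes: a rate-1/2 system with a rate-3/4 shaper
k_sign, delta_n, R_sh, R_fec = rate_split(k_sh=192, n_sh=256, R=0.5)
assert (k_sign, delta_n) == (64, 192)
```

Substituting \(n_{FEC}=2n_{sh}\) into the three definitions gives \(R=\frac{1}{2}R_{sh}+R_{FEC}-\frac{1}{2}\), which is exactly the stated relation between the system, shaping and FEC rates.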
Notice that, given the bit probabilities, there is a one-to-one correspondence to the symbol probabilities. After the mapping procedure is done, the symbols go through the MIMO channel, demodulation and decoding, probability deshaping and BLER calculation, which are described in Sec. 2 and Fig. 2. The demodulation and deshaping methods are the same procedures described earlier, performed in reverse order. The decoding procedure is a complex process, which uses loopy belief propagation [14] to iteratively recover the correct bits (LLRs).

Figure 7: Creation of the virtual data-block matrix for amplitude probability shaping: generation of bits with unequal probabilities for the sign \(k_{sign}\), LDPC \(\Delta n\) and amplitude \(n_{sh}\) bits, with final symbol mapping. The red dots represent the constellation points mapped from the generated binary sequence.

### Arrangement of Finite Code Block Shapes

To make all the shapes \(k_{sh}\), \(k_{sign}\) and \(\Delta n\) consistent and to form the PS matrix (Fig. 7), we solve the system of integer equations (5) by finding the least common multiple (LCM). Hereafter, the values of \(N_{fr}^{sh}\) and \(N_{fr}^{FEC}\) define the multiplicative constants balancing these equations. The values of \(N_{A}\) and \(N_{S}\) define the numbers of amplitude and sign bits in the code block. It is implicitly assumed that everywhere in Figs. 6 and 7 the values are \(k_{sh}:=N_{fr}^{sh}k_{sh}\), \(k_{sign}:=N_{fr}^{sh}k_{sign}\), \(\Delta n:=N_{fr}^{FEC}\Delta n\), \(n_{FEC}:=N_{fr}^{FEC}n_{FEC}\). \[\begin{cases}N_{A}=n_{sh}N_{fr}^{sh}\\ CN_{S}=N_{A}\\ N_{S}+N_{A}=N_{fr}^{FEC}n_{FEC},\end{cases} \tag{5}\] where the value of \(C\) defines the constellation system, i.e. \(C=1\) for QAM-16, \(C=2\) for QAM-64, \(C=3\) for QAM-256 and so on.

## 5 Numerical Experiments

### Energy per Bit and Noise Ratio

The energy per bit to noise ratio \(E_{b}/N_{0}\) is a normalized SNR measure, also known as SNR per bit.
The \(E_{b}/N_{0}\) measure can be used to express the relationship between signal power and noise power. The energy per bit \(E_{b}\) is the energy we use to transmit one bit of information: \[E_{b}=\frac{P}{R},\] where \(P\) is the total power and \(R\) is the LDPC code rate. The noise measure \(N_{0}\) is the noise variance over the real and imaginary parts: \[N_{0}=2\sigma^{2}.\] Thus, \(E_{b}/N_{0}\) can be expressed in terms of the SNR: \[E_{b}/N_{0}=\frac{P}{R}\frac{1}{2\sigma^{2}}=\frac{P}{\sigma^{2}}\frac{1}{2R} =\frac{\text{SNR}}{2R}.\] In decibels, \(E_{b}/N_{0}\) is \[E_{b}/N_{0}\text{ in dB}=10\log_{10}(E_{b}/N_{0})=10\log_{10}\left(\frac{P}{ \sigma^{2}}\frac{1}{2R}\right) \tag{6}\] We use the value of \(E_{b}/N_{0}\) in the Monte Carlo experiments. For a given value of \(E_{b}/N_{0}\), the noise variance \(\sigma^{2}\) for fixed power \(P\) and code rate \(R\) is obtained from Eq. (6) and disturbs the symbols transmitted over the AWGN channel.

### Realistic Simulations using Sionna

This study considers OFDM MIMO with a base station and a user equipped with multiple cross-polarised antennas. We provide simulations in Sionna [8] on the OFDM channel using 5G LDPC codes. The architecture of the system consists of the LDPC code, Bit Interleaver, Resource Grid Mapper, LS Channel Estimator, Nearest Neighbor Demapper, LMMSE Equalizer [13] and OFDM Modulator, and is presented in Fig. 2. The optimization variables are the constellation type, bit order, code rate, BLER, SNR, code block sizes and 5G model (LOS D, NLOS A). The system uses soft estimates of the LLRs for the decoder. The channel model is chosen to be OFDM 5G at 2.6 GHz with a delay spread of 40 ns. The block size is 1536 with \(10^{5}\) Monte-Carlo trials in the simulations, so in total \(1.536\cdot 10^{8}\) bits were processed for each point of \(E_{b}/N_{0}\). For all simulations, 20 LDPC iterations have been used. The code source is random binary tensors.
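In code, converting a target \(E_{b}/N_{0}\) in dB into the noise variance used in such Monte-Carlo sweeps is a one-liner following Eq. (6); a minimal sketch:

```python
import math

def noise_variance(ebno_db, P=1.0, R=0.5):
    """Noise variance sigma^2 for a given Eb/N0 in dB, inverting
    Eb/N0 = P / (2 * R * sigma^2) from Eq. (6)."""
    ebno = 10 ** (ebno_db / 10)
    return P / (2 * R * ebno)

# round trip: from sigma^2 back to Eb/N0 in dB
sigma2 = noise_variance(3.0, P=1.0, R=0.5)
snr = 1.0 / sigma2
assert abs(10 * math.log10(snr / (2 * 0.5)) - 3.0) < 1e-9
```

Note that at \(R=0.5\) and unit power, \(E_{b}/N_{0}\) coincides with the SNR, since the factor \(2R\) equals one.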
The system uses both the 3GPP wireless Line of Sight (LOS) and Non Line of Sight (NLOS) channel models D and A. The model is simulated in the real time domain, considering inter-symbol (IS) and inter-carrier (IC) interference. In Tab. 2 we provide the simulation parameters for Sionna. In Figs. 8 and 9, we provide experiments for both QAM-16 LOS Model D and NLOS Model A probability shaped (PS) constellations. We present Coded BLER experiments (see Fig. 2) for the QAM-16 baseline and for amplitude PS with different shaping parameters, where the code rate is \(r\), the block size is \(n\) and the PS parameters are \(n_{sh}\) and \(k_{sh}\). There is a local optimum at \(k_{sh}=192\) for PS QAM-16 in both the LOS and NLOS models. It is noteworthy that the optimal parameter \(k_{sh}\) is the same for the different LOS and NLOS models, which tells us that the chosen parameters are stable. Note that with wrong parameter settings, e.g. a strong shaping factor \(k_{sh}=128\), the quality of the PS method is worse than that of the baseline QAM. In Fig. 10 we present experiments with the AWGN channel and LDPC, which show a higher gain than for the OFDM channel model. In Tab. 1, we provide the gains in dB of the proposed PS method at 10% BLER. The PS method achieves a 0.56 dB gain in Model A NLOS Uplink compared to the baseline. From the extensive experiments, we conclude that the proposed PS constellations are superior to the baseline QAM for a realistic 5G wireless system using LDPC FEC. The model codes and related experiments can be found in the repository4.

Footnote 4: [https://github.com/eugenbobrov/On-Probabilistic-QAM-Shaping-for-5G-MIMO-Wireless-Channel-with-Realistic-LDPC-Codes](https://github.com/eugenbobrov/On-Probabilistic-QAM-Shaping-for-5G-MIMO-Wireless-Channel-with-Realistic-LDPC-Codes)

Figure 8: Model D LOS Downlink channel Coded Block Error Rate for OFDM QAM-16.

Figure 9: Model A NLOS Uplink channel Coded Block Error Rate for OFDM QAM-16.
\begin{table} \begin{tabular}{|l|l|} \hline Carrier frequency & 2.6e9 \\ \hline Delay spread & 40e-9 \\ \hline Cyclic prefix length & 6 \\ \hline Num guard carriers & [5; 6] \\ \hline FFT size & 44 \\ \hline Num user terminal antennas & 2 \\ \hline Num base station antennas & 2 \\ \hline Num OFDM symbols & 14 \\ \hline Num LDPC iterations & 20 \\ \hline \end{tabular} \end{table} Table 2: Simulation parameters in Sionna.

Figure 10: AWGN channel Coded Block Error Rate for QAM-16.

## 6 Conclusions and Suggested Future Work

In this paper, for a MIMO OFDM wireless channel with a realistic LDPC code at a given code rate, we study the Enumerative Sphere Shaping (ESS) PS scheme known from the literature. Through numerical experiments on the state-of-the-art Sionna simulation platform, which models the physical layer of the communication system, we find locally optimal parameters for the ESS method that minimise the BLER and provide a gain of up to 0.6 dB over the QAM-16 baseline. Since there are almost no published works on PS that consider such realistic and contemporary scenarios, existing studies considering only theoretical distributions, this study could be of scientific interest. In the future, a detailed study of the BLER performance of a combination of the PS and GS methods is possible, which could be very promising in communication applications.
2305.06470
Upper bounds for the rank of powers of quadrics
We establish an upper bound for the rank of every power of an arbitrary quadratic form. Specifically, for any $s\in\mathbb{N}$, we prove that the $s$-th power of a quadratic form of rank $n$ grows as $n^s$. Furthermore, we demonstrate that its rank is subgeneric for all $n>(2s-1)^2$.
Cosimo Flavi
2023-05-10T21:34:18Z
http://arxiv.org/abs/2305.06470v2
# Upper bounds for the rank of powers of quadrics

###### Abstract.

We determine an upper bound for the rank of every power of an arbitrary quadratic form. In particular, given any \(s\in\mathbb{N}\), we prove that the rank of the \(s\)-th power of a quadratic form of rank \(n\) grows as \(n^{s}\). Moreover, we guarantee that its rank is subgeneric for every \(n>(2s-1)^{2}\).

Key words and phrases: Additive decompositions, tensor rank

2020 Mathematics Subject Classification: Primary 14N07

## Introduction

Given \(n\), \(d\in\mathbb{N}\) and a homogeneous polynomial \(f\in\mathbb{C}[x_{1},\ldots,x_{n}]\) of degree \(d\), a summation of the \(d\)-th powers of \(r\) different linear forms \(l_{1},\ldots,l_{r}\in\mathbb{C}[x_{1},\ldots,x_{n}]\), such that \[f=\sum_{j=1}^{r}l_{j}^{d},\] is called a _Waring decomposition_, or simply _decomposition_, of \(f\) of _size_ \(r\). The _Waring rank_ (or simply _rank_) of \(f\) is the minimum natural number \(r\) such that there exists a decomposition of \(f\) of size \(r\), that is, \[\operatorname{rk}f=\min\Bigg{\{}r\in\mathbb{N}\;\Bigg{|}\;f=\sum_{j=1}^{r}l_{ j}^{d}:\,l_{j}\in\mathbb{C}[x_{1},\ldots,x_{n}]_{1}\;\Bigg{\}}.\] The problem of determining the rank of a generic form was solved by J. Alexander and A. Hirschowitz in [1].

**Theorem** (J. Alexander, A. Hirschowitz).: _There exists a Zariski open set \(\Omega\subseteq\mathbb{C}[x_{1},\ldots,x_{n}]_{d}\) such that, for every \(f\in\Omega\),_ \[\operatorname{rk}f=\left\lceil\frac{1}{n}\binom{d+n-1}{d}\right\rceil,\] _with the exceptions given by the following cases:_

* _if \(d=2\), then \(\operatorname{rk}f=n\);_
* _if \(n=3\) and \(d=4\), then \(\operatorname{rk}f=6\);_
* _if \(n=4\) and \(d=4\), then \(\operatorname{rk}f=10\);_
* _if \(n=5\) and \(d=3\), then \(\operatorname{rk}f=8\);_
* _if \(n=5\) and \(d=4\), then \(\operatorname{rk}f=15\)._

More recently, this result has been accurately analyzed by M. C. Brambilla and G.
Ottaviani, who provided a shorter version of the proof in [1], to which we refer for the details. However, although the Waring rank of generic forms is completely understood, obtaining the rank of a specific polynomial still remains a hard issue. Currently, there is no general efficient method to solve it, or even to determine suitable decompositions, independently of the form we consider. Nevertheless, many partial results and methods to attack the Waring problem for a polynomial have been produced over the years. For a more detailed overview of Waring decompositions, there are many texts and papers in the literature. We refer, for instance, to [1], [1], [2], and [1]. For the special case of two variables, the determination of the rank is easier to approach and has been completely analyzed (see [10] or the more recent work of G. Comas and M. Seiguer in [12]). In particular, there are many algorithms leading to explicit decompositions. The first of these is known as the Sylvester algorithm, which can be found in [10] or, in a more recent version, also in [12], [1], and [1]. It has been further analyzed over the years, with several other variants (see e.g. [1, Algorithm 2]). Of great interest, however, are also the applications of Waring decompositions. In [1, section 1], A. Bernardi, A. Gimigliano, and M. Ida briefly summarize some uses, such as telecommunications in electrical engineering (see e.g. [1] and [10]) or cumulant tensors in statistics (see e.g. [12]). It is for this reason that the Waring rank of a polynomial or, equivalently, the symmetric rank of a symmetric tensor, still plays a special role today, despite its classical origins. As one could expect, many forms are relevant in both classical and modern subjects, appearing several times in the literature. The central objects we consider in this paper are the powers of quadratic forms.
It is well known that, by the classical Sylvester's law ([13]), every quadratic form of rank \(n\in\mathbb{N}\) can be written, after a linear change of variables, as \[q_{n}=x_{1}^{2}+\cdots+x_{n}^{2}.\] Thus, we can restrict ourselves to the study of the polynomial \(q_{n}^{s}\) for \(n,s\in\mathbb{N}\). For the case of binary forms, the problem of determining the rank of \(q_{2}^{s}\) has been completely solved by B. Reznick, who proves in [14, Theorem 9.5] that \(\operatorname{rk}(q_{2}^{s})=s+1\). However, the same cannot be said for the general case of more variables, about which there is little information in the literature. To date, the most complete analysis of this subject is due to B. Reznick, who provides in [14, Chapters 8-9] an accurate survey of both classical and more original results over \(\mathbb{R}\). Given the greater relevance which real numbers usually have in applications, with respect to complex ones, B. Reznick focuses in his notes on real Waring decompositions, to which he refers as _representations_. In particular, to get a new vision of this problem, especially considering the recent applications of tensors, we also consider the complex case of Waring decompositions. In relation to the powers of quadratic forms, this represents a new approach, apart from the classical point of view. Several uses of decompositions of powers of quadratic forms have been listed by B. Reznick in [14, Section 8], such as in number theory, to study the Waring problem, or even in functional analysis. In section 1 we present some of the main decompositions analyzed by B. Reznick, focusing on some closed formulas providing a family of decompositions depending on the number of variables. In general, these formulas do not correspond to minimal decompositions, but they provide an estimate of how the rank grows as the number of variables tends to infinity.
Moreover, we observe in which cases we can state that the rank of \(q_{n}^{s}\) is subgeneric. A basic example is given by the formula \[6q_{n}^{2}=\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{4}+2(4-n)\sum_{i_{1}}x _{i_{1}}^{4},\] which is a decomposition of size \(n^{2}\), presented by B. Reznick in [14, formula (8.33)]. As we will see in Theorem 1.11, this formula does not provide minimal decompositions for every \(n\in\mathbb{N}\). However, it presents a quite elegant pattern and a structure which is invariant under the action of the permutation group \(\mathfrak{S}_{n}\). Furthermore, it gives subgeneric decompositions for \(n>17\). This formula has already been taken into consideration by J. Buczynski, K. Han, M. Mella, and Z. Teitler in [1, section 4.5], where they provide a similar decomposition for the exponent \(3\), which presents the same pattern of points. It is given by the formula \[60q_{n}^{3}=\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{6} +2(5-n)\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{6}+2(n^{2}-9n+38)\sum_{i_{ 1}}x_{i_{1}}^{6}.\] We generalize this pattern in section 2, providing other formulas for higher powers. Unfortunately, the pattern is not as easy to describe as one could at first imagine, because linear forms involving only \(1\) and \(-1\) as coefficients are not sufficient to cover the coefficients of all monomials of degree \(2s\). To obtain a general formula for every power \(s\), it is then necessary to consider other kinds of linear forms. In order to do this, we need to recall the classical notion of the \(k\)-partition function for every \(k\in\mathbb{N}\). This is defined as the function \(\operatorname{p}_{k}\colon\mathbb{N}\to\mathbb{N}\) which associates to every natural number \(s\) the number of partitions of \(s\) into exactly \(k\) parts. By means of this, we provide in section 3 an upper bound for the rank of \(q_{n}^{s}\), given in Corollary 3.7 and summarized in the following theorem.
**Theorem**.: _For every \(n,s\in\mathbb{N}\)_ \[\operatorname{rk}(q_{n}^{s})\leq 2^{s-1}\binom{n}{s}+2^{s-2}\binom{n}{s-1}+\sum_{ k=1}^{s-2}2^{k-1}k!\,\mathrm{p}_{k}(s)\binom{n}{k}.\] _In particular, for any \(s\in\mathbb{N}\), the rank of \(q_{n}^{s}\) grows at most as \(n^{s}\)._ Given any \(s\in\mathbb{N}\), the generic rank of forms of degree \(2s\) grows as \(n^{2s-1}\). Hence, this last theorem guarantees that, for any \(s\in\mathbb{N}\), the rank can be supergeneric only for a finite number of possible values of \(n\). The catalecticant map \[\operatorname{Cat}_{f}\colon\mathbb{C}[y_{1},\dots,y_{n}]\to\mathbb{C}[x_{1},\dots,x_{n}]\] of a homogeneous polynomial \(f\) is a graded linear map, which is defined on monomials as \[\operatorname{Cat}_{f}\bigl{(}\mathbf{y}^{\alpha}\bigr{)}=\frac{\partial^{| \alpha|}f}{\partial\mathbf{x}^{\alpha}}.\] We recall a well-known fact, classically attributed to J. J. Sylvester (see [10]). It states that, if \(f\) is a homogeneous polynomial of degree \(d\), then \[\operatorname{rk}f\geq\operatorname{brk}f\geq\operatorname{rk}\bigl{(} \operatorname{Cat}_{f}^{k}\bigr{)},\] where \[\operatorname{Cat}_{f}^{k}\colon\mathbb{C}[y_{1},\dots,y_{n}]_{k}\to\mathbb{C}[ x_{1},\dots,x_{n}]_{d-k}\] is the component of degree \(k\) of the catalecticant map of \(f\), for every \(k=1,\dots,d\). In particular, thanks to a result due to B. Reznick (see [11]), we know that all the catalecticant matrices of \(q_{n}^{s}\) have full rank. This therefore implies that \[\operatorname{rk}(q_{n}^{s})\geq\operatorname{brk}(q_{n}^{s})\geq\binom{s+n-1 }{s}.\] Therefore, putting together the lower and upper bounds obtained, we get the following result, which we prove in Corollary 3.9.
**Corollary**.: _For every \(s\in\mathbb{N}\),_ \[\lim_{n\to\infty}\log_{n}\bigl{(}\operatorname{rk}(q_{n}^{s})\bigr{)}=\lim_{ n\to+\infty}\log_{n}\bigl{(}\operatorname{brk}(q_{n}^{s})\bigr{)}=s.\] This fact perfectly agrees with the value of the border rank determined in [12, Theorem 4.5] for ternary non-degenerate quadratic forms, which turns out to be equal to the rank of the middle catalecticant matrix, that is, \[\operatorname{brk}(q_{3}^{s})=\binom{s+2}{2}.\] Although the idea emerging from the initial decompositions of section 2 is that the highest of these values is quite low, it is not so immediate to prove this for an arbitrary power \(s\). However, we give in Theorem 4.7 an estimate of the maximum value \(n_{s}\in\mathbb{N}\) such that the rank of \(q_{n}^{s}\) is subgeneric for every \(n>n_{s}\). It is stated in the following theorem.

**Theorem**.: _For every \(s\in\mathbb{N}\),_ \[\operatorname{rk}(q_{n}^{s})<\frac{1}{n}\binom{2s+n-1}{2s},\] _that is, the rank of \(q_{n}^{s}\) is subgeneric, whenever_ \[n>(2s-1)^{2}.\] This result partially solves the problem proposed by J. Buczynski, K. Han, M. Mella, and Z. Teitler in [1, section 4.5], who asked for which values of \(n\) and \(s\) the rank of \(q_{n}^{s}\) is subgeneric. In particular, they showed in [1, Theorem 4.1] that, denoting by \(W_{m}^{d}\) the maximum rank loci for the space of \(n\)-ary forms of degree \(d\), each of its irreducible components has dimension at least \(\binom{n+1}{2}-1\). Moreover, if equality holds for an irreducible component \(W\), then \(d\) is even and \(W\) must be the set of all the \((d/2)\)-th powers of quadrics. By the result of Theorem 4.7 we improve this last statement, showing in Corollary 4.13 the following result.

**Corollary**.: _Let \(V\) be a finite dimensional vector space, with \(\dim V=n\) such that \(n\geq 3\), and let \(s\in\mathbb{N}\).
If \(n>(2s-1)^{2}\) and \(W\) is an irreducible component of \(W_{m}^{2s}\), then_ \[\dim(W)\geq\binom{n+1}{2}.\]

## 1 Powers of quadrics and classical decompositions

Throughout the paper, we will denote by \((a_{1}x_{1}\pm\cdots\pm a_{n}x_{n})^{m}\) a summation over all the \(2^{n-1}\) possible choices of the signs \(+1\) and \(-1\) multiplying \(a_{2},\ldots,a_{n}\), that is, \[(a_{1}x_{1}\pm\cdots\pm a_{n}x_{n})^{m}=\sum_{\begin{subarray}{c}j_{i}\in\{0, 1\}\\ i=2,\ldots,n\end{subarray}}(a_{1}x_{1}+(-1)^{j_{2}}a_{2}x_{2}+\cdots+(-1)^{j_{n }}a_{n}x_{n})^{m},\] for every \(a_{1},\ldots,a_{n}\in\mathbb{C}\) and \(m\in\mathbb{N}\). We will also use the notation \[\sum_{i_{1}<\cdots<i_{k}}(a_{1}x_{i_{1}}\pm\cdots\pm a_{k}x_{i_{k}})^{m}\] to denote a summation over any choice of \(k\) variables \(x_{i_{1}},\ldots,x_{i_{k}}\), such that \[1\leq i_{1}<\cdots<i_{k}\leq n,\] for every \(a_{1},\ldots,a_{n}\in\mathbb{C}\) and \(m\in\mathbb{N}\). In the case \(k>n\), we suppose all the terms \(x_{i_{n+1}},\ldots,x_{i_{k}}\) to be \(0\). The determination of suitable decompositions of the polynomial \(q_{n}^{s}\), given any \(n,s\in\mathbb{N}\), is a classical problem. Many decompositions appear in both the older and the more recent literature. B. Reznick provides in [10, chapters 8-9] an overview of the classical results, improving them with new decompositions. We give here some examples. The following two decompositions are due to E. Lucas and were presented by M. J. Houiel as exercises, respectively in [12, Question 39, p. 129] and [12, Question 38, p. 129]. They have then also been mentioned by B. Reznick in [10, formulas (8.4) and (8.6)]. The first is given by the equality \[12q_{3}^{2}=(x_{1}\pm x_{2}\pm x_{3})^{4}+8\sum_{i_{1}}x_{i_{1}}^{4}, \tag{1.1}\] while the second one, which had already appeared previously in [11, p.
101], is given by the formula \[6q_{4}^{2}=\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{4}, \tag{1.2}\] which is a decomposition of size \(12\). Formally, this last one is exactly the same decomposition as the one determined by J. Liouville, first presented by V. A. Lebesgue in [11] and also reported by B. Reznick in [10, formula (8.5)]. It is obtained after a transformation of the set of coordinates and corresponds to the equality \[24q_{4}^{2}=(x_{1}\pm x_{2}\pm x_{3}\pm x_{4})^{4}+16\sum_{i_{1}}x_{i_{1}}^{4}. \tag{1.3}\] In addition to decompositions (1.1) and (1.2), concerning the second power of quadratic forms, we can also report decompositions of higher degrees. One of these is attributed to A. Kempner and is presented in [13, Section 5], given by the equation \[15q_{4}^{3}=\frac{1}{8}(x_{1}\pm x_{2}\pm x_{3}\pm x_{4})^{6}+\sum_{i_{1}<i_{ 2}}(x_{i_{1}}\pm x_{i_{2}})^{6}+8\sum_{i_{1}}x_{i_{1}}^{6}. \tag{1.4}\] It also appears in [10, formula (8.7)]. Another one is provided by A. Hurwitz in [12, formula (3)]; it is a decomposition of size \(72\), given by \[5040q_{4}^{4} =6(x_{1}\pm x_{2}\pm x_{3}\pm x_{4})^{8}+\sum_{i_{1}<i_{2}<i_{3}} \bigl{(}(2x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{8}+(x_{i_{1}}\pm 2x_{i_{2}}\pm x_{i_{3}})^{8}+(x_ {i_{1}}\pm x_{i_{2}}\pm 2x_{i_{3}})^{8}\bigr{)}\] \[\quad+60\,\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{8}+6\sum_{i _{1}}(2x_{i_{1}})^{8}. \tag{1.5}\] Decomposition (1.5) has also been reported by B. Reznick in [11, formula (8.8)]. None of the examples above represents a minimal decomposition of the corresponding power of a quadratic form. Although some of them have quite a low size, the problem of determining the rank of the powers of an arbitrary quadratic form still remains open, with the exception of a few cases. B. Reznick also provides in [11] an analysis of some possible minimal decompositions.
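The sign-summation convention of section 1 makes these identities easy to check numerically; the following sketch verifies (1.1), (1.2) and (1.4) at a random point, summing over all sign patterns with the first sign fixed to \(+\):

```python
import itertools
import random

def pm_power_sum(values, m):
    """Sum of (v_1 +- v_2 +- ... +- v_k)^m over all 2^(k-1) sign
    patterns, with the sign of the first term fixed to +."""
    total = 0.0
    for signs in itertools.product((1, -1), repeat=len(values) - 1):
        total += (values[0] + sum(s * v for s, v in zip(signs, values[1:]))) ** m
    return total

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(4)]
q3 = sum(v * v for v in x[:3])
q4 = sum(v * v for v in x)

# (1.1): 12*q3^2, using the first three variables
lhs11 = 12 * q3 ** 2
rhs11 = pm_power_sum(x[:3], 4) + 8 * sum(v ** 4 for v in x[:3])
assert abs(lhs11 - rhs11) < 1e-9

# (1.2): 6*q4^2 as a sum over pairs of variables
rhs12 = sum(pm_power_sum([x[i], x[j]], 4)
            for i in range(4) for j in range(i + 1, 4))
assert abs(6 * q4 ** 2 - rhs12) < 1e-9

# (1.4): Kempner's decomposition of 15*q4^3
rhs14 = (pm_power_sum(x, 6) / 8
         + sum(pm_power_sum([x[i], x[j]], 6)
               for i in range(4) for j in range(i + 1, 4))
         + 8 * sum(v ** 6 for v in x))
assert abs(15 * q4 ** 3 - rhs14) < 1e-9
```

A check at a single random point is of course only a sanity test, not a proof; since both sides are polynomials, agreement on a sufficiently large grid of points would establish the identities, which in any case follow from expanding the sign sums.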
With the notation \[\mathbf{x}^{\alpha}=x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}},\] for every multi-index \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\in\mathbb{N}^{n}\), the _catalecticant map_ of a homogeneous polynomial \(f\) of degree \(d\) is obtained by extending by linearity the map defined on monomials as \[\operatorname{Cat}_{f}\colon\mathbb{C}[y_{1},\ldots,y_{n}]\to\mathbb{C}[x_{1},\ldots,x_{n}],\qquad\mathbf{y}^{\alpha}\longmapsto\frac{\partial^{|\alpha|}f}{\partial\mathbf{x}^{\alpha}}.\] Thanks to the catalecticant map, it is possible to establish a general inequality, which provides a well known lower bound for the Waring rank, related to catalecticant matrices and apolarity (see [10] for an overview of this last subject). It is classically attributed to J. J. Sylvester (see [11]) and also appears many times in the literature (see e.g. [1, Proposition 3.5.1.1]). We denote by \(\operatorname{Cat}_{f}^{k}\) the \(k\)-th component of the catalecticant map of \(f\), that is, the restriction \[\operatorname{Cat}_{f}^{k}\colon\mathbb{C}[y_{1},\ldots,y_{n}]_{k}\to\mathbb{C}[x_{1},\ldots,x_{n}]_{d-k}\] of the catalecticant map of \(f\) to the component of degree \(k\) of \(\mathbb{C}[y_{1},\ldots,y_{n}]\). **Proposition 1.6**.: _Let \(d,k\in\mathbb{N}\) be such that \(d\geq k\) and let \(f\in\mathcal{R}_{n}^{d}\) be a homogeneous polynomial of degree \(d\). Then_ \[\operatorname{rk}f\geq\operatorname{brk}f\geq\operatorname{rk}\bigl{(}\operatorname{Cat}_{f}^{k}\bigr{)}.\] B. Reznick shows, in particular, that all the catalecticant matrices of \(q_{n}^{s}\) are full rank, using [11, Theorem 8.15] and referring to [11, Theorems 3.7 and 3.16]. Another proof of this fact is provided by F. Gesmundo and J. M. Landsberg in [12, Theorem 2.2].
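Proposition 1.6 can be made concrete computationally. For \(f=\sum_{\gamma}c_{\gamma}\mathbf{x}^{\gamma}\) of degree \(2s\), the \(s\)-th catalecticant matrix has entries \(\frac{(\alpha+\beta)!}{\beta!}\,c_{\alpha+\beta}\), indexed by degree-\(s\) exponent vectors \(\alpha,\beta\). A short Python sketch (ours) computes this rank exactly over \(\mathbb{Q}\) for \(q_n^s\) and checks the full-rank statement recalled above:

```python
import itertools
from fractions import Fraction
from math import comb, factorial

def multinomial(ks):
    r = factorial(sum(ks))
    for k in ks:
        r //= factorial(k)
    return r

def monomials(n, d):
    """All exponent vectors of degree d in n variables."""
    return [m for m in itertools.product(range(d + 1), repeat=n) if sum(m) == d]

def rank(M):
    """Matrix rank over Q, by Gaussian elimination with exact fractions."""
    M = [[Fraction(e) for e in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def cat_rank(n, s):
    """Rank of the s-th catalecticant matrix of q_n^s."""
    def coeff(g):  # coefficient of x^g in q_n^s
        return 0 if any(e % 2 for e in g) else multinomial([e // 2 for e in g])
    def deriv_factor(al, be):  # prod_i (alpha_i + beta_i)! / beta_i!
        p = 1
        for a, b in zip(al, be):
            p *= factorial(a + b) // factorial(b)
        return p
    mons = monomials(n, s)
    M = [[coeff([a + b for a, b in zip(al, be)]) * deriv_factor(al, be)
          for be in mons] for al in mons]
    return rank(M)

# Reznick: all catalecticant matrices of q_n^s are full rank, so the
# rank of Cat^s equals binom(s+n-1, s)
for n, s in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    assert cat_rank(n, s) == comb(s + n - 1, s)
```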
Moreover, it directly follows from the structure of the kernel of the catalecticant map of \(q_{n}^{s}\), also known as the _apolar ideal_ of \(q_{n}^{s}\) and denoted by \((q_{n}^{s})^{\perp}\), which has been determined in [11, Theorem 3.8]. We have, in particular, that \[(q_{n}^{s})^{\perp}=(\mathcal{H}_{n}^{s+1}),\] where \(\mathcal{H}_{n}^{s+1}\) is the space of harmonic polynomials of degree \(s+1\). Therefore, as a direct consequence, we have that the \(k\)-th component of \((q_{n}^{s})^{\perp}\) is equal to \(0\) for every \(k\leq s\) and hence, in particular, \(\operatorname{Cat}_{q_{n}^{s}}^{s}\) has maximum rank. Thus, we immediately get the following proposition. **Proposition 1.7** (B. Reznick).: _For every \(n,s\in\mathbb{N}\),_ \[\operatorname{rk}(q_{n}^{s})\geq\operatorname{brk}(q_{n}^{s})\geq\operatorname{rk}\bigl{(}\operatorname{Cat}_{q_{n}^{s}}^{s}\bigr{)}=\binom{s+n-1}{s}. \tag{1.8}\] In view of inequality (1.8), it is natural to wonder in which cases the equality holds. B. Reznick already considers this problem in [11, Chapter 8], providing examples satisfying this condition. In particular, he refers to decompositions of \(q_{n}^{s}\) having size equal to \(\binom{s+n-1}{s}\) by the term _tight_ decompositions (see [11, p. 101]). However, this remains beyond our purposes, since we focus in this paper on general upper bounds and on determining in which cases these guarantee that the rank is subgeneric. In any case, there do not seem to be many possible tight decompositions. For what concerns the real ones, B. Reznick summarizes the possible tight decompositions in [11, Proposition 9.2], but, unfortunately, the generalization to the complex case is not so immediate. Nevertheless, for completeness, we report two elegant examples of tight decompositions.
The first is given by \[q_{3}^{2}=\frac{1}{6}\sum_{j}^{6}(x_{j}\pm\varphi x_{j-1})^{4}, \tag{1.9}\] where \(\varphi\) is a root of the polynomial \(x^{2}-x-1\in\mathbb{R}[x]\), namely \[\varphi=\frac{1+\sqrt{5}}{2}.\] This decomposition consists of linear forms which geometrically correspond, up to symmetry with respect to the centre, to the vertices of a regular icosahedron, inscribed in a sphere of radius \((5/6)^{1/4}\), whose coordinates are given by H. S. M. Coxeter in [10]. This particular decomposition can be found in [11, Theorem 9.13], where B. Reznick proves its uniqueness as a real decomposition. Another quite special example again concerns the square of a quadratic form. It is given by the formula \[q_{7}^{2}=\frac{1}{12}\sum\nolimits_{j}^{28}(x_{j}\pm x_{j+1}\pm x_{j+3})^{4} \tag{1.10}\] and corresponds to the maximal set of \(28\) lines in \(\mathbb{R}^{7}\) whose mutual angles \(\theta\) satisfy \[\cos^{2}\theta=\frac{1}{9}.\] The geometric structure of decomposition (1.10), which already appears in [11, formula (8.41)], was first analyzed by P. Delsarte, J.-M. Goethals, and J. J. Seidel in [10, p. 371]. Decomposition (1.10) also represents a particular case of a more general formula. B. Reznick provides in [11, formulas (8.35) and (8.36)] a decomposition of \(q_{n}^{2}\) for \(3\leq n\leq 7\), based on a family of integration quadrature formulas introduced by A. H. Stroud in [12]. Besides verifying that these formulas are true, we can prove that the same formula is valid also for \(n\geq 9\). Hence, this gives an upper bound on the rank with the only exception, as we will see, of \(n=8\). In these last cases, the decompositions are not real anymore, but the size remains the same, providing decompositions of size \(\binom{n+1}{2}+1\), which, in particular, are not tight. **Theorem 1.11**.: _Let \(n\in\mathbb{N}\) be such that \(n\geq 3\) and \(n\neq 8\)._
Then, the form \(q_{n}^{2}\) can be decomposed as_ \[3e^{4}q_{n}^{2}=a\Bigl{(}\sum_{i_{1}}x_{i_{1}}\Bigr{)}^{4}+\sum_{j_{1}}\biggl{(}b\Bigl{(}\sum_{i_{1}}x_{i_{1}}\Bigr{)}+cx_{j_{1}}\biggr{)}^{4}+\sum_{j_{1}\neq j_{2}}\biggl{(}d\Bigl{(}\sum_{i_{1}}x_{i_{1}}\Bigr{)}+e(x_{j_{1}}+x_{j_{2}})\biggr{)}^{4}, \tag{1.12}\] _where, setting \(g=(8-n)^{\frac{1}{4}}\in\mathbb{C}\),_ \[a=8(g^{4}-1)\bigl{(}g^{2}\pm 2\sqrt{2}\bigr{)},\quad b=2g^{2}\pm 2\sqrt{2},\quad c=\mp 2\sqrt{2}g^{4}-8g^{2},\quad d=2g,\quad e=\mp 2\sqrt{2}g^{3}-8g.\] The family of decompositions (1.12) is just one of the possible closed formulas providing a family of decompositions for the powers of a quadratic form. There are indeed several other formulas, not so convenient in terms of size, but still presenting a certain symmetry among the variables. The simplest one is provided by B. Reznick in [11, formula (10.35)] and is given by the equation \[6q_{n}^{2}=\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{4}+2(4-n)\sum_{i_{1}}x_{i_{1}}^{4}, \tag{1.13}\] which is a decomposition of size \(n^{2}\) for \(n\geq 2\) and \(n\neq 4\). If instead \(n=4\), since the last term of the decomposition vanishes, we obtain a decomposition of size \(4^{2}-4=12\), which corresponds exactly to decomposition (1.2). Formula (1.13) also provides decompositions of subgeneric size. Indeed, we have \[n^{2}<\frac{1}{n}\binom{n+3}{4}=\frac{(n+3)\,(n+2)\,(n+1)}{24}\] if and only if \[(n-1)\,(n^{2}-17n-6)>0,\] that is, \(n>17\). Analogously, also decomposition (1.4) can be generalized to a family of decompositions depending on \(n\). It can be obtained from the family of quadrature formulas provided by A. H. Stroud in [12] and has been exposed by B. Reznick in [10, formula (8.33)]. These decompositions are given, for every \(n\geq 3\), by the equalities \[15q_{n}^{3}=\frac{1}{2^{n-1}}(x_{1}\pm\cdots\pm x_{n})^{6}+\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{6}+2(8-n)\sum_{i_{1}}x_{i_{1}}^{6}.
\tag{1.14}\] These are decompositions of size \(n^{2}+2^{n-1}\) if \(n\neq 8\) and of size \(184\) for \(n=8\). In all of these examples, especially in the last two formulas, we observe that, developing each block of linear forms, every coefficient of a monomial is the same for every permutation of the exponents. For instance, if we develop the linear forms of the summations in formula (1.13), we get \[6q_{n}^{2}=12\sum_{i_{1}<i_{2}}x_{i_{1}}^{2}x_{i_{2}}^{2}+2(n-1)\sum_{i_{1}}x_{i_{1}}^{4}+2(4-n)\sum_{i_{1}}x_{i_{1}}^{4}.\] This fact, due to the particular symmetry with which the linear forms are selected, simplifies the determination of such decompositions. We now generalize this pattern, defining a new set of generators. We introduce a family of polynomials which are invariant under the action of the permutation group \(\mathfrak{S}_{n}\). For every multi-index \(\mathbf{m}\in\mathbb{N}^{n}\), we denote by \((\mathfrak{S}_{n})_{\mathbf{m}}\) the isotropy group of \(\mathbf{m}\) and by \[\mathfrak{S}_{\mathbf{m}}=\mathfrak{S}_{n}/(\mathfrak{S}_{n})_{\mathbf{m}}=\{\,\sigma(\mathfrak{S}_{n})_{\mathbf{m}}\ |\ \sigma\in\mathfrak{S}_{n}\,\}\] the set of left cosets of \((\mathfrak{S}_{n})_{\mathbf{m}}\) in \(\mathfrak{S}_{n}\). **Definition 1.15**.: For every \(s\in\mathbb{N}\), the _set of \(k\)-partitions of \(s\)_ is the set \[\mathcal{P}_{k}(s)=\left\{\,(m_{1},\ldots,m_{k})\in\mathbb{N}^{k}\,\Bigg{|}\,\sum_{i=1}^{k}m_{i}=s,\ m_{1}\geq\cdots\geq m_{k}>0\,\right\}.\] The _set of partitions of \(s\)_ is the set \[\mathcal{P}(s)=\bigcup_{j=1}^{s}\mathcal{P}_{j}(s).\] Given a natural number \(n\in\mathbb{N}\), a problem of great relevance in both the classical and the recent literature is the computation of the number of partitions of \(n\).
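The sets \(\mathcal{P}_k(s)\) of Definition 1.15 are straightforward to enumerate; a minimal Python sketch (ours):

```python
def k_partitions(s, k, cap=None):
    """All k-partitions of s: weakly decreasing k-tuples of positive integers
    summing to s (each part bounded above by cap)."""
    cap = s if cap is None else cap
    if k == 0:
        return [()] if s == 0 else []
    return [(m,) + rest
            for m in range(min(s, cap), 0, -1)
            for rest in k_partitions(s - m, k - 1, m)]

def partitions(s):
    """The set P(s) of all partitions of s."""
    return [p for k in range(1, s + 1) for p in k_partitions(s, k)]

assert k_partitions(4, 2) == [(3, 1), (2, 2)]
assert len(partitions(4)) == 5                                  # p(4) = 5
assert [len(k_partitions(6, k)) for k in range(1, 7)] == [1, 3, 3, 2, 1, 1]
```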
**Definition 1.16**.: The _partition function_ is defined as the map \[\mathrm{p}\colon\mathbb{N}\longrightarrow\mathbb{N},\qquad s\longmapsto|\mathcal{P}(s)|.\] Analogously, for every \(k\in\mathbb{N}\), the _\(k\)-partition function_ is defined as the map \[\mathrm{p}_{k}\colon\mathbb{N}\longrightarrow\mathbb{N},\qquad s\longmapsto|\mathcal{P}_{k}(s)|.\] Many estimates and asymptotic formulas have been provided in the literature for the partition function, improving the results contained in [11]. For what concerns the \(k\)-partition function \(\mathrm{p}_{k}\), instead, there are some results regarding approximations and lower or upper bounds. A classical estimation is due to G. E.
Andrews, who provides in [1] the formula \[\frac{1}{k!}\binom{s-1}{k}\leq\mathrm{p}_{k}(s)\leq\frac{1}{k!}\binom{s+\frac{k(k-1)}{2}-1}{k-1},\] valid for \(1\leq k\leq s-1\). In [10, p. 299], M. Merca provides an upper bound for the value \(\mathrm{p}_{k}(s)\), given by the inequality \[\mathrm{p}_{k}(s)\leq\frac{1}{2}\binom{s-1}{k-1}+\frac{1}{2}\delta_{0,s\bmod k},\] where \(\delta_{0,s\bmod k}\) is the Kronecker delta \[\delta_{0,s\bmod k}=\begin{cases}1&\quad\text{if $s\equiv 0\bmod k$},\\ 0&\quad\text{if $s\not\equiv 0\bmod k$}.\end{cases}\] Another, tighter upper bound is given by A. T. Oruc in [11, Corollary 1] and corresponds to the formula \[\mathrm{p}_{k}(s)\leq\frac{5.44}{s-k}\mathrm{e}^{\pi\sqrt{\frac{2(s-k)}{3}}}.\] Now, fixed a partition \(\mathbf{m}\in\mathcal{P}_{k}(s)\), we define the polynomial \[g_{\mathbf{m}}=\sum_{\sigma\in\mathfrak{S}_{\mathbf{m}}}x_{\sigma(1)}^{m_{1}}\cdots x_{\sigma(n)}^{m_{n}}=\sum_{1\leq t_{1}<\cdots<t_{k}\leq n}\sum_{\sigma\in\mathfrak{S}_{\mathbf{m}}}x_{t_{1}}^{m_{\sigma(1)}}\cdots x_{t_{k}}^{m_{\sigma(k)}}, \tag{1.17}\] where by \(\sigma\in\mathfrak{S}_{\mathbf{m}}\) we mean, with an abuse of notation, its left coset \(\sigma(\mathfrak{S}_{n})_{\mathbf{m}}\), that is, the class of \(\sigma\) modulo the subgroup \((\mathfrak{S}_{n})_{\mathbf{m}}\). This is a well defined polynomial, since whenever \(\tau^{-1}\sigma\in(\mathfrak{S}_{n})_{\mathbf{m}}\), we have \[x_{t_{1}}^{m_{\sigma(1)}}\cdots x_{t_{k}}^{m_{\sigma(k)}}=x_{t_{1}}^{m_{\tau(1)}}\cdots x_{t_{k}}^{m_{\tau(k)}}.\] In these equations we use the fact that every \(k\)-partition can be considered as an \(n\)-tuple with the last entries equal to \(0\). Clearly, the polynomial \(g_{\mathbf{m}}\) is invariant under the action of \(\mathfrak{S}_{n}\). _Example 1.18_.: Let us consider the partition \((2,1,1)\) of \(s=4\) in the case of \(n=3\).
Then, the polynomial \(g_{(2,1,1)}\in\mathbb{K}[x_{1},x_{2},x_{3}]\) is given by \[g_{(2,1,1)}=x_{1}^{2}x_{2}x_{3}+x_{1}x_{2}^{2}x_{3}+x_{1}x_{2}x_{3}^{2}.\] If we take instead the partition \((2,2,1,1)\) of \(s=6\) in the case of \(n=4\), we get \[g_{(2,2,1,1)}=x_{1}^{2}x_{2}^{2}x_{3}x_{4}+x_{1}^{2}x_{2}x_{3}^{2}x_{4}+x_{1}^ {2}x_{2}x_{3}x_{4}^{2}+x_{1}x_{2}^{2}x_{3}^{2}x_{4}+x_{1}x_{2}^{2}x_{3}x_{4}^{ 2}+x_{1}x_{2}x_{3}^{2}x_{4}^{2}.\] The \(s\)-th power of the quadratic form \(q_{n}\) can be developed as \[q_{n}^{s}=\sum_{i_{1}+\cdots+i_{n}=s}\binom{s}{i_{1},\ldots,i_{n}}x_{1}^{2i_{ 1}}\cdots x_{n}^{2i_{n}},\] for every \(s\in\mathbb{N}\). Gathering all multi-indices of the same permutation class, we can write \[q_{n}^{s} =\sum_{\mathbf{m}\in\mathcal{P}(s)}\sum_{\sigma\in\mathfrak{S}_{ \mathbf{m}}}\binom{s}{m_{1},\ldots,m_{n}}x_{\sigma(1)}^{2m_{1}}\cdots x_{ \sigma(n)}^{2m_{n}}\] \[=\sum_{k=1}^{s}\sum_{\mathbf{m}\in\mathcal{P}_{k}(s)}\sum_{1\leq t _{1}\leq\cdots\leq t_{k}\leq n}\sum_{\sigma\in\mathfrak{S}_{\mathbf{m}}} \binom{s}{m_{1},\ldots,m_{k}}x_{\sigma(1)}^{2m_{1}}\cdots x_{\sigma(k)}^{2m_ {k}}\] \[=\sum_{k=1}^{s}\sum_{\mathbf{m}\in\mathcal{P}_{k}(s)}\binom{s}{m_ {1},\ldots,m_{k}}g_{\mathbf{2m}}. \tag{1.19}\] The easiest way to verify equations (1.13) and (1.14) is to solve a linear system where the unknowns are the coefficients of each monomial. We first need to remark a quite trivial fact. **Lemma 1.20**.: _Every polynomial of the type \((a_{1}x_{1}\pm\cdots\pm a_{n}x_{n})^{2k}\) does not contain any monomial having odd exponents._ Proof.: The statement follows immediately from the fact that the polynomial \[(a_{1}x_{1}\pm\cdots\pm a_{n}x_{n})^{2k}\] represents an even function in each coordinate. That is, it remains unchanged by substituting the variable \(x_{i}\) by the opposite \(-x_{i}\) for every \(i=1,\ldots,n\). 
By Lemma 1.20, to obtain decomposition (1.13), it is then sufficient to solve a linear system in two unknowns, comparing the coefficients of the polynomials \[g_{(2,2)}=\sum_{\sigma\in\mathfrak{S}_{(2,2)}}x_{\sigma(1)}^{2}x_{\sigma(2)}^{2},\quad g_{(4)}=\sum_{\sigma\in\mathfrak{S}_{(4)}}x_{\sigma(1)}^{4}.\] Thus, by comparing the coefficients on both sides of the equation \[q_{n}^{2}=c_{1}\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{4}+c_{2}\sum_{i_{1}}x_{i_{1}}^{4},\] that is, writing \[2g_{(2,2)}=2\binom{4}{2}c_{1}g_{(2,2)},\qquad g_{(4)}=2(n-1)c_{1}g_{(4)}+c_{2}g_{(4)},\] we obtain a \(2\times 3\) augmented matrix equal to \[\left(\begin{array}{cc|c}12&0&2\\ 2(n-1)&1&1\end{array}\right),\] which provides the coefficients \[c_{1}=\frac{1}{6},\quad c_{2}=\frac{4-n}{3}.\] A more recent decomposition has been provided by J. Buczynski, K. Han, M. Mella, and Z. Teitler in [1]. This represents a natural generalization of decomposition (1.13) and we provide here a proof, since we generalize the procedure to higher values of the exponent \(s\).
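The coefficients \(c_1=\frac{1}{6}\) and \(c_2=\frac{4-n}{3}\) just obtained, as well as the cubic analogue (1.14), can be sanity-checked exactly at integer points for several values of \(n\); a Python sketch (ours):

```python
import itertools
import random

def pairs_power(x, m):
    """sum_{i<j} (xi +/- xj)^m."""
    return sum((a + s * b) ** m
               for a, b in itertools.combinations(x, 2) for s in (1, -1))

def full_power(x, m):
    """(x1 +/- x2 +/- ... +/- xn)^m, with the first sign fixed to +."""
    return sum(sum(s * v for s, v in zip((1,) + signs, x)) ** m
               for signs in itertools.product((1, -1), repeat=len(x) - 1))

random.seed(1)
for n in range(2, 9):
    for _ in range(10):
        x = [random.randint(-4, 4) for _ in range(n)]
        q = sum(v * v for v in x)
        # (1.13): 6 q^2 = sum pairs^4 + 2(4-n) sum xi^4
        assert 6 * q ** 2 == pairs_power(x, 4) + 2 * (4 - n) * sum(v ** 4 for v in x)
        # (1.14), multiplied through by 2^(n-1) to stay in integers
        assert (15 * 2 ** (n - 1) * q ** 3
                == full_power(x, 6)
                + 2 ** (n - 1) * (pairs_power(x, 6)
                                  + 2 * (8 - n) * sum(v ** 6 for v in x)))
```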
**Proposition 1.21** ([1, Section 4.5]).: _For every \(n\in\mathbb{N}\), the form \(q_{n}^{3}\) can be decomposed as_ \[60q_{n}^{3}=\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{6}+2(5-n)\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{6}+2(n^{2}-9n+38)\sum_{i_{1}}x_{i_{1}}^{6}, \tag{1.22}\] _which is a decomposition of size_ \[4\binom{n}{3}+2\binom{n}{2}+n=\frac{2}{3}n^{3}-n^{2}+\frac{4}{3}n\] _if \(n\neq 5\) and of size \(45\) if \(n=5\)._ Proof.: We can obtain this equality by solving the linear system associated to the equation \[q_{n}^{3}=c_{1}\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{6}+c_{2}\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{6}+c_{3}\sum_{i_{1}}x_{i_{1}}^{6}.\] By Lemma 1.20 and the symmetry of each summand, we can suppose the development of the decomposition to be a linear combination of the polynomials \[g_{(2,2,2)}=\sum_{\sigma\in\mathfrak{S}_{(2,2,2)}}x_{\sigma(1)}^{2}x_{\sigma(2)}^{2}x_{\sigma(3)}^{2},\quad g_{(4,2)}=\sum_{\sigma\in\mathfrak{S}_{(4,2)}}x_{\sigma(1)}^{4}x_{\sigma(2)}^{2},\quad g_{(6)}=\sum_{\sigma\in\mathfrak{S}_{(6)}}x_{\sigma(1)}^{6}.\] By comparing their coefficients, we have a linear system whose associated augmented matrix is \[\left(\begin{array}{ccc|c}2^{2}\binom{6}{2,2,2}&0&0&\binom{3}{1,1,1}\\ 2^{2}(n-2)\binom{6}{4,2}&2\binom{6}{4,2}&0&\binom{3}{2,1}\\ 2^{2}\binom{n-1}{2}&2(n-1)&1&1\end{array}\right),\] which, for the system, is equivalent to \[\left(\begin{array}{ccc|c}2^{2}&0&0&\frac{1}{15}\\ 2^{2}(n-2)&2&0&\frac{1}{5}\\ 2^{2}\binom{n-1}{2}&2(n-1)&1&1\end{array}\right).\] By solving this linear system, we get the required decomposition. In this case, the size of the decompositions given by formula (1.22) grows as \(n^{3}\) for \(n\to+\infty\). Therefore, we can find a number \(n_{3}\in\mathbb{N}\) such that \[\operatorname{rk}(q_{n}^{3})<\frac{1}{n}\binom{n+5}{6}\] for every \(n>n_{3}\).
In particular, we have \[\frac{2}{3}n^{3}-n^{2}+\frac{4}{3}n<\frac{1}{n}\binom{n+5}{6}\] if and only if \(n>11\). Thus we get \(n_{3}=11\), implying the following proposition. **Proposition 1.23**.: _Let \(n\in\mathbb{N}\) be such that \(n>11\). Then_ \[\operatorname{rk}(q_{n}^{3})<\frac{1}{n}\binom{n+5}{6}.\] _In particular, for \(n>11\), \(\operatorname{rk}(q_{n}^{3})\) is subgeneric._ ## 2. Further examples of closed formulas The most natural idea to generalize this pattern for higher exponents would be to consider in the summation just the same kind of linear forms with additional variables, that is, linear forms of the type \[(x_{i_{1}}\pm\cdots\pm x_{i_{k}})^{2s}.\] Unfortunately, summing such forms alone is in general not sufficient to obtain powers of a quadratic form. We can see this in the following example, concerning the case of \(s=4\). We have in this case five polynomials, namely, \(g_{(2,2,2,2)}\), \(g_{(4,2,2)}\), \(g_{(4,4)}\), \(g_{(6,2)}\) and \(g_{(8)}\). Setting a hypothetical decomposition as \[q_{n}^{4}=c_{1}\sum_{i_{1}<i_{2}<i_{3}<i_{4}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}})^{8}+c_{2}\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{8}+c_{3}\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{8}+c_{4}\sum_{i_{1}}x_{i_{1}}^{8},\] we have four coefficients \(c_{1},c_{2},c_{3},c_{4}\in\mathbb{K}\).
In this case, the matrix associated to the system we get by comparing the coefficients of each monomial class is \[\left(\begin{array}{cccc|c}2^{3}\binom{8}{2,2,2,2}&0&0&0&\binom{4}{1,1,1,1}\\ 2^{3}(n-3)\binom{8}{4,2,2}&2^{2}\binom{8}{4,2,2}&0&0&\binom{4}{2,1,1}\\ 2^{3}\binom{n-2}{2}\binom{8}{4,4}&2^{2}(n-2)\binom{8}{4,4}&2\binom{8}{4,4}&0&\binom{4}{2,2}\\ 2^{3}\binom{n-2}{2}\binom{8}{6,2}&2^{2}(n-2)\binom{8}{6,2}&2\binom{8}{6,2}&0&\binom{4}{3,1}\\ 2^{3}\binom{n-1}{3}&2^{2}\binom{n-1}{2}&2(n-1)&1&1\end{array}\right),\] which, for the system, is equivalent to \[\left(\begin{array}{cccc|c}8&0&0&0&\frac{1}{105}\\ 8(n-3)&4&0&0&\frac{1}{35}\\ 4(n-2)(n-3)&4(n-2)&2&0&\frac{3}{35}\\ 4(n-2)(n-3)&4(n-2)&2&0&\frac{1}{7}\\ 4(n-1)(n-2)(n-3)&6(n-1)(n-2)&6(n-1)&3&3\end{array}\right).\] It is sufficient to compare the third and the fourth rows to see that the system does not admit any solution. So, to obtain suitable decompositions for the case of exponent \(4\), we need another block of points, maintaining the symmetry among all of the variables.
**Proposition 2.1**.: _For every \(n\in\mathbb{N}\), the form \(q_{n}^{4}\) can be decomposed as_ \[840q_{n}^{4} =\sum_{i_{1}<i_{2}<i_{3}<i_{4}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}})^{8}+2(6-n)\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{8}\] \[\quad+\frac{2}{3}(3n^{2}-33n+76)\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{8}+\frac{2}{3}\sum_{i_{1}<i_{2}}\bigl{(}(2x_{i_{1}}\pm x_{i_{2}})^{8}+(x_{i_{1}}\pm 2x_{i_{2}})^{8}\bigr{)}\] \[\quad-\frac{4}{3}\bigl{(}n^{3}-15n^{2}+317n-933\bigr{)}\sum_{i_{1}}x_{i_{1}}^{8}, \tag{2.2}\] _which is a decomposition of size_ \[8\binom{n}{4}+4\binom{n}{3}+6\binom{n}{2}+n=\frac{1}{3}(n^{4}-4n^{3}+14n^{2}-8n),\] _for \(n\geq 4\) and \(n\neq 6\), and of size \(216\) for \(n=6\)._ Proof.: As previously, we consider the generic decomposition \[q_{n}^{4} =c_{1}\sum_{i_{1}<i_{2}<i_{3}<i_{4}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}})^{8}+c_{2}\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{8}+c_{3}\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{8}\] \[\quad+c_{4}\sum_{i_{1}<i_{2}}\bigl{(}(2x_{i_{1}}\pm x_{i_{2}})^{8}+(x_{i_{1}}\pm 2x_{i_{2}})^{8}\bigr{)}+c_{5}\sum_{i_{1}}x_{i_{1}}^{8}, \tag{2.3}\] and the five different polynomials \[g_{(2,2,2,2)}=\sum_{\sigma\in\mathfrak{S}_{(2,2,2,2)}}x_{\sigma(1)}^{2}x_{\sigma(2)}^{2}x_{\sigma(3)}^{2}x_{\sigma(4)}^{2},\quad g_{(4,2,2)}=\sum_{\sigma\in\mathfrak{S}_{(4,2,2)}}x_{\sigma(1)}^{4}x_{\sigma(2)}^{2}x_{\sigma(3)}^{2},\] \[g_{(4,4)}=\sum_{\sigma\in\mathfrak{S}_{(4,4)}}x_{\sigma(1)}^{4}x_{\sigma(2)}^{4},\quad g_{(6,2)}=\sum_{\sigma\in\mathfrak{S}_{(6,2)}}x_{\sigma(1)}^{6}x_{\sigma(2)}^{2},\quad g_{(8)}=\sum_{\sigma\in\mathfrak{S}_{(8)}}x_{\sigma(1)}^{8}.\] By comparing the coefficients of these last elements with those of formula (1.19), we get the matrix \[\left(\begin{array}{ccccc|c}2^{3}\binom{8}{2,2,2,2}&0&0&0&0&\binom{4}{1,1,1,1}\\ 2^{3}(n-3)\binom{8}{4,2,2}&2^{2}\binom{8}{4,2,2}&0&0&0&\binom{4}{2,1,1}\\ 2^{3}\binom{n-2}{2}\binom{8}{4,4}&2^{2}(n-2)\binom{8}{4,4}&2\binom{8}{4,4}&2(2^{4}+2^{4})\binom{8}{4,4}&0&\binom{4}{2,2}\\ 2^{3}\binom{n-2}{2}\binom{8}{6,2}&2^{2}(n-2)\binom{8}{6,2}&2\binom{8}{6,2}&2(2^{6}+2^{2})\binom{8}{6,2}&0&\binom{4}{3,1}\\ 2^{3}\binom{n-1}{3}&2^{2}\binom{n-1}{2}&2(n-1)&2(n-1)(2^{8}+1)&1&1\end{array}\right),\] which, for the system, is equivalent to \[\left(\begin{array}{ccccc|c}8&0&0&0&0&\frac{1}{105}\\ 8(n-3)&4&0&0&0&\frac{1}{35}\\ 4(n-2)(n-3)&4(n-2)&2&64&0&\frac{3}{35}\\ 4(n-2)(n-3)&4(n-2)&2&136&0&\frac{1}{7}\\ 4(n-1)(n-2)(n-3)&6(n-1)(n-2)&6(n-1)&1542(n-1)&3&3\end{array}\right). \tag{2.4}\] Since the four blocks along the diagonal of matrix (2.4) are invertible, the linear system admits a unique solution, which corresponds to the coefficients of decomposition (2.2). The crucial fact in determining decomposition (2.2) is that the entries of the blocks in matrix (2.4) do not depend on \(n\). This allows us to obtain an explicit decomposition for an arbitrary number of variables. However, to generalize this pattern, it is necessary that all of the blocks have non-zero determinant. As observed in Proposition 1.23 for decomposition (1.22), one can verify that \[\frac{1}{3}(n^{4}-4n^{3}+14n^{2}-8n)<\frac{1}{n}\binom{n+7}{8}\] if and only if \(n>10\). We get, in fact, the following proposition. **Proposition 2.5**.: _Let \(n\in\mathbb{N}\) be such that \(n>10\). Then_ \[\operatorname{rk}(q_{n}^{4})<\frac{1}{n}\binom{n+7}{8}.\] _In particular, for \(n>10\), \(\operatorname{rk}(q_{n}^{4})\) is subgeneric._ Before seeing another example for the case of exponent \(s=5\), we observe that decomposition (2.2) is not optimal in general.
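Decomposition (2.2) can be checked numerically as well: the coefficients of the first four summands are exercised at random integer points, while the coefficient of \(\sum_{i_1}x_{i_1}^{8}\) is recovered from the identity evaluated at a unit vector. A Python sketch (ours):

```python
import itertools
import random
from fractions import Fraction

def pm(coeffs, vals, m):
    """Sum of (a1*v1 +/- ... +/- ak*vk)**m over all 2^(k-1) sign patterns."""
    k = len(coeffs)
    return sum(sum(a * s * v for a, s, v in zip(coeffs, (1,) + signs, vals)) ** m
               for signs in itertools.product((1, -1), repeat=k - 1))

def partial_rhs(x, n):
    """Right-hand side of (2.2) without the sum-of-eighth-powers term."""
    quad = sum(pm([1] * 4, t, 8) for t in itertools.combinations(x, 4))
    trip = sum(pm([1] * 3, t, 8) for t in itertools.combinations(x, 3))
    pair = sum(pm([1, 1], p, 8) for p in itertools.combinations(x, 2))
    mixed = sum(pm([2, 1], p, 8) + pm([1, 2], p, 8)
                for p in itertools.combinations(x, 2))
    return (quad + 2 * (6 - n) * trip
            + Fraction(2, 3) * (3 * n * n - 33 * n + 76) * pair
            + Fraction(2, 3) * mixed)

random.seed(2)
for n in range(4, 8):
    e1 = [1] + [0] * (n - 1)
    c5 = 840 - partial_rhs(e1, n)   # coefficient of sum xi^8, from the identity at e1
    for _ in range(8):
        x = [random.randint(-3, 3) for _ in range(n)]
        q = sum(v * v for v in x)
        assert 840 * q ** 4 == partial_rhs(x, n) + c5 * sum(v ** 8 for v in x)
```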
Just considering another kind of linear form for the fourth summand, corresponding to the coefficient \(c_{4}\), we can determine another decomposition of lower size, involving roots of unity. Indeed, since \(8\)-th powers of linear forms are invariant under multiplication of the form by a \(4\)-th root of unity, we have, in particular, \[(x_{i_{1}}\pm\operatorname{i}\!x_{i_{2}})^{8}=(\operatorname{i}\!x_{i_{1}}\pm x_{i_{2}})^{8}.\] Thus, replacing the fourth summand of equation (2.3) by this last linear form, we can repeat the same procedure and get the matrix \[\left(\begin{array}{ccccc|c}8&0&0&0&0&\frac{1}{105}\\ 8(n-3)&4&0&0&0&\frac{1}{35}\\ 4(n-2)(n-3)&4(n-2)&2&2&0&\frac{3}{35}\\ 4(n-2)(n-3)&4(n-2)&2&-2&0&\frac{1}{7}\\ 4(n-1)(n-2)(n-3)&6(n-1)(n-2)&6(n-1)&6(n-1)&3&3\end{array}\right). \tag{2.6}\] We therefore obtain a decomposition of size \[8\binom{n}{4}+4\binom{n}{3}+4\binom{n}{2}+n=\frac{1}{3}n^{4}-\frac{4}{3}n^{3}+\frac{11}{3}n^{2}-\frac{5}{3}n,\] which represents a small improvement over decomposition (2.2). In order to generalize this behavior, we provide one more example for the case of exponent \(s=5\). The proof is structured exactly as that of Proposition 2.1, but in this case we have two blocks of size \(2\) along the diagonal of the matrix.
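Solving the system associated to matrix (2.6) explicitly (the coefficients below come from our own computation and are not stated in the text) and clearing denominators yields an identity for \(2520q_n^4\), which can be checked with exact Gaussian-integer arithmetic:

```python
import itertools
import random

def pm(coeffs, vals, m):
    """Sum of (a1*v1 +/- ... +/- ak*vk)**m over all 2^(k-1) sign patterns."""
    k = len(coeffs)
    return sum(sum(a * s * v for a, s, v in zip(coeffs, (1,) + signs, vals)) ** m
               for signs in itertools.product((1, -1), repeat=k - 1))

def gauss_pow8_sum(a, b):
    """(a + b*i)^8 + (a - b*i)^8, computed exactly as an integer."""
    re, im = 1, 0
    for _ in range(8):
        re, im = re * a - im * b, re * b + im * a
    return 2 * re

random.seed(3)
for n in range(4, 8):
    for _ in range(8):
        x = [random.randint(-3, 3) for _ in range(n)]
        q = sum(v * v for v in x)
        quad = sum(pm([1] * 4, t, 8) for t in itertools.combinations(x, 4))
        trip = sum(pm([1] * 3, t, 8) for t in itertools.combinations(x, 3))
        pair = sum(pm([1, 1], p, 8) for p in itertools.combinations(x, 2))
        cplx = sum(gauss_pow8_sum(a, b) for a, b in itertools.combinations(x, 2))
        sing = sum(v ** 8 for v in x)
        # our solution of system (2.6), multiplied through by 2520:
        assert (2520 * q ** 4
                == 3 * quad + 6 * (6 - n) * trip
                + 6 * (n * n - 11 * n + 42) * pair - 36 * cplx
                + 4 * (630 - (n - 1) * (n * n - 14 * n + 78)) * sing)
```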
**Proposition 2.7**.: _For every \(n\in\mathbb{N}\), the form \(q_{n}^{5}\) can be decomposed as_ \[15120q_{n}^{5} =\sum_{i_{1}<i_{2}<i_{3}<i_{4}<i_{5}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}}\pm x_{i_{5}})^{10}-2(n-7)\sum_{i_{1}<i_{2}<i_{3}<i_{4}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}})^{10}\] \[\quad+2(n^{2}-13n+36)\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{10}\] \[\quad+\frac{2}{3}\sum_{i_{1}<i_{2}<i_{3}}\bigl{(}(2x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{10}+(x_{i_{1}}\pm 2x_{i_{2}}\pm x_{i_{3}})^{10}+(x_{i_{1}}\pm x_{i_{2}}\pm 2x_{i_{3}})^{10}\bigr{)}\] \[\quad-\frac{4}{3}(n^{3}-18n^{2}+90n-226)\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{10}\] \[\quad-\frac{4}{3}(n-4)\sum_{i_{1}<i_{2}}\bigl{(}(2x_{i_{1}}\pm x_{i_{2}})^{10}+(x_{i_{1}}\pm 2x_{i_{2}})^{10}\bigr{)}\] \[\quad+\frac{2}{3}(n^{4}-22n^{3}+2195n^{2}-15086n+35592)\sum_{i_{1}}x_{i_{1}}^{10}, \tag{2.8}\] _which is a decomposition of size_ \[16\binom{n}{5}+8\binom{n}{4}+16\binom{n}{3}+6\binom{n}{2}+n=\frac{2}{15}n^{5}-n^{4}+\frac{16}{3}n^{3}-8n^{2}+\frac{68}{15}n\] _for \(n\geq 5\) and \(n\neq 7\), and of size \(1029\) for \(n=7\)._ Proof.: In this case, the polynomials involved in the decomposition are seven.
These are \[g_{(2,2,2,2,2)}=\sum_{\sigma\in\mathfrak{S}_{(2,2,2,2,2)}}x_{\sigma(1)}^{2}x_{\sigma(2)}^{2}x_{\sigma(3)}^{2}x_{\sigma(4)}^{2}x_{\sigma(5)}^{2},\quad g_{(4,2,2,2)}=\sum_{\sigma\in\mathfrak{S}_{(4,2,2,2)}}x_{\sigma(1)}^{4}x_{\sigma(2)}^{2}x_{\sigma(3)}^{2}x_{\sigma(4)}^{2},\] \[g_{(4,4,2)}=\sum_{\sigma\in\mathfrak{S}_{(4,4,2)}}x_{\sigma(1)}^{4}x_{\sigma(2)}^{4}x_{\sigma(3)}^{2},\quad g_{(6,2,2)}=\sum_{\sigma\in\mathfrak{S}_{(6,2,2)}}x_{\sigma(1)}^{6}x_{\sigma(2)}^{2}x_{\sigma(3)}^{2},\quad g_{(6,4)}=\sum_{\sigma\in\mathfrak{S}_{(6,4)}}x_{\sigma(1)}^{6}x_{\sigma(2)}^{4},\] \[g_{(8,2)}=\sum_{\sigma\in\mathfrak{S}_{(8,2)}}x_{\sigma(1)}^{8}x_{\sigma(2)}^{2},\quad g_{(10)}=\sum_{\sigma\in\mathfrak{S}_{(10)}}x_{\sigma(1)}^{10}.\] By Theorem 3.2, we can determine seven values \(c_{1},\ldots,c_{7}\) such that \[q_{n}^{5} =c_{1}\sum_{i_{1}<i_{2}<i_{3}<i_{4}<i_{5}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}}\pm x_{i_{5}})^{10}+c_{2}\sum_{i_{1}<i_{2}<i_{3}<i_{4}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}}\pm x_{i_{4}})^{10}\] \[\quad+c_{3}\sum_{i_{1}<i_{2}<i_{3}}(x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{10}+c_{4}\sum_{i_{1}<i_{2}<i_{3}}\bigl{(}(2x_{i_{1}}\pm x_{i_{2}}\pm x_{i_{3}})^{10}+(x_{i_{1}}\pm 2x_{i_{2}}\pm x_{i_{3}})^{10}+(x_{i_{1}}\pm x_{i_{2}}\pm 2x_{i_{3}})^{10}\bigr{)}\] \[\quad+c_{5}\sum_{i_{1}<i_{2}}(x_{i_{1}}\pm x_{i_{2}})^{10}+c_{6}\sum_{i_{1}<i_{2}}\bigl{(}(2x_{i_{1}}\pm x_{i_{2}})^{10}+(x_{i_{1}}\pm 2x_{i_{2}})^{10}\bigr{)}+c_{7}\sum_{i_{1}}x_{i_{1}}^{10}.\] By comparing the coefficients, we obtain a linear system associated to the matrix \[\left(\begin{array}{ccccccc|c}16&0&0&0&0&0&0&\frac{1}{945}\\ 16(n-4)&8&0&0&0&0&0&\frac{1}{315}\\ 16\binom{n-3}{2}&8(n-3)&4&144&0&0&0&\frac{1}{105}\\ 16\binom{n-3}{2}&8(n-3)&4&288&0&0&0&\frac{1}{63}\\ 16\binom{n-2}{3}&8\binom{n-2}{2}&4(n-2)&324(n-2)&2&160&0&\frac{1}{21}\\ 16\binom{n-2}{3}&8\binom{n-2}{2}&4(n-2)&1044(n-2)&2&520&0&\frac{1}{9}\\ 16\binom{n-1}{4}&8\binom{n-1}{3}&4\binom{n-1}{2}&4104\binom{n-1}{2}&2(n-1)&2050(n-1)&1&1\end{array}\right) \tag{2.9}\] and solving it, we get the required
decomposition. Again, we can compute the minimum number of variables that guarantees that the size of the decomposition is subgeneric. **Proposition 2.10**.: _Let \(n\in\mathbb{N}\) be such that \(n>8\). Then_ \[\operatorname{rk}(q_{n}^{5})<\frac{1}{n}\binom{n+9}{10}.\] _In particular, for \(n>8\), \(\operatorname{rk}(q_{n}^{5})\) is subgeneric._ ## 3. Asymptotic growth and upper bounds for Waring rank To obtain a pattern similar to the one appearing in matrices (2.4), (2.6) and (2.9), we have to create suitable decompositions which give us a sufficiently high number of equations. The aim is to obtain square blocks along the diagonal with non-zero determinant, like those appearing in the matrices above. For our purposes, it is sufficient to guarantee the existence of such decompositions; to prove it, we start with a lemma on matrices of polynomials. **Lemma 3.1**.: _For every \(n,s\in\mathbb{N}\), let \(f_{1},\dots,f_{s}\in\mathbb{K}[x_{1},\dots,x_{n}]\) be linearly independent polynomials and let_ \[g\in\mathbb{K}[x_{ij}]_{i=1,\dots,s,\,j=1,\dots,n}\] _be the polynomial defined as_ \[g(x_{11},\dots,x_{1n},\dots,x_{s1},\dots,x_{sn})=\det\begin{pmatrix}f_{1}(x_{11},\dots,x_{1n})&\cdots&f_{1}(x_{s1},\dots,x_{sn})\\ \vdots&\ddots&\vdots\\ f_{s}(x_{11},\dots,x_{1n})&\cdots&f_{s}(x_{s1},\dots,x_{sn})\end{pmatrix}.\] _Then there exist \(s\) points \(\mathbf{a}_{1},\dots,\mathbf{a}_{s}\in\mathbb{K}^{n}\) such that_ \[g(\mathbf{a}_{1},\dots,\mathbf{a}_{s})\neq 0,\] _that is, \(g\not\equiv 0\)._ Proof.: We can prove the statement by induction on \(s\). For \(s=1\), the proof is trivial, since by linear independence we must have \(f_{1}\not\equiv 0\) (possibly even constant).
So, let us suppose the statement is true for \(s-1\) and let us consider the polynomial matrix \[A(x_{11},\dots,x_{1n},\dots,x_{s1},\dots,x_{sn})=\begin{pmatrix}f_{1}(x_{11},\dots,x_{1n})&\cdots&f_{1}(x_{s1},\dots,x_{sn})\\ \vdots&\ddots&\vdots\\ f_{s}(x_{11},\dots,x_{1n})&\cdots&f_{s}(x_{s1},\dots,x_{sn})\end{pmatrix}.\] Let us further consider the polynomials \[g_{k}\in\mathbb{K}[x_{ij}]_{i=1,\dots,s-1,\,j=1,\dots,n},\] defined as \[g_{k}=(-1)^{k+1}\det A_{ks}(x_{11},\dots,x_{1n},\dots,x_{(s-1)1},\dots,x_{(s-1)n})\] for every \(k=1,\dots,s\), where \(A_{ks}\) is the \((s-1)\times(s-1)\) matrix obtained by removing the \(k\)-th row and the \(s\)-th column from the matrix \(A\). By the inductive hypothesis, we have that \(g_{k}\not\equiv 0\), that is, \[Z(g_{k})\neq\mathbb{K}^{n(s-1)},\] and hence it follows that \[D(g_{k})=\mathbb{K}^{n(s-1)}\setminus Z(g_{k})\] is a non-empty open set for every \(k=1,\dots,s\). Thus, we also have \[\bigcap_{k=1}^{s}D(g_{k})\neq\varnothing\] and hence we can select a point \[\mathbf{a}=(\mathbf{a}_{1},\cdots,\mathbf{a}_{s-1})\in\mathbb{K}^{n(s-1)}\] such that \[g_{k}(\mathbf{a})\neq 0\] for every \(k=1,\dots,s\). Now, considering the polynomial \[h(x_{s1},\dots,x_{sn})=g(\mathbf{a}_{1},\dots,\mathbf{a}_{s-1},x_{s1},\dots,x_{sn})=\sum_{k=1}^{s}g_{k}(\mathbf{a}_{1},\dots,\mathbf{a}_{s-1})f_{k}(x_{s1},\dots,x_{sn}),\] we have, by the linear independence of the polynomials \(f_{1},\ldots,f_{s}\), that \[h(x_{s1},\ldots,x_{sn})\not\equiv 0.\] Therefore, we can select a point \(\mathbf{a}_{s}\in\mathbb{K}^{n}\) such that \[h(\mathbf{a}_{s})=g(\mathbf{a}_{1},\ldots,\mathbf{a}_{s})\neq 0,\] proving the statement. The main blocks we want to consider have size equal to the numbers of \(k\)-partitions of \(s\), since these correspond to the numbers of monomial classes involving exactly \(k\) variables.
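The correspondence just described can be made explicit: grouping the monomials of \(q_n^s\) by the number of variables actually appearing, the classes with exactly \(k\) variables are in bijection with the \(k\)-partitions of \(s\) (for \(n\geq s\)). A Python sketch (ours):

```python
import itertools

def monomial_classes(n, s):
    """Sorted exponent patterns of the monomials of q_n^s, grouped by the
    number k of variables actually appearing in the monomial."""
    classes = {}
    for m in itertools.product(range(s + 1), repeat=n):
        if sum(m) == s:
            nz = tuple(sorted((e for e in m if e > 0), reverse=True))
            classes.setdefault(len(nz), set()).add(nz)
    return classes

# for n >= s, the classes with exactly k variables are the k-partitions of s,
# so their counts are p_1(s), ..., p_s(s)
cls = monomial_classes(5, 4)
assert {k: len(v) for k, v in cls.items()} == {1: 1, 2: 2, 3: 1, 4: 1}
assert cls[2] == {(3, 1), (2, 2)}
```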
**Theorem 3.2**.: _For every \(n,s\in\mathbb{N}\)_ \[\operatorname{rk}(q_{n}^{s})\leq\sum_{k=1}^{s}2^{k}k!\,\mathrm{p}_{k}(s){n \choose k}.\] Proof.: As we have seen for the initial cases above, we want to consider a sum of powers of linear forms maintaining a strong symmetry among all of the variables. For every \(k=1,\ldots,s\) and for every point \(\mathbf{a}\in\mathbb{C}^{k}\), we define the polynomial \[f_{k,\mathbf{a}}(x_{1},\ldots,x_{n})=\sum_{1\leq t_{1}<\cdots<t_{k}\leq n}\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}(a_{\sigma(1)}x_{t_{1}}\pm\cdots\pm a_{\sigma(k)}x_{t_{k}})^{2s}.\] As we have already seen in formula (1.19), the action of the elements in \(\mathfrak{S}_{\mathbf{a}}\) on \(\mathbf{a}\) provides all the possible permutations of \(\mathbf{a}\) without repetitions; in particular, if \(a_{j}\) is the same for every \(j=1,\ldots,k\), then \(\mathfrak{S}_{\mathbf{a}}=\{\mathrm{id}\}\). We have already seen this in formula (2.1), where we have used the points \[(1,1,1,1),\quad(1,1,1),\quad(1,1),\quad(2,1),\quad(1),\] and in formula (2.8), where we have used the points \[(1,1,1,1,1),\quad(1,1,1,1),\quad(1,1,1),\quad(2,1,1),\quad(1,1),\quad(2,1),\quad(1).\] We develop the summations to separate monomials having the exponents in the same permutation class. 
We get \[f_{k,\mathbf{a}}(x_{1},\ldots,x_{n})= \sum_{1\leq t_{1}<\cdots<t_{k}\leq n}\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}^{k}\\ \left|\mathbf{m}\right|=s\end{subarray}}2^{k-1}{2s\choose 2m_{1},\ldots,2m_{k}}\prod_{i=1}^{k}a_{\sigma(i)}^{2m_{i}}x_{t_{i}}^{2m_{i}}\] \[= \sum_{1\leq t_{1}<\cdots<t_{k}\leq n}\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\sum_{\lambda=1}^{k}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}_{\geq 1}^{\lambda}\\ \left|\mathbf{m}\right|=s\end{subarray}}\sum_{1\leq s_{1}<\cdots<s_{\lambda}\leq k}2^{k-1}{2s\choose 2m_{1},\ldots,2m_{\lambda}}\prod_{i=1}^{\lambda}a_{\sigma(s_{i})}^{2m_{i}}x_{t_{s_{i}}}^{2m_{i}}, \tag{3.3}\] where in the second equality we have separated the monomials having a different number \(\lambda\) of non-zero values appearing in the \(k\)-tuple \((m_{1},\ldots,m_{k})\). We can also permute some summations, getting \[f_{k,\mathbf{a}}(x_{1},\ldots,x_{n})= \sum_{\lambda=1}^{k}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}_{\geq 1}^{\lambda}\\ \left|\mathbf{m}\right|=s\end{subarray}}\sum_{1\leq t_{1}<\cdots<t_{k}\leq n}\sum_{1\leq s_{1}<\cdots<s_{\lambda}\leq k}\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}2^{k-1}{2s\choose 2m_{1},\ldots,2m_{\lambda}}\prod_{i=1}^{\lambda}a_{\sigma(s_{i})}^{2m_{i}}x_{t_{s_{i}}}^{2m_{i}}\] \[= \sum_{\lambda=1}^{k}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}_{\geq 1}^{\lambda}\\ \left|\mathbf{m}\right|=s\end{subarray}}\sum_{1\leq t_{1}<\cdots<t_{k}\leq n}\sum_{1\leq s_{1}<\cdots<s_{\lambda}\leq k}2^{k-1}{2s\choose 2m_{1},\ldots,2m_{\lambda}}\!\left(\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(s_{i})}^{2m_{i}}\right)\prod_{j=1}^{\lambda}x_{t_{s_{j}}}^{2m_{j}}.\] The variables \(x_{t_{s_{1}}},\ldots,x_{t_{s_{\lambda}}}\) appear among the variables \(x_{t_{1}},\ldots,x_{t_{k}}\) a number of times equal to the binomial coefficient \[{n-\lambda\choose k-\lambda}.\] Thus, we can remove the 
summation depending on \(s_{1},\ldots,s_{\lambda}\) and write \[f_{k,\mathbf{a}}(x_{1},\ldots,x_{n}) =\sum_{\lambda=1}^{k}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}_{\geq 1}^{\lambda}\\ |\mathbf{m}|=s\end{subarray}}\sum_{1\leq t_{1}<\cdots<t_{\lambda}\leq n}2^{k-1}\binom{n-\lambda}{k-\lambda}\binom{2s}{2m_{1},\ldots,2m_{\lambda}}\left(\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{i}}\right)\prod_{i=1}^{\lambda}x_{t_{i}}^{2m_{i}}\] \[=\sum_{\lambda=1}^{k}\sum_{\begin{subarray}{c}\mathbf{m}\in\mathbb{N}_{\geq 1}^{\lambda}\\ |\mathbf{m}|=s\end{subarray}}2^{k-1}\binom{n-\lambda}{k-\lambda}\binom{2s}{2m_{1},\ldots,2m_{\lambda}}\left(\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{i}}\right)\sum_{1\leq t_{1}<\cdots<t_{\lambda}\leq n}\prod_{i=1}^{\lambda}x_{t_{i}}^{2m_{i}}.\] Finally, we can gather the multi-indexes representing the same partition of order \(\lambda\) of the number \(s\). To do this, it is sufficient to consider an element \(\mathbf{m}=(m_{1},\ldots,m_{\lambda})\) such that \[m_{1}\geq\cdots\geq m_{\lambda}>0\] and its orbit under the action of the permutation group \(\mathfrak{S}_{\lambda}\) or, equivalently, the set of left cosets \[\mathfrak{S}_{\mathbf{m}}=\mathfrak{S}_{\lambda}/(\mathfrak{S}_{\lambda})_{\mathbf{m}}.\] We can decompose the polynomial \(f_{k,\mathbf{a}}\) into more summands by distinguishing the order of each partition of \(s\). 
Hence, we get \[f_{k,\mathbf{a}}=\sum_{\lambda=1}^{k}\sum_{\mathbf{m}\in\mathcal{P}_{\lambda}(s)}\sum_{\gamma\in\mathfrak{S}_{\mathbf{m}}}2^{k-1}\binom{n-\lambda}{k-\lambda}\binom{2s}{2m_{1},\ldots,2m_{\lambda}}\left(\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{\gamma(i)}}\right)\sum_{1\leq t_{1}<\cdots<t_{\lambda}\leq n}\prod_{i=1}^{\lambda}x_{t_{i}}^{2m_{\gamma(i)}}.\] Considering the summation \[\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{\gamma(i)}},\] observe that the permutation \(\gamma\) just permutes the elements \(1,\ldots,\lambda\), while \(\sigma\) varies among all the possible arrangements of the elements \(a_{1},\ldots,a_{k}\); in particular, the summation does not depend on the choice of the multi-index \[(\gamma_{1},\ldots,\gamma_{\lambda})\in\mathfrak{S}_{\mathbf{m}}\] and we have \[\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{\gamma(i)}}=\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma\gamma^{-1}(i)}^{2m_{i}}=\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{i}}.\] Therefore, recalling polynomials (1.17), we can view \(f_{k,\mathbf{a}}\) as \[f_{k,\mathbf{a}} =\sum_{\lambda=1}^{k}\sum_{\mathbf{m}\in\mathcal{P}_{\lambda}(s)}2^{k-1}\binom{n-\lambda}{k-\lambda}\binom{2s}{2m_{1},\ldots,2m_{\lambda}}\left(\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{i}}\right)\left(\sum_{\gamma\in\mathfrak{S}_{\mathbf{m}}}\sum_{1\leq t_{1}<\cdots<t_{\lambda}\leq n}\prod_{i=1}^{\lambda}x_{t_{i}}^{2m_{\gamma(i)}}\right)\] \[=\sum_{\lambda=1}^{k}\sum_{\mathbf{m}\in\mathcal{P}_{\lambda}(s)}2^{k-1}\binom{n-\lambda}{k-\lambda}\binom{2s}{2m_{1},\ldots,2m_{\lambda}}\left(\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}}}\prod_{i=1}^{\lambda}a_{\sigma(i)}^{2m_{i}}\right)g_{2\mathbf{m}}. 
\tag{3.4}\] Thus, we have written the polynomial \(f_{k,\mathbf{a}}\) as a linear combination of the polynomials given by the sum of monomials belonging to the same permutation class. Since these are the same appearing in the development of the form \(q_{n}^{s}\) seen in formula (1.19), we have to satisfy exactly \(\mathrm{p}(s)\) conditions. If we fix these conditions by adding a polynomial \(f_{k,\mathbf{a}_{j}}\) for every \(j=1,\ldots,\mathrm{p}_{k}(s)\) and we repeat the same procedure for every \(k=1,\ldots,s\), we obtain a number of equations equal to \[\mathrm{p}(s)=\sum_{k=1}^{s}\mathrm{p}_{k}(s).\] Namely, we have to guarantee the existence, for every \(k=1,\ldots,s\), of a set of points \[\mathcal{A}_{k}=\{\mathbf{a}_{k,1},\ldots,\mathbf{a}_{k,\mathrm{p}_{k}(s)}\},\] such that \[q_{n}^{s}=\sum_{k=1}^{s}\sum_{j=1}^{\mathrm{p}_{k}(s)}c_{k,j}f_{k,\mathbf{a}_{k,j}}(x_{1},\ldots,x_{n}). \tag{3.5}\] We denote in this case each coordinate of the points by considering \[\mathbf{a}_{k,j}=(a_{k,j,1},\ldots,a_{k,j,k})\] for every \(j=1,\ldots,\mathrm{p}_{k}(s)\). Formula (1.19) tells us that \[q_{n}^{s}=\sum_{k=1}^{s}\sum_{\mathbf{m}\in\mathcal{P}_{k}(s)}\binom{s}{m_{1},\ldots,m_{k}}g_{2\mathbf{m}}.\] Hence, for every \(k=1,\ldots,s\), putting together formulas (1.19), (3.4) and (3.5), we obtain a linear equation for each partition \(\mathbf{m}\in\mathcal{P}_{k}(s)\), given by equalizing the coefficients of the polynomial \(g_{2\mathbf{m}}\). That is \[\sum_{\lambda=k}^{s}\sum_{j_{\lambda}=1}^{\mathrm{p}_{\lambda}(s)}c_{\lambda,j_{\lambda}}2^{\lambda-1}\binom{n-k}{\lambda-k}\bigg{(}\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}_{\lambda,j_{\lambda}}}}\prod_{i=1}^{k}a_{\lambda,j_{\lambda},\sigma(i)}^{2m_{i}}\bigg{)}=\binom{2s}{2m_{1},\ldots,2m_{k}}^{-1}\binom{s}{m_{1},\ldots,m_{k}}. 
\tag{3.6}\] Thus, formula (3.5) provides a linear system of exactly \(\mathrm{p}(s)\) equations, of the type of formula (3.6), in \(\mathrm{p}(s)\) unknowns, which correspond to the coefficients \(c_{k,j_{k}}\), for \(k=1,\ldots,s\) and \(j_{k}=1,\ldots,\mathrm{p}_{k}(s)\). Now, we denote each of the values multiplying the coefficients \(c_{k,j_{k}}\) as \[h_{k,j,i}=h_{k,j,\mathbf{m}_{i}}=\sum_{\sigma\in\mathfrak{S}_{\mathbf{a}_{k,j}}}\prod_{l=1}^{k}a_{k,j,\sigma(l)}^{2m_{i,l}},\] where \(\mathbf{m}_{1},\ldots,\mathbf{m}_{\mathrm{p}_{k}(s)}\), with \(\mathbf{m}_{i}=(m_{i,1},\ldots,m_{i,k})\), represent all of the \(k\)-partitions in \(\mathcal{P}_{k}(s)\). Now, the aim is to establish in which cases the linear system admits a solution. Thus we want to determine when the associated matrix is invertible. **Corollary 3.7**.: _For every \(n,s\in\mathbb{N}\)_ \[\operatorname{rk}(q_{n}^{s})\leq 2^{s-1}\binom{n}{s}+2^{s-2}\binom{n}{s-1}+\sum_{k=1}^{s-2}2^{k-1}k!\,\mathrm{p}_{k}(s)\binom{n}{k}. \tag{3.8}\] _In particular, for any \(s\in\mathbb{N}\), the rank of \(q_{n}^{s}\) grows at most as \(n^{s}\)._ Putting together Proposition 1.7 and Corollary 3.7, we obtain the following corollary. **Corollary 3.9**.: _For every \(s\in\mathbb{N}\),_ \[\lim_{n\to\infty}\log_{n}\bigl{(}\operatorname{rk}(q_{n}^{s})\bigr{)}=\lim_{n\to\infty}\log_{n}\bigl{(}\operatorname{brk}(q_{n}^{s})\bigr{)}=s.\] Proof.: We have \[\binom{s+n-1}{s}=\frac{1}{s!}\prod_{j=1}^{s}(n-1+j)=\frac{1}{s!}\bigl{(}n^{s}+f(n)\bigr{)},\] where \(f\in\mathbb{C}[n]\) is a polynomial of degree \(s-1\). 
Thus, since \[\lim_{n\to\infty}\frac{n^{s}}{n^{s}+f(n)}=1,\] we get \[\lim_{n\to\infty}\log_{n}\Biggl{(}\binom{s+n-1}{s}\Biggr{)}=\lim_{n\to\infty}\log_{n}\biggl{(}\frac{n^{s}}{s!}\biggr{)}=s.\] If we now consider the right-hand side of inequality (3.8), we get \[2^{s-1}\binom{n}{s}+2^{s-2}\binom{n}{s-1}+\sum_{k=1}^{s-2}2^{k-1}k!\,\mathrm{p}_{k}(s)\binom{n}{k}\leq 2^{s-1}\binom{n}{s}+2^{s-2}s!\binom{n}{s-1}\leq\frac{2^{s-1}}{s!}n^{s}+2^{s-2}sn^{s-1}.\] As above, we have \[\lim_{n\to+\infty}\frac{n^{s}}{n^{s}+\frac{s}{2}s!n^{s-1}}=1\] and hence \[\lim_{n\to+\infty}\log_{n}\Biggl{(}2^{s-1}\binom{n}{s}+2^{s-2}\binom{n}{s-1}+\sum_{k=1}^{s-2}2^{k-1}k!\,\mathrm{p}_{k}(s)\binom{n}{k}\Biggr{)}\leq\lim_{n\to\infty}\log_{n}\biggl{(}\frac{2^{s-1}}{s!}n^{s}\biggr{)}=s.\] Therefore, by Proposition 1.7 and Corollary 3.7 we get the statement. ## 4. Subgeneric rank We have seen by Proposition 2.10 that the size of decomposition (2.8) is subgeneric if \(n>8\). Although it is not easy to determine the exact minimal value beyond which higher numbers of variables only provide subgeneric rank, we can estimate that this value is quite low. Indeed, by Corollary 3.7, we get the inequality \[\operatorname{rk}(q_{n}^{s}) \leq 2^{s-1}\binom{n}{s}+2^{s-2}\binom{n}{s-1}+\sum_{k=1}^{s-2}2^{k-1}k!\,\mathrm{p}_{k}(s)\binom{n}{k}\] \[\leq 2^{s-1}\binom{n}{s}+2^{s-2}\binom{n}{s-1}+\sum_{k=1}^{s-2}\frac{2^{k-1}}{(k-1)!}\frac{(s-1)!n!}{(s-k)!(n-k)!}, \tag{4.1}\] while the generic rank is equal to \[\frac{(2s+n-1)!}{(2s)!n!}.\] Hence, as \(n\to+\infty\) the value of the generic rank grows as \(n^{2s-1}\), but the upper bound in formula (4.1) tells us that the rank of \(q_{n}^{s}\) grows no faster than \(n^{s}\) as \(n\to+\infty\). 
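The growth rate stated in Corollary 3.9 can be observed numerically. The sketch below (Python; the helper names are ours, not from the paper) evaluates the right-hand side of inequality (3.8) and checks that its base-\(n\) logarithm approaches \(s\) as \(n\) grows.

```python
from functools import lru_cache
from math import comb, factorial, log

@lru_cache(maxsize=None)
def p_exact(s, k):
    """Number p_k(s) of partitions of s into exactly k positive parts."""
    if k == 0:
        return 1 if s == 0 else 0
    if k > s:
        return 0
    return p_exact(s - 1, k - 1) + p_exact(s - k, k)

def bound_3_8(n, s):
    """Right-hand side of inequality (3.8) in Corollary 3.7."""
    total = 2 ** (s - 1) * comb(n, s) + 2 ** (s - 2) * comb(n, s - 1)
    total += sum(2 ** (k - 1) * factorial(k) * p_exact(s, k) * comb(n, k)
                 for k in range(1, s - 1))
    return total

# log_n of the bound tends to s (here s = 3) as n grows, as in Corollary 3.9.
s = 3
for n in (10 ** 3, 10 ** 6):
    print(n, log(bound_3_8(n, s)) / log(n))
```

For \(s=2\) the bound collapses to \(2\binom{n}{2}+\binom{n}{1}=n^{2}\), which makes the quadratic growth visible at a glance.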
Following the proof of Theorem 3.2, the aim is to determine some points \(\mathbf{a}_{k,1},\ldots,\mathbf{a}_{k,\mathrm{p}_{k}(s)}\) for every block matrix \[\begin{pmatrix}g_{k,1}(\mathbf{a}_{k,1})&\cdots&g_{k,1}\big{(}\mathbf{a}_{k,\mathrm{p}_{k}(s)}\big{)}\\ \vdots&\ddots&\vdots\\ g_{k,\mathrm{p}_{k}(s)}(\mathbf{a}_{k,1})&\cdots&g_{k,\mathrm{p}_{k}(s)}\big{(}\mathbf{a}_{k,\mathrm{p}_{k}(s)}\big{)}\end{pmatrix}\] such that its determinant is non-zero. Obviously, it would be convenient to choose as few distinct points as possible, in order to obtain the smallest possible size for such decompositions. Although this can be easy for small values of \(k\), it becomes more difficult for higher sizes. Let us consider the upper bound of Theorem 3.2, given by \[\sum_{k=1}^{s}2^{k}k!\,\mathrm{p}_{k}(s)\binom{n}{k}. \tag{4.2}\] All the elements involved in the summation are quantities which have been largely studied in the literature and for which many estimates and approximations have been provided. A more precise estimate of the values involved in formula (4.2) can be made by the classical, well-known Stirling approximation of the factorial of an arbitrary natural number \(n\) (see [20], or [21] for a more recent version of the proof). This estimation shows that \[\sqrt{2\pi}n^{n+\frac{1}{2}}\mathrm{e}^{-n+\frac{1}{12n+1}}<n!<\sqrt{2\pi}n^{n+\frac{1}{2}}\mathrm{e}^{-n+\frac{1}{12n}} \tag{4.3}\] and hence, just considering the inverse of \(n!\), we have immediately \[\frac{1}{\sqrt{2\pi}}n^{-n-\frac{1}{2}}\mathrm{e}^{n-\frac{1}{12n}}<\frac{1}{n!}<\frac{1}{\sqrt{2\pi}}n^{-n-\frac{1}{2}}\mathrm{e}^{n-\frac{1}{12n+1}}. 
\tag{4.4}\] Now, considering the binomial coefficient \[\binom{n}{k}=\frac{n!}{k!(n-k)!},\] we can use relations (4.3) and (4.4) to provide the bounds \[\frac{1}{\sqrt{2\pi k}}\bigg{(}\frac{n}{n-k}\bigg{)}^{n+\frac{1}{2}}\bigg{(}\frac{n-k}{k}\bigg{)}^{k}\,\mathrm{e}^{\frac{1}{12n+1}-\frac{n}{12k(n-k)}}<\binom{n}{k}<\frac{1}{\sqrt{2\pi k}}\bigg{(}\frac{n}{n-k}\bigg{)}^{n+\frac{1}{2}}\bigg{(}\frac{n-k}{k}\bigg{)}^{k}\,\mathrm{e}^{\frac{1}{12n}-\frac{12n+2}{(12k+1)(12(n-k)+1)}}. \tag{4.5}\] Finally, we also provide an upper bound for the partition function \(\mathrm{p}(s)\). Indeed, D. M. Kane shows in [17, Remark 1] that for every \(n\in\mathbb{N}\), we get \[\frac{C_{1,1}^{-}}{n}\,\mathrm{e}^{\pi\sqrt{\frac{2n}{3}}}\leq\mathrm{p}(n)\leq\frac{C_{1,1}^{+}}{n}\mathrm{e}^{\pi\sqrt{\frac{2n}{3}}},\] where \(C_{1,1}^{-}\), \(C_{1,1}^{+}\) are some specified constant values. These inequalities are used also by A. Y. Oruc in [13], where he provides some upper bounds for the function \(\mathrm{p}_{k}(n)\) and to which we refer for further details about this. In particular, as explained for [13, formula (4)], we can suppose \(C_{1,1}^{+}=6\), getting \[\mathrm{p}(n)\leq\frac{6}{n}\mathrm{e}^{\pi\sqrt{\frac{2n}{3}}} \tag{4.6}\] for every \(n\in\mathbb{N}\). **Theorem 4.7**.: _For every \(s\in\mathbb{N}\),_ \[\mathrm{rk}(q_{n}^{s})\leq\frac{1}{n}\binom{2s+n-1}{2s},\] _that is, the rank of \(q_{n}^{s}\) is subgeneric, whenever_ \[n\geq(2s-1)^{2}.\] Proof.: By Theorem 3.2, we get the inequality \[\mathrm{rk}(q_{n}^{s})\leq\sum_{k=1}^{s}2^{k}k!\,\mathrm{p}_{k}(s)\binom{n}{k}\leq 2^{s}s!\binom{n}{s}\sum_{k=1}^{s}\mathrm{p}_{k}(s)=\frac{2^{s}\,n!}{(n-s)!}\,\mathrm{p}(s). 
\tag{4.8}\] Considering relations (4.3), (4.4), (4.5) and (4.6), we can write \[\frac{2^{s}n!}{(n-s)!}\,{\rm p}(s)\leq\frac{6\cdot 2^{s}}{s}\bigg{(}\frac{n}{n-s}\bigg{)}^{n+\frac{1}{2}}(n-s)^{s}{\rm e}^{\pi\sqrt{\frac{2s}{3}}+\frac{1}{12n}-\frac{1}{12(n-s)+1}-s}. \tag{4.9}\] Moreover, still by relation (4.5), we have the inequality \[\frac{1}{n}\binom{2s+n-1}{2s}>\frac{1}{\sqrt{4\pi s}}\bigg{(}\frac{2s+n-1}{n-1}\bigg{)}^{2s+n-\frac{1}{2}}\bigg{(}\frac{n-1}{2s}\bigg{)}^{2s}{\rm e}^{\frac{1}{12(2s+n-1)+1}-\frac{2s+n-1}{24s(n-1)}}. \tag{4.10}\] We now want to determine in which cases the right-hand side of inequality (4.9) is lower than the right-hand side of inequality (4.10). We have \[\frac{6\cdot 2^{s}}{s}\bigg{(}\frac{n}{n-s}\bigg{)}^{n+\frac{1}{2}}(n-s)^{s}{\rm e}^{\pi\sqrt{\frac{2s}{3}}+\frac{1}{12n}-\frac{1}{12(n-s)+1}-s}<\frac{1}{\sqrt{4\pi s}}\bigg{(}\frac{2s+n-1}{n-1}\bigg{)}^{2s+n-\frac{1}{2}}\bigg{(}\frac{n-1}{2s}\bigg{)}^{2s}{\rm e}^{\frac{1}{12(2s+n-1)+1}-\frac{2s+n-1}{24s(n-1)}}\] if and only if \[\frac{12\cdot 2^{s}\sqrt{\pi}(n-1)}{\sqrt{s}(2s+n-1)}\bigg{(}\frac{n(n-1)}{(n-s)(2s+n-1)}\bigg{)}^{n+\frac{1}{2}}\bigg{(}\frac{2s\sqrt{n-s}}{2s+n-1}\bigg{)}^{2s}{\rm e}^{\pi\sqrt{\frac{2s}{3}}+\frac{1}{12n}+\frac{2s+n-1}{24s(n-1)}-\frac{1}{12(2s+n-1)+1}-\frac{1}{12(n-s)+1}-s}<1.\] By doing some computation, it is possible to observe that \[\frac{1}{12n}+\frac{2s+n-1}{24s(n-1)}-\frac{1}{12(2s+n-1)+1}<1\] for every \(s>1\) and \(n>(2s-1)^{2}\). 
Thus, we can also write \[\frac{12\cdot 2^{s}\sqrt{\pi}(n-1)}{\sqrt{s}(2s+n-1)}\bigg{(}\frac{n(n-1)}{(n-s)(2s+n-1)}\bigg{)}^{n+\frac{1}{2}}\bigg{(}\frac{2s\sqrt{n-s}}{2s+n-1}\bigg{)}^{2s}{\rm e}^{\pi\sqrt{\frac{2s}{3}}-s+1}<1,\] that is, \[\frac{(n-1)}{\sqrt{s}(2s+n-1)}2^{s}12\sqrt{\pi}{\rm e}^{\pi\sqrt{\frac{2s}{3}}-s+1}\bigg{(}\frac{n(n-1)}{(n-s)(2s+n-1)}\bigg{)}^{n+\frac{1}{2}}\bigg{(}\frac{2s\sqrt{n-s}}{2s+n-1}\bigg{)}^{2s}<1.\] We want to analyze each factor and compute when these are lower than 1. First we observe that for \(n>(2s-1)^{2}\) we have the inequalities \[\frac{(n-1)}{\sqrt{s}(2s+n-1)}<1,\quad\frac{n(n-1)}{(n-s)(2s+n-1)}=\frac{n(n-1)}{n(n-1)+s(n-2s+1)}<1.\] Considering the element \[\frac{2s\sqrt{n-s}}{2s+n-1},\] we can compute that this is lower than 1 if and only if \[4s^{2}(n-s)<4s^{2}+n^{2}+1+4sn-4s-2n,\] that is \[n^{2}-2(2s^{2}-2s+1)n+4s^{3}+4s^{2}-4s+1>0.\] The previous inequality holds in particular if \[n>(2s^{2}-2s+1)+2s\sqrt{s^{2}-3s+1},\] and hence also if \[n\geq(2s^{2}-2s+1)+2s\sqrt{s^{2}-2s+1}=(2s^{2}-2s+1)+2s(s-1)=(2s-1)^{2}.\] It remains to analyze the last factor \(2^{s}12\sqrt{\pi}{\rm e}^{\pi\sqrt{\frac{2s}{3}}-s+1}\), and it is possible to state that \[2^{s}12\sqrt{\pi}{\rm e}^{\pi\sqrt{\frac{2s}{3}}-s+1}<1\] whenever \(s\geq 95\). Although this estimate proves the statement only for \(s\geq 95\), it is possible to compute that, for \(6\leq s\leq 94\), the upper bound provided by Theorem 3.2 is lower than the generic rank, i.e. \[\sum_{k=1}^{s}2^{k}k!\,{\rm p}_{k}(s)\binom{n}{k}<\frac{1}{n}\binom{2s+n-1}{2s} \tag{4.11}\] whenever \(n\geq(2s-1)^{2}\). As already mentioned in Section 1 with Proposition 1.21, the problem of determining the rank of \(q_{n}^{s}\) has been recently analyzed also by J. Buczynski, K. Han, M. Mella, and Z. Teitler in [1]. They ask, in particular, whether the rank is greater than the generic rank (see [1, section 4.5]). 
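Both halves of this argument lend themselves to direct computation. The sketch below (Python; the helper names are ours) checks inequality (4.11) exactly, with integer arithmetic, at the threshold \(n=(2s-1)^{2}\) for a range of values of \(s\), and locates the crossing of the scalar factor \(2^{s}12\sqrt{\pi}\,\mathrm{e}^{\pi\sqrt{2s/3}-s+1}\) below \(1\) at \(s=95\).

```python
from functools import lru_cache
from math import comb, factorial, log, pi, sqrt

@lru_cache(maxsize=None)
def p_exact(s, k):
    """Number p_k(s) of partitions of s into exactly k positive parts."""
    if k == 0:
        return 1 if s == 0 else 0
    if k > s:
        return 0
    return p_exact(s - 1, k - 1) + p_exact(s - k, k)

def bound_thm_3_2(n, s):
    """Upper bound of Theorem 3.2 for rk(q_n^s)."""
    return sum(2 ** k * factorial(k) * p_exact(s, k) * comb(n, k)
               for k in range(1, s + 1))

def subgeneric_at_threshold(s):
    """Exact integer check of inequality (4.11) at n = (2s-1)^2."""
    n = (2 * s - 1) ** 2
    return n * bound_thm_3_2(n, s) < comb(2 * s + n - 1, 2 * s)

def log_scalar_factor(s):
    """Natural log of 2^s * 12*sqrt(pi) * e^(pi*sqrt(2s/3) - s + 1)."""
    return s * log(2) + log(12 * sqrt(pi)) + pi * sqrt(2 * s / 3) - s + 1

print(all(subgeneric_at_threshold(s) for s in range(6, 30)))  # inequality (4.11)
print(log_scalar_factor(94) > 0, log_scalar_factor(95) < 0)   # crossing at s = 95
```

Comparing \(n\cdot\mathrm{bound}\) against \(\binom{2s+n-1}{2s}\) avoids any floating-point division, so the subgenericity check is rigorous for the tested range.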
The importance of the powers of quadrics is related to the maximal rank locus with respect to any Veronese variety. This set is defined, given a finite dimensional vector space \(V\), with \(\dim V=n\), as \[W_{m}^{d}=\overline{\left\{\,f\in\mathbb{P}(S^{d}V)\,\left|\,\operatorname{rk}f=m\,\right.\,\right\}}\] and, in particular, we have the following theorem. **Theorem 4.12** ([1, Theorem 4.1]).: _Let \(V\) be a finite dimensional vector space, with \(\dim V=n\) such that \(n\geq 3\). If \(W\) is an irreducible component of the rank locus \(W_{m}^{d}\), then_ \[\dim(W)\geq\binom{n+1}{2}-1.\] _Moreover, if_ \[\dim(W)=\binom{n+1}{2}-1,\] _then \(d\) is even and \(W\) is the set of all the \(\left(d/2\right)\)-th powers of quadrics._ By Theorem 4.7 we can state, in particular, that the second condition of Theorem 4.12 does not hold at least for \(n>(2s-1)^{2}\). Thus, we get the following corollary. **Corollary 4.13**.: _Let \(V\) be a finite dimensional vector space, with \(\dim V=n\) such that \(n\geq 3\) and let \(s\in\mathbb{N}\). If \(n>(2s-1)^{2}\) and \(W\) is an irreducible component of \(W_{m}^{2s}\), then_ \[\dim(W)\geq\binom{n+1}{2}.\] **Acknowledgements.** This paper is the natural continuation of the author's Ph.D. thesis, which has been undertaken at Alma Mater Studiorum - Universita di Bologna, under the patient co-supervision of Alessandro Gimigliano and Giorgio Ottaviani, to whom sincere thanks are due. The author also reserves special thanks to Enrique Arrondo and Jaroslaw Buczynski for their precious help and suggestions during the author's Ph.D. program.
arXiv:2308.09967, **Stable value of depth of symbolic powers of edge ideals of graphs**, by Nguyen Cong Minh, Tran Nam Trung, and Thanh Vu. Published 2023-08-19. Link: http://arxiv.org/abs/2308.09967v2

Abstract: Let $G$ be a simple graph on $n$ vertices. We introduce the notion of bipartite connectivity of $G$, denoted by $\operatorname{bc}(G)$ and prove that $$\lim_{s \to \infty} \operatorname{depth} (S/I(G)^{(s)}) \le \operatorname{bc}(G),$$ where $I(G)$ denotes the edge ideal of $G$ and $S = \mathrm{k}[x_1, \ldots, x_n]$ is a standard graded polynomial ring over a field $\mathrm{k}$. We further compute the depth of symbolic powers of edge ideals of several classes of graphs, including odd cycles and whisker graphs of complete graphs to illustrate the cases where the above inequality becomes equality.
# Stable value of depth of symbolic powers of edge ideals of graphs ###### Abstract. Let \(G\) be a simple graph on \(n\) vertices. We introduce the notion of bipartite connectivity of \(G\), denoted by \(\operatorname{bc}(G)\) and prove that \[\lim_{s\to\infty}\operatorname{depth}(S/I(G)^{(s)})\leq\operatorname{bc}(G),\] where \(I(G)\) denotes the edge ideal of \(G\) and \(S=\operatorname{k}[x_{1},\dots,x_{n}]\) is a standard graded polynomial ring over a field \(\operatorname{k}\). We further compute the depth of symbolic powers of edge ideals of several classes of graphs, including odd cycles and whisker graphs of complete graphs to illustrate the cases where the above inequality becomes equality. Key words and phrases:depth of symbolic powers; cycles; stable value of depth 2020 Mathematics Subject Classification: 05E40, 13D02, 13F55 ## 1. Introduction Let \(I\) be a homogeneous ideal in a standard graded polynomial ring \(S=\operatorname{k}[x_{1},\dots,x_{n}]\) over a field \(\operatorname{k}\). While the depth function of powers of \(I\) is convergent by the result of Brodmann [Br], the depth function of symbolic powers of \(I\) is more exotic. Nguyen and N. V. Trung [NT] proved that for every positive eventually periodic function \(f:\mathbb{N}\to\mathbb{N}\) there exists an ideal \(I\) such that \(\operatorname{depth}S/I^{(s)}=f(s)\) for all \(s\geq 1\), where \(I^{(s)}\) denotes the \(s\)-th symbolic power of \(I\). On the other hand, when \(I\) is a squarefree monomial ideal, Hoa, Kimura, Terai, and T. N. Trung [HKTT] proved that the limit \(\lim_{s\to\infty}\operatorname{depth}S/I^{(s)}\) exists. Nonetheless, given a squarefree monomial ideal \(I\), computing the stable value of depth of symbolic powers of \(I\) is a difficult problem even in the case of edge ideals of graphs. Let us now recall the notion of the edge ideals of graphs. Let \(G\) be a simple graph with the vertex set \(V(G)=\{1,\dots,n\}\) and edge set \(E(G)\). 
The edge ideal of \(G\), denoted \(I(G)\), is the squarefree monomial ideal generated by \(x_{i}x_{j}\) where \(\{i,j\}\) is an edge of \(G\). In [T], the second author showed that \(\lim_{s\to\infty}\operatorname{depth}S/I^{s}\) equals the number of bipartite connected components of \(G\), and that \(\operatorname{depth}S/I^{s}\) stabilizes when it reaches the limit depth. By the results of [NV2, HNTT], we may assume that \(G\) is a connected graph when considering the depth of (symbolic) powers of the edge ideal of \(G\). In this case, the result of [T] can be written as \[\lim_{s\to\infty}\operatorname{depth}S/I^{s}=\begin{cases}1&\text{ if $G$ is bipartite}\\ 0&\text{ otherwise,}\end{cases}\] and \(\operatorname{dstab}(I)\), the stabilization index of depth of powers of \(I\), is the smallest exponent \(s\) such that \(\operatorname{depth}S/I^{s}\) equals the limit depth of powers. Since the depth functions of symbolic powers of edge ideals are expected to be non-increasing, the analogous property should hold for symbolic powers of \(I(G)\) as well. In [HLT], Hien, Lam, and N. V. Trung characterized graphs for which \(\lim_{s\to\infty}\operatorname{depth}S/I(G)^{(s)}=1\) and proved that the stabilization index of depth of symbolic powers in this case is also the smallest exponent \(s\) such that \(\operatorname{depth}S/I(G)^{(s)}=1\). For a general non-bipartite graph \(G\), we do not know the value \(\lim_{s\to\infty}\operatorname{depth}S/I(G)^{(s)}\). In this paper, we introduce the notion of bipartite connectivity of \(G\) and show that this is tightly connected to the stable value of depth of symbolic powers of \(I(G)\). Let \(\mathcal{B}(G)\) denote the set of maximal induced bipartite subgraphs \(H\) of \(G\), i.e., for any \(v\in V(G)\setminus V(H)\), the induced subgraph of \(G\) on \(V(H)\cup\{v\}\) is not bipartite. Note that \(H\) might contain isolated vertices. Since \(H\) is maximal, it contains at least one edge. 
Then we define \(\operatorname{bc}(G)=\min\{c(H)\mid H\in\mathcal{B}(G)\}\) and call it the bipartite connectivity number of \(G\), where \(c(H)\) is the number of connected components of \(H\). With this notation, the result of Hien, Lam, and N. V. Trung can be stated as \(\lim_{s\to\infty}\operatorname{depth}S/I(G)^{(s)}=1\) if and only if \(\operatorname{bc}(G)=1\), i.e., there exists an induced connected bipartite subgraph \(H\) of \(G\) such that \(H\) dominates \(G\). In this paper, we generalize this result and prove: **Theorem 1.1**.: _Let \(G\) be a simple graph. Then_ \[\lim_{s\to\infty}\operatorname{depth}S/I(G)^{(s)}\leq\operatorname{bc}(G).\] We then provide examples to show that the above inequality becomes equality and that \(\lim_{s\to\infty}\operatorname{depth}S/I^{(s)}\) could be arbitrarily large even when \(G\) is a connected graph. **Proposition 1.2**.: _Let \(W_{n}=W(K_{n})\) be the whisker graph on the complete graph on \(n\) vertices. Then, \(\operatorname{bc}(W_{n})=n-1\) and_ \[\operatorname{depth}S/I(W_{n})^{(s)}=\begin{cases}n&\text{ if $s=1$}\\ n-1&\text{ if $s\geq 2$}.\end{cases}\] We also note that the inequality in Theorem 1.1 could be strict as given in the following example. **Example 1.3**.: Let \(W\) be the graph obtained by gluing two whiskers at the vertices of a 3 cycle. Then \(\operatorname{bc}(W)=3\) while \[\operatorname{depth}S/I(W)^{(s)}=\begin{cases}7&\text{ if $s=1$}\\ 4&\text{ if $s=2$}\\ 2&\text{ if $s\geq 3$}.\end{cases}\] Nonetheless, if we cluster the isolated points in \(H\) by the bouquets in \(G\) of the graph in Example 1.3 then we obtain a finer invariant of \(G\) that gives the stable value of depth of symbolic powers. More precisely, assume that \(H=H_{1}\cup\cdots\cup H_{c}\cup\{p_{1},\ldots,p_{t}\}\) where \(H_{i}\) are connected components of \(H\) with at least one edge and \(p_{1},\ldots,p_{t}\) are isolated points in \(H\). 
We say that \(p_{i_{1}},\ldots,p_{i_{u}}\) are clustered if there exists a \(v\in V(G)\setminus V(H)\) such that the induced subgraph of \(G\) on \(\{v,p_{i_{1}},\ldots,p_{i_{u}}\}\) is a bouquet. Let \(\operatorname{bou}_{G}(H)\) be the smallest number \(b\) such that the set \(\{p_{1},\ldots,p_{t}\}\) can be clustered into \(b\) bouquets in \(G\). We call \(c^{\prime}(H)=c+\operatorname{bou}_{G}(H)\) the number of restricted connected components of \(H\). We then define \(\operatorname{bc}^{\prime}(G)=\min\{c^{\prime}(H)\mid H\in\mathcal{B}(G)\}\) the restricted bipartite connectivity number of \(G\). We conjecture that **Conjecture 1.4**.: Let \(G\) be a simple graph. Then \[\lim_{s\to\infty}\operatorname{depth}S/I(G)^{(s)}=\operatorname{bc}^{\prime}(G).\] We verify this conjecture for whisker graphs of complete graphs. **Theorem 1.5**.: _Let \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{N}^{n}\) and \(W_{\mathbf{a}}\) be the graph obtained by gluing \(a_{i}\) leaves to the vertex \(i\) of a complete graph \(K_{n}\). Assume that \(a_{i}\geq 1\) for all \(i=1,\ldots,n\). Then \(\operatorname{bc}^{\prime}(W_{\mathbf{a}})=n-1\) and_ \[\lim_{s\to\infty}\operatorname{depth}S/I(W_{\mathbf{a}})^{(s)}=n-1.\] Finally, we compute the depth of symbolic powers of edge ideals of odd cycles by extending our argument in [MTV]. This shows that the bound for the index of depth stability of symbolic powers of \(I\) given in [HLT] is sharp. **Theorem 1.6**.: _Let \(I(C_{n})\) be the edge ideal of a cycle of length \(n=2k+1\geq 5\). Then_ \[\operatorname{depth}S/I(C_{n})^{(s)}=\begin{cases}\lceil\frac{n-1}{3}\rceil&\text{ if }s=1\\ \max(1,\lceil\frac{n-s+1}{3}\rceil)&\text{ if }s\geq 2.\end{cases}\] _In particular, \(\operatorname{sdstab}(I(C_{n}))=n-2\), where \(\operatorname{sdstab}(I)\) is the index of depth stability of symbolic powers of \(I\)._ We structure the paper as follows. In Section 2, we set up the notation and provide some background. 
In Section 3, we prove Theorem 1.1 and compute the depth of symbolic powers of edge ideals of whisker graphs of complete graphs. In Section 4, we prove Theorem 1.6. ## 2. Preliminaries In this section, we recall some definitions and properties concerning depth, graphs and their edge ideals, and the symbolic powers of squarefree monomial ideals. The interested readers are referred to [BH] for more details. Throughout the paper, we denote by \(S=\operatorname{k}[x_{1},\ldots,x_{n}]\) a standard graded polynomial ring over a field \(\operatorname{k}\). Let \(\mathfrak{m}=(x_{1},\ldots,x_{n})\) be the maximal homogeneous ideal of \(S\). ### Depth For a finitely generated graded \(S\)-module \(L\), the depth of \(L\) is defined to be \[\operatorname{depth}(L)=\min\{i\mid H^{i}_{\mathfrak{m}}(L)\neq 0\},\] where \(H^{i}_{\mathfrak{m}}(L)\) denotes the \(i\)-th local cohomology module of \(L\) with respect to \(\mathfrak{m}\). We have the following estimates on depth along short exact sequences (see [BH, Proposition 1.2.9]). **Lemma 2.1**.: _Let \(0\to L\to M\to N\to 0\) be a short exact sequence of finitely generated graded \(S\)-modules. Then_ 1. \(\operatorname{depth}M\geq\min(\operatorname{depth}L,\operatorname{depth}N),\)__ 2. \(\operatorname{depth}L\geq\min(\operatorname{depth}M,\operatorname{depth}N+1).\)__ We make repeated use of the following two results in the sequel. The first one is [R, Corollary 1.3]. The second one is [CHHKTT, Theorem 4.3]. **Lemma 2.2**.: _Let \(I\) be a monomial ideal and \(f\) a monomial such that \(f\notin I\). Then_ \[\operatorname{depth}S/I\leq\operatorname{depth}S/(I:f).\] **Lemma 2.3**.: _Let \(I\) be a monomial ideal and \(f\) a monomial. Then_ \[\operatorname{depth}S/I\in\{\operatorname{depth}(S/(I:f)),\operatorname{depth}(S/(I,f))\}.\] ### Graphs and their edge ideals Let \(G\) denote a finite simple graph over the vertex set \(V(G)=[n]=\{1,2,\ldots,n\}\) and the edge set \(E(G)\). 
The edge ideal of \(G\) is defined to be \[I(G)=(x_{i}x_{j}\ |\ \{i,j\}\in E(G))\subseteq S.\] For simplicity, we often write \(i\in G\) (resp. \(ij\in G\)) instead of \(i\in V(G)\) (resp. \(\{i,j\}\in E(G)\)). By abuse of notation, we also call \(x_{i}\) a vertex of \(G\) and \(x_{i}x_{j}\in I(G)\) an edge of \(G\). A path \(P_{n}\) of length \(n-1\) is the graph on \([n]\) whose edges are \(\{i,i+1\}\) for \(i=1,\ldots,n-1\). A cycle \(C_{n}\) of length \(n\geq 3\) is the graph on \([n]\) whose edges are \(\{i,i+1\}\) for \(i=1,\ldots,n-1\) and \(\{1,n\}\). A graph \(H\) on \([n]\) is called bipartite if there exists a partition \([n]=X\cup Y\), \(X\cap Y=\emptyset\) such that \(E(H)\subseteq X\times Y\). When \(E(H)=X\times Y\), \(H\) is called a complete bipartite graph, denoted by \(K_{X,Y}\). A bouquet is a complete bipartite graph with \(|X|=1\). For a vertex \(x\in V(G)\), let the neighbours of \(x\) be the subset \(N_{G}(x)=\{y\in V(G)\mid\{x,y\}\in E(G)\}\), and set \(N_{G}[x]=N_{G}(x)\cup\{x\}\). The degree of a vertex \(x\), denoted by \(\deg_{G}(x)\) is the number of neighbours of \(x\). A leaf is a vertex of degree \(1\). The unique edge attached to a leaf is called a leaf edge. Denote \(d_{G}(x)\) the number of non-leaf edges incident to \(x\). ### Symbolic powers of edge ideals Let \(I\) be a squarefree monomial ideal in \(S\) with the irreducible decomposition \[I=\mathfrak{p}_{1}\cap\cdots\cap\mathfrak{p}_{m}.\] The \(s\)-th symbolic power of \(I\) is defined by \[I^{(s)}=\mathfrak{p}_{1}^{s}\cap\cdots\cap\mathfrak{p}_{m}^{s}.\] By the proof of [KTY, Theorem 5.2], we have **Lemma 2.4**.: _Assume that \(e\) is a leaf edge of \(G\). Then for all \(s\geq 2\) we have \(I(G)^{(s)}:e=I(G)^{(s-1)}\). In particular, \(\operatorname{depth}S/I(G)^{(s)}\) is a non-increasing function._ ## 3. 
Stable value of depth of symbolic powers of edge ideals In this section, we prove that the stable value of depth of symbolic powers of edge ideals is at most the bipartite connectivity number of \(G\). We assume that \(S=\operatorname{k}[x_{1},\ldots,x_{n}]\) and \(G\) is a simple graph on \(V(G)=\{1,\ldots,n\}\). For a monomial \(f\in S\), the support of \(f\), denoted by \(\operatorname{supp}f\), is defined by \(\operatorname{supp}f=\{i\mid x_{i}\text{ divides }f\}\). We first introduce some notation. Let \(H\) be a connected bipartite graph with the partition \(V(H)=X\cup Y\). The bipartite completion of \(H\), denoted by \(\tilde{H}\), is the complete bipartite graph \(K_{X,Y}\). Now, assume that \(H=H_{1}\cup\cdots\cup H_{c}\cup\{p_{1},\ldots,p_{t}\}\) where \(H_{1},\ldots,H_{c}\) are connected components of \(H\) with at least one edge, and \(p_{1},\ldots,p_{t}\) are isolated points of \(H\). Then the bipartite completion of \(H\) is defined by \(\tilde{H}=\tilde{H}_{1}\cup\cdots\cup\tilde{H}_{c}\cup\{p_{1},\ldots,p_{t}\}\). We have **Lemma 3.1**.: _Let \(\mathbf{a}=\mathbf{d}(H)=(d_{H}(1),\ldots,d_{H}(n))\in\mathbb{N}^{n}\) and \(s=|\mathbf{a}|/2\). Then_ \[\sqrt{I(H)^{s+1}:x^{\mathbf{a}}}=I(\tilde{H}),\] _where \(\tilde{H}\) is the bipartite completion of \(H\)._ Proof.: Since variables corresponding to isolated points do not appear in \(I(H)\), we may assume that \(H\) does not have isolated points. Assume that \(H=H_{1}\cup\cdots\cup H_{c}\) where the \(H_{i}\) are connected components of \(H\) with at least one edge. Let \(\mathbf{a}_{i}=\mathbf{d}(H_{i})\). Note that \(x^{\mathbf{a}_{i}}\) is equal to the product of the non-leaf edges of \(H_{i}\); hence \(|\mathbf{a}_{i}|\) is even for all \(i\). Let \(s_{i}=|\mathbf{a}_{i}|/2\). Now assume that \(f\in\sqrt{I(H)^{s+1}:x^{\mathbf{a}}}\) with \(f=f_{1}\cdots f_{c}\) and \(\operatorname{supp}f_{i}\subseteq V(H_{i})\).
Then we have \(f^{m}x^{\mathbf{a}}\in I(H)^{s+1}\) for some \(m>0\). Thus, we must have \(f_{i}^{m}x^{\mathbf{a}_{i}}\in I(H_{i})^{s_{i}+1}\) for some \(i\). Hence, we may assume that \(H\) is connected. The conclusion then follows from [T, Lemma 3.1] and [MNPTV, Lemma 2.19]. Now, let \(H\) be a maximal induced bipartite subgraph of \(G\), i.e., for any \(v\in V(G)\setminus V(H)\) the induced subgraph of \(G\) on \(V(H)\cup\{v\}\) is not bipartite. In particular, \(H\) contains at least one edge. Assume that \(H=H_{1}\cup\cdots\cup H_{c}\cup\{p_{1},\ldots,p_{t}\}\) where \(H_{i}\) are connected components of \(H\) with at least one edge and \(p_{1},\ldots,p_{t}\) are isolated points of \(H\). Then \(c(H)=c+t\) is the number of connected components of \(H\). We have **Lemma 3.2**.: _Let \(H\) be a maximal induced bipartite subgraph of \(G\). Then_ \[\operatorname{depth}S/I(G)^{(s)}\leq c(H),\] _for all \(s\geq|E(H)|+1\), where \(c(H)\) is the number of connected components of \(H\)._ Proof.: Assume that \(H=H_{1}\cup\cdots\cup H_{c}\cup\{p_{1},\ldots,p_{t}\}\) where \(H_{1},\ldots,H_{c}\) are connected components of \(H\) with at least one edge and \(p_{1},\ldots,p_{t}\) are isolated points of \(H\). Let \(\mathbf{b}=\mathbf{d}(H)\) and \(x^{\mathbf{a}}=x^{\mathbf{b}}\cdot\prod(e\mid e\text{ is a leaf edge of }H)\). Then \(x^{\mathbf{a}}\) is the product of the edges of \(H\). Let \(s=|\mathbf{a}|/2=|E(H)|\). By [MNPTV, Corollary 2.7], \(x^{\mathbf{a}}\notin I(G)^{(s+1)}\). We claim that \[\sqrt{I(G)^{(s+1)}:x^{\mathbf{a}}}=I(\tilde{H})+(x_{j}\mid j\in V(G)\setminus V(H)). \tag{3.1}\] By Lemma 3.1, it suffices to prove that \(x_{j}\in\sqrt{I(G)^{(s+1)}:x^{\mathbf{a}}}\) for all \(j\in V(G)\setminus V(H)\). Since the induced subgraph of \(G\) on \(\{j\}\cup V(H)\) is not bipartite, there must exist a connected component, say \(H_{1}\), of \(H\) such that the induced subgraph of \(G\) on \(V(H_{1})\cup\{j\}\) has an odd cycle.
Let \(G_{1}\) be the induced subgraph of \(G\) on \(V(H_{1})\cup\{j\}\). Let \(j,1,\ldots,2k\) be an induced odd cycle in \(G_{1}\). Then we have \(x_{j}x_{1}\cdots x_{2k}\in I(G_{1})^{(k+1)}\). Furthermore, \(x_{1}\cdots x_{2k}=\prod_{\ell=1}^{k}e_{\ell}\) is a product of \(k\) edges of \(H_{1}\). By the definition of \(\mathbf{a}\), the monomial \(x^{\mathbf{a}_{1}}\) equals the product of all edges of \(H_{1}\). In other words, \(x^{\mathbf{a}_{1}}=x_{1}\cdots x_{2k}\cdot h\) with \(h\in I(H_{1})^{|E(H_{1})|-k}.\) Hence, \(x_{j}x^{\mathbf{a}_{1}}\in I(G_{1})^{(s_{1}+1)}\) where \(s_{1}=|E(H_{1})|\). Eq. (3.1) follows. By Lemma 2.2, we deduce that \[\operatorname{depth}S/I(G)^{(s+1)}\leq\operatorname{depth}S/(I(G)^{(s+1)}:x^{\mathbf{a}})\leq\operatorname{depth}S/\sqrt{I(G)^{(s+1)}:x^{\mathbf{a}}}=c(H).\] For any \(t\geq s+1\), let \(x^{\mathbf{c}}=x^{\mathbf{a}}\cdot e^{t-s-1}\) where \(e\) is an arbitrary edge of \(H\). Then we have \(x^{\mathbf{c}}\notin I(G)^{(t)}\) and \(\sqrt{I(G)^{(t)}:x^{\mathbf{c}}}\supseteq\sqrt{I(G)^{(s+1)}:x^{\mathbf{a}}}\). Hence, \(\operatorname{depth}S/I(G)^{(t)}\leq c(H)\) for all \(t\geq s+1\). The conclusion follows. We are now ready for the main result of this section. Recall that \(\mathcal{B}(G)\) denotes the set of all maximal induced bipartite subgraphs of \(G\) and \(\operatorname{bc}(G)=\min(c(H)\mid H\in\mathcal{B}(G))\). **Theorem 3.3**.: _Let \(G\) be a simple graph. Then_ \[\lim_{s\to\infty}\operatorname{depth}S/I(G)^{(s)}\leq\operatorname{bc}(G).\] Proof.: The conclusion follows immediately from the definition and Lemma 3.2. We now provide an example to show that the above inequality is an equality for a family of graphs and that the limit depth of symbolic powers of \(I(G)\) can be arbitrarily large even when \(G\) is a connected graph. **Proposition 3.4**.: _Let \(W_{n}=W(K_{n})\) be the whisker graph on the complete graph on \(n\) vertices.
Then, \(\operatorname{bc}(W_{n})=n-1\) and_ \[\operatorname{depth}S/I(W_{n})^{(s)}=\begin{cases}n&\text{ if }s=1\\ n-1&\text{ if }s\geq 2.\end{cases}\] Proof.: We may assume that \(V(W_{n})=\{x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\}\) and the edge set \[E(W_{n})=\{\{x_{i},x_{j}\},\{x_{i},y_{i}\}\mid 1\leq i\neq j\leq n\}.\] Let \(H\) be a maximal induced bipartite subgraph of \(W_{n}\). Then \(y_{1},\ldots,y_{n}\in H\) and \(H\) contains at most two vertices in \(\{x_{1},\ldots,x_{n}\}\). By the maximality of \(H\), we deduce that \(H\) must be the induced subgraph of \(W_{n}\) on \(\{y_{1},\ldots,y_{n}\}\cup\{x_{i},x_{j}\}\) for some \(i\neq j\). Hence, \(c(H)=n-1\). Thus, \(\operatorname{bc}(W_{n})=n-1\). By Lemma 2.4, \(\operatorname{depth}S/I(W_{n})^{(s)}\) is non-increasing. Furthermore, we have \[I(W_{n})^{(2)}:(x_{1}x_{2})=(x_{1}y_{1},x_{2}y_{2},x_{1}x_{2},y_{1}y_{2},x_{3},\ldots,x_{n}).\] Hence, \(\operatorname{depth}S/I(W_{n})^{(2)}\leq n-1\). It remains to prove that \(\operatorname{depth}S/I(W_{n})^{(s)}\geq n-1\) for all \(s\geq 2\). We prove by induction on \(n\) and \(s\) the following statement. Let \(I_{k}=I(K_{n})+(x_{1}y_{1},\ldots,x_{k}y_{k})\) and \(S_{k}=\operatorname{k}[x_{1},\ldots,x_{n},y_{1},\ldots,y_{k}]\). Then \(\operatorname{depth}S_{k}/I_{k}^{(s)}\geq k-1\) for all \(2\leq k\leq n\) and all \(s\geq 1\). Note that \(I_{k}=I(G_{k})\) where \(G_{k}=K_{n}\cup\{\{x_{i},y_{i}\}\mid i=1,\ldots,k\}\). The graph \(G_{k}\) is chordal with induced matching number equal to \(1\). Thus, by [15, Theorem 7.7], \(\operatorname{depth}S_{k}/I_{k}=k\). Since \(\mathfrak{m}_{k}\), the maximal homogeneous ideal of \(S_{k}\), is not an associated prime of \(I_{k}\), and hence not of \(I_{k}^{(s)}\), we have \(\operatorname{depth}S_{k}/I_{k}^{(s)}\geq 1\) for all \(s\) and all \(k\). Thus, we may assume that \(s\geq 2\) and \(n\geq k\geq 3\).
By Lemma 2.3, \[\operatorname{depth}S_{k}/I_{k}^{(s)}\in\{\operatorname{depth}(S_{k}/(I_{k}^{(s)},x_{k}y_{k})),\operatorname{depth}(S_{k}/(I_{k}^{(s)}:x_{k}y_{k}))\}.\] By Lemma 2.4, \(I_{k}^{(s)}:x_{k}y_{k}=I_{k}^{(s-1)}\). Thus, by induction, it suffices to prove that \[\operatorname{depth}S_{k}/(I_{k}^{(s)},x_{k}y_{k})\geq k-1.\] We have \(J=(I_{k}^{(s)},x_{k}y_{k})=(J,x_{k})\cap(J,y_{k})\). The conclusion follows from induction on \(k\) and Lemma 2.1. As mentioned in the Introduction, the inequality in Theorem 3.3 might be strict. We will now define a finer invariant of \(G\), called the restricted bipartite connectivity number of \(G\). Let \(H=H_{1}\cup\cdots\cup H_{c}\cup\{p_{1},\ldots,p_{t}\}\) be a maximal induced bipartite subgraph of \(G\) where \(H_{1},\ldots,H_{c}\) are connected components of \(H\) with at least one edge and \(p_{1},\ldots,p_{t}\) are isolated points. We say that \(\{p_{i_{1}},\ldots,p_{i_{u}}\}\) is clustered if there exists \(v\in V(G)\setminus V(H)\) such that the induced subgraph of \(G\) on \(\{v,p_{i_{1}},\ldots,p_{i_{u}}\}\) is a bouquet. Let \(\operatorname{bou}_{G}(H)\) be the smallest number \(b\) such that the set \(\{p_{1},\ldots,p_{t}\}\) can be clustered into \(b\) bouquets in \(G\). We call \(c^{\prime}(H)=c+\operatorname{bou}_{G}(H)\) the number of restricted connected components of \(H\). We then define \(\operatorname{bc}^{\prime}(G)=\min\{c^{\prime}(H)\mid H\in\mathcal{B}(G)\}\) the restricted bipartite connectivity number of \(G\). We will now verify Conjecture 1.4 for whisker graphs of complete graphs. **Theorem 3.5**.: _Let \(\mathbf{a}=(a_{1},\ldots,a_{n})\in\mathbb{N}^{n}\). Let \(W_{\mathbf{a}}\) be the graph obtained by gluing \(a_{i}\) leaves to the vertex \(i\) of a complete graph \(K_{n}\). Assume that \(a_{i}\geq 1\) for all \(i=1,\ldots,n\).
Then \(\operatorname{bc}^{\prime}(W_{\mathbf{a}})=n-1\) and_ \[\lim_{s\to\infty}\operatorname{depth}S/I(W_{\mathbf{a}})^{(s)}=n-1.\] Proof.: We may assume that \(a_{1}\geq a_{2}\geq\cdots\geq a_{n}\geq 1\). For simplicity, we assume that the vertex set of \(W_{\mathbf{a}}\) is \[V(W_{\mathbf{a}})=\{x_{1},\ldots,x_{n},y_{1,1},\ldots,y_{1,a_{1}},\ldots,y_{n,1},\ldots,y_{n,a_{n}}\},\] and the edge set is \[E(W_{\mathbf{a}})=\{\{x_{i},x_{j}\},\{x_{i},y_{i,\ell}\}\ |\ \text{ for all }i,j,\ell\text{ such that }1\leq i\neq j\leq n,1\leq\ell\leq a_{i}\}.\] For ease of reading, we divide the proof into several steps. **Step 1.**\(\operatorname{bc}^{\prime}(W_{\mathbf{a}})=n-1\). As in the proof of Proposition 3.4, we deduce that a maximal induced bipartite subgraph \(H\) of \(W_{\mathbf{a}}\) is the induced subgraph of \(W_{\mathbf{a}}\) on \(\{y_{u,\ell}\mid 1\leq u\leq n,1\leq\ell\leq a_{u}\}\cup\{x_{i},x_{j}\}\) for some \(i\neq j\). For such \(H\), we have \(c(H)=|\mathbf{a}|-(a_{i}+a_{j})+1\) but \(c^{\prime}(H)=n-1\), as \(\{y_{\ell,1},\ldots,y_{\ell,a_{\ell}}\}\) can be clustered into a bouquet in \(W_{\mathbf{a}}\) for all \(\ell=1,\ldots,n\). Thus, \(\operatorname{bc}(W_{\mathbf{a}})=a_{3}+\cdots+a_{n}+1\) and \(\operatorname{bc}^{\prime}(W_{\mathbf{a}})=n-1\). **Step 2.**\(\operatorname{depth}S/I(W_{\mathbf{a}})^{(s)}\geq n-1\) for all \(s\geq 1\) and all \(\mathbf{a}\) such that \(a_{i}\geq 1\) for \(i=1,\ldots,n\). If \(s=1\), then \(W_{\mathbf{a}}\) is a chordal graph with induced matching number \(1\); hence, by [2, Theorem 7.7], \(\operatorname{depth}S/I(W_{\mathbf{a}})=a_{2}+\cdots+a_{n}+1\). When \(a_{1}=\cdots=a_{n}=1\), the conclusion follows from Proposition 3.4. Thus, we may assume that \(s\geq 2\) and \(a_{1}\geq 2\). By induction, Lemma 2.3, and Lemma 2.4, it suffices to prove that \[\operatorname{depth}S/(I(W_{\mathbf{a}})^{(s)},x_{1}y_{1,a_{1}})\geq n-1.\] Let \(J=I(W_{\mathbf{a}})^{(s)}\). Then \((J,x_{1}y_{1,a_{1}})=(J,x_{1})\cap(J,y_{1,a_{1}})\).
Let \(\mathbf{a}^{\prime}=(a_{2},\ldots,a_{n})\) and let \(W_{\mathbf{a}^{\prime}}\) be the whisker graph obtained by gluing \(a_{i}\) leaves to the vertex \(i\) of the complete graph on \(\{2,\ldots,n\}\). We have \((J,x_{1})=(I(W_{\mathbf{a}^{\prime}})^{(s)},x_{1})\) and \((J,x_{1},y_{1,a_{1}})=(I(W_{\mathbf{a}^{\prime}})^{(s)},x_{1},y_{1,a_{1}})\). Thus, \[\operatorname{depth}S/(J,x_{1})=a_{1}+\operatorname{depth}R/I(W_{\mathbf{a}^{\prime}})^{(s)},\] \[\operatorname{depth}S/(J,x_{1},y_{1,a_{1}})=a_{1}-1+\operatorname{depth}R/I(W_{\mathbf{a}^{\prime}})^{(s)},\] where \(R=\operatorname{k}[x_{2},\ldots,x_{n},y_{2,1},\ldots,y_{2,a_{2}},\ldots,y_{n,1},\ldots,y_{n,a_{n}}]\). By induction, both terms are at least \(n-1\). Finally, \((J,y_{1,a_{1}})=(I(W_{\mathbf{a}^{\prime\prime}})^{(s)},y_{1,a_{1}})\) where \(\mathbf{a}^{\prime\prime}=(a_{1}-1,a_{2},\ldots,a_{n})\). Hence, \[\operatorname{depth}S/(J,y_{1,a_{1}})=\operatorname{depth}T/I(W_{\mathbf{a}^{\prime\prime}})^{(s)},\] where \(T=\operatorname{k}[x_{1},\ldots,x_{n},y_{1,1},\ldots,y_{1,a_{1}-1},\ldots,y_{n,1},\ldots,y_{n,a_{n}}]\). Thus, the conclusion of Step 2 follows from induction and Lemma 2.1. **Step 3.**\(\operatorname{depth}S/I(W_{\mathbf{a}})^{(s)}\leq n-1\) for all \(s\geq n\). By Lemma 2.2 and Lemma 2.4, it suffices to prove that \[\operatorname{depth}S/(I(W_{\mathbf{a}})^{(n)}:(x_{1}\cdots x_{n}))\leq n-1.\] Since \(W_{\mathbf{a}}\) is a chordal graph, by [1, Theorem 3.10], we deduce that \[J=I(W_{\mathbf{a}})^{(n)}:(x_{1}\cdots x_{n})=I(W_{\mathbf{a}})+(y_{1,1},\ldots,y_{1,a_{1}})\cdot(y_{2,1},\ldots,y_{2,a_{2}})\cdots(y_{n,1},\ldots,y_{n,a_{n}}).\] In particular, \(\Delta(J)\) has exactly \(n\) facets. The conclusion follows from [DDDGHL, Theorem 5.2]. **Remark 3.6**.: 1. The notion of maximal bipartite subgraphs of a graph has been studied by many researchers, as early as [E, M], who were interested in finding the maximum number of edges of a maximal bipartite subgraph of \(G\). 2.
In general, the problem of finding a maximum induced bipartite subgraph of a graph is NP-complete [LY]. Nonetheless, we do not know if the problem of computing the bipartite connectivity number or the restricted bipartite connectivity number is NP-complete. **Remark 3.7**.: 1. The Cohen-Macaulay property, or the depth of the edge ideal of a graph, might depend on the characteristic of the base field. For example, consider the following ideal in [V, Exercise 5.3.31] \[I=(x_{1}x_{3},x_{1}x_{4},x_{1}x_{7},x_{1}x_{10},x_{1}x_{11},x_{2}x_{4},x_{2}x_{5},x_{2}x_{8},x_{2}x_{10},x_{2}x_{11},\] \[x_{3}x_{5},x_{3}x_{6},x_{3}x_{8},x_{3}x_{11},x_{4}x_{6},x_{4}x_{9},x_{4}x_{11},\] \[x_{5}x_{7},x_{5}x_{9},x_{5}x_{11},x_{6}x_{8},x_{6}x_{9},x_{7}x_{9},x_{7}x_{10},x_{8}x_{10}).\] Then \[\operatorname{depth}S/I=\begin{cases}2&\text{ if }\operatorname{char}\operatorname{k}=2\\ 3&\text{ otherwise.}\end{cases}\] But \(\operatorname{depth}S/I^{(s)}=1\) for all \(s\geq 2\), regardless of the characteristic of the base field \(\operatorname{k}\). 2. By the result of the second author [T], the stable value of depth of powers of edge ideals of graphs does not depend on the characteristic of the base field. If Conjecture 1.4 holds, the stable value of depth of symbolic powers of edge ideals also does not depend on the characteristic of the base field. This is in contrast to the asymptotic behaviour of the regularity of (symbolic) powers of edge ideals, as [MV, Corollary 5.3] shows that the linearity constant of the regularity function of (symbolic) powers of edge ideals of graphs might depend on the characteristic of the base field. ## 4. Depth of symbolic powers of edge ideals of cycles In this section, we compute the depth of symbolic powers of edge ideals of cycles. The purpose of this is twofold. First, together with Proposition 3.4, this gives the first classes of non-bipartite graphs where one can compute explicitly the depth of symbolic powers of their edge ideals.
Second, this shows that the stabilization index of depth of symbolic powers of \(I(G)\) is tightly connected to the stabilization index of depth of powers of edge ideals of maximal induced bipartite subgraphs of \(G\). We fix the following notation. Let \(C_{n}\) be a cycle of length \(n\). For each \(i=1,\ldots,n-1\), we denote \(e_{i}=x_{i}x_{i+1}\). By the result of Simis, Vasconcelos, and Villarreal [SVV], \(G\) is bipartite if and only if \(I(G)^{(s)}=I(G)^{s}\) for all \(s\geq 1\). By [MTV, Theorem 3.10], we may assume that \(n=2k+1\geq 5\). Let \(\varphi(n,t)=\lceil\frac{n-t+1}{3}\rceil\). First, we prove **Lemma 4.1**.: _Assume that \(I=I(C_{n})\) and \(e_{i}=x_{i}x_{i+1}\) for all \(i=1,\ldots,n-1\). Then for all \(t\leq n-2\), we have_ \[\operatorname{depth}S/I^{(t)}\leq\operatorname{depth}S/(I^{(t)}:e_{2}\cdots e_{t-1})\leq\varphi(n,t).\] Proof.: By [GHOS, Theorem 3.4] and [MTV, Lemma 3.9], we may assume that \(n=2k+1\) and \(k+1\leq t\leq n-2=2k-1\). In this case, we have \[I^{(t)}=I^{t}+fI^{t-k-1},\] where \(f=x_{1}\cdots x_{n}\). Hence, \(I^{(t)}:e_{2}\cdots e_{t-1}=I^{t}:e_{2}\cdots e_{t-1}\). The conclusion then follows from Lemma 2.2 and [MTV, Lemma 3.9]. Also, we have **Lemma 4.2**.: _Let \(f=x_{1}\cdots x_{n}\). Then for all \(t\) such that \(k+1\leq t\leq n-2\),_ \[I^{(t)}:f=I^{t-k-1}.\] Proof.: Let \(\mathfrak{p}_{1},\ldots,\mathfrak{p}_{s}\) be the associated primes of \(I\). Then \[I^{(t)}=\mathfrak{p}_{1}^{t}\cap\cdots\cap\mathfrak{p}_{s}^{t}.\] Since \(\mathfrak{p}_{i}\) is generated by \(k+1\) variables for all \(i=1,\ldots,s\), we have \(\mathfrak{p}_{i}^{t}:f=\mathfrak{p}_{i}^{t-k-1}\). Hence, \(I^{(t)}:f=I^{(t-k-1)}=I^{t-k-1}\) since \(t\leq 2k-1\). We are now ready for the computation of the depth of symbolic powers of edge ideals of cycles. **Theorem 4.3**.: _Let \(C_{n}\) be a cycle on \(n=2k+1\) vertices.
Then for all \(t\geq 2\)_ \[\operatorname{depth}(S/I(C_{n})^{(t)})=\max(\varphi(n,t),1).\] Proof.: By [GHOS, Theorem 3.4] and [MTV, Theorem 3.10], it remains to consider the cases where \(k+1\leq t\leq 2k-1\). Let \(f=x_{1}\cdots x_{n}\). By Lemma 2.3, Lemma 4.2, Lemma 4.1, and [MTV, Theorem 3.10], it suffices to prove that \[\operatorname{depth}(S/(I^{(t)}+(f)))\geq\varphi(n,t). \tag{4.1}\] Write \(f=e_{1}f_{1}\) where \(f_{1}=x_{3}\cdots x_{n}\). We have \(I^{(t)}+(f)=(I^{(t)},e_{1})\cap(I^{(t)},f_{1})\). For each \(i=1,\ldots,k-1\), we can write \(f_{i}=e_{2i+1}f_{i+1}\). By repeated use of Lemma 2.1 and the fact that for any subgraph \(H\) of \(C_{n}\) we have \[I^{(t)}+I(H)+(f_{i})=(I^{(t)}+I(H)+(e_{2i+1}))\cap(I^{(t)}+I(H)+(f_{i+1})),\] it suffices to prove the following two claims **Claim 1**.: For any non-empty subgraph \(H\) of \(C_{n}\), we have \[\operatorname{depth}S/(I^{(t)}+I(H))\geq\varphi(n,t).\] **Claim 2**.: For any (possibly empty) subgraph \(H\) of \(C_{n}\), we have \[\operatorname{depth}(S/(I^{(t)}+I(H)+(x_{n-2}x_{n-1}x_{n})))\geq\varphi(n,t).\] By [GHOS, Theorem 3.4], for any non-empty subgraph \(H\) of \(C_{n}\) we have \(I^{(t)}+I(H)=I^{t}+I(H)\). Hence, the first claim follows from [MTV, Lemma 3.8]. For Claim 2, let \(J=I^{(t)}+I(H)+(x_{n-2}x_{n-1}x_{n})\) and \(e=x_{n-2}x_{n-1}\). Note that \(J+(e)\) is of the form \(I^{(t)}+I(H^{\prime\prime})\) for a non-empty subgraph \(H^{\prime\prime}\) of \(C_{n}\), and \(J:e=I(P_{n-1})^{t-1}+I(H^{\prime})+(x_{n})\) where \(H^{\prime}\) is a subgraph of \(P_{n-1}\). The claim follows from Lemma 2.3, Claim 1, and [MTV, Lemma 3.2]. The conclusion follows. ## Acknowledgments Tran Nam Trung is partially supported by NAFOSTED (Vietnam) under grant number 101.04-2023.36.
2306.09213
Stationarity and Fredholm Theory in Subextremal Kerr-de Sitter Spacetimes
In a recent paper, we proved that solutions to linear wave equations in a subextremal Kerr-de Sitter spacetime have asymptotic expansions in quasinormal modes up to a decay order given by the normally hyperbolic trapping, extending the results of Vasy (2013). One central ingredient in the argument was a new definition of quasinormal modes, where a non-standard choice of stationary Killing vector field had to be used in order for the Fredholm theory to be applicable. In this paper, we show that there is in fact a variety of allowed choices of stationary Killing vector fields. In particular, the horizon Killing vector fields work for the analysis, in which case one of the corresponding ergoregions is completely removed.
Oliver Petersen, AndrΓ‘s Vasy
2023-06-15T15:50:01Z
http://arxiv.org/abs/2306.09213v2
# Stationarity and Fredholm theory in subextremal Kerr-de Sitter spacetimes ###### Abstract. In a recent paper, we proved that solutions to linear wave equations in a subextremal Kerr-de Sitter spacetime have asymptotic expansions in quasinormal modes up to a decay order given by the normally hyperbolic trapping, extending the results of [14]. One central ingredient in the argument was a new definition of quasinormal modes, where a non-standard choice of stationary Killing vector field had to be used in order for the Fredholm theory to be applicable. In this paper, we show that there is in fact a variety of allowed choices of stationary Killing vector fields. In particular, the horizon Killing vector fields work for the analysis, in which case one of the corresponding ergoregions is completely removed. _2020 Mathematics Subject Classification._ 35L05, 35P25, 58J45, 83C30. _Key words and phrases._ Subextremal Kerr-de Sitter spacetime, resonances, quasinormal modes, radial points, normally hyperbolic trapping. Killing vector field \(\partial_{t}\). However, there is an ambiguity in what precise symmetry should describe the stationarity of the black hole. Indeed, \[c_{1}\partial_{t}+c_{2}\partial_{\phi}\] is a Killing vector field, for any constants \(c_{1},c_{2}\in\mathbb{R}\). Moreover, one can check that no such Killing vector field is timelike everywhere between the horizons (cf. Remark 1.7). In the Kerr spacetime, there is on the other hand a canonical choice of Killing vector field to describe the stationarity, namely \(\partial_{t}\), which is the only one that is timelike at large distances from the black hole. In the Kerr-de Sitter spacetime, there is no such analogue, and it is a priori not clear what Killing vector field should be modeling the stationarity.
The purpose of this paper is to illustrate that, in the Kerr-de Sitter spacetime, many natural properties are satisfied if we choose \[\mathrm{T}:=\partial_{t}+\frac{a}{r_{0}^{2}+a^{2}}\partial_{\phi}\] to be the stationary Killing vector field, where we may choose any \[r_{0}\in[r_{e},r_{c}],\] and where \(r_{e}\) is the radius of the event horizon and \(r_{c}\) is the radius of the cosmological horizon. Note that as \(\Lambda\to 0\), we have \(r_{c}\to\infty\), so in the limit, an allowed choice for \(\mathrm{T}\) is indeed the standard choice \(\partial_{t}\) in the Kerr spacetime. The main new observation in this paper is that there are no trapped lightlike geodesics with trajectories orthogonal to \(\mathrm{T}\). In fact, the geodesic flow of the lightlike geodesics with trajectories orthogonal to \(\mathrm{T}\) is very similar to the much easier case when \(a=0\) (where indeed \(\mathrm{T}=\partial_{t}\)). However, it was observed in [22, p. 486] that this is not the case if we consider the lightlike geodesics with trajectories orthogonal to \(\partial_{t}\) instead. Our results in this paper generalize the main results of [PVb], where the same statements were proven for \(\mathrm{T}\), when \(r_{0}\in(r_{e},r_{c})\) was the unique point such that \[\mu^{\prime}(r_{0})=0.\] Besides using the computations in [PVb], this paper relies heavily on the microlocal analysis developed in [22], in particular on the radial point estimates and the Fredholm theory for non-elliptic operators. For our application to wave equations, we rely on microlocal estimates near normally hyperbolic trapping in the sense of Wunsch and Zworski in [15], see also the improved results by Dyatlov in [16, 17, 18]. For more references on results related to this paper, we refer to the introductions in [PVb]. ### Kerr-de Sitter spacetimes The geometry of the Kerr-de Sitter spacetimes depends on a certain polynomial, given by \[\mu(r):=-\frac{\Lambda r^{4}}{3}+\left(1-\frac{\Lambda a^{2}}{3}\right)r^{2}-2mr+a^{2}. \tag{1}\] **Definition 1.1**.: Assume that \(\mu\) has four distinct real roots \[r_{-}<r_{C}<r_{e}<r_{c}.\] The manifold \[M:=\mathbb{R}_{t}\times(r_{e},r_{c})_{r}\times S^{2}_{\phi,\theta},\] with real analytic metric \[g =(r^{2}+a^{2}\cos^{2}(\theta))\left(\frac{\mathrm{d}r^{2}}{\mu(r)}+\frac{\mathrm{d}\theta^{2}}{c(\theta)}\right)\] \[\quad+\frac{c(\theta)\sin^{2}(\theta)}{b^{2}\left(r^{2}+a^{2}\cos^{2}(\theta)\right)}\left(a\,\mathrm{d}t-\left(r^{2}+a^{2}\right)\mathrm{d}\phi\right)^{2}\] \[\quad-\frac{\mu(r)}{b^{2}\left(r^{2}+a^{2}\cos^{2}(\theta)\right)}\left(\mathrm{d}t-a\sin^{2}(\theta)\mathrm{d}\phi\right)^{2},\] where \[b:=1+\frac{\Lambda a^{2}}{3},\quad c(\theta):=1+\frac{\Lambda a^{2}}{3}\cos^{2}(\theta),\] is called the domain of outer communication in a _subextremal Kerr-de Sitter spacetime_ (in Boyer-Lindquist coordinates). One easily verifies that this metric extends real analytically to the north and south poles \(\theta=0,\pi\). **Remark 1.2**.: Note that \(\partial_{t}\) and \(\partial_{\phi}\) are Killing vector fields of \(g\). The Boyer-Lindquist coordinates used above become singular at the roots of \(\mu(r)\). As a physical model of a rotating black hole in an expanding spacetime, however, the two largest roots of \(\mu(r)\) are supposed to point out the position of the event and cosmological horizons.
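As a quick numerical illustration of Definition 1.1, the following sketch (plain Python; the parameter values \(\Lambda=3\), \(a=0.1\), \(m=0.1\) are our own illustrative choice, not taken from the paper) brackets the real roots of the polynomial \(\mu\) in (1) by bisection:

```python
# Illustrative parameters (our choice, not from the paper): Lambda = 3, a = 0.1, m = 0.1.
Lam, a, m = 3.0, 0.1, 0.1

def mu(r):
    # Eq. (1): mu(r) = -Lambda r^4/3 + (1 - Lambda a^2/3) r^2 - 2 m r + a^2
    return -Lam * r**4 / 3 + (1 - Lam * a**2 / 3) * r**2 - 2 * m * r + a**2

def bisect(f, lo, hi):
    # simple bisection on a bracketed sign change of f
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Bracket sign changes of mu on a fine grid and refine them by bisection.
grid = [i / 1000 - 3 for i in range(6001)]        # r in [-3, 3], step 0.001
roots = sorted(bisect(mu, x, y) for x, y in zip(grid, grid[1:])
               if mu(x) * mu(y) < 0)
print(len(roots), roots)   # 4 roots: r_- < r_C < r_e < r_c
```

For these parameter values one finds \(r_{-}\approx-1.09\), \(r_{C}\approx 0.09\), \(r_{e}\approx 0.12\) and \(r_{c}\approx 0.88\), so \(\mu\) has four distinct real roots and Definition 1.1 applies.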
### Kerr-de Sitter spacetimes The geometry of the Kerr-de Sitter spacetimes depends on a certain polynomial, given by \[\mu(r):=-\frac{\Lambda r^{4}}{3}+\left(1-\frac{\Lambda a^{3}}{3}\right)r^{2}- 2mr+a^{2}. \tag{1}\] **Definition 1.1**.: Assume that \(\mu\) has four distinct real roots \[r_{-}<r_{C}<r_{e}<r_{e}.\] The manifold \[M:=\mathbb{R}_{t}\times(r_{e},r_{c})_{r}\times S^{2}_{\phi,\theta},\] with real analytic metric \[g =(r^{2}+a^{2}\cos^{2}(\theta))\left(\frac{\mathrm{d}r^{2}}{\mu(r)}+ \frac{\mathrm{d}\theta^{2}}{c(\theta)}\right)\] \[\quad+\frac{c(\theta)\sin^{2}(\theta)}{b^{2}\left(r^{2}+a^{2}\cos ^{2}(\theta)\right)}\left(\mathrm{d}t-\left(r^{2}+a^{2}\right)\mathrm{d} \phi\right)^{2}\] \[\quad-\frac{\mu(r)}{b^{2}\left(r^{2}+a^{2}\cos^{2}(\theta)\right) }\left(\mathrm{d}t-a\sin^{2}(\theta)\mathrm{d}\phi\right)^{2},\] where \[b:=1+\frac{\Lambda a^{2}}{3},\quad c(\theta):=1+\frac{\Lambda a^{2}}{3}\cos^{2 }(\theta),\] is called the domain of outer communication in a _subextremal Kerr-de Sitter spacetime_ (in Boyer-Lindquist coordinates). One easily verifies that this metric extends real analytically to the north and south poles \(\theta=0,\pi\). **Remark 1.2**.: Note that \(\partial_{t}\) and \(\partial_{\phi}\) are Killing vector fields of \(g\). The Boyer-Lindquist coordinates used above become singular at the roots of \(\mu(r)\). As a physical model of a rotating black hole in an expanding spacetime, however, the two largest roots of \(\mu(r)\) are supposed to point out the position of the event and cosmological horizons. 
We extend the coordinates over the future event horizon and future cosmological horizon with the following coordinate change: \[t_{*} :=t-\Phi(r), \tag{2}\] \[\phi_{*} :=\phi-\Psi(r),\] where \(\Phi\) and \(\Psi\) satisfy \[\Phi^{\prime}(r) =b\frac{r^{2}+a^{2}}{\mu(r)}f(r),\] \[\Psi^{\prime}(r) =b\frac{a}{\mu(r)}f(r)\] and \[f:(r_{e}-\delta,r_{c}+\delta)\to\mathbb{R},\] is a real analytic function, for a small enough \(\delta>0\) such that \[f(r_{e})=-1,\quad f(r_{c})=1. \tag{3}\] The new form of the metric is \[g_{*} =(r^{2}+a^{2}\cos^{2}(\theta))\frac{1-f(r)^{2}}{\mu(r)}\mathrm{d}r ^{2} \tag{4}\] \[\qquad-\frac{2}{b}f(r)(\mathrm{d}t_{*}-a\sin^{2}(\theta)\mathrm{ d}\phi_{*})\mathrm{d}r\] \[\qquad-\frac{\mu(r)}{b^{2}\left(r^{2}+a^{2}\cos^{2}(\theta) \right)}\left(\mathrm{d}t_{*}-a\sin^{2}(\theta)\mathrm{d}\phi_{*}\right)^{2}\] \[\qquad+\frac{c(\theta)\sin^{2}(\theta)}{b^{2}\left(r^{2}+a^{2} \cos^{2}(\theta)\right)}\left(a\mathrm{d}t_{*}-\left(r^{2}+a^{2}\right) \mathrm{d}\phi_{*}\right)^{2}\] \[\qquad+(r^{2}+a^{2}\cos^{2}(\theta))\frac{\mathrm{d}\theta^{2}}{c (\theta)},\] which extends real analytically to \[M_{*}:=\mathbb{R}_{t_{*}}\times(r_{e}-\delta,r_{c}+\delta)_{r}\times S^{2}_{ \phi_{*},\theta}.\] Now the coordinates are not anymore singular at \(r_{e}\) and \(r_{c}\) and we get two new real analytic lightlike hypersurfaces \[\mathcal{H}_{e}^{+} :=\mathbb{R}_{t_{*}}\times\{r_{e}\}\times S^{2}_{\phi_{*},\theta},\] \[\mathcal{H}_{e}^{+} :=\mathbb{R}_{t_{*}}\times\{r_{c}\}\times S^{2}_{\phi_{*},\theta},\] which are called the _future event horizon_ and the _future cosmological horizon_, respectively. **Remark 1.3**.: The Killing vector fields \(\partial_{t}\) and \(\partial_{\phi}\) extend to Killing vector fields \(\partial_{t_{*}}\) and \(\partial_{\phi_{*}}\) over the horizons. 
### The first main result The main novel observation in this paper is that there are no trapped lightlike geodesics in the domain of outer communication of a subextremal Kerr-de Sitter spacetime, with trajectories orthogonal to a certain Killing vector field. **Theorem 1.4** (No orthogonal trapping).: _Let_ \[r_{0}\in[r_{e},r_{c}].\] _All lightlike geodesics in the domain of outer communication \((M,g)\) of a subextremal Kerr-de Sitter spacetime, with trajectories orthogonal to the Killing vector field_ \[\mathrm{T}:=\partial_{t}+\frac{a}{r_{0}^{2}+a^{2}}\partial_{\phi},\] _eventually leave the region_ \[\mathbb{R}_{t_{*}}\times[r_{e}+\epsilon,r_{c}-\epsilon]_{r}\times S^{2}\] _for any \(\epsilon>0\). Moreover, there is an open subset \(\mathcal{U}\subset(r_{e},r_{c})\), with \(r_{0}\in\overline{\mathcal{U}}\) and such that no such lightlike geodesic intersects_ \[\mathbb{R}_{t_{*}}\times\mathcal{U}\times S^{2}. \tag{5}\] **Remark 1.5**.: A special case of Theorem 1.4, namely the case when \(r_{0}\) is the unique \(r_{0}\in(r_{e},r_{c})\) such that \[\mu^{\prime}(r_{0})=0,\] was proven in [PVb, Lem. 2.4] and was one of the two key observations in that paper. **Remark 1.6** (The second assertion in Theorem 1.4).: The second assertion in Theorem 1.4 is an immediate consequence of the fact that \(\mathrm{T}\) is timelike in a region of the form (5). To check this, we compute \[g_{*}(\mathrm{T},\mathrm{T})|_{r=r_{0}}=-\frac{\mu(r_{0})(r_{0}^{2}+a^{2}\cos^{2}(\theta))}{b^{2}(r_{0}^{2}+a^{2})^{2}}, \tag{6}\] which is negative if \(r_{0}\in(r_{e},r_{c})\) and vanishes if \(r_{0}=r_{e}\) or \(r_{c}\). If \(r_{0}\in(r_{e},r_{c})\), it follows that \(\mathrm{T}\) is _timelike_ at \[\mathbb{R}_{t_{*}}\times\{r_{0}\}\times S^{2}, \tag{7}\] and it is therefore timelike in an open neighborhood of (7). In this region, there can therefore be no lightlike geodesics with trajectories orthogonal to \(\mathrm{T}\), proving the second assertion in Theorem 1.4 in this special case.
If instead \(r_{0}=r_{e}\) or \(r_{c}\), then \(\mathrm{T}\) is lightlike at (7), and we cannot immediately deduce that \(\mathrm{T}\) will be causal in a neighborhood. We therefore compute \[\partial_{r}g_{*}(\mathrm{T},\mathrm{T})|_{r=r_{e/c}}=\frac{-\mu^{\prime}(r_{e/c})(r_{e/c}^{2}+a^{2}\cos^{2}(\theta))}{b^{2}(r_{e/c}^{2}+a^{2})^{2}}.\] If now \(r_{0}=r_{e}\), then the right hand side is negative, which implies that \(\mathrm{T}\) is timelike in a subset of the form \[\mathbb{R}_{t_{*}}\times(r_{e},r_{e}+\gamma)\times S^{2},\] for some \(\gamma>0\). Similarly, if \(r_{0}=r_{c}\), then \(\mathrm{T}\) is timelike in a subset of the form \[\mathbb{R}_{t_{*}}\times(r_{c}-\gamma,r_{c})\times S^{2},\] for some \(\gamma>0\). This completes the proof of the second assertion in Theorem 1.4. **Remark 1.7** (The new ergoregions).: The computations in the previous remark raise the question whether we can choose \(r_{0}\in[r_{e},r_{c}]\) such that \(\mathrm{T}\) is _timelike_ everywhere in the domain of outer communication in the Kerr-de Sitter spacetime, and lightlike at the horizons. Let us compute the Lorentzian length of \(\mathrm{T}\) at the horizons: \[g(\mathrm{T},\mathrm{T})|_{r=r_{e}}=\frac{a^{2}c(\theta)\sin^{2}(\theta)}{b^{2}\left(r_{e}^{2}+a^{2}\cos^{2}(\theta)\right)}\left(\frac{r_{0}^{2}-r_{e}^{2}}{r_{0}^{2}+a^{2}}\right)^{2},\] \[g(\mathrm{T},\mathrm{T})|_{r=r_{c}}=\frac{a^{2}c(\theta)\sin^{2}(\theta)}{b^{2}\left(r_{c}^{2}+a^{2}\cos^{2}(\theta)\right)}\left(\frac{r_{0}^{2}-r_{c}^{2}}{r_{0}^{2}+a^{2}}\right)^{2}.\] In the Schwarzschild-de Sitter spacetime, when \(a=0\), both expressions actually vanish at the horizons and one can easily check that \(\mathrm{T}\) is timelike at any \(r\in(r_{e},r_{c})\). However, if \(a\neq 0\), then these expressions only vanish if \(r_{0}=r_{e}\) or \(r_{0}=r_{c}\), respectively. If \(r_{0}\in(r_{e},r_{c})\), then both values are positive. This shows that \(\mathrm{T}\) is spacelike at least at one of the horizons.
By analogy with the classical terminology, we call the regions in the domain of outer communication, where \(\mathrm{T}\) is spacelike, the _ergoregions_ with respect to \(\mathrm{T}\). These computations show that there are two ergoregions if \(r_{0}\in(r_{e},r_{c})\), which are non-intersecting by the results in Remark 1.6, and only one ergoregion if \(r_{0}=r_{e}\) or \(r_{c}\). As a comparison, with the classical choice \(\partial_{t}\) as the stationary Killing vector field, the two ergoregions intersect for large \(a\), which is undesirable for the analysis. ### The second and third main results We begin with our assumptions: **Assumption 1.8**.: * Let \((M_{*},g_{*})\) be a subextremal Kerr-de Sitter spacetime, extended over the future event horizon and the future cosmological horizon, where \[M_{*}:=\mathbb{R}_{t_{*}}\times(r_{e}-\delta,r_{c}+\delta)_{r}\times S^{2}_{\phi_{*},\theta},\] with \(\delta>0\) small enough so that the boundary hypersurfaces \[\mathbb{R}_{t_{*}}\times\{r_{e}-\delta\}\times S^{2}_{\phi_{*},\theta},\quad\mathbb{R}_{t_{*}}\times\{r_{c}+\delta\}\times S^{2}_{\phi_{*},\theta}\] are _spacelike_, and with \(f\) chosen as in [PVb, Rmk. 1.1] so that the hypersurfaces \[\{t_{*}=c\}\times(r_{e}-\delta,r_{c}+\delta)_{r}\times S^{2}_{\phi_{*},\theta}\] are _spacelike_, for all \(c\in\mathbb{R}\). * Let \(A\) be a smooth complex function on \(M_{*}\) such that \[\partial_{t_{*}}A=\partial_{\phi_{*}}A=0.\] We let \(P\) be the linear wave operator given by \[P=\square+A,\] where \(\square\) denotes the d'Alembert operator on scalar functions on \(M_{*}\). For any subset \(\mathcal{U}\subset M_{*}\), we use the notation \(C^{\infty}(\mathcal{U})\) for the smooth complex functions on \(\mathcal{U}\). As in Theorem 1.4, we let \[\mathrm{T}:=\partial_{t_{*}}+\frac{a}{r_{0}^{2}+a^{2}}\partial_{\phi_{*}},\] for any fixed \(r_{0}\in[r_{e},r_{c}]\). #### 1.3.1.
Quasinormal modes One of the novelties in [PVb] was a new definition of quasinormal modes with respect to the Killing vector field \(\mathrm{T}\), with \(r_{0}\in(r_{e},r_{c})\) uniquely determined by the condition \(\mu^{\prime}(r_{0})=0\), instead of the standard choice of Killing vector field \(\partial_{t_{*}}\). In this paper, we show that the analogous result holds as in [PVb], if we choose to define our quasinormal modes with respect to \(\mathrm{T}\), for any choice of \(r_{0}\in[r_{e},r_{c}]\). **Definition 1.9** (Quasinormal mode).: Let \[r_{0}\in[r_{e},r_{c}].\] A complex function \[u\in C^{\infty}(M_{*})\] is called a _quasinormal mode_, with _quasinormal mode frequency_\(\sigma\in\mathbb{C}\), if \[\mathrm{T}u=-i\sigma u\] and \[Pu=0.\] **Remark 1.10**.: Quasinormal modes and mode frequencies are also called resonant states and resonances. **Remark 1.11**.: Note that we can write any quasinormal mode as \[u=e^{-i\sigma t_{*}}v_{\sigma},\] where \[\mathrm{T}v_{\sigma}=0.\] Our second main result is the following: **Theorem 1.12** (Discrete set of quasinormal modes).: _Let \((M_{*},g_{*})\) and \(P\) be as in Assumption 1.8. Then there is a discrete set \(\mathcal{A}\subset\mathbb{C}\) such that_ \[\sigma\in\mathcal{A}\] _if and only if there is a quasinormal mode_ \[u\in C^{\infty}(M_{*})\] _with mode frequency \(\sigma\). Moreover, for each \(\sigma\in\mathcal{A}\), the space of quasinormal modes is finite dimensional. If the coefficients of \(P\) are real analytic, then the quasinormal modes are real analytic._ This result generalizes [PVb, Thm. 1.5] by allowing us to choose any \(r_{0}\in[r_{e},r_{c}]\) in the definition of quasinormal modes, instead of only the unique one such that \(\mu^{\prime}(r_{0})=0\). For more comments on the statement of Theorem 1.12, we therefore refer to the discussion following [PVb, Thm. 1.5]. #### 1.3.2. 
Asymptotic expansion The last main result concerns the asymptotics of solutions to linear wave equations when \(t_{*}\to\infty\). We formulate the statement using the standard Sobolev spaces on \[M_{*}=\mathbb{R}_{t_{*}}\times(r_{e}-\delta,r_{c}+\delta)_{r}\times S^{2}_{ \phi_{*},\theta},\] i.e. the ones associated with the Riemannian metric \[\mathrm{d}t_{*}^{2}+\mathrm{d}r^{2}+g_{S^{2}},\] where \(g_{S^{2}}\) is the round metric on the \(2\)-sphere. For non-negative integers \(s\), a Sobolev norm (unique up to equivalence) is given by \[\|u\|_{\dot{H}^{s}}^{2}=\sum_{i+j+k\leq s}\left\|\partial_{t_{*}}^{i}\partial_ {r}^{j}\left(\Delta_{S^{2}}+1\right)^{k/2}u\right\|_{L^{2}(M_{*})}^{2}.\] The bar over \(H\) corresponds to Hormander's notation for extendible distributions, see [10]. We have the following: **Theorem 1.13** (The asymptotic expansion of waves).: _Let \((M_{*},g_{*})\) and \(P\) be as in Assumption 1.8 and let \(t_{0}\in\mathbb{R}\). There are \(C,\delta>0\) such that for \(0<\epsilon<C\) and_ \[s>\frac{1}{2}+\beta\epsilon,\] _where_ \[\beta:=2b\max_{r\in\{r_{e},r_{c}\}}\left(\frac{r^{2}+a^{2}}{|\mu^{\prime}(r)| }\right),\] _any solution to_ \[Pu=f\] _with \(f\in e^{-\epsilon t_{*}}\bar{H}^{s-1+\delta}(M_{*})\) and with \(\mathrm{supp}(u)\cup\mathrm{supp}(f)\subset\{t_{*}>t_{0}\}\) has an asymptotic expansion_ \[u-\sum_{j=1}^{N}\sum_{k=0}^{k_{j}}t_{*}^{k}e^{-i\sigma_{j}t_{*}}v_{jk}\in e^{ -\epsilon t_{*}}\bar{H}^{s}(M_{*}),\] _where \(\sigma_{1},\dots,\sigma_{N}\) are the (finitely many) quasinormal mode frequencies with_ \[\mathrm{Im}\,\sigma_{j}>-\epsilon\] _and \(k_{j}\) is their multiplicity, and where \(e^{-i\sigma_{j}t_{*}}v_{jk}\) are the \(C^{\infty}\) (generalized) quasinormal modes with frequency \(\sigma_{j}\) which are real analytic if the coefficients of \(P\) are such._ Analogously to above, this result generalizes [4, Thm. 
1.6] by allowing us to choose any \(r_{0}\in[r_{e},r_{c}]\) in the definition of quasinormal modes, instead of only the unique one such that \(\mu^{\prime}(r_{0})=0\). For more comments on the statement of Theorem 1.13, we therefore refer to the discussion following [4, Thm. 1.6]. Theorem 1.13 is naturally combined with mode stability results, i.e. statements saying that under suitable assumptions there cannot be any modes with \[\mathrm{Im}\,(\sigma)\geq 0,\] except certain geometric modes where \(\sigma=0\). Indeed, in view of Theorem 1.13, mode stability would imply exponential decay to the zero mode. One such statement, for the standard d'Alembert wave operator \(\square\), was recently proven by Hintz in [10]. Combining [10, Thm. 1.1] with Theorem 1.13 gives the following decay statement for a certain range of Kerr-de Sitter parameters \(\Lambda,a\) and \(m\):

**Corollary 1.14**.: _Let \((M_{*},g_{*})\) be as in Assumption 1.8 and let \(t_{0}\in\mathbb{R}\). Assume that_ \[\frac{|a|}{m}<1.\] _Then there is a \(\gamma>0\) such that if \(\Lambda m^{2}<\gamma\), then there are \(C,\delta>0\) such that for \(0<\epsilon<C\) and_ \[s>\frac{1}{2}+\beta\epsilon,\] _where_ \[\beta:=2b\max_{r\in\{r_{e},r_{c}\}}\left(\frac{r^{2}+a^{2}}{|\mu^{\prime}(r)|}\right),\] _any solution to_ \[\Box u=f\] _with \(f\in e^{-\epsilon t_{*}}\bar{H}^{s-1+\delta}(M_{*})\) and with \(\operatorname{supp}(u)\cup\operatorname{supp}(f)\subset\{t_{*}>t_{0}\}\) satisfies_ \[u-c\in e^{-\epsilon t_{*}}\bar{H}^{s}(M_{*}),\] _for some constant \(c\in\mathbb{C}\)._

## 2. The T-orthogonal lightlike geodesics

The goal of this section is to prove Theorem 1.4. For this, let \[r_{0}\in[r_{e},r_{c}]\] and \[\operatorname{T}=\partial_{t}+\frac{a}{r_{0}^{2}+a^{2}}\partial_{\phi}\] throughout the section. When computing properties of lightlike geodesics, we are going to use the Hamiltonian formalism.
The Hamiltonian for geodesics is the metric \(G\) dual to \(g\), considered as a function on the cotangent bundle of \(M\): \[G:T^{*}M \to\mathbb{R}\] \[\xi \mapsto G(\xi,\xi).\] The dual metric \(G\) of \(g\) is given in Boyer-Lindquist coordinates by \[(r^{2}+a^{2}\cos^{2}(\theta))G(\xi,\xi) =\mu(r)\xi_{r}^{2}+\frac{b^{2}}{c(\theta)\sin^{2}(\theta)}\left(a\sin^{2}(\theta)\xi_{t}+\xi_{\phi}\right)^{2} \tag{8}\] \[\qquad-\frac{b^{2}}{\mu(r)}\left((r^{2}+a^{2})\xi_{t}+a\xi_{\phi}\right)^{2}+c(\theta)\xi_{\theta}^{2},\] where \(\xi=(\xi_{t},\xi_{r},\xi_{\phi},\xi_{\theta})\) are the dual coordinates to \((t,r,\phi,\theta)\). Since the bicharacteristic flow is invariant under conformal changes, we study from now on the Hamiltonian \[\operatorname{q}(\xi):=(r^{2}+a^{2}\cos^{2}(\theta))G(\xi,\xi),\] given in (8). Since \(\partial_{t}\) and \(\partial_{\phi}\) are Killing vector fields, the dual variables \(\xi_{t}\) and \(\xi_{\phi}\) will be constant along the Hamiltonian flow. Note that a vector \(v\in TM\) is orthogonal to T if and only if the metric dual covector \[\xi:=g(v,\cdot)\] satisfies \[0=g(v,T)=\xi(T)=\xi_{t}+\frac{a}{r_{0}^{2}+a^{2}}\xi_{\phi}. \tag{9}\] We may now prove Theorem 1.4:

Proof of Theorem 1.4.: The main step in the proof is to show a convexity property for the radial function \(r\) along the bicharacteristic flow in the domain of outer communication. More precisely, we want to prove that \[\mathrm{H}_{\mathrm{q}}r=0\quad\Rightarrow\quad\mathrm{sgn}\left(\mathrm{H}_{\mathrm{q}}^{2}r\right)=\mathrm{sgn}(r-r_{0}) \tag{10}\] at all points in the characteristic set in the domain of outer communication. Here, the Hamiltonian vector field is given by \[\mathrm{H}_{\mathrm{q}}:=\sum_{j=1}^{4}(\partial_{\xi_{j}}\mathrm{q})\partial_{j}-(\partial_{j}\mathrm{q})\partial_{\xi_{j}}.\] We compute \[\mathrm{H}_{\mathrm{q}}r=2\mu(r)\xi_{r}.\] Assuming that \(\mathrm{H}_{\mathrm{q}}r=0\) at some \(r\in(r_{e},r_{c})\), we conclude that \(\xi_{r}=0\) there.
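The two facts just used, that \(\xi_{t}\) and \(\xi_{\phi}\) are conserved and that \(\mathrm{H}_{\mathrm{q}}r=2\mu(r)\xi_{r}\), can be verified symbolically from the formula (8) for \(\mathrm{q}\). The following sketch does this with sympy, keeping \(\mu\) and \(c\) as abstract functions; it is a sanity check, not part of the proof.

```python
import sympy as sp

# Dual coordinates (xi_t, xi_r, xi_phi, xi_theta); mu and c are kept as
# abstract functions of r and theta, exactly as in the text.
r, th = sp.symbols('r theta', real=True)
xt, xr, xph, xth = sp.symbols('xi_t xi_r xi_phi xi_theta', real=True)
a, b = sp.symbols('a b', positive=True)
mu = sp.Function('mu')(r)
c = sp.Function('c')(th)

# The conformally rescaled Hamiltonian q(xi) of (8)
q = (mu * xr**2
     + b**2 / (c * sp.sin(th)**2) * (a * sp.sin(th)**2 * xt + xph)**2
     - b**2 / mu * ((r**2 + a**2) * xt + a * xph)**2
     + c * xth**2)

def H_q(f):
    # Hamiltonian vector field of q applied to f; the terms involving
    # derivatives in t and phi drop out since q is independent of t and phi
    return sum(sp.diff(q, xi) * sp.diff(f, x) - sp.diff(q, x) * sp.diff(f, xi)
               for x, xi in [(r, xr), (th, xth)])

# xi_t and xi_phi are conserved, and H_q r = 2 mu(r) xi_r, as claimed
print(H_q(xt), H_q(xph))                   # both vanish
print(sp.simplify(H_q(r) - 2 * mu * xr))   # 0
```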
The second derivative along the Hamiltonian flow at such a point is given by \[\begin{split}\mathrm{H}_{\mathrm{q}}^{2}r|_{\xi_{r}=0}&=2\mu(r)\mathrm{H}_{\mathrm{q}}\xi_{r}|_{\xi_{r}=0}\\ &=-2\mu(r)\partial_{r}\left(-\frac{b^{2}}{\mu(r)}\left((r^{2}+a^{2})\xi_{t}+a\xi_{\phi}\right)^{2}\right)\\ &=2\mu(r)b^{2}\partial_{r}\frac{\left((r^{2}+a^{2})\xi_{t}+a\xi_{\phi}\right)^{2}}{\mu(r)}.\end{split} \tag{11}\] Recall that \(\mu(r)>0\) at all points in the domain of outer communication. By [4, Thm. 3.2 (a)], the function \[F(r):=\frac{\left((r^{2}+a^{2})\xi_{t}+a\xi_{\phi}\right)^{2}}{\mu(r)}\] either vanishes at \(r_{e}\) or \(r_{c}\) and has no critical point in \((r_{e},r_{c})\) or has precisely one critical point in \((r_{e},r_{c})\). Since \(F\) is a non-negative function and vanishes at \(r_{0}\) by (9), \(F\) can have no other critical points than \(r_{0}\). It follows that \(F^{\prime}(r)\), and therefore the right hand side of (11), has the signs claimed in (10). We now define an escape function \[\mathcal{E}:=e^{C(r-r_{0})^{2}}\mathrm{H}_{\mathrm{q}}r\] for any \(C>0\) and note that \[\mathrm{H}_{\mathrm{q}}\mathcal{E}=e^{C(r-r_{0})^{2}}\left(2C(r-r_{0})\left(\mathrm{H}_{\mathrm{q}}r\right)^{2}+\mathrm{H}_{\mathrm{q}}^{2}r\right).\] Since the characteristic set is disjoint from \(\{r=r_{0}\}\), by Remark 1.6, and since \(\mathrm{H}_{\mathrm{q}}^{2}r\) has the same sign as \(r-r_{0}\) whenever \(\mathrm{H}_{\mathrm{q}}r\) vanishes, by the implication (10), and since the Hamiltonian flow is invariant under translations in \(\mathbb{R}_{t_{*}}\), we can choose the constant \(C\) large enough to make sure that \(\mathrm{H}_{\mathrm{q}}\mathcal{E}\) is nowhere vanishing on \[\mathbb{R}_{t_{*}}\times[r_{e}+\epsilon,r_{c}-\epsilon]\times S^{2}\] and has the same sign as \(r-r_{0}\). Hence \(\mathcal{E}\) gives an escape function for all bicharacteristics satisfying (9). This finishes the proof of Theorem 1.4.

## 3. Fredholm theory

The purpose of this section is to prove Theorem 1.12 and Theorem 1.13. For this, let again \[r_{0}\in[r_{e},r_{c}]\] and \[\mathrm{T}=\partial_{t_{*}}+\frac{a}{r_{0}^{2}+a^{2}}\partial_{\phi_{*}}\] throughout the section. The theory of wave equations in [21] is based on first Fourier transforming the wave operator in the variable \(t_{*}\). We thus want to consider the induced operator \[P_{\sigma}v:=e^{i\sigma t_{*}}P(e^{-i\sigma t_{*}}v),\] for a fixed \(\sigma\in\mathbb{C}\), where \[\mathrm{T}v=0.\] This latter condition can be interpreted as \(v\) being only dependent on a certain set of coordinates, cf. [43, Eq. (11)], but this viewpoint is not necessary for the discussion here. Since \(\mathrm{T}\) is a Killing vector field, and the coefficients of \(P\) are invariant under \(\mathrm{T}\) (cf. Assumption 1.8), we can think of \(P_{\sigma}\) as a differential operator \[P_{\sigma}:C^{\infty}(L_{*})\to C^{\infty}(L_{*}),\] where \[L_{*}:=t_{*}^{-1}(0)\subset M_{*}\] is a spacelike hypersurface. Now, since \(\sigma\in\mathbb{C}\) is fixed, the bicharacteristic flow of \(P_{\sigma}\) is canonically identified with the lightlike geodesics in \(M_{*}\) with trajectories orthogonal to \(\mathrm{T}\). Thus, we already know by Theorem 1.4 that the bicharacteristic flow of \(P_{\sigma}\) is non-trapping in the domain of outer communication. Following [21], we want to show that \(P_{\sigma}\) is in fact a Fredholm operator between appropriate Sobolev spaces. For any \(s\in\mathbb{R}\), we write \[\bar{H}^{s}:=\bar{H}^{s}\left(L_{*}\right),\] for the space of extendible Sobolev distributions on \(L_{*}\), in the sense of Hörmander [10], of degree \(s\). The following is an improvement over [43, Lem. 2.1] and [21, Thm.
1.1], where we allow any choice of \(r_{0}\in[r_{e},r_{c}]\) in the definition of \(\mathrm{T}\):

**Theorem 3.1**.: _Define_ \[\beta:=2b\max_{r\in\{r_{e},r_{c}\}}\left(\frac{r^{2}+a^{2}}{|\mu^{\prime}(r)|}\right),\] _and let \(s\geq\frac{1}{2}\). The operator_ \[P_{\sigma}:\{u\in\bar{H}^{s}\mid P_{\sigma}u\in\bar{H}^{s-1}\}\to\bar{H}^{s-1}\] _is an analytic family of Fredholm operators of index \(0\) for all \(\sigma\in\mathbb{C}\) such that_ \[\operatorname{Im}\sigma>\frac{1-2s}{2\beta}.\] _Moreover, \(P_{\sigma}\) is invertible for \(\operatorname{Im}\left(\sigma\right)\gg 1\)._

The proof of Theorem 3.1 relies on the Fredholm theory for non-elliptic operators developed in [21], which requires a refined understanding of the behavior of the bicharacteristics. The number \(\beta>0\) in Theorem 3.1 is related to the surface gravity of the horizons and corresponds to a threshold in the radial point estimates in [21]. Note that there is a canonical identification \[\operatorname{Char}(P_{\sigma})\subset\operatorname{Char}(P)\subset T^{*}M_{*}.\] We may therefore define \[\Sigma_{\pm}=\{\xi\in\operatorname{Char}(P_{\sigma})\mid\pm G_{*}(\mathrm{d}t_{*},\xi)>0\},\] and note that \(\Sigma_{-}\cap\Sigma_{+}=\emptyset\), which in particular implies that \(\Sigma_{-}\) and \(\Sigma_{+}\) are invariant under the bicharacteristic flow. Moreover, since \(L_{*}\) is spacelike by Assumption 1.8, it follows that \(\mathrm{d}t_{*}\) is timelike along \(L_{*}\) and hence \[\operatorname{Char}(P_{\sigma})=\Sigma_{-}\cup\Sigma_{+}.\] Analogously to [43, Lem. 2.5] and [21, Sec. 6], we have:

**Proposition 3.2**.: _Let_ \[\xi_{r}:=\xi(\partial_{r}),\] _for any \(\xi\in\operatorname{Char}(P_{\sigma})\). The conormal bundles \(N^{*}\{r=r_{e}\}\) and \(N^{*}\{r=r_{c}\}\) are contained in the characteristic set of \(P_{\sigma}\) and the bicharacteristic flow is radial in the generalized sense as in [20] at these.
All other bicharacteristics of \(P_{\sigma}\) in \(\Sigma_{+}\) either start at the fiber infinity of_ \[N^{*}\{r=r_{e}\}\cap\{\xi_{r}>0\}\] _and end at \(r=r_{e}-\delta\), or start at the fiber infinity of_ \[N^{*}\{r=r_{c}\}\cap\{\xi_{r}<0\}\] _and end at \(r=r_{c}+\delta\). All other bicharacteristics of \(P_{\sigma}\) in \(\Sigma_{-}\) either start at \(r=r_{e}-\delta\) and end at the fiber infinity of_ \[N^{*}\{r=r_{e}\}\cap\{\xi_{r}<0\}\] _or start at \(r=r_{c}+\delta\) and end at the fiber infinity of_ \[N^{*}\{r=r_{c}\}\cap\{\xi_{r}>0\}.\] _Moreover, the fiber infinities of_ \[N^{*}\{r=r_{e}\}\cap\{\pm\xi_{r}>0\}\text{ and }N^{*}\{r=r_{c}\}\cap\{\mp\xi_{r}>0\}\] _are generalized normal source/sink manifolds of the bicharacteristic flow, respectively, in the sense of [20]. Furthermore, if \(r_{0}=r_{e}\), then no bicharacteristics between the fiber infinities of \(N^{*}\{r=r_{e}\}\) and \(\{r=r_{e}-\delta\}\) intersect the domain of outer communication \(M\), and \(N^{*}\{r=r_{e}\}\cap\{\pm\xi_{r}>0\}\) is a stable radial point source/sink, in the sense of [20]. If \(r_{0}=r_{c}\), then the corresponding statement holds at the cosmological horizon._

Proof.: As explained above, Theorem 1.4 implies that no bicharacteristics of \(P_{\sigma}\) are trapped in \(L_{*}\cap M\). Moreover, it implies that no bicharacteristics of \(P_{\sigma}\) can approach the event horizon to the past/future and the cosmological horizon to the future/past. As described in the proof of [21], combining Theorem 1.4 with the semiclassical considerations in [20] near the horizons, the first two assertions in Proposition 3.2 in fact follow. However, in the proof of [21], more details were provided for the case when \(r_{0}\in(r_{e},r_{c})\) with \(\mu^{\prime}(r_{0})=0\). The exact same argument as written there goes through, line by line, in the case when \(r_{0}\in(r_{e},r_{c})\), using Theorem 1.4. Only the cases when \(r_{0}=r_{e}\) or \(r_{0}=r_{c}\) require some extra care.
Let us only discuss the case when \(r_{0}=r_{e}\), since the other case is similar. For the bicharacteristics starting or ending at \(r=r_{c}+\delta\), the same analysis as in the proof of [21, Lem. 2.5] applies. The bicharacteristic flow at the event horizon is, however, slightly different. Though a similar computation was already done in the proof of [21, Thm. 1.7], let us explicitly compute the Hamiltonian vector field at \(N^{*}\{r=r_{e}\}\) in the case \(r_{0}=r_{e}\). The principal symbol \(\mathrm{p}_{\sigma}\) of \(P_{\sigma}\) is given by \[(r^{2}+a^{2}\cos^{2}(\theta))\mathrm{p}_{\sigma}(\xi)\\ =\mu(r)\xi_{r}^{2}-2abf(r)\frac{r_{e}^{2}-r^{2}}{r_{e}^{2}+a^{2}}\xi_{\phi_{*}}\xi_{r}+c(\theta)\xi_{\theta}^{2}\\ +\left(\frac{b^{2}}{c(\theta)\sin^{2}(\theta)}\left(\frac{r_{e}^{2}+a^{2}\cos^{2}(\theta)}{r_{e}^{2}+a^{2}}\right)^{2}-b^{2}\frac{1-f(r)^{2}}{\mu(r)}\left(a\frac{r_{e}^{2}-r^{2}}{r_{e}^{2}+a^{2}}\right)^{2}\right)\!\!\xi_{\phi_{*}}^{2}. \tag{12}\] It immediately follows that \(N^{*}\{r=r_{e}\}\subset\operatorname{Char}(P_{\sigma})\) and the Hamiltonian vector field at \(N^{*}\{r=r_{e}\}\) is given by \[\operatorname{H}_{\operatorname{p}_{\sigma}}=(r^{2}+a^{2}\cos^{2}(\theta))^{-1}\mu^{\prime}(r_{e})\xi_{r}^{2}\partial_{\xi_{r}}.\] It follows that the bicharacteristic flow at the conormal bundle of \(\{r=r_{e}\}\) is exactly _radial_, as opposed to radial in the generalized sense. The stability of the source/sink can be shown, for example, as in the proof of [4, Lem. 2.5]. The fact that no bicharacteristics between \(\{r=r_{e}-\delta\}\) and the fiber infinities of \(N^{*}\{r=r_{e}\}\) intersect the domain of outer communication \(M\) is an immediate consequence of Remark 1.6. The analogous computations at \(r=r_{c}\) when \(r_{0}=r_{c}\) complete the proof.

Proof of Theorem 3.1.: By Proposition 3.2, the dynamics of the bicharacteristics of \(P_{\sigma}\) in \(L_{*}\) is precisely analogous to that in [11, Section 6.1].
The proof of Theorem 3.1 therefore follows the same lines as the proof of [11, Thm. 1.4].

Proof of Theorem 1.12.: We again consider the analytic Fredholm family \[P_{\sigma}:\{u\in\bar{H}^{s}\mid P_{\sigma}u\in\bar{H}^{s-1}\}\to\bar{H}^{s-1}\] from Theorem 3.1. A standard energy estimate shows that \(P_{\sigma}\) is invertible for \(\operatorname{Im}\left(\sigma\right)\gg 1\). Analytic Fredholm theory therefore implies that the inverse \(P_{\sigma}^{-1}\) extends meromorphically in \(\sigma\) to the open set \[\Omega_{s}:=\left\{\operatorname{Im}\left(\sigma\right)>\frac{1-2s}{2\beta}\right\}.\] In particular, \(P_{\sigma}\) is invertible everywhere in \(\Omega_{s}\) apart from a discrete set. Moreover, since \(P_{\sigma}\) has index zero, \(P_{\sigma}\) is invertible if and only if the kernel of \(P_{\sigma}\) is trivial. Since \[\mathbb{C}=\bigcup_{s\in\mathbb{R}}\Omega_{s},\] we conclude that \(\ker(P_{\sigma})\) is non-trivial precisely on a discrete set \(\mathcal{A}\subset\mathbb{C}\). Following the arguments in the proof of [4, Thm. 1.2] line by line, using Theorem 3.1 in place of [11, Thm. 1.1], it follows that the elements in \(\ker(P_{\sigma})\) are real analytic if the coefficients of \(P\) are real analytic.

Proof of Theorem 1.13.: We again consider the analytic Fredholm family \[P_{\sigma}:\{u\in\bar{H}^{s}\mid P_{\sigma}u\in\bar{H}^{s-1}\}\to\bar{H}^{s-1}\] from Theorem 3.1. The relevant semiclassical trapping corresponds to the trapping of bicharacteristics of the full wave operator \(P\). Since [4, Thm. 3.2] implies that the trapping of bicharacteristics of \(P\) is normally hyperbolic, the proof of the semiclassical estimates and consequently the proof of Theorem 1.13 proceeds completely analogously to the proof of [11, Thm. 1.4].

## Acknowledgements

This paper is dedicated to Christian Bär's 60th birthday. We would like to thank him for all his inspiring work. The first author gratefully acknowledges the support from the Swedish Research Council under grant number 2021-04269.
The second author gratefully acknowledges support from the National Science Foundation under grant number DMS-1953987.
2304.07447
Unavoidable emergent biaxiality in chiral molecular-colloidal hybrid liquid crystals
Chiral nematic or cholesteric liquid crystals (LCs) are mesophases with long-ranged orientational order featuring a quasi-layered periodicity imparted by a helical configuration but lacking positional order. Doping molecular cholesteric LCs with thin colloidal rods with a large length-to-width ratio or disks with a large diameter-to-thickness ratio adds another level of complexity to the system because of the interplay between weak surface boundary conditions and bulk-based elastic distortions around the particle-LC interface. By using colloidal disks and rods with different geometric shapes and boundary conditions, we demonstrate that these anisotropic colloidal inclusions exhibit biaxial orientational probability distributions, where they tend to orient with the long rod axes and disk normals perpendicular to the helix axis, thus imparting strong local biaxiality on the hybrid cholesteric LC structure. Unlike the situation in achiral hybrid molecular-colloidal LCs, where biaxial order emerges only at modest to high volume fractions of the anisotropic colloidal particles, the orientational probability distribution of colloidal inclusions immersed in chiral nematic hosts is unavoidably biaxial even at vanishingly low particle volume fractions. In addition, the colloidal inclusions induce local biaxiality in the molecular orientational order of the LC host medium, which enhances the weak biaxiality of the LC in a chiral nematic phase coming from the symmetry breaking caused by the presence of the helical axis. With analytical modeling and computer simulations based on minimizing the Landau de Gennes free energy of the host LC around the colloids, we explain our experimental findings and conclude that the biaxial order of chiral molecular-colloidal LCs is strongly enhanced as compared to both achiral molecular-colloidal LCs and molecular cholesteric LCs and is rather unavoidable.
Jin-Sheng Wu, Marina Torres Lazaro, Souvik Ghosh, Haridas Mundoor, Henricus H. Wensink, Ivan I. Smalyukh
2023-04-15T01:47:39Z
http://arxiv.org/abs/2304.07447v1
# Unavoidable emergent biaxiality in chiral molecular-colloidal hybrid liquid crystals

###### Abstract

Chiral nematic or cholesteric liquid crystals (LCs) are chiral mesophases with long-ranged orientational order featuring a quasi-layered periodicity imparted by a helical director configuration but lacking positional order. Doping molecular cholesteric LCs with thin colloidal rods with a large length-to-width ratio or disks with a large diameter-to-thickness ratio adds another level of complexity to the system because of the interplay between weak surface anchoring boundary conditions and bulk-based elastic distortions around the particle-LC interface. By using colloidal disks and rods with different geometric shapes and boundary conditions, we demonstrate that these anisotropic colloidal inclusions exhibit biaxial orientational probability distributions, where they have tendencies to orient with the long rod axes and disk normals perpendicular to the helix axis, thus imparting strong local biaxiality on the hybrid cholesteric LC structure. Unlike the situation in non-chiral hybrid molecular-colloidal LCs, where biaxial order emerges only at modest to high volume fractions of the anisotropic colloidal particles, above a uniaxial-biaxial transition concentration, the orientational probability distribution of colloidal inclusions immersed in chiral nematic hosts is unavoidably biaxial even at vanishingly low particle volume fractions. In addition, the colloidal inclusions induce local biaxiality in the molecular orientational order of the LC host medium, which enhances the weak biaxial order of the LC in a chiral nematic phase coming from the symmetry breaking caused by the presence of the helical axis.
With the help of analytical modeling and computer simulations based on minimizing the Landau de Gennes free energy of the host LC around the colloidal inclusions, we explain our experimental findings and conclude that the biaxial order of chiral molecular-colloidal LCs is strongly enhanced as compared to both achiral molecular-colloidal LCs and molecular cholesteric LCs and is rather unavoidable.

## I Introduction

Since the experimental discovery of chiral nematic liquid crystals (LCs) over 150 years ago [1; 2], LC mesophases featuring chirality and long-range orientational order have been the focus of many research studies. The fundamental studies of geometry and topology of chiral nematic LCs as model systems provide extensive insights into physics principles associated with experimentally less accessible systems like particle physics or cosmology [3; 4; 5; 6; 7; 8; 9; 10; 11; 12], in addition to their technological applications in electro-optics and displays. On the other hand, biaxial nematic mesophases have been highly sought-after in soft matter systems since their first theoretical consideration in 1970 [13]. However, even in a soft-matter system with strongly biaxial building blocks such as brick-shaped molecules, biaxiality was experimentally elusive and hard to unambiguously demonstrate in equilibrium states. Recent reports of the experimental discovery of biaxial nematic order include observations in micellar and molecular LCs formed by amphiphilic and bent-core molecules, respectively [14; 15], and also colloidal dispersions of highly anisotropic particles immersed in molecular LC hosts, so-called hybrid molecular-colloidal nematics [16; 17; 18]. The interplay between chirality and biaxiality in orientational order has been intensively studied for LC systems [19; 20; 21; 22; 23; 24; 25; 26; 27; 28].
It has been concluded that cholesteric twisted alignment and biaxial order of LC molecules amplify each other and that a chiral twist configuration cannot be observed without building blocks featuring a certain degree of biaxiality in their orientational distributions at the molecular level. However, for purely molecular systems, the chirality-enhanced biaxiality of the molecular distribution was predicted and experimentally found to be rather weak [19; 20; 21; 22; 23; 24; 25; 26], scaling as \((qL_{m})^{2}\) according to the prediction by Priest and Lubensky for single-component molecular LCs [19] (here \(q=2\pi/p\), \(p\) is the helical pitch of the chiral nematic and \(L_{m}\) the molecular length). To date, to the best of our knowledge, there are no experimental or theoretical considerations of how the biaxiality of the orientational distribution of anisotropic colloidal particles could interplay with the chirality of the nematic host in hybrid molecular-colloidal LC systems. In this work, we demonstrate an unavoidably biaxial orientation probability distribution for uniaxial colloidal particles dispersed in a weakly chiral molecular host, which is rather unexpected. We report strongly enhanced biaxial order in the orientational distribution probability for colloidal rods scaling as \((qL_{\rm c})^{2}\), where the length of the colloidal particles \(L_{\rm c}\) is in the micron range, more than 3 orders of magnitude larger than that of LC molecules, albeit still an order of magnitude or so smaller than the pitch \(p\). The geometry of the cholesteric LC is described by three non-polar, orthogonal director fields [Fig. 1]: molecular director field \(\mathbf{\hat{n}}=-\mathbf{\hat{n}}\) representing the local average molecular alignment, the helical axis field \(\mathbf{\hat{\chi}}=-\mathbf{\hat{\chi}}\) along which \(\mathbf{\hat{n}}\) rotates, and the third field \(\mathbf{\hat{\tau}}=\pm\mathbf{\hat{n}}\times\mathbf{\hat{\chi}}\) [29; 30].
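For concreteness, the ideal helical frame \((\mathbf{\hat{n}},\mathbf{\hat{\chi}},\mathbf{\hat{\tau}})\) described above can be written down explicitly. The sketch below uses a hypothetical parametrization with \(\mathbf{\hat{\chi}}\) along \(z\), and checks that the triad stays orthonormal and that \(\mathbf{\hat{n}}\) flips sign at half pitch (consistent with the identification \(\mathbf{\hat{n}}=-\mathbf{\hat{n}}\)) and returns to itself over one pitch \(p\).

```python
import numpy as np

# Hypothetical parametrization of the ideal cholesteric frame, with the
# helix axis chi along z and the director n rotating with q = 2*pi/p.
p = 5.0                    # helical pitch, arbitrary units
q = 2.0 * np.pi / p

def frame(z):
    n   = np.array([np.cos(q * z), np.sin(q * z), 0.0])  # director n
    chi = np.array([0.0, 0.0, 1.0])                      # helix axis chi
    tau = np.cross(n, chi)                               # third axis tau
    return n, chi, tau

for z in np.linspace(0.0, p, 7):
    n, chi, tau = frame(z)
    # the triad stays orthonormal at every height z
    assert abs(np.dot(n, chi)) < 1e-12 and abs(np.dot(n, tau)) < 1e-12
    assert abs(np.linalg.norm(tau) - 1.0) < 1e-12

# n flips sign after half a pitch and returns to itself after one full pitch
n0 = frame(0.0)[0]
print(np.allclose(frame(p / 2.0)[0], -n0), np.allclose(frame(p)[0], n0))
```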
Here, the helicoidal configuration with a mutually perpendicular molecular frame \((\mathbf{\hat{n}},\mathbf{\hat{\chi}},\mathbf{\hat{\tau}})\) and helical pitch \(p\) is hardly perturbed by the introduction of thin colloidal disks or rods in view of their low concentration and weak surface anchoring boundary conditions. The colloidal particles are uniaxial on their own with high aspect ratios, and their orientations are well-controlled by pre-designed boundary conditions. The biaxiality of the colloidal orientational distribution is found to exceed values known for biaxiality on the molecular scale in cholesteric LCs and in nematic molecular-colloidal hybrid systems, despite the low colloidal concentration and weak chirality of the molecular host. In cases where the colloids align along the \(\mathbf{\hat{\tau}}\) axis, the biaxiality is found to be more pronounced than when they are aligned along the molecular director \(\mathbf{\hat{n}}\). Furthermore, the molecular biaxiality of the LC host medium is further boosted by surface anchoring-induced distortions at the edges of the colloidal particles. In contrast to the orientational distribution of colloidal inclusions in nematic hybrid molecular-colloidal LCs, in which biaxial order emerges only at modest to high volume fractions of anisotropic colloidal particles [17], the orientational probability distribution of colloidal inclusions in chiral nematic hosts is unavoidably biaxial even at very low colloidal volume fractions. In turn, colloidal inclusions impart local biaxiality onto the molecular orientational distributions of the LC host medium, which enhances the weak biaxial order of the LC in a chiral nematic phase due to the symmetry breaking caused by the presence of the helical axis. With the help of computer simulations based on minimizing the Landau de Gennes free energy of the LC host around the colloidal inclusions [Fig.
2], we explain our experimental findings and conclude that the biaxial order of chiral molecular-colloidal LCs is enhanced compared to that of both nematic molecular-colloidal and cholesteric molecular counterparts. We demonstrate that the interplay between chirality and biaxiality in hybrid molecular-colloidal LCs is stronger than that in purely molecular or colloidal systems and that the biaxial symmetry of the orientational distributions of the anisotropic colloids is a universal feature. Finally, we discuss how our findings may allow for expanding the use of chiral molecular-colloidal LCs as model systems in studies of nonabelian defect lines and topological solitons hosted by states of matter with high-dimensional order parameter spaces.

## II Methods and techniques

### Synthesis of colloidal disks and rods

Disk or rod-shaped \(\beta-\)NaYF\({}_{4}\):Yb/Er particles are synthesized following the hydrothermal synthesis methods described in detail elsewhere [17; 18; 31; 32; 33]. Precursors and solvents used for the synthesis of colloidal particles are of analytical grade and used without additional purifications, and they are bought from Sigma Aldrich if not specified otherwise. To synthesize nanodisks, 0.7 g of sodium hydroxide (purchased from Alfa Aesar) is dissolved in 10 ml of deionized water and then added with 5 ml of oxalic acid solution (2 g, 19.2 mmol) at room temperature to obtain a transparent solution. Under vigorous stirring, we then add 5 ml of sodium fluoride solution (202 mg, 4.8 mmol) to the mixture. After 15 minutes of stirring, 1.1 ml of Y(NO\({}_{3}\))\({}_{3}\) (0.88 mmol), 0.35 ml of Yb(NO\({}_{3}\))\({}_{3}\) and 0.05 ml of Er(NO\({}_{3}\))\({}_{3}\) are added into the mixture while the stirring continues for another 20 minutes at room temperature. Subsequently, the solution is transferred to a 40-ml Teflon chamber (Col-Int. Tech.) and heated to and kept at 200 \({}^{\circ}\)C for 12 hours (h).
Figure 1: (a) Helical structure of a chiral LC with helical pitch length \(p\), with gray ellipsoids representing LC molecules and colored axes depicting the orthogonal molecular frame: LC director \(\mathbf{\hat{n}}\) (red), helical axis \(\mathbf{\hat{\chi}}\) (green), and the third axis \(\mathbf{\hat{\tau}}\) (blue). (b)-(c) Visualizations of a colloidal disk (b) and rod (c) immersed in a chiral LC at their equilibrium orientations. Colloids are colored in gray, and the yellow contours mark a director deviation of 0.67\({}^{\circ}\) (b) and 0.3\({}^{\circ}\) (c), respectively, of the numerically-relaxed LC structures from the ideal helical state indicated by the colored double arrows. For all simulations the anchoring at the colloid surface is homeotropic with strength \(W_{0}=10^{-4}\) Jm\({}^{-2}\).

The mixture is then cooled down naturally to room temperature, and the particles precipitated at the bottom are collected by centrifugation, rinsed with deionized water multiple times, and finally dispersed in 10 ml of deionized water. Colloidal rods are prepared using a similar protocol: 1.2 g of NaOH is dissolved in 5 ml of deionized water and mixed with 7 ml of ethanol and 20 ml of oleic acid under stirring, followed by adding 8 ml of NaF (1 M), 950 \(\mu\)l of Y(NO\({}_{3}\))\({}_{3}\) (0.5 M), 225 \(\mu\)l of Yb(NO\({}_{3}\))\({}_{3}\) (0.2 M), and 50 \(\mu\)l of Er(NO\({}_{3}\))\({}_{3}\) (0.2 M) and stirring for 20 minutes. The obtained white viscous mixture is transferred into a 50 ml Teflon chamber, kept at 190 \({}^{\circ}\)C for 24 h, and then cooled down to room temperature. The particles deposited at the bottom of the Teflon chamber are collected and washed with ethanol and deionized water multiple times and finally dispersed in cyclohexane. In some cases, silica microrods synthesized following an emulsion-templated wet-chemical approach [34] are also used.
To synthesize them, 1 g of polyvinylpyrrolidone (PVP, molecular weight 40000) is dissolved in 10 ml of 1-pentanol, followed by the addition of 950 \(\mu\)l of absolute ethanol (Decon labs), 280 \(\mu\)l of deionized water, 100 \(\mu\)l of sodium citrate solution (0.18 M), and 130 \(\mu\)l of ammonia solution (28%). The bottle is shaken vigorously using a vortex mixer after each addition. Then, 100 \(\mu\)l of tetraethyl orthosilicate (TEOS, 98%) is added under agitation. The bottle is incubated at 25 \({}^{\circ}\)C for the next 8 h. The solution becomes milky white after the reaction, and it is centrifuged at 6000 revolutions per minute (RPM) for 10 minutes to separate the as-synthesized rods. The precipitated rods are then washed two times with water followed by another two rounds of washing with ethanol at 3000 RPM for 5 minutes. Finally, to improve the monodispersity and to remove other lightweight impurities, the rods are centrifuged at 500 RPM for 30 minutes and dispersed in ethanol, with the procedure repeated two more times.

### Surface functionalization of the colloids

Homeotropic surface anchoring boundary conditions for the director of 5CB (pentylcyanobiphenyl or 4-cyano-4'-pentylbiphenyl) molecules on the \(\beta\)-NaYF\({}_{4}\):Yb/Er disk surfaces are controlled through surface functionalization with a thin layer of silica and polyethylene glycol. First, 5 ml of hydrogen peroxide (H\({}_{2}\)O\({}_{2}\)) is added to 1 ml of colloidal disk dispersion in deionized water. Then, under vigorous mechanical agitation, 100 \(\mu\)l of nitric acid is added drop by drop into the solution. After 12 h of agitation, disks are separated from the liquid by centrifugation and transferred into 1 ml of ethanol. The colloidal dispersion is then mixed with 75 mg of polyvinyl pyrrolidone (molecular weight 40,000) in 4 ml of ethanol and kept under continuous mechanical agitation for another 24 h.
The particles are collected and redispersed in 5 ml of ethanol, before the addition of 200 \(\mu\)l of ammonia solution and 6 \(\mu\)l of tetraethyl orthosilicate under mechanical agitation that lasts 12 h. Disks are collected, washed with ethanol and deionized water, and redispersed in 4 ml of ethanol. The pH value of the mixture is adjusted to 12 by adding ammonia solution (28% in water). Then, under mechanical agitation at 35\({}^{\circ}\)C, we add 35 mg of silane-terminated polyethylene glycol (molecular weight 5,000, dissolved in 1 ml of ethanol at 50\({}^{\circ}\)C) to the solution. After another 12 h of agitation, the surface-functionalized disks are again collected, washed with ethanol and water, and dispersed in 1 ml of ethanol. As for the hydrothermal-synthesized rods, the surface chemical treatment not only provides the desired anchoring preference but also controls the cylinder aspect ratio. For this, 4 ml of the nanorod dispersion is added with 200 \(\mu\)l of HCl in 2 ml of water and kept stirred overnight. The nanorods are then transferred from organic to aqueous phases. The nanorods are collected by centrifugation, washed with deionized water and ethanol three times, dispersed in deionized water, and then finally re-dispersed in ethanol. The process of etching with acid and redispersion is repeated two more times, with HCl treatments of 12 hours and 3 hours, respectively. The aspect ratio of the nanorods is increased during acid treatment to an average value of \(L_{\rm c}/D_{\rm c}\approx 60\). Similarly, the emulsion-templated rods are slowly etched in a mild basic condition [35] with 0.5 mM NaOH for 24 h, followed by drying at 80\({}^{\circ}\)C for another 4 h. After this, the functionalization of silica rods is done by adding 100 \(\mu\)l of perfluorocotyltriethoxysilane (TCI America) to 0.9 mL ethanol dispersion of the silica rods. The mixture is kept at room temperature for 3 h before being washed and redispersed three times in ethanol. 
After vacuum-drying inside a desiccator and heating at 60 \({}^{\circ}\)C for 1 h, the microrods are immersed in a perfluorocarbon liquid (Fluorinert FC-70, Alfa Aesar) and kept at 60 \({}^{\circ}\)C for 1 h before being cooled down to room temperature and redispersed into ethanol for storage. The fusion of perfluorocarbon oil onto the perfluorosilane-functionalized rods results in a fully covered and stable slippery surface layer, giving the desired boundary conditions.

### Colloidal particle dispersion in chiral molecular LC

A small amount of the left-handed chiral dopant cholesterol pelargonate is added into molecular 5CB (Frinton Labs and Chengzhi Yonghua Display Materials Co. Ltd). To obtain the equilibrium pitch \(p\) of the molecular chiral mixture, the weight fraction of the used chiral additive is roughly estimated by \(c_{d}=\frac{1}{6.25p}\). The actual pitch is later revealed using optical microscopy by observing the periodicity of defect lines in Grandjean-Cano wedge cells [36]. The surface-functionalized particles are then dispersed into such a chiral molecular LC. In a typical experiment, 20 \(\mu\)l of colloidal dispersion in ethanol is mixed with 20 \(\mu\)l of the molecular LC. The mixture is then heated to 75\({}^{\circ}\)C and kept for 2 h to completely evaporate the organic solvent. A well-dispersed colloidal-molecular hybrid LC is usually obtained after quenching back to room temperature under mechanical agitation [37; 38; 39]. Additional centrifugation can be carried out to remove the particle aggregation formed during the isotropic to chiral nematic phase transition of the molecular LC. Hybrid LCs containing the colloidal dispersion are infiltrated into glass cells with gap thickness typically chosen to be between \(p/2\) and \(10p\), which is experimentally set using Mylar films or silica spheres.
To achieve unidirectional planar boundary conditions for 5CB molecules, cell substrates are coated with 1 wt.% aqueous polyvinyl alcohol and rubbed unidirectionally. Typically, the geometry and planar boundary conditions of the cell give a sample with its helical axis \(\hat{\chi}\) perpendicular to the glass substrate and with the helical twist of the cholesteric host LC in compliance with the designed boundary conditions at the confining glass surfaces.

### Optical microscopy and characterization of colloidal orientations

We use different optical microscopy methods to visualize the colloidal orientations inside the hybrid LC, among which are three-photon excitation fluorescence polarizing microscopy (3PEFPM), photon-upconverting confocal microscopy, polarizing optical microscopy and phase contrast microscopy. Using 3PEFPM, optical imaging of director structures of the molecular host medium is performed using a multimodal 3-dimensional (3D) nonlinear imaging system built around a confocal system FV300 (Olympus) and an inverted microscope (Olympus IX-81) [38; 40]. The 3D imaging of the \(\beta\)-NaYF\({}_{4}\):Yb/Er particles designed to exhibit upconversion luminescence is performed with the same setup when the colloidal dispersions are excited with laser light at 980 nm; this photon-upconversion-based imaging of colloidal particles minimizes the background signal from the molecular LC, making such a technique ideal for our study. A 100\(\times\) objective (Olympus UPlanFL, numerical aperture 1.4) and a 980-nm pulsed output from a Ti:Sapphire oscillator (80 MHz, Coherent, Chameleon ultra) are utilized, along with a set of Galvano mirrors on the optical path to achieve sufficient positional accuracy while scanning the sample horizontally. In addition, the vertical repositioning is achieved by a stepper motor on which the objective could be adjusted to focus at the desired sample depth, enabling 3D scanning with high accuracy.
Luminescence signals are epi-collected using the same objective before being sent through a pinhole and detected by a photomultiplier tube. The data obtained from several scanning planes are combined into a 3D TIFF image to be analyzed at a later time. The phase contrast images are taken using a 60\(\times\) objective (Olympus UPlanFL N, variable numerical aperture 0.65-1.25), mounted on another microscope system (Olympus IX-83), at various vertical positions controlled by a motorized sample stage. The colloidal orientations, representing the normal direction of disks or the long axis of rods, are analyzed on the basis of two-dimensional (2D) slices of a 3D sample using ImageJ software (freeware from the National Institutes of Health, [41]), with the error in measured colloidal angles being about \(\pm 1^{\circ}\). The ensuing statistical data are transferred to Matlab software for visualization, as well as for further analysis. The color thresholds of the images are carefully adjusted to avoid the interference of colloids out of focus. From the 3D stacks of images, the slice plane perpendicular to the helical axis \(\hat{\chi}\) gives the azimuthal orientational distribution (\(\varphi\)), whereas the vertical slice plane reveals the polar distribution (\(\theta\)). Since the colloidal orientations are highly confined, as shown by the high value of uniaxial order \(S_{\rm cc}\), we assume that the two distributions are independent so that the overall distribution can be written in factorized form \(f(\varphi,\theta)=f(\varphi)f(\theta)\). For the same reason, we ignore the effect of the projection from the 3D volume to the slice planes. After the analysis of particles by ImageJ, average azimuthal colloidal orientations are calculated for the data obtained in each \(\mathbf{\hat{n}}-\hat{\tau}\) slice plane and plotted against the sample depth (\(z\)) position of the cross-sectional plane, revealing the helical twist of the colloidal axes.
The corresponding helical pitch \(p\) of each 3D volume is subsequently calculated from the slope of the linear dependence of the azimuthal angle on the vertical position (\(d\varphi/dz=q=360^{\circ}/p\)) and is in agreement with the initially designed value mentioned above, confirming the undisturbed molecular helical pitch at low colloidal concentrations. Finally, the colloidal orientation distribution is visualized in the molecular director frame. The azimuthal angle in the molecular frame is calculated by subtracting the molecular twist from the measured colloidal orientations, with \(\delta=\varphi-qz\) representing the fluctuation of colloidal orientation around that of a perfect helix. The non-orientable property of the colloidal axis (\(\mathbf{\hat{u}}\) and \(-\mathbf{\hat{u}}\) are equivalent) enables us to map the fluctuation angles, via the identification \(\delta\equiv\delta+\pi\), into, for example, a [-90\({}^{\circ}\),90\({}^{\circ}\)] range. Histograms of angular probability distribution with 5\({}^{\circ}\) bin width are calculated for each fluctuation angle, and a Gaussian fit \(e^{-\sigma\text{angle}^{2}}\) is performed on each distribution, with \(\sigma\) being the fitting parameter later used to quantify the peak width. The choice of the fitting equation is justified by the analytical prediction of the energy dependence on deviation angles (to the leading order), which is detailed in the Results section. In the case of narrow orientational distributions, the visualization is cropped to a smaller angle range after the calculation is performed in the full [-90\({}^{\circ}\),90\({}^{\circ}\)] range. In the case of planar rods and the longer homeotropic rods, it is impractical to do the re-slicing given the limited number of images taken, and the distributions found within achiral nematic LCs are adopted instead, as we expect no difference between the horizontal and vertical distributions in such a condition with no biaxiality.
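The pitch extraction described above — a linear fit of the average azimuthal angle against the depth position, with \(d\varphi/dz=q=360^{\circ}/p\), followed by mapping the residual fluctuations into a [-90\({}^{\circ}\),90\({}^{\circ}\)] window — can be sketched numerically. The synthetic data below (a 30 \(\mu\)m pitch with 2\({}^{\circ}\) of angular noise) are illustrative assumptions, not measured values.

```python
import numpy as np

# Synthetic depth positions (um) and average colloidal azimuthal angles (deg)
# mimicking a helix with pitch p = 30 um plus thermal orientational noise.
rng = np.random.default_rng(0)
p_true = 30.0                                # um, assumed pitch
z = np.linspace(0.0, 60.0, 61)               # sample depths, um
q = 360.0 / p_true                           # twist rate, deg/um
phi = q * z + rng.normal(0.0, 2.0, z.size)   # measured angles with noise

# Linear fit phi(z) = slope*z + phi0; the slope estimates the twist rate q.
slope, intercept = np.polyfit(z, phi, 1)
p_fit = 360.0 / slope                        # recovered helical pitch, um

# Fluctuations about the helix, folded into [-90, 90) deg using the
# non-orientability of the colloidal axis (delta and delta+180 equivalent).
delta = (phi - slope * z - intercept + 90.0) % 180.0 - 90.0
print(f"fitted pitch: {p_fit:.2f} um, rms fluctuation: {delta.std():.2f} deg")
```

The same slope-based estimate applies regardless of handedness; for a left-handed sample the fitted twist rate would simply carry the opposite sign.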
The same histograms are subsequently utilized in the computation of the colloidal orientation order parameters, which is summarized in Results.

### Computer simulation of perturbed order of the molecular LC host around the colloidal particles

Computer simulations are carried out to study the interplay between molecular LC order near the colloidal surface and the colloidal orientation. The simulations are based on minimizing the mean-field Landau-de Gennes free energy for the molecular LC host [5; 17; 42; 43; 44]. We consider a thermotropic bulk free energy density describing the isotropic-nematic transition of LCs complemented with elastic contributions associated with LC director distortions occurring in the bulk volume of the LC: \[f_{\rm bulk}^{\rm LC} = \frac{A}{2}\mathbf{Q}_{ij}^{\rm(m)}\mathbf{Q}_{ji}^{\rm(m)}+\frac{B}{3}\mathbf{Q}_{ij}^{\rm(m)}\mathbf{Q}_{jk}^{\rm(m)}\mathbf{Q}_{ki}^{\rm(m)}+\frac{C}{4}(\mathbf{Q}_{ij}^{\rm(m)}\mathbf{Q}_{ji}^{\rm(m)})^{2} \tag{1}\] \[+ \frac{L_{1}}{2}\left(\frac{\partial\mathbf{Q}_{ij}^{\rm(m)}}{\partial x_{k}}\right)^{2}+\frac{L_{2}}{2}\frac{\partial\mathbf{Q}_{ij}^{\rm(m)}}{\partial x_{j}}\frac{\partial\mathbf{Q}_{ik}^{\rm(m)}}{\partial x_{k}}\] \[+ \frac{L_{3}}{2}\frac{\partial\mathbf{Q}_{ij}^{\rm(m)}}{\partial x_{k}}\frac{\partial\mathbf{Q}_{ik}^{\rm(m)}}{\partial x_{j}}+\frac{L_{4}}{2}\epsilon_{ijk}\mathbf{Q}_{il}^{\rm(m)}\frac{\partial\mathbf{Q}_{kl}^{\rm(m)}}{\partial x_{j}}\] \[+ \frac{L_{6}}{2}\mathbf{Q}_{ij}^{\rm(m)}\frac{\partial\mathbf{Q}_{kl}^{\rm(m)}}{\partial x_{i}}\frac{\partial\mathbf{Q}_{kl}^{\rm(m)}}{\partial x_{j}}\] with the 3-by-3 matrix \(\mathbf{Q}^{\rm(m)}\) being the molecular tensorial order parameter describing the local average molecular ordering, \(x_{i}\) (\(i=1-3\)) being Cartesian coordinates, and \(\epsilon\) the 3D Levi-Civita tensor. Summation over all indices is implied.
Among the bulk energy terms, \(A\), \(B\), and \(C\) are thermotropic constants and \(L_{i}\) (\(i=1-4,6\)) are the elastic constants related to the Frank-Oseen elasticities via: \[L_{1} = \frac{2}{27(S_{\rm eq}^{\rm(m)})^{2}}\left(K_{33}-K_{11}+3K_{22}\right)\] \[L_{2} = \frac{4}{9(S_{\rm eq}^{\rm(m)})^{2}}\left(K_{11}-K_{24}\right)\] \[L_{3} = \frac{4}{9(S_{\rm eq}^{\rm(m)})^{2}}\left(K_{24}-K_{22}\right)\] \[L_{4} = \frac{8}{9(S_{\rm eq}^{\rm(m)})^{2}}K_{22}\frac{2\pi}{p}\] \[L_{6} = \frac{4}{27(S_{\rm eq}^{\rm(m)})^{3}}\left(K_{33}-K_{11}\right) \tag{2}\] with \(K_{11}\), \(K_{22}\), \(K_{33}\) and \(K_{24}\) respectively denoting the splay, twist, bend and saddle-splay elastic moduli, and \(S_{\rm eq}^{\rm(m)}\) being the equilibrium uniaxial scalar order parameter. In addition to the bulk LC energy there is a contribution due to the boundary condition of the molecular LC at the colloidal surfaces which reads: \[f_{\rm surf}^{\rm LC}=W_{0}\left(\mathbf{P}_{ik}\tilde{\mathbf{Q}}_{kl} \mathbf{P}_{lj}-\frac{3}{2}S_{\rm eq}^{\rm(m)}{\rm cos}^{2}\theta_{\rm e} \mathbf{P}_{ij}\right)^{2} \tag{3}\] with \(W_{0}\) the surface anchoring strength, \(\mathbf{P}=\mathbf{\hat{v}}\otimes\mathbf{\hat{v}}\) the surface projection tensor, \(\mathbf{\hat{v}}\) the surface normal director, and \(\mathbf{\tilde{Q}}=\mathbf{Q}^{\rm(m)}+\frac{1}{2}S_{\rm eq}^{\rm(m)} \mathbf{I}\). The equilibrium angle \(\theta_{\rm e}=0\) corresponds to vertical or homeotropic anchoring at the boundary, and \(\theta_{\rm e}=\pi\) leads to planar degenerate anchoring [45]. The dimensions and anchoring forces associated with colloidal particles are represented as boundary conditions inside the numerical volume with the parameters kept constant for each simulation. Specifically, width \(D_{\rm c}=1\mu\)m and thickness \(L_{\rm c}=10\)nm are used for thin disks in all computer simulated results, while \(D_{\rm c}=28\)nm and \(L_{\rm c}=1.7\mu\)m for long rods, if not specified otherwise. 
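The mapping in Eq. (2) can be checked by direct evaluation with the Frank constants quoted for the simulations later in the text (\(K_{11}=6\)pN, \(K_{22}=3\)pN, \(K_{33}=10\)pN, \(K_{24}=3\)pN, \(S_{\rm eq}^{\rm(m)}=0.533\), \(p=30\mu\)m). This is a plain transcription of the formulas for illustration, not part of the authors' simulation code; note that with \(K_{24}=K_{22}\) the \(L_{3}\) contribution vanishes identically.

```python
import math

# Frank-Oseen elastic constants (N) and equilibrium scalar order parameter,
# as quoted in the text for the computer simulations.
K11, K22, K33, K24 = 6e-12, 3e-12, 10e-12, 3e-12  # splay, twist, bend, saddle-splay
S = 0.533                                          # S_eq^(m)
p = 30e-6                                          # cholesteric pitch (m)

# Landau-de Gennes elastic constants from Eq. (2)
L1 = 2.0 / (27.0 * S**2) * (K33 - K11 + 3.0 * K22)
L2 = 4.0 / (9.0 * S**2) * (K11 - K24)
L3 = 4.0 / (9.0 * S**2) * (K24 - K22)
L4 = 8.0 / (9.0 * S**2) * K22 * (2.0 * math.pi / p)  # chiral term, units N/m
L6 = 4.0 / (27.0 * S**3) * (K33 - K11)

for name, val in [("L1", L1), ("L2", L2), ("L3", L3), ("L6", L6)]:
    print(f"{name} = {val:.3e} N")
print(f"L4 = {L4:.3e} N/m")
```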
The total energy is then given by the integration of Eq. (1) over LC volume and Eq. (3) over colloid-molecule interfaces, with the colloidal volume excluded in the integral of free energy densities. The total energy is numerically minimized based on the forward Euler method integrating: \[\frac{d\mathbf{Q}^{\rm(m)}}{dt}=-\frac{dF_{\rm total}^{\rm LC}}{d\mathbf{Q} ^{\rm(m)}} \tag{4}\] with \(t\) being the scaled energy-relaxation time of the LC. Adaptive Runge-Kutta method (ARK23) and FIRE, Fast Inertial Relaxation Engine, are adopted to increase numerical efficiency and stability [46; 47]. The steady-state and termination of simulation are determined by the change in total free energy in each numerical iteration, which is usually monotonic decreasing. The values of biaxiality and local orientations of molecular directors are subsequently identified as eigenvalues and eigenvectors of \(\mathbf{Q}^{\rm(m)}\)[48]. Alternatively, chirality axes (\(\mathbf{\hat{n}}\),\(\mathbf{\hat{\chi}}\),\(\mathbf{\hat{\tau}}\)) are also represented as the eigenpairs of handedness tensor (or chirality tensor, [49; 50]) with results having high consistency with the biaxial approach mentioned above. A more detailed description and comparison of the two approaches are to be given in Discussion. For a thin homeotropic rod, the anchoring effect at the two ends of the particle is ignored in our simulations by setting the length of the simulation box equal to the rod length \(L_{\rm c}\), which hardly changes the energies but greatly improves the numerical stability. The following parameters are used for all computer simulations: \(A=-1.72\times 10^{5}\)Jm\({}^{-3}\), \(B=-2.12\times 10^{6}\)Jm\({}^{-3}\), \(C=1.73\times 10^{6}\)Jm\({}^{-3}\), \(K_{11}=6\)pN, \(K_{22}=3\)pN, \(K_{33}=10\)pN, \(K_{24}=3\)pN and \(S_{\rm eq}^{\rm(m)}=0.533\). 
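The forward-Euler relaxation of Eq. (4) can be illustrated on a drastically reduced problem: a single uniaxial scalar order parameter \(S\) relaxing under only the bulk thermotropic terms of Eq. (1), using \(\mathbf{Q}=S(\mathbf{\hat{n}}\otimes\mathbf{\hat{n}}-\mathbf{I}/3)\) so that \(f(S)=AS^{2}/3+2BS^{3}/27+CS^{4}/9\). This toy sketch is not the authors' full tensorial solver (no elasticity, no surface terms); the initial condition and time step are illustrative choices.

```python
# Gradient-flow (forward Euler) relaxation of the bulk free energy density
# f(S) = A*S^2/3 + 2*B*S^3/27 + C*S^4/9, evolving dS/dt = -df/dS.
A, B, C = -1.72e5, -2.12e6, 1.73e6  # J/m^3, thermotropic constants from the text

def dfdS(S):
    return 2.0 * A * S / 3.0 + 2.0 * B * S**2 / 9.0 + 4.0 * C * S**3 / 9.0

S = 0.3        # illustrative initial guess for the scalar order parameter
dt = 1e-8      # time step chosen small enough for stability at these coefficients
for _ in range(200000):
    S -= dt * dfdS(S)

# Analytic nematic minimum: the positive root of 2*C*S^2 + B*S + 3*A = 0.
S_exact = (-B + (B**2 - 24.0 * A * C) ** 0.5) / (4.0 * C)
print(f"relaxed S = {S:.4f}, analytic S = {S_exact:.4f}")
```

The monotonic energy decrease used as the termination criterion in the text corresponds here to the gradient norm shrinking toward zero as \(S\) approaches the analytic minimizer.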
The simulations are carried out in a Cartesian colloidal frame using equidistant grid sets and are consistent with those based on a radial-basis-function approach performed within the molecular frame [51].

## III Results

### Symmetry-breaking at single particle level

The symmetry-breaking of the nematic colloidal geometry, induced by the twisted alignment of chiral molecules, can be revealed at the single particle level by visualizing the LC distortion field around a single colloidal particle [Fig. 2]. For cylinder-shaped particles dispersed in an isotropic solvent, such as thin disks or slender rods in ethanol, a continuous rotational symmetry can be observed locally with the symmetry axis being the disk normal or the long axis of the rod because the other two orthogonal axes are geometrically equivalent. When the cylindrical colloids are dispersed into a chiral nematic, however, the uniaxial symmetry is broken in view of the boundary condition at particle-molecule interfaces and the far-field helical configuration of the LC molecules. Although the realigning effect induced by the homeotropic boundary conditions at the colloidal surfaces is rather weak with \(W_{0}=10^{-6}\)J/m\({}^{2}\) and deviation angle \(\ll 1^{\circ}\) [Fig. 2a,b], for example, it is evident that the rotational symmetry of the surface-defect-dressed cylindrical colloids becomes discrete (2-fold rotation) once the colloids are immersed in a chiral LC [Fig. 2c]. Clearly, stronger surface anchoring forces and higher chirality (shorter pitch) lead to the significantly stronger molecular LC distortion as well as the ensuing emergent biaxiality as shown by the computer-simulated distortion in the nematic director. Also, the single-particle symmetry-breaking is observed even when the helical pitch \(p\) is much larger than the particle dimension, with the particle sizes around 1-2 \(\mu\)m [Fig. 2c]. This demonstrates that the shape biaxiality of the dressed colloidal particle imparted by the molecular chirality of the host is unavoidably developed at all strengths of surface anchoring and values of molecular chirality [Fig. 2c].

Figure 2: (a)-(b) Computer simulations of a thin colloidal disk (a) and a slender rod (b) immersed in a LC with weak chirality. Yellow contour surfaces mark the region where LC director deviations for \(\mathbf{\hat{n}}\) (red axis) are \(0.01^{\circ}\) from its ideal helical state with no colloids present. The sectional areas perpendicular to \(\hat{\tau}\) (blue axis) are colored by the deviation angle as shown in the color scale. Homeotropic anchoring \(W_{0}=10^{-6}\)Jm\({}^{-2}\) and a helical pitch \(p=30\mu\)m are used for both simulations. (c) Simulations of colloidal disks and rods in energy-minimizing orientations within chiral LCs using various anchoring strengths \(W_{0}\) and pitch lengths \(p\), with values labeled for each simulation. Yellow contours enclose regions with director distortions larger than or equal to \(0.1^{\circ}\), showing different levels of weak biaxiality. Rods in (c) are cropped for clarity. Axes defining the molecular frame are colored as in Fig. 1. Disk width \(D_{c}=1\mu\)m and rod length \(L_{c}=1.7\mu\)m for all simulations.

### Orientational distribution of the colloidal particles

To analyze the equilibrium orientation of the cylindrical particles, we perform several sets of simulations at various colloidal orientations and resolve the corresponding free energies. A thin disk with perpendicular boundary condition [Fig. 3], for example, favors alignment in which the disk normal vector orients along the molecular director \(\mathbf{\hat{n}}\). Deviations away from the equilibrium direction give rise to an increase in the overall free energy of the system [Eq. (1) and Eq. (3)]. We emphasize that the free energy profiles are distinct for the two deviation angles \(\delta\) and \(\zeta\) in Fig.
3 (with lower energy penalty for orientational fluctuation along \(\delta\)). Though weak, the difference between the two angles and the broken uniaxial symmetry as a consequence of the chirality in the molecular LC host are unambiguous. Furthermore, Fig. 3 (c,d) demonstrate that a stronger surface anchoring force with a higher value of \(W_{0}\) leads to more pronounced energetical nondegeneracy of the two deviation angles. Using mean-field numerical simulation of the LC host, we are able to validate the local biaxial symmetry of the orientational probability distribution of an individual colloid, arising from the inequivalence of \(\hat{\chi}\) and \(\hat{\tau}\) in the molecular LC host. In contrast to the case of a disk, a homeotropic rod feels a strong energy penalty when aligned towards \(\mathbf{\hat{n}}\) and reaches a state of minimal surface anchoring energy when the long axis points along the \(\hat{\tau}\)-direction such that the LC director at the rod surface naturally complies with the homeotropic surface anchoring condition [Fig. 4][18; 52]. The symmetry-breaking of \(\hat{\chi}\) and \(\hat{\tau}\), evident from the difference between the two energy landscapes [Fig. 4], is observed again with deviation along the angle \(\gamma\) being more energetically favored than that along \(\theta\). With the cases of colloidal rods along \(\mathbf{\hat{n}}\), \(\hat{\chi}\), and \(\hat{\tau}\) all giving distinct free energies, the biaxiality in the ensuing colloidal orientation probability distribution is explicit, which is contributed solely by the chirality in the 5CB host. The results are in agreement with the empirical evidence shown below. ### Experimental observation and analysis of the colloidal orientation The alignment of the colloidal particles with respect to the helical director field of the molecular host LC is probed optically in our experiments. For example, Fig. 5. 
(a) demonstrates confocal fluorescence micrographs of disk-shaped particles immersed in a 5CB liquid crystal doped with a chiral dopant. The molecular orthogonal frame (\(\mathbf{\hat{n}}\),\(\mathbf{\hat{\chi}}\),\(\hat{\tau}\)), which is marked in each micrograph, is robustly controlled by substrates with planar anchoring force (Methods). The average normal direction of the colloidal disks in each vertical slice, which is expected to lie parallel to \(\mathbf{\hat{n}}\), rotates along the sample depth, as clearly shown with the edge-on perspective [Fig. 5. (a)]. Subsequently, the twisted arrangement of the disk direction is analyzed and a quasi-uniform twisting rate is found throughout the sample depth [Fig. 5. (b)], with thermal fluctuation present. We assume that the helix of molecular director \(\mathbf{\hat{n}}\) has a linear trend identical to the one found using colloidal orientations, which is shown by the red line in Fig. 5. (b), since the period of the twisted arrangement of the colloids closely matches the designed molecular pitch \(p\). We also ascertain that the colloidal density remains very low such that the molecular LC alignment is not expected to be disturbed by the introduction of colloidal particles. Once the orientational distribution of the thin disks is projected onto the co-rotating molecular frame, we observed Gaussian-like distributions [Fig. 5. (c)]. With a particle number density (volume fraction \(\approx\) 0.026%) far below the phase transition threshold [17], direct interactions between colloidal disks are negligible and each particle experiences an orientational potential imposed mainly by the surrounding molecular LC, as designed in our numerical simulations discussed above [Fig. 3]. The statistical results of particles can thus be treated as the thermal distribution of a single particle and qualitative agreement is found when compared to the modeling [Fig. 3]. 
As shown by the numerical modeling earlier, the colloidal director distributions are weakly asymmetric due to the biaxiality imparted by the chiral molecular host, demonstrating a stronger energy barrier for the thin disk fluctuating in the \(\zeta\) direction. As a result, the peak widths (full width at half maximum, FWHM) of the colloidal orientation distributions differ along the two deviation angles (27.6\({}^{\circ}\) in \(\delta\) and 23.1\({}^{\circ}\) in \(\zeta\), Fig. 5. (c)), indicating a biaxial \(D_{2}\) symmetry of an individual disk with 2-fold rotations around \(\mathbf{\hat{n}}\) instead of a uniaxial \(D_{\infty}\) one, even though the cylindrical particles themselves are of uniaxial symmetry. We further use phase contrast microscopy to determine the colloidal alignment directions for planar rods [Fig. 6. (a)]. In this case, the colloidal thin rods with a planar surface condition are dispersed within the 5CB chiral molecular host. By carefully changing the focal position inside the sample, we obtain a good linear relationship between the average rod direction and depths, representing a helical structure within which the colloidal rods point along \(\mathbf{\hat{n}}\) [Fig. 6. (b)]. Like the case of homeotropic disks, we again clearly observe a biaxial orientational symmetry in the molecular frame revealed by distinct probability distributions along \(\hat{\tau}\) and \(\hat{\chi}\) (FWHM=\(15.6^{\circ}\) in \(\delta\) and \(11.8^{\circ}\) in \(\zeta\)) despite the weak chirality (\(p\approx 100\mu\)m) of the hybrid LC system [Fig. 6. (c)]. The explicit experimental observations of the biaxial symmetry in the colloidal orientation probability distribution, in agreement with the numerical simulation, can only be attributed to the chiral twisting of the surrounding molecular LC.
To see the symmetry-breaking behavior in another type of alignment, such as perpendicular, we adopt the same methods of colloidal orientation analysis but for the colloidal inclusion of thin rods with a perpendicular boundary condition [Fig. 7]. In consistency with the numerical calculation [Fig. 4], the rods align along the \(\hat{\tau}\) axis in thermal equilibrium [Fig. 7 (a,d)]. After measuring the azimuthal angle \(\varphi\) of the rod long axis in each vertical frame and converting the measurements to a 3D distribution in the molecular orthogonal frame, we clearly see that a narrower distribution emerges for the longer rods [Fig. 7 (c,f)]. Due to the larger surface area of the longer particle, to which the surface energy is proportional, a stronger energy barrier develops for the longer colloidal rods to fluctuate away from the energetically ideal configuration along \(\hat{\tau}\). Furthermore, to our surprise, the orientational distributions of homeotropic rods behave dramatically differently from the non-chiral limit, in which case a degeneracy of alignment along the \(\hat{\chi}\) and \(\hat{\tau}\)-axes would be expected and the distribution along \(\eta\) should be uniform for the symmetry to be uniaxial. Instead, we find an exceptionally strong energy barrier for rods deviating along \(\eta\) towards the helical axis \(\hat{\mathbf{\chi}}\), with the Gaussian fittings showing peaks even sharper than those along \(\gamma\) [Fig. 7 (c,f)]. The symmetry of the hybrid LC system is thus strongly biaxial as experimentally illustrated by the distinct peak widths of the orientational probability distributions. We will demonstrate that the rod orientation distributions cannot be interpreted from surface anchoring effects only, but can be explained by considering the elastic distortions generated in the bulk of the molecular host. This is addressed in detail in the following sections using a comprehensive analytical model.

Figure 3: (a) Computer-simulated free energy of the molecular chiral LC in the presence of a homeotropic disk at different surface anchoring strengths \(W_{0}\) as a function of the azimuthal angle \(\delta\) describing a rotation of the disk normal about the pitch axis \(\hat{\chi}\) (green) defined in the molecular frame (inset). Data points for \(W_{0}=10^{-4},10^{-5}\), and \(10^{-6}\)J/m\({}^{2}\) are marked with triangle, square, and circle, respectively. The energy is scaled by \(Kp\) where \(K=5.6\)pN is the average elastic constant and \(p=30\mu\)m is the pitch of the cholesteric. (b) Numerical free energy profile for a homeotropic disk rotated about \(\hat{\tau}\) (blue axis). (c)-(d) The energy difference in (a) and (b) calculated for surface anchoring strengths \(W_{0}=10^{-4}\)J/m\({}^{2}\) (c) and \(10^{-6}\)J/m\({}^{2}\) (d), respectively. The lowest energies (disk normal aligned along red axis \(\mathbf{\hat{n}}\)) for each simulation set are chosen to be \(10^{-6}Kp\) instead of 0 to avoid singularities when converting to a log-scale in (a) and (b). The axes in the insets define the molecular frame and are colored as in Fig. 1. Cholesteric pitch \(p=30\mu\)m for all simulations.

Figure 4: (a) Computer-simulated free energy of the chiral 5CB-based LC surrounding a homeotropic rod at different surface anchoring strengths, \(W_{0}\), and the azimuthal angle, \(\gamma\), defined in the inset. (b) Simulated free energy of rods with different values of polar angle \(\theta\). (c) The difference in energy profiles of homeotropic rods rotated along the \(\hat{\tau}\) and \(\hat{\chi}\) axes, as shown in the inset, simulated using anchoring strength \(W_{0}=10^{-6}\). The case of rods aligning along \(\mathbf{\hat{n}}\) (red axis) with the highest energy cost is taken as an energy reference point for each value of \(W_{0}\), while the free energy value is chosen to be \(-10^{-8}Kp\) instead of 0 to avoid singularities when converting to a log-scale in (a) and (b). The axes in the insets define the molecular frame and are colored as in Fig. 1. Cholesteric pitch \(p=30\mu\)m for all simulations.

Figure 5: (a) Luminescence confocal images of homeotropic disks dispersed in a chiral LC taken perpendicular to \(\hat{\chi}\) with the molecular frame (\(\mathbf{\hat{n}}\) and \(\hat{\tau}\)) marked in each depth slice. (b) The azimuthal angle of the disk normal orientation \(\varphi\) at different depth \(z\) obtained from the same sample. Dots are the average values in each \(z\) slice, and their linear fit is given by the red line, revealing the LC pitch \(p\approx 30\mu\)m. The inset illustrates the observed orientational fluctuation of disks (black double arrow) in the molecular frame (\(\mathbf{\hat{n}}\), \(\hat{\chi}\), \(\hat{\tau}\)). (c) Disk azimuthal orientational fluctuation \(\delta=\varphi-qz\) and polar orientational fluctuation \(\zeta=\pi/2-\theta\) within the molecular frame (insets), with \(\delta=0\) or \(\zeta=0\) corresponding to orientation along \(\mathbf{\hat{n}}\) (red axis). Scale bars are \(30\mu\)m.

### Analytical model

#### Surface anchoring free energy of a cylindrical disk immersed in a cholesteric host

We consider a low-molecular-weight chiral liquid crystal with a director field \(\mathbf{\hat{n}}(z)\) twisted along the \(\hat{\mathbf{\chi}}\)-axis of a Cartesian laboratory frame that we denote by the normalized unit vectors \((\mathbf{\hat{x}},\mathbf{\hat{y}},\mathbf{\hat{z}})\) where \(\mathbf{\hat{z}}\) coincides with the helical axis \(\hat{\mathbf{\chi}}\) in Fig. 1. The helical director field of a cholesteric, denoted by subscript "\(h\)", may be parameterized as follows: \[\mathbf{\hat{n}}_{h}(z)=\mathbf{\hat{x}}\cos qz+\mathbf{\hat{y}}\sin qz \tag{5}\] in terms of the cholesteric pitch \(p=2\pi/q\) and handedness \(q<0\) that we assume left-handed in agreement with experimental reality without loss of generality.
Next, we immerse an infinitely thin cylindrical disk with aspect ratio \(D_{c}/L_{\mathrm{c}}\rightarrow\infty\) into a cholesteric host. The main symmetry axis of the colloidal disk is parameterized in the lab frame as \(\mathbf{\hat{u}}=\mathbf{\hat{x}}\sin\theta\sin\varphi+\mathbf{\hat{y}}\sin\theta\cos\varphi+\mathbf{\hat{z}}\cos\theta\) in terms of a polar \(\theta\) and azimuthal angle \(\varphi\) with respect to the helical axis \(\mathbf{\hat{z}}=\mathbf{\hat{\chi}}\). The presence of the colloid will generate elastic distortions of the uniform director field \(\mathbf{\hat{n}}_{h}(\mathbf{r})\) due to the specific anchoring of the molecules at the colloidal surface, quantified by the surface anchoring strength \(W_{0}>0\) (units of energy per surface area). The extent of the elastic distortions around the colloid surface depends on the surface extrapolation length \(\ell_{s}=K/W_{0}\), where \(K\) denotes the average elastic constant of the thermotropic liquid crystal [53]. In our analysis, we first focus on the regime of infinitely large surface extrapolation length (\(\ell_{s}\rightarrow\infty\)), in which case the elastic distortions around the immersed colloid are absent. For finite \(\ell_{s}\), such as in the experimental situation, elastic distortions are weak but non-negligible and will be accounted for in the subsequent sections.

Figure 6: (a) Phase contrast micrographs showing depth slices of planar rods dispersed in chiral molecular 5CB. (b) Average rod orientation in each \(z\)-slice (dots) and linear fit (red line). The cholesteric pitch is \(p\approx 100\mu\)m. (c) Azimuthal and polar orientational distribution of the rods, with \(\mathbf{\hat{n}}\) (red axis) most populated in the molecular frame (insets). Scale bars are \(30\mu\)m.

Figure 7: (a) Depth slices of a homeotropic rod dispersion in a chiral LC taken using luminescence confocal microscopy. (b) The average orientation of the long axis of each rod \(\varphi\) in each depth \(z\) slice (dots) and its linear fit (blue line). LC pitch \(p\approx 30\mu\)m and average rod length \(L_{c}=1.7\mu\)m. (c) Orientational fluctuation of the rods measured in the LC molecular frame (inset), with the average direction being \(\hat{\tau}\) (blue axis). (d)-(f) Another sample of thin homeotropic rod dispersion in 5CB with cholesteric pitch \(p\approx 150\mu\)m and colloidal rod length \(L_{c}=3.0\mu\)m analyzed using phase contrast micrographs. All scale bars are \(30\mu\)m.

If we assume the molecular director field \(\mathbf{\hat{n}}\) to remain completely undistorted, the surface anchoring free energy can be obtained by using the Rapini-Papoular model for Eq. (5) and integrating over the colloid surface denoted by \(\mathcal{S}\) [54, 55]: \[F_{s}=-\frac{1}{2}W_{0}\oint d\mathcal{S}(\mathbf{\hat{n}}_{h}\cdot\mathbf{\hat{v}}(\mathcal{S}))^{2} \tag{6}\] where \(\mathbf{\hat{v}}\) represents a unit vector normal to the colloid surface in case of homeotropic (H) anchoring and tangential to the surface if the anchoring is planar (P). Let us denote the disk normal by \(\mathbf{\hat{u}}\) and ignore anchoring at the rim. We further define two unit vectors \(\mathbf{\hat{e}}_{1,2}\) orthogonal to the disk normal vector \(\mathbf{\hat{u}}\). The two principal anchoring scenarios, homeotropic (H) and planar (P), are expressed as follows: \[\mathbf{\hat{v}}=\begin{cases}\mathbf{\hat{u}}&\text{H}\\ \mathbf{\hat{e}}_{1}\cos\xi+\mathbf{\hat{e}}_{2}\sin\xi&\text{P}\end{cases} \tag{7}\] The angle \(0<\xi<2\pi\) must be chosen randomly in the case when planar anchoring is degenerate across all directions on the disk surface, which is the case in the experimental situation.
Ignoring finite-thickness effects for \(L_{\text{c}}\ll D_{\text{c}}\) we then parameterize the face of the disk as follows: \[\mathbf{r}_{\mathcal{S}}=\mathbf{r}_{0}+\frac{D_{\text{c}}}{2}t[\mathbf{\hat{e}}_{1}\sin\phi+\mathbf{\hat{e}}_{2}\cos\phi] \tag{8}\] with \(0<t<1\) and \(0<\phi<2\pi\). The surface anchoring energy per disk face is expressed as follows: \[F_{s}=-\frac{1}{4}W_{0}D_{\text{c}}^{2}\int_{0}^{2\pi}d\phi\int_{0}^{1}dt\,t\int_{0}^{2\pi}\frac{d\xi}{2\pi}[\mathbf{\hat{n}}_{h}(\mathbf{r}_{\mathcal{S}})\cdot\mathbf{\hat{v}}]^{2} \tag{9}\] This leads to the following generic expression: \[F_{s}=-\frac{\pi}{4}W_{0}D_{\text{c}}^{2}\left(w_{1}+w_{2}\cos(2\delta)\frac{J_{1}(qD_{\text{c}}|\sin\theta|)}{qD_{\text{c}}|\sin\theta|}\right) \tag{10}\] with \(J_{1}(x)\) a Bessel function of the first kind, \(\delta=\varphi-qz\) the azimuthal angle with respect to the local cholesteric director, and coefficients: \[w_{1}=\begin{cases}\frac{1}{2}\sin^{2}\theta&\text{H}\\ \frac{1}{8}(3+\cos(2\theta))&\text{P}\end{cases} \tag{11}\] and \[w_{2}=\begin{cases}\sin^{2}\theta&\text{H}\\ -\frac{1}{2}\sin^{2}\theta&\text{P}\end{cases} \tag{12}\] The surface anchoring strength of disks is expressed in dimensionless form by \(\bar{W}=\beta W_{0}D_{\text{c}}^{2}\) with \(\beta^{-1}=k_{B}T\) the thermal energy in terms of temperature \(T\) and Boltzmann's constant \(k_{B}\). Taking disks with diameter \(D_{\text{c}}\approx 2\mu\text{m}\) and \(W_{0}\approx 10^{-6}-10^{-5}\text{Jm}^{-2}\) we find \(\bar{W}\sim 10^{3}-10^{4}\), indicating that surface anchoring realignment is robust against thermal fluctuations in the experimental regime. For the case of homeotropic anchoring, the surface anchoring energy Eq. (10) reaches a minimum at an equilibrium angle \(\theta^{*}=\pi/2\) and \(\delta^{*}=0\), demonstrating preferential alignment of the disk normal along the local LC host director \(\mathbf{\hat{n}}\), in agreement with experimental observation [Fig. 5].
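The location of this minimum can be checked numerically. The short sketch below evaluates Eq. (10) for homeotropic anchoring on a \((\theta,\delta)\) grid, using the disk diameter and pitch quoted in the figure captions; the grid resolution is an arbitrary choice.

```python
import numpy as np
from scipy.special import j1

# Sketch: locate the minimum of the disk surface anchoring energy, Eq. (10),
# for homeotropic (H) anchoring. Disk diameter, pitch and anchoring strength
# are the values quoted in the text/figures; the grid resolution is arbitrary.
W0, Dc, p = 1e-6, 1e-6, 30e-6
q = 2 * np.pi / p                       # only the product q*Dc enters Eq. (10)

def F_disk_H(theta, delta):
    x = q * Dc * np.abs(np.sin(theta))
    bessel_fac = np.where(x < 1e-12, 0.5, j1(x) / np.maximum(x, 1e-300))
    w1 = 0.5 * np.sin(theta) ** 2       # Eq. (11), H case
    w2 = np.sin(theta) ** 2             # Eq. (12), H case
    return -(np.pi / 4) * W0 * Dc**2 * (w1 + w2 * np.cos(2 * delta) * bessel_fac)

theta = np.linspace(0.0, np.pi, 721)
delta = np.linspace(0.0, 2 * np.pi, 1441)
F = F_disk_H(theta[:, None], delta[None, :])
i, j = np.unravel_index(np.argmin(F), F.shape)

# Minimum at theta* = pi/2 and cos(2 delta*) = 1: the disk normal aligns with
# the local director n, as stated below Eq. (12).
assert abs(np.sin(theta[i])) > 0.999
assert np.cos(2 * delta[j]) > 0.999
```

The `np.where` guard handles the removable \(x\to 0\) limit of \(J_{1}(x)/x\to 1/2\) at the grid endpoints \(\theta=0,\pi\).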
#### iii.2.2 Surface anchoring free energy of a cylindrical rod immersed in a cholesteric host

We may repeat the previous analysis to describe the case of a thin colloidal rod with \(L_{\text{c}}/D_{\text{c}}\rightarrow\infty\) by neglecting small contributions associated with the ends of the cylinder, so we only need to parameterize the cylindrical surface of magnitude \(\pi L_{\text{c}}D_{\text{c}}\) following the principal contour \(\mathbf{r}_{\mathcal{S}}(t)=\mathbf{r}_{0}+\frac{L_{\text{c}}}{2}t\mathbf{\hat{u}}\) with \(-1<t<1\) of a cylinder with centre-of-mass \(\mathbf{r}_{0}\). The surface anchoring free energy then becomes: \[F_{s}=-\frac{1}{8}L_{\text{c}}D_{\text{c}}W_{0}\int_{0}^{2\pi}d\phi\int_{-1}^{1}dt[\mathbf{\hat{n}}_{h}(\mathbf{r}_{\mathcal{S}})\cdot\mathbf{\hat{v}}]^{2} \tag{13}\] In order to describe various anchoring situations we define two unit vectors \(\mathbf{\hat{e}}_{1,2}\) orthogonal to \(\mathbf{\hat{u}}\) and parameterize: \[\mathbf{\hat{v}}=\begin{cases}\mathbf{\hat{e}}_{1}\cos\phi+\mathbf{\hat{e}}_{2}\sin\phi&\text{H}\\ -\mathbf{\hat{e}}_{1}\sin\phi\cos\xi+\mathbf{\hat{e}}_{2}\cos\phi\cos\xi+\mathbf{\hat{u}}\sin\xi&\text{DP}\\ \mathbf{\hat{u}}&\text{SP}\end{cases} \tag{14}\] In the case of homeotropic (H) anchoring the molecular director favors perpendicular alignment to the cylindrical surface, whereas for simple planar (SP) anchoring, alignment along the main rod direction is favored. For completeness, we also include the more general degenerate planar (DP) case where all anchoring directions perpendicular to the local surface normal are equally probable. In order to account for all possible rod orientations with respect to the molecular field, the angle \(\xi\) can take values between \(0\) and \(\pi\).
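A minimal consistency check of the parameterization in Eq. (14): each \(\mathbf{\hat{v}}\) should be a unit vector, and the planar choices should be tangential to the cylinder surface, i.e. orthogonal to the local surface normal \(\mathbf{\hat{e}}_{1}\cos\phi+\mathbf{\hat{e}}_{2}\sin\phi\). The concrete frame vectors below are an arbitrary choice.

```python
import numpy as np

# Consistency check of Eq. (14): unit norm for all anchoring directors, and
# tangentiality (orthogonality to the local surface normal) for the planar
# cases DP and SP. The specific orthonormal frame (e1, e2, u) is arbitrary.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
u  = np.array([0.0, 0.0, 1.0])   # rod axis

rng = np.random.default_rng(0)
for phi, xi in rng.uniform(0, np.pi, size=(100, 2)):
    normal = e1 * np.cos(phi) + e2 * np.sin(phi)   # local surface normal
    v_H  = e1 * np.cos(phi) + e2 * np.sin(phi)                              # H
    v_DP = -e1 * np.sin(phi) * np.cos(xi) + e2 * np.cos(phi) * np.cos(xi) \
           + u * np.sin(xi)                                                 # DP
    v_SP = u                                                                # SP
    for v in (v_H, v_DP, v_SP):
        assert np.isclose(np.dot(v, v), 1.0)       # unit vectors
    assert np.isclose(np.dot(v_DP, normal), 0.0)   # DP is tangential
    assert np.isclose(np.dot(v_SP, normal), 0.0)   # SP is tangential
```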
We obtain the following generic expression: \[F_{s}=-\frac{\pi}{8}L_{\text{c}}D_{\text{c}}W_{0}\left(w_{1}+w_{2}\cos(2\delta)\frac{\sin(qL_{\text{c}}\cos\theta)}{qL_{\text{c}}}\right) \tag{15}\] with \(\delta=\varphi-qz\) the azimuthal angle along a particle frame co-rotating with the helical director so that \(\int d\mathbf{\hat{u}}=\int_{0}^{2\pi}d\delta\int_{-1}^{1}d(\cos\theta)\) and \(w_{1}\) and \(w_{2}\) are angle-dependent coefficients that depend on the particular anchoring situation: \[w_{1}=\begin{cases}(1+\cos^{2}\theta)&\text{H}\\ \frac{1}{2}(3-\cos^{2}\theta)&\text{DP}\\ 2\sin^{2}\theta&\text{SP}\end{cases} \tag{16}\] and \[w_{2}=\begin{cases}-\sin\theta\tan\theta&\text{H}\\ \frac{1}{2}\sin\theta\tan\theta&\text{DP}\\ 2\sin\theta\tan\theta&\text{SP}\end{cases} \tag{17}\] in terms of the polar \(\theta\) and azimuthal rod angle \(\varphi\) with respect to the helical axis along the \(\hat{\chi}\)-direction. For the homeotropic (H) case the free energy is minimal at an equilibrium angle \(\theta^{*}=0\) (with the azimuthal angle \(\varphi\) randomly distributed), which corresponds to the rod being aligned along the \(\hat{\chi}\) direction. However, there is a second, degenerate minimum at \(\theta^{*}=\pi/2\) and \(\delta^{*}=\pi/2\) that describes a rod pointing along the \(\hat{\tau}\)-axis. The minimum surface anchoring energy is \(F_{s}=-(\pi/4)L_{\mathrm{c}}D_{\mathrm{c}}W_{0}\) for both cases. The energy barrier between the two minima is only about \(1\)\(k_{B}T\) per rod so thermal fluctuations should easily make the colloids switch from one state to the other while staying perpendicular to \(\mathbf{\hat{n}}\). For both simple planar (SP) and degenerate planar (DP) anchoring we only find a single minimum at \(\theta^{*}=\pi/2\) and \(\delta^{*}=0\), i.e., the rod preferentially aligns along the revolving local nematic director \(\mathbf{\hat{n}}\) as observed in experiments [see Fig. 6].
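The two degenerate minima can be verified numerically. In the sketch below the potentially singular product \(w_{2}\sin(qL_{\mathrm{c}}\cos\theta)/(qL_{\mathrm{c}})\) for homeotropic anchoring is rewritten as \(-\sin^{2}\theta\cos(2\delta)\,\mathrm{sinc}(qL_{\mathrm{c}}\cos\theta)\), with \(\mathrm{sinc}(x)=\sin x/x\), which is algebraically equivalent to Eq. (15) but finite at \(\theta=\pi/2\); the material parameters are taken from the text.

```python
import numpy as np

# Sketch: verify the degenerate minima of Eq. (15) for homeotropic anchoring.
# The factor sin(theta)*tan(theta)*sin(qL cos(theta))/(qL) is rewritten as
# sin^2(theta)*sinc(qL cos(theta)), removing the spurious tan singularity.
W0, Lc, Dc, pitch = 1e-6, 1.7e-6, 28e-9, 30e-6   # parameters from the text
q = 2 * np.pi / pitch

def F_rod_H(theta, delta):
    s = np.sinc(q * Lc * np.cos(theta) / np.pi)  # sin(x)/x with x = qL cos(theta)
    bracket = (1 + np.cos(theta) ** 2) - np.sin(theta) ** 2 * np.cos(2 * delta) * s
    return -(np.pi / 8) * Lc * Dc * W0 * bracket

F_min = -(np.pi / 4) * Lc * Dc * W0
# Minimum 1: rod along the helical axis (theta = 0, delta arbitrary)
assert np.isclose(F_rod_H(0.0, 0.3), F_min, rtol=1e-9, atol=0.0)
# Minimum 2: rod along tau (theta = pi/2, delta = pi/2)
assert np.isclose(F_rod_H(np.pi / 2, np.pi / 2), F_min, rtol=1e-9, atol=0.0)
# No orientation lies below these two degenerate minima
theta = np.linspace(0, np.pi, 400)[:, None]
delta = np.linspace(0, 2 * np.pi, 400)[None, :]
assert F_rod_H(theta, delta).min() >= F_min * (1 + 1e-12)
```

Note that `np.sinc` uses the normalized convention \(\sin(\pi x)/(\pi x)\), hence the division of the argument by \(\pi\).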
#### iv.2.3 Equilibrium colloid orientation

Balancing the surface anchoring free energy with the orientational entropy of the individual colloids we easily establish the orientational probability distribution through the Boltzmann distribution: \[f(\mathbf{\hat{u}})=\mathcal{N}\exp(-\beta F_{s}) \tag{18}\] with \(\mathcal{N}\) a normalization constant ensuring that \(\int d\mathbf{\hat{u}}f(\mathbf{\hat{u}})=1\). It is easy to infer from Eq. (10) and Eq. (15) that the polar and azimuthal angles are strongly coupled in general. This indicates that the local distribution of colloidal orientations around the principal alignment directions (\(\mathbf{\hat{\chi}}\), \(\mathbf{\hat{\tau}}\) and \(\mathbf{\hat{n}}\)) [Fig. 1] is rendered _biaxial_ by the chiral twist, in line with our experimental observations. Some examples of \(f\) for disks are depicted in Fig. 8, clearly demonstrating preferred alignments of the disk normals along \(\mathbf{\hat{n}}\) (red arrow). We point out that the consistency found between the numerical modeling and the analytic theory is remarkable considering the complexity involved in computer simulation due to a wide range of length scales including molecular, colloidal (with high aspect ratios) and surface extrapolation lengths as well as the simplicity and approximations adopted in the analytic model. The most interesting situation arises in the case of colloidal rods with homeotropic (H) anchoring where there is a subpopulation of rods aligned along the helical axis (\(qL_{\mathrm{c}}=1\)). In order to gain further insight into the orientational symmetry of those rods, we perform a small-angle expansion around the equilibrium angle \(\theta^{*}=0\) and retain the leading order coupling term between the two principal angles \(\theta\) and \(\delta\).
The angular fluctuations about the helical axis (green) are then described by the following free energy \[F_{s}\approx\frac{\pi}{8}L_{\mathrm{c}}D_{\mathrm{c}}W_{0}j_{0}(qL_{\mathrm{c}})\cos(2\delta)\theta^{2} \tag{19}\] with \(j_{0}(x)=\sin(x)/x\). It suggests that the subpopulation of rods aligned along the helical axis in fact adopt a _twist-bend_-type organization with a wavenumber \(q\) identical to that of the molecular host. Contrary to cholesterics, these phases are characterized by a nematic director co-aligning with the helical axis. However, the situation here is more subtle given that chirality is only manifested at the level of orientational fluctuations around a mean director "backbone" that itself is not chiral. We identify a further interesting feature; depending on the sign of \(j_{0}(qL_{\mathrm{c}})\) the twist-bend helix may be either in phase with the molecular helix (\(\delta^{*}=0\) for \(qL_{\mathrm{c}}=4\)) or out-of-phase (\(\delta^{*}=\pi/2\) for \(qL_{\mathrm{c}}=2\)). In the next Section, we will show that the pure Rapini-Papoular description is inadequate in accounting for the experimental observations in Fig. 7 for one of the experimental geometries (rods with perpendicular boundary conditions) and that weak elastic distortions around the colloidal rods must be accounted for to explain their strong preference for pointing along \(\hat{\tau}\).

#### iv.2.4 Elastic deformations surrounding the rod surface

So far we have completely ignored the role of weak elastic deformations of the host director (\(\ell_{s}=K/W_{0}\rightarrow\infty\)) and assumed that the rod orientation is dominated entirely by surface anchoring effects. The experimental reality, however, is that the surface anchoring extrapolation length is large but finite (\(\ell_{s}\approx 600\,\mathrm{nm}\gg D_{\mathrm{c}}\)). Experimental observations compiled in Fig.
7 point at a scenario where rods orient preferentially along the \(\hat{\tau}\) direction, rather than the helical axis (\(\hat{\chi}\)) as predicted from minimizing the bare Rapini-Papoular surface anchoring energy. A plausible reason as to why rod alignment along the helical axis (\(\hat{\chi}\)) seems unfavorable is that it involves a twisting of the surface disclination that runs along the rod contour, which costs elastic energy. No such twisting is required if the rod points along \(\hat{\tau}\). Clearly, the discrepancy between experiment and theory must be attributed to the elastic distortions running along the rod surface (and their subsequent twisting) which has been ignored in our considerations thus far. In principle, weak director distortions may also lead to a mild decrease in the bulk nematic order parameter, particularly in regions where the director curvature is strong. In our analysis, we will assume that the bulk scalar order parameter (\(S_{\mathrm{eq}}^{\mathrm{(m)}}\)) of the host is constant throughout the system. Even in the near-field limit close to the rod surface where deviations from bulk nematic order are strongest, we expect local distortions in bulk nematic order to be minor compared to the (infinitely) strong anchoring scenario that is considered in the theoretical study by Brochard and De Gennes [56].

#### iii.1.5 Elastic energy of a twisted disclination along the main rod direction

We will now attempt to quantify the twisted disclination effect by introducing an angular deviation \(\Phi(\mathbf{r})\) and express the helical host director as follows: \[\mathbf{\hat{n}}_{h}(\mathbf{r})=\mathbf{\hat{x}}\cos(qz+\Phi(\mathbf{r}_{\perp}))+\mathbf{\hat{y}}\sin(qz+\Phi(\mathbf{r}_{\perp})) \tag{20}\] with \(\mathbf{r}\) denoting a 3D distance vector and \(\mathbf{r}_{\perp}\) the lateral distance perpendicular to the helical axis \(\mathbf{\hat{\chi}}\).
The total free energy of a colloidal rod inclusion aligned along the helical axis is given by the Rapini-Papoular surface anchoring term Eq. (6) combined with the Frank elastic free energy in the presence of chirality [57]: \[F=\tfrac{1}{2}\int d\mathbf{r}\left[K_{11}(\nabla\cdot\mathbf{\hat{n}}_{h})^{2}+K_{22}(\mathbf{\hat{n}}_{h}\cdot\nabla\times\mathbf{\hat{n}}_{h}+q)^{2}+K_{33}(\mathbf{\hat{n}}_{h}\times\nabla\times\mathbf{\hat{n}}_{h})^{2}\right]-\tfrac{1}{2}W_{0}\oint d\mathcal{S}(\mathbf{\hat{n}}_{h}\cdot\mathbf{\hat{v}}(\mathcal{S}))^{2} \tag{21}\] with \(K_{11}\), \(K_{22}\) and \(K_{33}\) respectively denoting the splay, twist and bend elastic moduli, as defined in our simulation model in Section II.E. For simplicity, we ignore any contributions due to surface elasticity and assume the rod to be infinitely long and elastic distortions to occur only along the radial direction \(\mathbf{r}_{\perp}\). Employing cylindrical coordinates \(\Phi(\mathbf{r}_{\perp})=\Phi(r,\vartheta)\), expanding up to second order in \(q\) and integrating over \(\vartheta\), we obtain for the free energy \(F_{el}\) per unit rod length: \[\frac{F_{el}}{L_{\rm c}}=\tfrac{1}{2}\int d\mathbf{r}_{\perp}\left\{\frac{K_{11}}{r^{2}}(1+\partial_{\vartheta}\Phi)^{2}+K_{33}(\partial_{r}\Phi)^{2}+\frac{(qL_{\rm c})^{2}}{12}\Delta K\left[\frac{1}{r^{2}}(1+\partial_{\vartheta}\Phi)^{2}-(\partial_{r}\Phi)^{2}\right]\right\} \tag{22}\] where \(\Delta K=K_{33}-K_{11}>0\) denotes the difference between the bend and splay moduli. The elastic anisotropy turns out to be of crucial importance since the twist correction \(\mathcal{O}(q^{2})\) vanishes in case of the one-constant approximation \(K_{11}=K_{33}=K_{22}=K\). Similarly, the surface anchoring free energy reads up to quadratic order in \(qL_{\rm c}\ll 1\): \[\frac{F_{s}}{L_{\rm c}}=-\frac{W_{0}}{2}\oint_{\mathcal{C}}d\vartheta\left\{\cos^{2}(\vartheta-\Phi)-\frac{(qL_{\rm c})^{2}}{12}\cos[2(\vartheta-\Phi)]\right\} \tag{23}\] where \(\mathcal{C}\) denotes the circular contour of the rod cross-section with diameter \(D_{\rm c}\).

Figure 8: (a)-(b) Computer-simulated LC surface anchoring energy (a) and elastic distortion energy (b) using a Q-tensor description of a chiral 5CB-based LC surrounding a homeotropic disk at different angles \(\delta\) defined in the inset. The left and right axes provide different units of energy. (c) The prediction from analytical theory for different values of LC chiral strengths \(qD_{\mathrm{c}}\). Solid lines correspond to the surface anchoring energy alone, while dots include the contribution of weak elastic distortion around the colloidal disk (see Appendix B). (d)-(f) Numerical simulation of surface energy (d) and elastic energy (e) and analytical prediction [Eq. (10), Appendix B] (f) of the free energies for homeotropic disks at different angles \(\zeta\) (defined in the inset). (g) Unit-sphere projections of the predicted local orientational probability distribution of a disk immersed in a chiral LC with various anchoring strengths \(W_{0}\). Surface anchoring strength \(W_{0}=10^{-6}\mathrm{Jm}^{-2}\) for (a)-(f) and cholesteric pitch \(p=30\mu\)m, \(qD_{\mathrm{c}}=0.21\) if not otherwise specified. Disk dimensions \(L_{\mathrm{c}}=10\)nm and \(D_{\mathrm{c}}=1\mu\)m for all simulations and calculations. The energy zero points are chosen at \(\delta=0\) or \(\zeta=0\) for clarity.
For weak distortions \(\Phi\ll 1\) we linearize in \(\Phi\) and obtain: \[\frac{F_{s}}{L_{\rm c}}\approx\frac{F_{s}^{(0)}}{L_{\rm c}}-\frac{W_{0}}{2}(1-\tfrac{1}{6}(qL_{\rm c})^{2})\oint_{\mathcal{C}}d\vartheta\sin 2\vartheta\,\Phi \tag{24}\] The first term is the contribution for the _undistorted_ director field previously analyzed: \[F_{s}^{(0)}=-\frac{L_{\rm c}W_{0}}{2}\oint_{\mathcal{C}}d\vartheta\left\{\cos^{2}\vartheta-\frac{(qL_{\rm c})^{2}}{12}\cos 2\vartheta\right\}\sim-\frac{\pi}{4}W_{0}L_{\rm c}D_{\rm c} \tag{25}\] which corresponds to Eq. (13) for a homeotropic rod aligned perpendicular to the helical axis (\(\theta=\delta=\pi/2\)) in the large pitch limit \(qL_{\rm c}\ll 1\). The second term in Eq. (24) accounts for the change of surface anchoring free energy generated by the elastic distortions. The change of elastic free energy induced by the twist follows from: \[\Delta F_{\rm twist}^{(el)}\approx\frac{1}{24}(qL_{\rm c})^{2}L_{\rm c}\Delta K\mathcal{F}[\Phi_{0}] \tag{26}\] where \(\Phi_{0}\) denotes the distortion angle for the _untwisted_ system, and: \[\mathcal{F}[\Phi_{0}]=\int d\mathbf{r}_{\perp}\left[\frac{1}{r^{2}}(1+\partial_{\vartheta}\Phi_{0})^{2}-(\partial_{r}\Phi_{0})^{2}\right] \tag{27}\] is a dimensionless quantity measuring the extent of the surface disclination surrounding the cylinder. Applying the one-constant approximation, which does not lead to qualitative changes in this context, we determine \(\Phi_{0}\) from minimizing: \[\frac{F_{el}(q=0)}{KL_{\rm c}}=\tfrac{1}{2}\int d\mathbf{r}_{\perp}\left\{\frac{1}{r^{2}}(1+\partial_{\vartheta}\Phi)^{2}+(\partial_{r}\Phi)^{2}\right\} \tag{28}\] so that \((\delta F_{el}/\delta\Phi)_{\Phi_{0}}=0\) and \(\ell_{s}=K/W_{0}\) defines the (finite) surface anchoring extrapolation length.
Functional minimization of the free energy yields the Laplace equation in polar coordinates: \[\partial_{r}^{2}\Phi_{0}+\frac{1}{r}\partial_{r}\Phi_{0}+\frac{1}{r^{2}}\partial_{\vartheta}^{2}\Phi_{0}=0 \tag{29}\] subject to the boundary conditions: \[\Phi_{0}(\infty,\vartheta)=0,\qquad\partial_{r}\Phi_{0}(D_{\rm c}/2,\vartheta)=(4\ell_{s})^{-1}\sin 2\vartheta \tag{30}\] with the latter denoting a Neumann boundary condition at the colloid surface imparted by the surface anchoring contribution Eq. (24). This ensures that the interior of the rod cross-section is excluded from the spatial integrations. The result is a simple dipolar field: \[\Phi_{0}(r,\vartheta)=-\frac{D_{\rm c}}{16\ell_{s}}\left(\frac{D_{\rm c}}{2r}\right)^{2}\sin 2\vartheta \tag{31}\] Plugging this back into Eq. (27) and integrating, we find that the difference in elastic energy between the twisted (\(\hat{\chi}\)) and untwisted (\(\hat{\tau}\)) alignment directions is independent of the surface anchoring extrapolation length \(\ell_{s}\) and increases logarithmically with system size \(\ell_{\rm max}\): \[\Delta F_{\rm twist}^{(el)}\sim\frac{\pi}{12}(qL_{\rm c})^{2}L_{\rm c}\Delta K\ln\left(\frac{2\ell_{\rm max}}{D_{\rm c}}\right) \tag{32}\] Taking \(\ell_{\rm max}=L_{\rm c}\) as a typical size cut-off and a splay-bend elastic anisotropy \(\Delta K=4\,\)pN, we find that \(\Delta F_{\rm twist}\sim\mathcal{O}(10^{2}k_{B}T)\). The change in Rapini-Papoular surface anchoring free energy associated with a twist of the director distortions reads: \[\Delta F_{\rm twist}^{(s)}\sim-\frac{\pi W_{0}L_{\rm c}D_{\rm c}}{92}\frac{D_{\rm c}}{\ell_{s}}(qL_{\rm c})^{2} \tag{33}\] which is only a fraction of the thermal energy so that the total distortion-induced free energy change is estimated from \(\Delta F_{\rm twist}\approx\Delta F_{\rm twist}^{(el)}\).
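The dipolar solution and the magnitude estimate can be verified directly. The sketch below checks Eq. (31) against Eqs. (29)-(30) symbolically and then evaluates Eq. (32) with the parameter values quoted above; room temperature is an added assumption.

```python
import numpy as np
import sympy as sp

# Symbolic check (a sketch): the dipolar field of Eq. (31) solves the Laplace
# equation (29) and the Neumann boundary condition (30) at r = Dc/2.
r, th, Dc, ls = sp.symbols("r theta D_c ell_s", positive=True)
Phi0 = -(Dc / (16 * ls)) * (Dc / (2 * r)) ** 2 * sp.sin(2 * th)

laplacian = sp.diff(Phi0, r, 2) + sp.diff(Phi0, r) / r + sp.diff(Phi0, th, 2) / r**2
assert sp.simplify(laplacian) == 0                       # Eq. (29)
bc = sp.diff(Phi0, r).subs(r, Dc / 2)
assert sp.simplify(bc - sp.sin(2 * th) / (4 * ls)) == 0  # Eq. (30)

# Numerical magnitude of Eq. (32) with the values quoted in the text:
# Delta K = 4 pN, Lc = 1.7 um, Dc = 28 nm, p = 30 um, l_max = Lc.
# Room temperature (T = 298 K) is an added assumption here.
kT = 1.380649e-23 * 298
dK, Lc_v, Dc_v, p_v = 4e-12, 1.7e-6, 28e-9, 30e-6
qv = 2 * np.pi / p_v
dF_twist = (np.pi / 12) * (qv * Lc_v) ** 2 * Lc_v * dK * np.log(2 * Lc_v / Dc_v)
assert 10 < dF_twist / kT < 1000   # of order 10^2 k_B T, as stated in the text
```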
In Appendix A we discuss an analytical model that allows us to quantify the weak elastic distortions that occur when the rod remains perpendicular to the helical axis \(\hat{\chi}\) but is allowed to display angular fluctuations in the \(\hat{\mathbf{n}}-\hat{\tau}\)-plane, as illustrated by the angular probability distribution \(f(\gamma)\) in Fig. 9. For disks, a similar model is discussed in Appendix B. There we demonstrate that the elastic distortions around the disk surface are intrinsically chiral but are very weakly developed and do not lead to qualitative changes in their realignment behavior as compared to predictions based on the Rapini-Papoular energy alone Eq. (10).

#### iii.1.6 Effective realigning potential per rod

Gathering the findings of the previous paragraph we revisit the realigning potential acting on a rod immersed in a cholesteric host. The total external potential is given by the bare Rapini-Papoular contribution Eq. (13) for the undistorted host director plus the free energy contributions from elastic distortions: \[F_{s,\text{tot}}\sim F_{s}+\Delta F_{\text{dist}} \tag{34}\] Since the distortion term cannot be resolved for any rod orientation but only for cases when the rod is aligned along either of the directions of the local frame (\(\mathbf{\hat{n}},\mathbf{\hat{\tau}},\mathbf{\hat{\chi}}\)) of the helical LC host we use the following interpolation form: \[\Delta F_{\text{dist}}(\eta,\gamma)\sim\Delta F_{\text{twist}}\sin^{2}\eta+\Delta F_{\text{tilt}}\cos^{2}\eta\sin^{2}\gamma \tag{35}\] in terms of the two angles \(\eta=\theta-\frac{\pi}{2}\) and \(\gamma=\delta-\frac{\pi}{2}\) represented in Fig. 7(c,f) and the key elastic contributions: \(\Delta F_{\rm tilt}=F(\mathbf{\hat{u}}\parallel\mathbf{\hat{n}})-F(\mathbf{\hat{u}}\parallel\mathbf{\hat{\tau}})\), associated with tilting the rod away from the \(\hat{\tau}\)-axis towards the \(\mathbf{\hat{n}}\)-direction, discussed in Appendix A, and \(\Delta F_{\rm twist}=F(\mathbf{\hat{u}}\parallel\hat{\chi})-F(\mathbf{\hat{u}}\parallel\mathbf{\hat{\tau}})\) [Eq. (32)], the energy cost associated with twisting the surface disclination wrapped along the body of the cylinder. From the analysis in the previous section, we found that \(\Delta F_{\rm twist}\) is a few hundred \(k_{B}T\) [Fig. 10(b)] whereas the elastic distortions due to tilting are much weaker (\(\Delta F_{\rm tilt}<k_{B}T\)) and may, in fact, be neglected altogether for the weak anchoring regime considered in this study (Appendix A). The elastic energy is then minimal (zero) when the rods align along the \(\mathbf{\hat{\tau}}\) directions (\(\theta^{*}=\pi/2\) and \(\delta^{*}=\pi/2\)), as observed in our experiments [Fig. 7].

Figure 10: (a) The orientational fluctuation of a cylindrical rod (grey) along angle \(\theta\) in the molecular frame. (b) Corresponding energy values are calculated based on Eq. (34) with and without the LC elastic distortion energy. The inset shows that the surface energy is overpowered by the elastic counterpart. (c) Unit sphere projection of the orientational distribution of thin rods dispersed in a chiral LC. Surface anchoring strength \(W_{0}=10^{-6}\text{Jm}^{-2}\) for (a)-(b) and cholesteric pitch \(p=30\mu\)m, \(L_{\text{c}}=1.7\mu\)m and \(D_{\text{c}}=28\)nm are used for all calculations.

Figure 9: (a)-(b) Numerical LC surface anchoring energy (a) and elastic distortion energy (b) with homeotropic rods at different angles \(\gamma\) defined in the inset. (c) The corresponding theoretical values with [Eq. (35)] or without [Eq. (15)] elastic distortion. Surface anchoring strength \(W_{0}=10^{-6}\text{Jm}^{-2}\), pitch \(p=30\mu\)m, rod size \(L_{\text{c}}=1.7\mu\)m and \(D_{\text{c}}=28\)nm are used for all simulations and calculations. The energy zero points are chosen at \(\gamma=0\) for clarity.
The best correspondence with experimental data is obtained for a surface anchoring amplitude of about \(W_{0}\sim 6\times 10^{-7}\mathrm{Jm}^{-2}\). An overview of the orientational probability distributions associated with Eq. (34), based on the Boltzmann exponent Eq. (18), is depicted in Fig. 10(c), indicating that the rod preferentially aligns along the \(\mathbf{\hat{\tau}}\)-axis with considerable orientational biaxiality developing around the main alignment direction. For the case of homeotropic rods reported in Fig. 7 we may roughly estimate the energy contribution due to the twisted disclination from the width of the distributions depicted in panels (c) and (f). For small angles \(\eta\) the Boltzmann factor of Eq. (35) translates into a simple Gaussian distribution: \[f(\eta)\propto\exp(-\Delta F_{\rm twist}\eta^{2}) \tag{36}\] and we identify a standard Gaussian \(\mathrm{FWHM}=2.355/\sqrt{2\Delta F_{\rm twist}}\). This subsequently gives \(\Delta F_{\rm twist}\approx 22k_{B}T\) for homeotropic rods with \(L_{\rm c}=1.7\mu\mathrm{m}\) and \(\Delta F_{\rm twist}\approx 76k_{B}T\) for the longer rods with \(L_{\rm c}=3\mu\mathrm{m}\), suggesting that, in both cases, the thermal motion of the rods is assuredly insufficient to overcome the energy barrier between the \(\tau\) and \(\chi\) alignment directions. The values are in qualitative agreement with the prediction from our analytical model Eq. (32), where \(\Delta F_{\rm twist}\propto L_{\rm c}^{3}\) suggests that the elastic energy cost of orienting the rods from \(\mathbf{\hat{\tau}}\) to \(\mathbf{\hat{\chi}}\) directions is indeed quite sensitive to the colloidal rod length \(L_{\rm c}\). The actual values from Eq.
(32), however, should be considered as an upper bound for \(\Delta F_{\rm twist}\), mainly because in our model the local nematic order parameter \(S_{\rm m}\) of the host is constrained at its far-field bulk value and is not allowed to relax in regions where director distortions are the largest, as observed in our experiment and simulations.

### Colloidal order parameters

In order to facilitate comparison with experimental results, we define colloidal order parameters that measure the degree of alignment of the colloids along the cholesteric helix. Taking the local molecular LC director \(\mathbf{\hat{n}}\) as a reference frame we define a colloidal uniaxial order parameter as follows: \[S_{\rm cm}=\langle\mathcal{P}_{2}(\mathbf{\hat{u}}\cdot\mathbf{\hat{n}})\rangle_{f} \tag{37}\] with \(\langle...\rangle_{f}\) denoting a thermal average, and a colloidal biaxial nematic order parameter that measures the relative orientational order with respect to the principal directions orthogonal to \(\mathbf{\hat{n}}_{h}\): \[\Delta_{\rm cm}=\langle(\mathbf{\hat{u}}\cdot\mathbf{\hat{\tau}})^{2}-(\mathbf{\hat{u}}\cdot\mathbf{\hat{\chi}})^{2}\rangle_{f} \tag{38}\] Alternatively, we can probe the orientational order from the tensorial order parameter for colloids \(\mathbf{Q}_{\rm c}=\frac{3}{2}\langle\mathbf{\hat{u}}\otimes\mathbf{\hat{u}}\rangle_{f}-\frac{1}{2}\mathbf{I}\), which measures orientational order with respect to the principal colloidal alignment direction independently from the chosen reference frame. The corresponding uniaxial and biaxial order parameters defined within the colloidal frame are denoted by \(S_{\rm cc}\) and \(\Delta_{\rm cc}\), respectively. In case of colloids aligning along the molecular director \(\mathbf{\hat{n}}\), the two frames coincide and the corresponding values of the order parameters are identical.
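As an illustration of how averages like Eqs. (37)-(38) are evaluated for a Boltzmann distribution of the form Eq. (18), the sketch below confines the rod axis near \(\mathbf{\hat{\tau}}\) with a quadratic potential whose amplitudes \(A\) (towards \(\mathbf{\hat{\chi}}\)) and \(B\) (towards \(\mathbf{\hat{n}}\)) are illustrative assumptions, not fitted values; unequal amplitudes produce finite biaxiality, equal ones do not.

```python
import numpy as np

# Sketch: evaluating order parameters of the type of Eqs. (37)-(38) for a
# Boltzmann distribution, Eq. (18). The mean rod axis is taken along tau, and
# the confinement is modeled by an illustrative quadratic potential
#   beta*F = A*(u.chi)^2 + B*(u.n)^2,
# with assumed amplitudes A > B (twisting the surface disclination is costlier
# than tilting towards n). These amplitudes are not fitted values.
def order_parameters(A, B, n_grid=600):
    th = (np.arange(n_grid) + 0.5) * np.pi / n_grid       # polar angle from tau
    ps = (np.arange(2 * n_grid) + 0.5) * np.pi / n_grid   # azimuth in (n, chi) plane
    TH, PS = th[:, None], ps[None, :]
    u_n, u_chi, u_tau = np.sin(TH) * np.cos(PS), np.sin(TH) * np.sin(PS), np.cos(TH)
    w = np.exp(-(A * u_chi**2 + B * u_n**2)) * np.sin(TH)  # Boltzmann * Jacobian
    Z = w.sum()
    S = ((1.5 * u_tau**2 - 0.5) * w).sum() / Z             # uniaxial, cf. Eq. (37)
    Delta = ((u_n**2 - u_chi**2) * w).sum() / Z            # biaxial, cf. Eq. (38)
    return S, Delta

S, Delta = order_parameters(A=76.0, B=10.0)
assert 0.8 < S < 1.0 and 0.0 < Delta < 0.2   # strong uniaxial + finite biaxial order
S_iso, Delta_iso = order_parameters(A=40.0, B=40.0)
assert abs(Delta_iso) < 1e-9                 # equal amplitudes: biaxiality vanishes
```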
To quantify the symmetry-breaking of the colloidal orientational distribution in experiments, we measured the uniaxial \(S_{\rm cc}\) and biaxial \(\Delta_{\rm cc}\) order parameters for both disks and rods (Table 1). The uniaxial order parameter \(S_{\rm cc}\), as a measure of unidirectional ordering (Eq. (37)), represents the strength of orientational confinement, which greatly depends on the synthesized materials and the ensuing surface anchoring effects. Subsequently, the non-equivalence of the axes orthogonal to the average colloidal/molecular axis is evaluated using the biaxial order parameter \(\Delta_{\rm cc}\) (Eq. (38)), with \(-1<\Delta_{\rm cc}<1\). The values of \(S_{\rm cc}\) are experimentally determined to be \(S_{\rm cc}=S_{\rm cm}=0.66\) for homeotropic disks dispersed in chiral 5CB-based LC [Fig. 5] and \(S_{\rm cc}=S_{\rm cm}=0.94\) for rods with planar boundary condition [Fig. 6]. The values of \(\Delta_{\rm cc}\) are found to be \(0.067\) and \(0.014\), respectively, showing a robust symmetry-breaking between \(\mathbf{\hat{\chi}}\) and \(\mathbf{\hat{\tau}}\) leading to biaxial orientational symmetry. When the average orientations of the two components differ, however, the choice of reference frame determines the values of the order parameters. Stronger orientational fluctuations are found for the shorter rods with homeotropic anchoring (Fig. 7 a-c) with \(S_{\rm cc}=0.70\) and \(\Delta_{\rm cc}=0.12\), while the dispersion of the longer rods showed a narrower orientational distribution with a higher uniaxial order parameter \(S_{\rm cc}=0.86\) and lower biaxiality \(\Delta_{\rm cc}=0.065\), though still higher than that measured for planar rods. The order parameters obtained are in good agreement with the analytical prediction using Eq. (19) and Eq. (35), which give \(S_{\rm cc}=0.75,\Delta_{\rm cc}=0.17\) for the shorter rods, and \(S_{\rm cc}=0.90,\Delta_{\rm cc}=0.058\) for the longer rods using the experimental parameters.
\begin{table} \begin{tabular}{c c c c c} \hline \hline Sample & \(S_{\rm cc}\) & \(\Delta_{\rm cc}\) & \(S_{\rm cm}\) & \(\Delta_{\rm cm}\) \\ \hline Homeotropic disk & 0.66 & 0.067 & 0.66 & 0.067 \\ Planar rod & 0.94 & 0.014 & 0.94 & 0.014 \\ Homeotropic rod (\(L_{\rm c}=1.7\mu\)m) & 0.70 & 0.12 & -0.26 & 0.76 \\ Homeotropic rod (\(L_{\rm c}=3.0\mu\)m) & 0.86 & 0.065 & -0.38 & 0.89 \\ \hline \hline \end{tabular} \end{table} Table 1: Colloidal order parameters measured in colloidal coordinates (c) and molecular frame (m) for each set of experiments shown in Fig. 5, Fig. 6, and Fig. 7.

The enhanced biaxiality for homeotropic rods aligning perpendicular to the molecular director \(\mathbf{\hat{n}}\) will be discussed in the following section. Calculated in the molecular reference frame, the negative values of \(S_{\mathrm{cm}}\) and large values of \(\Delta_{\mathrm{cm}}\) simply represent the geometry in which the average colloidal director lies orthogonal to the molecular one \(\mathbf{\hat{n}}\).

## IV Discussion

### Enhanced biaxial symmetry-breaking at perpendicular colloidal-molecular alignment

For colloidal rods immersed in a chiral LC, the biaxial order developed at the level of the colloids is much more pronounced for rods with homeotropic boundary condition, whose energy-favored orientation is along \(\mathbf{\hat{\tau}}\) and perpendicular to \(\mathbf{\hat{n}}\). As a consequence, the rotational symmetry (with the rotation axis being \(\mathbf{\hat{\tau}}\)) is obviously not continuous, with \(\mathbf{\hat{n}}\) being the material axis representing the actual molecular direction and \(\mathbf{\hat{\chi}}\), in contrast, an "imaginary" one. The dissimilarity and the resulting symmetry breaking are thus more pronounced than those in the case of colloids aligned along \(\mathbf{\hat{n}}\), as clearly shown above by biaxial colloidal distribution probabilities in the experimental results.
Analytical theory predicts a similar type of realignment and enhanced biaxiality to occur for homeotropic rods immersed in chiral host LCs, in which the chiral and biaxial "dressing" around the rod will also be much more pronounced than that of rods with planar or parallel boundary conditions. With the significant contribution from the elastic energy of the background molecular LC to the total free energy [Eq. (35)], we are provided with additional control over this emergent biaxiality: tuning the elasticities of the molecular host boosts the biaxiality of the hybrid LC [Eq. (26)]. Likewise, disks with planar anchoring, which could be realized through appropriate surface functionalization [31], would exhibit the equivalent perpendicular alignment (this time along the helical axis \(\mathbf{\hat{\chi}}\)), which would also give strongly enhanced biaxial order in the disk orientation distribution.

### Quadratic scaling of biaxial order parameter with chirality

In the weak molecular chirality regime, we may characterize the leading-order contribution of chirality to colloidal biaxiality by expanding the biaxial order parameter up to quadratic order in the inverse pitch \(q=2\pi/p\): \[\Delta=\Delta_{0}(qa)^{2}+\mathcal{O}[(qa)^{4}] \tag{39}\] where the length scale corresponds to the colloidal dimensions; \(a=D_{\mathrm{c}}\) for thin disks and \(a=L_{\mathrm{c}}\) for cylindrical rods. The zeroth-order term must be zero, given that no intrinsic biaxiality can be expected from purely uniaxial components at zero chirality. The linear term proportional to \(qa\) must also vanish, since the value of biaxiality should not depend on the handedness of the host material. Following the results of Eq. (18), Eq. (38), and the free energies for each type of colloid, we computationally verify the quadratic scaling \(\Delta\sim\Delta_{0}(qa)^{2}\) within the weak chirality approximation \(qa\ll 1\) [Fig. 11(a)]. 
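Since only even powers of \(q\) survive, the prefactor \(\Delta_{0}\) can be estimated from computed or measured \((qa,\Delta)\) pairs by a one-parameter least-squares fit in the variable \((qa)^{2}\). A sketch on synthetic data (the prefactor 0.9 and the quartic correction are arbitrary illustrative choices, not values from this work):

```python
import numpy as np

# Synthetic data obeying the expansion Eq. (39) in the weak-chirality
# regime qa << 1, with an illustrative prefactor and quartic correction.
Delta0_true = 0.9
qa = np.linspace(0.01, 0.1, 20)
Delta = Delta0_true * qa**2 + 0.3 * Delta0_true * qa**4

# Fit Delta = Delta0 * x with x = (qa)^2: one-parameter linear least squares.
x = qa**2
Delta0_fit = float(np.dot(x, Delta) / np.dot(x, x))
print(Delta0_fit)
```

In the weak-chirality window the quartic term is subdominant, so the fitted prefactor recovers the input value to well below a percent.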
Interestingly, the quadratic scaling of biaxial order with the parameters \(qL_{\mathrm{c}}\) or \(qD_{\mathrm{c}}\) resembles the theoretical prediction by Priest and Lubensky for a single-component molecular LC [19], in which case \(L_{\mathrm{c}}\) needs to be replaced by \(L_{\mathrm{m}}\), the size of the molecules. Despite the different derivations of biaxiality for the component material(s), the agreement between the results for our hybrid LC system and for a single-compound LC reveals the underlying physical principle, namely a close relationship between biaxial order and chirality. Most interestingly, the prefactor \(\Delta_{0}\) turns out to be very different for each system considered and has a distinct, non-trivial dependence on the surface anchoring strength [Fig. 11(b)].

### Enhanced biaxiality of the molecular host at the colloidal surface

The molecular biaxial order parameter \(\Delta_{\mathrm{m}}\) measures the broken uniaxial symmetry of the LC host (which is 5CB). It is related to the tensorial local mean-field order parameter by [48; 17]: \[\mathbf{Q}^{(\mathrm{m})}=S_{\mathrm{m}}\left(\frac{3}{2}\mathbf{\hat{n}}\otimes\mathbf{\hat{n}}-\frac{\mathbf{I}}{2}\right)+\Delta_{\mathrm{m}}\left(\frac{3}{2}\mathbf{\hat{m}}\otimes\mathbf{\hat{m}}-\frac{\mathbf{I}}{2}\right) \tag{40}\] with the molecular director field \(\mathbf{\hat{n}}\) and the biaxial director \(\mathbf{\hat{m}}\) orthogonal to each other. Here, \(S_{\mathrm{m}}\) is the scalar order parameter measuring the uni-directionality of \(\mathbf{\hat{n}}\), with \(S_{\mathrm{m}}\geq\Delta_{\mathrm{m}}\geq 0\). Accordingly, in the numerical computation, the order parameters are determined by diagonalization of the Q-tensor: \[\Delta_{\mathrm{m}} =\frac{2}{3}(\lambda_{2}-\lambda_{3})\] \[S_{\mathrm{m}} =\lambda_{1}+\Delta_{\mathrm{m}}/2 \tag{41}\] where \(\lambda_{1}>\lambda_{2}>\lambda_{3}\) are the eigenvalues of \(\mathbf{Q}^{(\mathrm{m})}\). 
The directors \(\mathbf{\hat{n}}\) and \(\mathbf{\hat{m}}\) are then found by calculating the eigenvectors corresponding to \(\lambda_{1}\) and \(\lambda_{2}\), respectively. Since the eigenvalues are interpreted as the "directionalities" along each orientation (eigenvector), the calculation of \(\Delta_{\mathrm{m}}\) in Eq. (41) corresponds exactly to finding the inequivalence of the two minor axes (\(\mathbf{\hat{m}}\) and \(\mathbf{\hat{n}}\times\mathbf{\hat{m}}\)), and the value of biaxiality is a measure of the broken rotational symmetry about \(\mathbf{\hat{n}}\), analogous to the colloidal orientation distributions illustrated above. Using numerical modeling based on the Q-tensor representation of the LC order parameters, we find \(\Delta_{\mathrm{m}}\) in the far-field helical background to be of the order of \(10^{-7}\), which is precisely the value predicted by \(\Delta_{\mathrm{m}}\sim(qL_{\mathrm{m}})^{2}\) with the size of a 5CB molecule being in the nanometer range, \(L_{\rm m}=2\,\)nm [19], showing the intrinsic biaxial order in the molecular chiral liquid crystal. Interestingly, we also discover that \(\Delta_{\rm m}\) greatly increases from \(10^{-7}\) in the far-field limit to \(10^{-4}\) or even \(10^{-3}\) near the colloidal surfaces [Fig. 12], being especially prominent in the regions where the surface anchoring force favors a molecular director alignment distinct from the helical far-field. The enhanced biaxiality induced by the colloidal particles is qualitatively interpreted as the mismatch of two axes, the particle surface anchoring orientation \(\hat{\bf v}\) and the background LC aligning direction \(\hat{\bf n}\). As a quantification of the broken uniaxial rotation symmetry about \(\hat{\bf n}\), higher values of \(\Delta_{\rm m}\) are found at particle surfaces with a greater discrepancy between the two orientations, with the maximum \(\Delta_{\rm m}\) located in regions where the surface normal director is perpendicular to the background far-field [Fig. 12]. 
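The decomposition of Eq. (40) and its inversion via Eq. (41) amount to a single eigendecomposition; the sketch below checks the round trip for illustrative inputs (a uniaxial order of \(S_{\rm m}=0.6\) and a \(\Delta_{\rm m}\) of \(10^{-4}\), i.e. the near-surface scale quoted above):

```python
import numpy as np

def q_tensor(S, Delta, n, m):
    """Construct the biaxial order parameter tensor of Eq. (40)."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    m = np.asarray(m, float) / np.linalg.norm(m)
    I = np.eye(3)
    return (S * (1.5 * np.outer(n, n) - 0.5 * I)
            + Delta * (1.5 * np.outer(m, m) - 0.5 * I))

def order_from_q(Q):
    """Recover (S_m, Delta_m) by diagonalization, Eq. (41)."""
    lam = np.sort(np.linalg.eigvalsh(Q))[::-1]  # lam1 >= lam2 >= lam3
    Delta = (2.0 / 3.0) * (lam[1] - lam[2])
    return lam[0] + Delta / 2.0, Delta

# Round trip: build Q from (S, Delta, n, m), then recover (S, Delta).
Q = q_tensor(0.6, 1e-4, n=[0, 0, 1], m=[1, 0, 0])
print(order_from_q(Q))  # recovers (0.6, 1e-4)
```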
Furthermore, within LC regions with a \(\Delta_{\rm m}\) dominated by the particle surface and far exceeding the background value \(10^{-7}\), the biaxial director \(\hat{\bf m}\) is found to coincide with the component of the surface anchoring director perpendicular to the nematic director, \(\hat{\bf m}=\hat{\bf v}-(\hat{\bf v}\cdot\hat{\bf n})\hat{\bf n}\), confirming the idea that the colloidal surface induces molecular biaxial order by introducing an energy landscape for \(\hat{\bf n}\) without uniaxial symmetry. In the absence of host chirality, biaxial order stabilized by correlations between colloidal particles immersed in nematic 5CB was reported in [18]. Furthermore, compared to pure molecular LCs without colloids, the frustrated alignment of \(\hat{\bf n}\) induced by the presence of colloidal particles also leads to a reduced bulk nematic order parameter \(S_{\rm m}\) and the formation of defects in cases with strong surface anchoring. In our systems, though, we expect the two independent contributions to the "biaxialization" of the uniaxial 5CB liquid crystal (the introduction of chiral dopant and of colloidal particles) to have negligible effects on the free energies in our analytical model, as is evident from the induced values of \(\Delta_{\rm m}\) and as has been confirmed by the numerical modeling using the tensorial order parameter \(\mathbf{Q}^{\rm(m)}\).

Figure 11: (a) Biaxial order in the colloidal orientation distribution for the weak chirality regime \(qa\ll 1\), with \(a\) the typical colloid size. The results are based on the Rapini-Papoular surface anchoring energy Eq. (6) and Eq. (18) using \(W_{0}=10^{-6}\)Jm\({}^{-2}\). (b) Dependence of the prefactor \(\Delta_{0}\), defined by \(\Delta\sim\Delta_{0}(qa)^{2}\), on the anchoring strength based on the colloidal dimensions \(L_{\rm c}=1.7\mu\)m and \(D_{\rm c}=28\)nm for the rods and \(D_{\rm c}=2\mu\)m for the disks.

Figure 12: (a)-(c) Contours of molecular biaxiality (magenta) around the colloids (gray) marking the regions with \(\Delta_{\rm m}\) larger than \(10^{-3}\) (a) and \(10^{-4}\) (b,c), respectively. The orthogonal frame defining the molecular axes is colored as in Fig. 1. Homeotropic anchoring condition is used for (a,c) and planar anchoring for (b). Surface anchoring strength \(W_{0}=10^{-6}\)Jm\({}^{-2}\) and LC helical pitch \(p=30\mu\)m is used for all simulations.

### Biaxial interpretation of chiral liquid crystals

As suggested in the section above, the intrinsic biaxiality of a chiral nematic LC allows us to define local biaxial directors even in the absence of colloidal particles. The molecular biaxial order persists, \(\Delta_{\rm m}\sim(qL_{\rm m})^{2}\), as long as the chirality \(q\), or the helicity in the director alignment, is non-vanishing. To accurately account for this unavoidable biaxiality, we modify and expand the calculation in Ref. [49; 50] for uniaxial chiral nematics, in which the chirality-associated directors (\(\mathbf{\hat{n}}\), \(\mathbf{\hat{\chi}}\), \(\mathbf{\hat{\tau}}\)) are found by diagonalizing a 3-by-3 handedness tensor \(\mathbf{H}\) defined as: \[\mathbf{H}_{ij}=\epsilon_{ikl}\mathbf{\hat{n}}_{k}\frac{\partial\mathbf{\hat{n}}_{l}}{\partial x_{j}} \tag{42}\] with summation over repeated indices assumed. The trace \(\sum_{i}\mathbf{H}_{ii}=-\mathbf{\hat{n}}\cdot(\nabla\times\mathbf{\hat{n}})\) gives the helicity of the LC director alignment field. Considering the intrinsic biaxial order in chiral LCs, we can similarly construct the handedness tensor using the molecular tensorial order parameter \(\mathbf{Q}^{\rm(m)}\): \[\mathbf{H}_{ij}=\frac{4}{9{S_{\rm m}}^{2}}\epsilon_{ikl}\mathbf{Q}_{kn}^{\rm(m)}\frac{\partial\mathbf{Q}_{ln}^{\rm(m)}}{\partial x_{j}} \tag{43}\] The uniaxial definition Eq. (42) can be recovered by expanding this expression using Eq. (40) with \(\Delta_{\rm m}=0\). 
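As a sanity check of Eq. (42), the handedness tensor can be evaluated by finite differences for an ideal helical field \(\mathbf{\hat{n}}(z)=(\cos qz,\sin qz,0)\), for which the trace should return the helicity \(-\mathbf{\hat{n}}\cdot(\nabla\times\mathbf{\hat{n}})=q\); the pitch value below is illustrative:

```python
import numpy as np

p = 30.0                      # illustrative helical pitch (arb. units)
q = 2.0 * np.pi / p
z = np.linspace(0.0, p, 3001)
n = np.stack([np.cos(q * z), np.sin(q * z), np.zeros_like(z)], axis=1)

# H_ij = eps_ikl n_k d_j n_l (Eq. (42)); for this field only d/dz survives,
# so the trace reduces to H_33 = n_x dz(n_y) - n_y dz(n_x).
dn = np.gradient(n, z, axis=0)
trace_H = n[:, 0] * dn[:, 1] - n[:, 1] * dn[:, 0]
print(trace_H.mean())  # ~ q = 2*pi/p
```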
Note that the trace of the handedness tensor again represents the helicity and is identical to the chiral part of the elastic free energy (the \(L_{4}\) term in Eq. (1)). Strikingly, in numerical simulations we discovered that the helical director field \(\mathbf{\hat{\chi}}\), computed as the eigenvector corresponding to the eigenvalue with the largest absolute value, matches everywhere the directors calculated by diagonalizing \(\mathbf{Q}^{\rm(m)}\): \(\mathbf{\hat{\chi}}=\mathbf{\hat{n}}\times\mathbf{\hat{m}}\) [Fig. 13], which also immediately implies \(\mathbf{\hat{\tau}}=\mathbf{\hat{m}}\) (note that all directors are head-tail symmetric). The excellent overlap of the two orthogonal frames, (\(\mathbf{\hat{n}}\), \(\mathbf{\hat{\tau}}\), \(\mathbf{\hat{\chi}}\)) originating from chirality and (\(\mathbf{\hat{n}}\), \(\mathbf{\hat{m}}\), \(\mathbf{\hat{n}}\times\mathbf{\hat{m}}\)) representing biaxiality, directly demonstrates the biaxial character of chiral nematic LCs. The energy minimization of \(\mathbf{Q}^{\rm(m)}\) automatically incorporates these symmetries once all degrees of freedom beyond those of pure uniaxial nematics are allowed. Consequently, one can straightforwardly identify chirality through the concomitant biaxial properties using \(q\sim\sqrt{\Delta_{\rm m}}/a\) and \(\mathbf{\hat{\chi}}=\mathbf{\hat{n}}\times\mathbf{\hat{m}}\) instead of investigating the helical twisting and spatial derivatives of the LC directors. These quantities are well-defined from the biaxiality calculation even inside LC defects with reduced uniaxial order parameter \(S_{\rm m}\). Therefore, with the chirality-driven biaxial symmetry taken into account, one can naturally analyze structures within a chiral LC using considerations similar to those derived for biaxial nematics [Fig. 13]. 
Since the theory describing the topological classification of defects and solitons in biaxial nematics, whose order parameter space is \(SO(3)/D_{2}\), is distinct from that of a uniaxial LC with its \(\mathbb{S}^{2}/\mathbb{Z}_{2}\) counterpart [3; 57; 58; 59; 30], the biaxial symmetry in a chiral LC offers an alternative interpretation of topological objects in cholesterics, differing from their more conventional description. For example, a helical configuration resembling a Bloch wall can be found across our experiments. By identifying the \(\mathbf{\hat{\chi}}\) director field within it, which is uniformly aligned, we can visualize the configuration as a one-dimensional soliton in the brick representation of the director fields [Fig. 13], with a uniform \(\mathbf{\hat{\chi}}\) field and helical twisting in the \(\mathbf{\hat{n}}\) and \(\mathbf{\hat{\tau}}\) fields. Furthermore, unlike uniaxial LCs with a single director field, biaxial systems with three orthogonal director fields cannot accommodate fully nonsingular, 2D translationally invariant solitonic structures, as implied by \(\pi_{2}(SO(3)/D_{2})=0\) [3; 58]. As shown in Fig. 13(b), a meron-like arrangement of directors is a nonsingular soliton embedded in the molecular director \(\mathbf{\hat{n}}\). The meron, or half-skyrmion, is constructed as a 2D soliton composed of a single director or vector field free of singularity [60; 61; 5]. The structure becomes, however, a singular defect in a biaxial system, as demonstrated by the emergence of singularities at the center of the \(\hat{\chi}\) and \(\hat{\tau}\) director fields orthogonal to the material director field. Similarly, 3D topological solitons, fingers, and other non-singular structures in cholesterics can be viewed as defect lines and loops in a biaxial system by thoroughly analyzing all three directors as well [62; 3]. Moreover, some phenomena of the defect and soliton structures in chiral nematics, including nonabelian disclinations and their entanglement behaviors, are elucidated only from the perspective of biaxial topological descriptions that are distinct from uniaxial topology [63; 30; 64; 3]. With the biaxial directors defined and simulated consistently with the chiral description, the biaxial features of chiral nematics, including their topological defects, solitons, and frustrated structures, can be easily and naturally explored. This opens up the possibility of using molecular-colloidal chiral nematics as model systems in the exploration of nonabelian vortices, solitonic structures with low-symmetry order parameters, etcetera.

Figure 13: (a) The director profiles simulated inside a Bloch-wall-like structure resembling a helical twist. Treated as in a uniaxial LC, \(\mathbf{\hat{n}}\) (red) and \(\mathbf{\hat{\chi}}\) (green) are calculated using the chirality tensor [49] and visualized as ellipsoids (left). The directors simulated instead with the biaxial Q-tensor are visualized using bricks (right), with red, blue, and green faces respectively corresponding to the principal \(\mathbf{\hat{n}}\), biaxial \(\mathbf{\hat{m}}\), and the third \(\mathbf{\hat{n}}\times\mathbf{\hat{m}}\) orthogonal axes [48]. (b) Numerical simulation of molecular \(\mathbf{\hat{n}}\) and helical \(\mathbf{\hat{\chi}}\) axes in a 2D meron-like structure using chirality-based (left) and biaxiality-based approaches (right).

## V Conclusion and outlook

We have explicitly demonstrated that immersing uniaxial, non-chiral colloidal rods and disks into a low-molecular-weight cholesteric liquid crystal host leads to emergent biaxial order that we identify at both colloidal and molecular levels by combining experiment with numerical simulation and analytical theory [Fig. 14]. 
Unlike the previously studied case of hybrid molecular-colloidal biaxial phases [16; 17; 18], we observe multi-level biaxial symmetry-breaking at ultralow colloidal content where colloid-colloid interactions are negligible. By exploring a variety of colloidal shapes and surface anchoring symmetries we report biaxial order emerging at three distinct levels. First, molecular director distortions develop around each colloid which, although being of marginal extent because of weak surface anchoring conditions, display a distinct two-fold signature imparted by the cholesteric host. Second, the orientational distribution of the colloids around the local cholesteric director is demonstrated to adopt a clear biaxial signature, and the response of the corresponding biaxial order parameter is found to depend non-trivially upon the surface anchoring strength as well as on the ratio of the cholesteric pitch and the principal colloidal dimension (rod length or disk diameter). Finally, at the molecular scale, we demonstrate that enhanced biaxiality emerges close to the colloidal surface at levels strongly exceeding those expected for purely molecular cholesterics. A particularly striking manifestation of biaxial symmetry-breaking is encountered for thermotropic cholesterics doped with colloidal rods with homeotropic surface anchoring. Driven by a combination of surface anchoring forces and an energy penalty incurred by twisting a weakly developed surface disclination along the rod main axis, these rods have a strong tendency to align perpendicular to both the helical axis and the local cholesteric director, thus imparting a two-fold \(D_{2h}\) orientational symmetry onto the hybrid system at each point along the cholesteric helix. 
By means of numerical minimization of the Landau-de Gennes energy and a mean-field theory based on the Rapini-Papoular surface anchoring energy, we have revealed that the multi-level expression of emergent biaxiality in our systems is already manifest at ultralow colloid concentrations, essentially as a single-colloid effect, and we find consistent agreement between our predictions from modeling and the experimental observations. Our results pave the way towards controlled biaxial order at both colloidal and molecular levels. By harnessing the interplay of chiral and biaxial symmetries, future research efforts could be directed along several emergent avenues. At larger colloidal concentrations, a richer phenomenology could be expected and explored due to the more prominent roles played by steric, electrostatic, or defect-mediated colloid-colloid interactions, further enriching the surface anchoring and elastic forces discussed here. Besides the emergent symmetry breaking discussed here, one could, in principle, also apply electric or magnetic fields to reconfigure either the molecular or colloidal sub-systems, or both, to achieve even lower externally induced symmetries of LCs, for instance, corresponding to triclinic or monoclinic point groups. Finally, by realizing topological solitons in the molecular-colloidal hybrid system with nontrivial chirality and biaxiality, one could reveal the stability of topological structures for various low-symmetry order parameter spaces. While ferromagnetic colloidal particle dispersions have already provided insight into the possibility of formation of solitons in polar chiral liquid crystals [65], this study could be extended to symmetries differing from those of nonpolar and polar uniaxial LCs, for example, by exploring multi-dimensional solitonic structures corresponding to the \(SO(3)/D_{2}\) order parameter space.

###### Acknowledgements.

We acknowledge discussions with M. Bowick, T. Lee, T.C. Lubensky, B. 
Senyuk, M. Ravnik and M. Tasinkevych. We are grateful to T.C. Lubensky for providing helpful suggestions and feedback on the initial versions of this manuscript. The experimental and numerical simulations research at University of Colorado Boulder was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under contract DE-SC0019293 with the University of Colorado at Boulder. M.T.L. and H.H.W. acknowledge financial support from the French National Research Agency (ANR) under grant ANR-19-CE30-0024 "ViroLego". I.I.S. acknowledges the support of the International Institute for Sustainability with Knotted Chiral Meta Matter at Hiroshima University in Japan during part of his sabbatical stay, as well as the hospitality of the Kavli Institute for Theoretical Physics in Santa Barbara, when he was partially working on this manuscript. This research was also supported in part by the National Science Foundation under Grant No. NSF PHY-1748958 (I.I.S. and J.-S.W.).

## Appendix A Elastic distortions around the rod surface for \(\mathbf{\hat{u}}\perp\mathbf{\hat{\chi}}\)

In order to complete our understanding of the strength of the elastic distortions surrounding the main section of a thin rod, we now focus on the case where a rod is perpendicular to the helical axis \(\mathbf{\hat{\chi}}\) and aligned at an angle \(\gamma\) away from the \(\mathbf{\hat{\tau}}\)-axis. This situation is depicted in Fig. 9(a). 
Since the rod is perpendicular to the helical axis \(\mathbf{\hat{\chi}}\), we may ignore the effect of chiral twist and parameterize the host director field within a Cartesian reference frame spanned by the tripod \((\mathbf{\hat{x}},\mathbf{\hat{y}},\mathbf{\hat{z}})\) with \(\mathbf{\hat{z}}=\mathbf{\hat{\chi}}\): \[\mathbf{\hat{n}}_{h}(\mathbf{r})=\mathbf{\hat{x}}\cos\Phi(\mathbf{r})\cos\epsilon(\mathbf{r})+\mathbf{\hat{y}}\sin\Phi(\mathbf{r})\cos\epsilon(\mathbf{r})+\mathbf{\hat{z}}\sin\epsilon(\mathbf{r}) \tag{44}\] As before, we ignore end effects and express the spatial variation of the distortion angles in polar coordinates, i.e. \(\Phi(r,\vartheta)\) and \(\epsilon(r,\vartheta)\), which parameterize space in the lab frame. In principle, the Euler-Lagrange expressions emerging from minimizing the elastic free energy are strongly coupled and cannot be solved analytically, even in the case of weak surface anchoring. We expect, however, that a tilted rod will mostly experience distortions along its main axis \(\mathbf{\hat{\chi}}\), expressed by a non-zero \(\epsilon\), while the director deviations \(\Phi\) surrounding the lateral cross-section of the rod remain far less affected by the rotation. We can then pursue a hybrid route by constraining \(\Phi=\Phi_{0}\) to its solution for the perpendicular case, Eq. (30), and minimizing the free energy only with respect to \(\epsilon\). To render the model analytically tractable, we assume that the rod cross-section along which the director distortions are expected to occur is curvature-free and can be described by a strip of length \(L_{s}\) and width \(D_{s}\ll L_{s}\). We define a tilt angle \(\gamma=\delta-\frac{\pi}{2}\) (with \(0<\gamma<\pi/2\)) so that \(\gamma=0\) corresponds to the case where the rod points perpendicular to the LC host director \(\mathbf{\hat{n}}\). All distances are normalized in terms of the colloidal rod diameter \(D_{\mathrm{c}}\). 
Figure 14: Our molecular-colloidal hybrid system with emergent biaxial symmetry consists of purely uniaxial building blocks. The chirality effects at different scales yield effective behavior of a biaxial chiral molecular-colloidal LC.

The distortions are then described by the 2D Laplacian: \[(\partial_{x}^{2}+\partial_{y}^{2})\epsilon=0 \tag{45}\] The general solution reads: \[\epsilon(x,y)=\sum_{n=1}^{\infty}e^{-n\pi x}[a_{n}\cos(n\pi y)+b_{n}\sin(n\pi y)] \tag{46}\] which vanishes in the far-field limit \(\epsilon(x\rightarrow\infty)=0\). The Rapini-Papoular surface anchoring free energy reads: \[\frac{F_{s}}{KL_{\mathrm{c}}}=-\frac{1}{2\ell_{s}}\int_{0}^{1}dy\cos^{2}(\gamma-\epsilon(0,y)) \tag{47}\] which translates into the following boundary condition at the surface of the strip located at \(x=0\): \[\partial_{x}\epsilon(0,y)=\frac{1}{4\ell_{s}}\sin[2(\gamma-\epsilon(0,y))] \tag{48}\] Further, for symmetry reasons we require the distortion angle to vanish at both sides of the strip: \[\epsilon(0,0)=\epsilon(0,1)=0 \tag{49}\] which implies that \(a_{n}=0\). The coefficients \(b_{n}\) need to be resolved from: \[\frac{n\pi}{2}b_{n}=\frac{1}{4\ell_{s}}\int_{0}^{1}dy\sin(n\pi y)\sin\left[2\left(\gamma-\sum_{k=1}^{\infty}b_{k}\sin(k\pi y)\right)\right] \tag{50}\] For small tilt angles \(\gamma\ll 1\) the distortions are expected to be weak, \(\epsilon\ll 1\), so that we linearize \(\sin 2(\gamma-\epsilon)\approx 2(\gamma-\epsilon)\). 
This enables us to resolve the coefficients analytically: \[b_{n}=\left(\frac{1-(-1)^{n}}{(n\pi)^{2}}\right)\frac{\gamma}{\ell_{s}} \tag{51}\] The free energy increase induced by the elastic distortions is given by: \[\Delta F_{el}=\frac{\pi KL_{\rm c}}{4}\sum_{n=1}^{\infty}nb_{n}^{2} \tag{52}\] which in the linearized regime for small \(\gamma\) gives a simple analytical result: \[\Delta F_{el}=\frac{7KL_{\rm c}}{8\pi^{3}}\zeta(3)\left(\frac{\gamma}{\ell_{s} }\right)^{2} \tag{53}\] with \(\zeta(3)\approx 1.2\) a constant from the Riemann-Zeta function \(\zeta(x)\). The surface anchoring free energy reads: \[F_{s}=-\frac{L_{\rm c}D_{\rm c}W_{0}}{2}2\int_{0}^{1}dy\cos^{2}(\gamma- \epsilon(0,y)) \tag{54}\] where the factor two reflects the two opposing sides of the rectangular strip with surface \(LD\) whose contributions are equivalent. Then, in the absence of elastic distortions and no tilt (\(\gamma=0\)) the surface anchoring free energy would simply be \(F_{s}=-L_{\rm c}D_{\rm c}W_{0}\) which only marginally differs from the result for the cylindrical case \(F_{s}=-(\pi/4)L_{\rm c}D_{\rm c}W_{0}\). Within the linearized regime for small tilt angles \(\gamma\ll 1\) the change in surface anchoring free energy imparted by the elastic distortions is given by: \[\Delta F_{s} \approx L_{\rm c}D_{\rm c}W_{0}\int_{0}^{1}dy(\gamma-\epsilon(0,y))^{2}\] \[\approx W_{0}L_{\rm c}D_{\rm c}\left(1+\frac{1}{48\ell_{s}^{2}}-\frac{7 \zeta(3)}{\pi^{3}\ell_{S}}\right)\gamma^{2} \tag{55}\] This expression along with Eq. (53) clearly reflects the basic trade-off between surface anchoring and elasticity where the cost in elastic free energy is partly compensated by a reduction of the surface anchoring free energy (last term). 
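The closed form Eq. (53) can be verified directly against the truncated mode sum of Eq. (52) with the coefficients of Eq. (51); energies are in units of \(KL_{\rm c}\), and the values of \(\gamma\) and \(\ell_{s}\) below are illustrative:

```python
import numpy as np

gamma, ell_s = 0.2, 3.0
n = np.arange(1, 200001)

# Eq. (51): only odd modes contribute.
b_n = (1.0 - (-1.0)**n) / (n * np.pi)**2 * (gamma / ell_s)

# Eq. (52): elastic free energy as a mode sum (units of K*L_c).
dF_modes = (np.pi / 4.0) * np.sum(n * b_n**2)

# Eq. (53): closed form, with zeta(3) evaluated by direct summation.
zeta3 = np.sum(1.0 / n**3)
dF_closed = 7.0 * zeta3 / (8.0 * np.pi**3) * (gamma / ell_s)**2
print(dF_modes, dF_closed)  # the two agree
```

The odd-mode sum reproduces the prefactor \(7\zeta(3)/8\pi^{3}\) exactly, confirming the analytical reduction.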
The total free energy change for small tilt angles now reads: \[\Delta F_{\rm tot}\sim W_{0}L_{\rm c}D_{\rm c}\left(1-\frac{49\zeta(3)}{8\pi^{3}\ell_{s}}\right)\gamma^{2}+{\cal O}(\gamma^{2}/\ell_{s}^{2}) \tag{56}\] Let us now compare our results with the simple Rapini-Papoular expression Eq. (13) in the _absence_ of elastic distortions. Taking \(\theta=\pi/2\) and expanding for small \(\gamma\) we find: \[\Delta F_{\rm tot}^{(s)}\sim\frac{\pi}{4}W_{0}L_{\rm c}D_{\rm c}\gamma^{2} \tag{57}\] Disregarding the trivial curvature prefactor \(\pi/4\) in the last expression, we find that the impact of the elastic distortions is rather marginal, since the correction term in Eq. (56) is less than \(1\ k_{B}T\). Numerical resolution of Eq. (50) reveals that weak elastic distortions occur mostly when the rod is at an oblique angle \(\gamma=\pi/4\). The predictions from our analytical model are depicted in Fig. 9(c).

## Appendix B Elastic distortions around the disk surface

Ignoring elastic distortions, we find that disks with homeotropic surface anchoring tend to orient along the local molecular director \(\mathbf{\hat{n}}\), as observed in experiment. This is the optimal situation, incurring the least amount of elastic distortions; along the other principal directions the disk surface would experience strongly unfavorable tangential surface ordering. However, even when the disk normal is aligned along the local nematic director, there are local mismatches between the far-field and preferred surface director, due to the weak twisting of the host director along the helix axis \(\hat{\chi}\) and when the disk normal fluctuates away from its equilibrium orientation. The elastic distortions are expected to be weak, but they will become more pronounced at shorter cholesteric pitches. It is instructive to compute the extent of these distortions along the lines of our previous analysis for rods. 
Let us consider an infinitely thin disk with its normal pointing along \(\mathbf{\hat{n}}\) and rotated by an angle \(\delta\) about the helix axis \(\hat{\chi}\), so that the disk normal is restricted to lie in the plane perpendicular to it. We assume weak elastic distortions \(\Phi\) developing in this plane. Defining a host director in the Cartesian lab frame, \(\mathbf{\hat{n}}_{h}=\mathbf{\hat{x}}\cos\Phi(x,y)+\mathbf{\hat{y}}\sin\Phi(x,y)\), we find, assuming elastic isotropy, that the distortions are described by the Laplace equation: \[(\partial_{x}^{2}+\partial_{y}^{2})\Phi=0 \tag{58}\] The effect of a twisting host director is accounted for through the surface anchoring free energy: \[F_{s}=-\frac{W_{0}}{2}\oint d\mathcal{S}[\mathbf{\hat{n}}_{h}\cdot(\mathcal{R}(qz+\delta)\cdot\mathbf{\hat{v}}(\mathcal{S}))]^{2} \tag{59}\] where \(\mathcal{S}\) parameterizes the face of the disk (as previously, we ignore finite-thickness effects for disks with \(D_{\rm c}\gg L_{\rm c}\)) and \(\mathbf{\hat{v}}=(1,0,0)\) indicates homeotropic anchoring along the surface normal. The rotation matrix reads: \[\mathcal{R}(qz+\delta)=\begin{pmatrix}\cos(qz+\delta)&-\sin(qz+\delta)&0\\ \sin(qz+\delta)&\cos(qz+\delta)&0\\ 0&0&1\end{pmatrix} \tag{60}\] A key distinction from the rod case discussed previously is that the distortions are not uniform across the disk surface but depend on the location of the surface element with respect to the helical axis. It is convenient to divide the disk surface into infinitely thin strips, with each surface element on a strip being equidistant from the centre-of-mass along the helical axis \(\hat{\chi}\), thus experiencing the same degree of elastic distortions. For notational brevity, we implicitly normalize all lengths in units of the disk diameter \(D_{\rm c}\) and parameterize the disk surface in terms of \(y=\frac{1}{2}\cos\alpha\) and \(z=\frac{1}{2}\sin\alpha\) with \(-\pi<\alpha<\pi\). 
Each strip then has length \(L_{s}=\cos\alpha\), thickness \(D_{s}=\frac{1}{2}\cos\alpha d\alpha\), and surface \(ds=L_{s}D_{s}\). The surface anchoring free energy of an arbitrary strip with surface \(ds\) and centre-of-mass distance \(z\) then reads: \[F_{s}^{\rm strip}=-W_{0}[\cos(\Phi(0,y)-qz-\delta)]^{2}ds \tag{61}\] The boundary condition at the surface of the strip located at \(x=0\) (illustrated here for the strip at the disk equator, \(\alpha=0\)) reads: \[\Phi(\infty,0) =0\] \[\ell_{s}\partial_{x}\Phi(0,y) =-\frac{1}{2}\sin[2(\Phi(0,y)-qz-\delta)]\] \[\approx\frac{1}{2}\sin[2(qz+\delta)]-\cos[2(qz+\delta)]\Phi(0,y) \tag{62}\] where we take \(0<y<1\) for convenience. The distortions should be symmetric at the edges (\(\Phi(0,0)=\Phi(0,1)\)). The general solution of the Laplace equation Eq. (58) reads: \[\Phi(x,y)=\sum_{n=1}^{\infty}e^{-n\pi x}b_{n}\sin(n\pi y) \tag{63}\] Applying the boundary conditions, we obtain the following expression for the coefficients: \[b_{n}=\frac{\sin[2(qz+\delta)]}{\cos[2(qz+\delta)]-n\pi\ell_{s}}\left(\frac{1-(-1)^{n}}{n\pi}\right) \tag{64}\] Given that \(q\) and \(-q\) do not give equivalent results, we conclude that the distortions created near the disk surface carry a distinct chiral signature imparted by the chirality of the host LC, as evidenced by the Landau-de Gennes simulations [Fig. 1 and Fig. 12]. The nature of the imprint depends on the twist angle \(\delta\) between the disk normal and the molecular director \(\mathbf{\hat{n}}\). We further deduce that the distortions vanish at infinitely weak surface anchoring (\(\ell_{s}\rightarrow\infty\)) and in the absence of twist and tilting (\(q=0\) and \(\delta=0\)), as we expect. 
The elastic free energy for the total disk is given by: \[\Delta F_{el}=\frac{\pi KD_{\rm c}}{4}\int_{-\pi/2}^{\pi/2}d\alpha\cos\alpha \sum_{n}nb_{n}^{2} \tag{65}\] which may be evaluated as a function of the angle \(\delta\) between the disk normal and the molecular director taking the surface anchoring extrapolation length (in units of the disk diameter \(D_{\rm c}\)) to be about \(\ell_{s}\approx 3\). The change in surface anchoring free energy induced by the distortions follows from linearizing Eq. (61) and integrating over all strips: \[\Delta F_{s} =\frac{W_{0}D_{\rm c}^{2}}{2}\int_{-\pi/2}^{\pi/2}d\alpha\cos^{2} \alpha\sin[2(qz+\delta)]\] \[\times\sum_{n}b_{n}\left(\frac{1-(-1)^{n}}{n\pi}\right) \tag{66}\] We reiterate that \(z\) depends on the angle \(\alpha\) via \(z=\frac{D_{\rm c}}{2}\sin\alpha\). We finish our analysis by considering the case where the disk normal rotates over the \(\hat{\tau}\)-axis by an angle \(\zeta\). This is equivalent to the situation depicted in Fig. 3(c) and (d). In this situation, the tilting will generate additional weak LC director distortions across the \(\hat{\chi}\)-direction that we denote by the angle \(\epsilon\). The spatially-dependent host director now reads: \[\mathbf{\hat{n}}_{h}(\mathbf{r})=\begin{pmatrix}\cos\Phi(\mathbf{r})\cos \epsilon(\mathbf{r})\\ \sin\Phi(\mathbf{r})\cos\epsilon(\mathbf{r})\\ \sin\epsilon(\mathbf{r})\end{pmatrix} \tag{67}\] with \(\mathbf{r}=(x,y)\). Each distortion angle obeys the Laplace equation in the \(\mathbf{\hat{n}}-\hat{\tau}\)-plane: \[(\partial_{x}^{2}+\partial_{y}^{2})\Phi =0\] \[(\partial_{x}^{2}+\partial_{y}^{2})\epsilon =0 \tag{68}\] The surface anchoring free energy now takes the following form: \[F_{s}=-\frac{W_{0}}{2}\oint d\mathcal{S}[\mathbf{\hat{n}}_{h}\cdot(\mathcal{R }_{\zeta}\mathcal{R}(qz)\cdot\mathbf{\hat{v}}(\mathcal{S}))]^{2} \tag{69}\] where the matrix \(\mathcal{R}_{\zeta}\) describes a rotation of the disk normal over the \(\hat{\tau}\)-axis (cf. Fig. 
3(c)): \[\mathcal{R}_{\zeta}=\begin{pmatrix}\cos\zeta&0&\sin\zeta\\ 0&1&0\\ -\sin\zeta&0&\cos\zeta\end{pmatrix} \tag{70}\] Analogous to the previous case, we may derive boundary conditions from linearizing \(F_{s}\) for weak distortions \(\Phi\ll 1\) and \(\epsilon\ll 1\). Plugging in the general solution [Eq. (63)] and defining \(b_{n}\) as the distortion modes pertaining to \(\Phi(x,y)\) and \(d_{n}\) as those for \(\epsilon(x,y)\) we find that both distortion angles are intricately coupled, as expected: \[b_{n} =c_{n}\cos\zeta\sin(2qz)\] \[d_{n} =c_{n}\sin(2\zeta)\cos^{2}(qz) \tag{71}\] From these we immediately recover the most basic scenarios: both distortions vanish for a disk in an achiral host (\(q=0\)) at zero tilt (\(\zeta=0\)), whereas at nonzero tilt angle only \(\epsilon(d_{n})\) is nonzero. For a disk immersed in a chiral host (\(q\neq 0\)) at zero tilt (\(\zeta=0\)) we recover the previous scenario with \(\Phi(b_{n})\) given by Eq. (64) and \(\epsilon(d_{n})=0\). Both distortion angles are expected to be nonzero in case the disk normal is tilted away from the local director of the chiral host. The common prefactor reads: \[c_{n}=\frac{2\left(\frac{1-(-1)^{n}}{n\pi}\right)}{1+2\ell_{s}n\pi-\cos(2\zeta)-2\cos^{2}\zeta\cos(2qz)} \tag{72}\] The change in elastic free energy is a simple superposition of amplitudes: \[\Delta F_{el}=\frac{\pi KD_{\mathrm{c}}}{4}\int_{-\pi/2}^{\pi/2}d\alpha\cos\alpha\sum_{n}n\left(b_{n}^{2}+d_{n}^{2}\right) \tag{73}\] The contribution arising from the host chirality turns out to be zero for symmetry reasons: \[\Delta F_{\mathrm{chiral}}=Kq\int d\mathbf{r}\partial_{y}\epsilon(x,y)=0 \tag{74}\] which is easily inferred from inserting the expansion Eq. (63) and integrating over \(y\).
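The limiting behaviour of the coupled modes follows directly from Eqs. (71)-(72) and can be confirmed numerically (a sketch with illustrative parameter values, not from the original text):

```python
import numpy as np

def modes(n, q, z, zeta, ell_s):
    # Coupled distortion modes b_n, d_n of Eq. (71) with prefactor c_n of Eq. (72)
    c = (2 * (1 - (-1.0) ** n) / (n * np.pi)
         / (1 + 2 * ell_s * n * np.pi - np.cos(2 * zeta)
            - 2 * np.cos(zeta) ** 2 * np.cos(2 * q * z)))
    b = c * np.cos(zeta) * np.sin(2 * q * z)
    d = c * np.sin(2 * zeta) * np.cos(q * z) ** 2
    return b, d

# illustrative evaluation: chiral host (q != 0) with tilted disk normal (zeta != 0)
b, d = modes(np.arange(1, 51), 2.0, 0.2, 0.4, 3.0)
```

The checks below confirm the scenarios listed above: both distortions vanish for an achiral host at zero tilt, only \(\Phi(b_n)\) survives at zero tilt in a chiral host, only \(\epsilon(d_n)\) survives for a tilted disk in an achiral host, and all even modes vanish.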
The reduction in surface anchoring free energy caused by the distortions \(\Phi\) is as follows: \[\Delta F_{\mathrm{s},\Phi}=W_{0}D_{\mathrm{c}}^{2}\cos\zeta\int_{-\pi/2}^{\pi/2}d\alpha\cos^{2}\alpha\sin(2qz)\] \[\times\sum_{n}b_{n}\left(\frac{1-(-1)^{n}}{n\pi}\right) \tag{75}\] supplemented with a similar contribution accounting for the distortions \(\epsilon\): \[\Delta F_{\mathrm{s},\epsilon}=W_{0}D_{\mathrm{c}}^{2}\sin(2\zeta)\int_{-\pi/2}^{\pi/2}d\alpha\cos^{2}\alpha\cos^{2}(qz)\] \[\times\sum_{n}d_{n}\left(\frac{1-(-1)^{n}}{n\pi}\right) \tag{76}\] We find that the change in surface anchoring free energy is always negative and outweighs the cost in elastic free energy, thus lowering the overall free energy of the system, as it should. The results are shown in Fig. 8(c) and (f). We find that the elastic distortions are most developed at oblique orientations (\(\delta\) or \(\zeta\approx\pi/4\)) and do not strongly depend on the direction along which the disk is rotated. If we now reconsider the total alignment potential for disks accounting for the corrections derived above, we conclude that the ordering of the disks is hardly affected by the distortions. The free energy changes are typically several tens of \(k_{B}T\), which is about two orders of magnitude smaller than the typical Rapini-Papoular surface anchoring free energy \(W_{0}D_{\mathrm{c}}^{2}\), which is about 1500 \(k_{B}T\). Disks experiencing weak surface anchoring with a cholesteric host with large pitch (\(qD_{\mathrm{c}}<1\)) will therefore simply follow the local molecular director, with thermal fluctuations around the optimum angle being strongly suppressed. The considerable penalty incurred by angular fluctuations away from the local cholesteric director is demonstrated in Fig. 8(c) and (f) for a number of different host pitches.
Although the presence of elastic distortions around the disk surface leads to a systematic reduction of the total free energy, their effect on the realigning properties of a colloidal disk immersed in a cholesteric host LC seems rather marginal.
2301.04948
Discrimination and certification of unknown quantum measurements
We study the discrimination of von Neumann measurements in the scenario when we are given a reference measurement and some other measurement. The aim of the discrimination is to determine whether the other measurement is the same as the first one. We consider the cases when the reference measurement is given without the classical description and when its classical description is known. Both cases are studied in the symmetric and asymmetric discrimination setups. Moreover, we provide optimal certification schemes enabling us to certify a known quantum measurement against the unknown one.
Aleksandra Krawiec, Łukasz Pawela, Zbigniew Puchała
2023-01-12T11:38:24Z
http://arxiv.org/abs/2301.04948v3
# Discrimination and certification of unknown quantum measurements ###### Abstract. We study the discrimination of von Neumann measurements in the scenario when we are given a reference measurement and some other measurement. The aim of the discrimination is to determine whether the other measurement is the same as the first one. We consider the cases when the reference measurement is given without the classical description and when its classical description is known. Both cases are studied in the symmetric and asymmetric discrimination setups. Moreover, we provide optimal certification schemes enabling us to certify a known quantum measurement against the unknown one. ## 1. Introduction The need for appropriate certification tools is one of the barriers to the development of large-scale quantum technologies [1]. In this work, we propose tests that verify whether a given device corresponds to its classical description or to the reference device. But why should we care about the discrimination of devices whose description we do not know? A lot is known about discrimination of quantum states, channels and measurements whose description we do know. In the standard discrimination problem, there are two quantum objects, and one of them is secretly chosen. The goal of discrimination is to decide which of the objects was chosen. These objects can be quantum states but also quantum channels and measurements. However, what if we were given a reference quantum measurement or channel instead of its classical description? Then we may want to discriminate them regardless of their classical descriptions. Therefore, we arrive at the new problem of discrimination of unknown objects. Discrimination of known quantum channels was mainly studied for certain classes of channels like unitary channels [2, 3, 4]. The advantage of using entangled states for minimum-error discrimination of quantum channels was studied in [5, 6].
General conditions when quantum channels can be discriminated in the minimum error, unambiguous and asymmetric scenarios were derived in [7], [8] and [9] respectively. Another formalism used for studying discrimination of quantum channels is based on process POVM (PPOVM) [10]. It was applied to discrimination of unitary channels in [11, 12]. Discrimination of unknown unitary channels was first studied in the work [13] in both minimum-error and unambiguous setups, where the authors calculated the probability of successful minimum-error discrimination between two random qubit unitary channels. Throughout this work, a von Neumann measurement is identified with a quantum channel which outputs a diagonal matrix where the \(i\)-th entry on the diagonal corresponds to the probability of obtaining the \(i\)-th label. The Choi-Jamiolkowski representation of a quantum operation \(\Psi\in\mathcal{T}(\mathcal{X})\) is defined as \(J\left(\Psi\right)\coloneqq\left(\Psi\otimes\openone_{\mathcal{X}}\right)(|\openone\rangle\!\rangle\langle\!\langle\openone|)\), where \(\openone_{\mathcal{X}}\) is the identity channel on the space \(\mathcal{L}(\mathcal{X})\) and \(|X\rangle\!\rangle\) denotes the (lexicographical) vectorization of the operator \(X\). The diamond norm of a quantum operation \(\Psi\in\mathcal{T}(\mathcal{X})\) is defined as \[\|\Psi\|_{\diamond}\coloneqq\max_{X:\|X\|_{1}=1}\left\|\left(\Psi\otimes\openone_{\mathcal{X}}\right)(X)\right\|_{1}, \tag{2}\] where \(\openone_{\mathcal{X}}\) is, as previously, the identity channel on the space \(\mathcal{L}(\mathcal{X})\). We will often use the bounds on the diamond norm [22, 23] \[\frac{1}{d}\|J(\Psi)\|_{1}\leq\|\Psi\|_{\diamond}\leq\|\operatorname{Tr}_{1}|J(\Psi)|\|. \tag{3}\] In this work we will focus on two approaches to discrimination of quantum measurements, which are symmetric and asymmetric discrimination.
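As an illustration of the bounds in Eq. (3) (a numpy sketch of ours, not from the paper): for any channel, trace preservation forces both bounds to the value 1, shown here for the completely dephasing channel \(\Delta\), whose Choi matrix is \(\sum_i |ii\rangle\!\langle ii|\):

```python
import numpy as np

d = 3
# Choi matrix of the completely dephasing channel Delta(X) = sum_i <i|X|i> |i><i|,
# i.e. J(Delta) = (Delta ⊗ Id)(|1>><<1|) = sum_i |ii><ii|
J = np.zeros((d * d, d * d))
for i in range(d):
    e = np.zeros(d * d)
    e[i * d + i] = 1.0
    J += np.outer(e, e)

lower = np.sum(np.abs(np.linalg.eigvalsh(J))) / d      # (1/d) ||J||_1
# J is positive semidefinite, so |J| = J; trace out the first (output) subsystem
Tr1_absJ = np.einsum('iaib->ab', J.reshape(d, d, d, d))
upper = np.linalg.norm(Tr1_absJ, 2)                    # ||Tr_1 |J||| (spectral norm)
```

Both `lower` and `upper` evaluate to 1, the diamond norm of any channel, so the sandwich of Eq. (3) is tight here.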
### Symmetric discrimination The goal of symmetric discrimination is to maximize the probability of correct discrimination. It is also known as minimum-error discrimination. The schematic representation of symmetric discrimination of quantum measurements is depicted in Figure 1. There are two black boxes. In the first black box there is a measurement \(\mathcal{P}_{0}\). In the second box there is a measurement \(\mathcal{P}_{?}\), which can be either the same measurement \(\mathcal{P}_{0}\) or some other measurement, \(\mathcal{P}_{1}\). In other words \(\mathcal{P}_{?}\in\{\mathcal{P}_{0},\mathcal{P}_{1}\}\). As the input state to the discrimination procedure we take a state \(|\psi\rangle\in\mathcal{X}\otimes\mathcal{Y}\otimes\mathcal{Z}\) and we will write \(\psi\coloneqq|\psi\rangle\!\langle\psi|\) for the sake of simplicity. The measurement in the first black box acts on the register \(\mathcal{X}\) and the second black box acts on the register \(\mathcal{Y}\). Based on the outcomes of both measurements in the black boxes, we prepare a final measurement on the register \(\mathcal{Z}\). Having the output of the final register, we make a decision whether \(\mathcal{P}_{?}=\mathcal{P}_{0}\) or \(\mathcal{P}_{?}=\mathcal{P}_{1}\). To calculate the probability of the successful discrimination between quantum measurements, we will make use of the Holevo-Helstrom theorem. It states that the optimal probability of successful discrimination between any quantum channels \(\Psi_{0}\) and \(\Psi_{1}\in\mathcal{C}(\mathcal{X})\) is upper-bounded by \[p_{succ}\leq\frac{1}{2}+\frac{1}{4}\left\|\Psi_{0}-\Psi_{1}\right\|_{\diamond} \tag{4}\] and this bound can be saturated. This optimal probability of successful discrimination will be denoted \(p_{succ}^{H}\coloneqq\frac{1}{2}+\frac{1}{4}\left\|\Psi_{0}-\Psi_{1}\right\|_{\diamond}\). Figure 1.
Entanglement-assisted discrimination of von Neumann measurements ### Asymmetric discrimination Asymmetric discrimination is based on hypothesis testing. The null hypothesis \(H_{0}\) corresponds to the situation when \(\mathcal{P}_{?}=\mathcal{P}_{0}\). The converse situation, \(\mathcal{P}_{?}=\mathcal{P}_{1}\), corresponds to the alternative hypothesis \(H_{1}\). The scheme of asymmetric discrimination is as follows. We begin with preparing an input state \(\ket{\psi}\in\mathcal{X}\otimes\mathcal{Y}\otimes\mathcal{Z}\) and apply \(\mathcal{P}_{0}\) and \(\mathcal{P}_{?}\) on registers \(\mathcal{X}\) and \(\mathcal{Y}\) respectively. Therefore, in the case when \(\mathcal{P}_{?}=\mathcal{P}_{0}\), we obtain as the output \(\left(\mathcal{P}_{0}\otimes\mathcal{P}_{0}\otimes\openone\right)(\psi)\) and if \(\mathcal{P}_{?}=\mathcal{P}_{1}\), then the output state yields \(\left(\mathcal{P}_{0}\otimes\mathcal{P}_{1}\otimes\openone\right)(\psi)\). Having the output states, we prepare a binary measurement \(\left\{\Omega,\openone-\Omega\right\}\), where the effect \(\Omega\) accepts the null hypothesis and the effect \(\openone-\Omega\) accepts the alternative hypothesis. The type I error (false positive) happens when we reject the correct null hypothesis. When the input state \(\psi\) and measurement \(\Omega\) are fixed, the probability of making the type I error is given by the expression \[p_{\mathrm{I}}^{(\psi,\Omega)}\coloneqq\mathrm{Tr}\left(\left(\openone-\Omega\right)\left(\mathcal{P}_{0}\otimes\mathcal{P}_{0}\otimes\openone\right)\left(\psi\right)\right)=1-\mathrm{Tr}\left(\Omega\left(\mathcal{P}_{0}\otimes\mathcal{P}_{0}\otimes\openone\right)\left(\psi\right)\right).
\tag{5}\] The optimized probability of the type I error yields \[p_{\mathrm{I}}\coloneqq\min_{\psi,\Omega}p_{\mathrm{I}}^{(\psi,\Omega)} \tag{6}\] The probability of making the type II error (also known as false negative) for fixed input state and measurement equals \[p_{\mathrm{II}}^{(\psi,\Omega)}=\mathrm{Tr}\left(\Omega\left(\mathcal{P}_{0}\otimes\mathcal{P}_{1}\otimes\openone\right)\left(\psi\right)\right) \tag{7}\] and corresponds to the situation when we accept the null hypothesis when the alternative one was correct. The optimized probability of making the type II error yields \[p_{\mathrm{II}}\coloneqq\min_{\psi,\Omega}p_{\mathrm{II}}^{(\psi,\Omega)}. \tag{8}\] For both symmetric and asymmetric schemes we will study two cases. First we will assume that both measurements are unknown. Later, we will assume that we know the description of the reference measurement and the other measurement is unknown. We will also be interested in whether the additional register is necessary for optimal discrimination. The summary of results is presented in the following table. ## 3. Discrimination of both unknown von Neumann measurements In this section we will study a situation when we are given a von Neumann measurement \(\mathcal{P}_{0}\) but no classical description of it. This measurement will be our reference. We also have another von Neumann measurement \(\mathcal{P}_{1}\), which can be the same as the reference one, but it does not have to. In this section we will study the problem of how to verify \begin{table} \begin{tabular}{|c||c|c|c|c|c|} \hline & \(p_{succ}^{H}\) & \(p_{err}^{H}\) & \(p_{\mathrm{I}}\) & \(p_{\mathrm{II}}\) & additional register \\ \hline \hline both unknown & \(\frac{1}{2}+\frac{1}{2d}\) & \(\frac{1}{2}-\frac{1}{2d}\) & \(0\) & \(1-\frac{1}{d}\) & no \\ \hline one fixed & \(1-\frac{1}{2d}\) & \(\frac{1}{2d}\) & \(0\) & \(\frac{1}{d}\) & yes \\ \hline \end{tabular} \end{table} Table 1.
Summary of results for symmetric and asymmetric discrimination of unknown von Neumann measurements whether the second measurement is the same as the first one or not. A similar problem of discrimination of both unknown unitary channels was recently studied in [15]. ### Symmetric discrimination We will be calculating the success probability for the discrimination of von Neumann measurements in the scenario depicted in Fig. 1. Therefore we will actually be discriminating between \(\mathcal{P}_{0}\otimes\mathcal{P}_{0}\) and \(\mathcal{P}_{0}\otimes\mathcal{P}_{1}\) in the entanglement-assisted scenario. Thus, in order to use the Holevo-Helstrom theorem, we will need to calculate the value of the diamond norm. As we do not have a classical description of either \(\mathcal{P}_{0}\) or \(\mathcal{P}_{1}\), we will assume that both measurements are Haar-random, that is we will be discriminating between \(\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU\) and \(\int\mathcal{P}_{U}\otimes\mathcal{P}_{V}dUdV\). The probability of successful discrimination is formulated as the following theorem. **Theorem 1**.: Let \(\mathcal{P}_{0}\) be a reference von Neumann measurement of dimension \(d\) given without classical description. Let \(\mathcal{P}_{1}\) be another von Neumann measurement of the same dimension, also given without classical description. The optimal probability of correct verification whether \(\mathcal{P}_{1}\) is the same as the reference measurement in the scheme described in Subsection 2.1 equals \[p_{succ}^{H}=\frac{1}{2}+\frac{1}{2d}. \tag{9}\] **Remark 1**.: The above theorem is a direct application of the Holevo-Helstrom theorem (see Eq. (4)) for discrimination between the channels \(\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU\) and \(\int\mathcal{P}_{U}\otimes\mathcal{P}_{V}dUdV\), that is \[p_{succ}^{H}=\frac{1}{2}+\frac{1}{4}\left\|\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU-\int\mathcal{P}_{U}\otimes\mathcal{P}_{V}dUdV\right\|_{\diamond}=\frac{1}{2}+\frac{1}{2d}.
\tag{10}\] Proof.: Let \(U\in\mathcal{U}(\mathcal{X}),\ V\in\mathcal{U}(\mathcal{Y})\) be unitary operators and \(\dim(\mathcal{X})=\dim(\mathcal{Y})=d\). The probability of successful discrimination is given by the Holevo-Helstrom theorem. To calculate this probability (Eq. (4)), we need to calculate the diamond norm distance between the averaged channels \[\left\|\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU-\int\mathcal{P}_{U}\otimes \mathcal{P}_{V}dUdV\right\|_{\diamond}. \tag{11}\] As the von Neumann measurement \(\mathcal{P}_{U}\) can be seen as \(\Delta\Phi_{U^{\dagger}}\), where \(\Delta\) is a dephasing channel defined in Eq. (2), we will actually be discriminating between \[\int(\Delta\otimes\Delta)(\Phi_{U^{\dagger}}\otimes\Phi_{U^{\dagger}})dU \quad\text{and}\quad\int(\Delta\otimes\Delta)(\Phi_{U^{\dagger}}\otimes\Phi _{V^{\dagger}})dUdV. \tag{12}\] Using [24, 25] we calculate the Choi-Jamiolkowski representations of averaged unitary channels \[\begin{split}& J\left(\int\Phi_{U}\otimes\Phi_{U}dU\right)= \frac{1}{d^{2}-1}\left(\mathchoice{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1 \mskip-4.5mu l}{\rm 1\mskip-5.0mu l}{\rm 1\mskip-5.0mu l}+S\otimes S\right)- \frac{1}{d(d^{2}-1)}\left(S\otimes\mathchoice{\rm 1\mskip-4.0mu l}{\rm 1 \mskip-4.0mu l}{\rm 1\mskip-4.5mu l}{\rm 1\mskip-5.0mu l}+\mathchoice{\rm 1 \mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.5mu l}{\rm 1\mskip-5.0mu l} \otimes S\right),\\ & J\left(\int\Phi_{U}\otimes\Phi_{V}dUdV\right)=\frac{1}{d^{2}} \mathchoice{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.5mu l}{\rm 1 \mskip-5.0mu l}\otimes\mathchoice{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.5mu l}{\rm 1\mskip-5.0mu l},\end{split} \tag{13}\] where, unless said otherwise, \(S\) is the Swap matrix of dimension \(d^{2}\) and identity matrices \(\mathchoice{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.0mu l}{\rm 1\mskip-4.5mu l}{\rm 1 \mskip-5.0mu l}\)-s are also of dimension \(d^{2}\). 
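As an aside, the second line of Eq. (13) lends itself to a quick Monte Carlo check over Haar-random unitaries (an illustrative numpy sketch of ours, not part of the proof; sample size and tolerance are arbitrary choices):

```python
import numpy as np

def haar_unitary(d, rng):
    # Haar-random unitary via QR decomposition of a complex Ginibre matrix,
    # with the standard phase correction on the diagonal of R
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, N = 2, 4000
rng = np.random.default_rng(0)
avg = np.zeros((d**4, d**4), dtype=complex)
for _ in range(N):
    U, V = haar_unitary(d, rng), haar_unitary(d, rng)
    vec = np.kron(U, V).reshape(-1)        # lexicographical vectorization |U ⊗ V>>
    avg += np.outer(vec, vec.conj()) / N   # sample of J(Phi_U ⊗ Phi_V) = |U⊗V>><<U⊗V|
deviation = np.max(np.abs(avg - np.eye(d**4) / d**2))
```

Up to sampling noise, the empirical average reproduces \(J\left(\int\Phi_{U}\otimes\Phi_{V}dUdV\right)=\frac{1}{d^{2}}\openone\).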
Using the above, we can calculate the Choi-Jamiolkowski representations of the averaged measurements, that is \[J\left(\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU\right)=\frac{1}{d^{2}-1}\left( \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}\otimes\left(\leavevmode\hbox{ \small 1\kern-3.8pt\normalsize 1}-\frac{1}{d}S\right)+T\otimes\left(S-\frac{1}{d} \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}\right)\right) \tag{14}\] where \(T\coloneqq\Delta(S)\), and \[J\left(\int\mathcal{P}_{U}\otimes\mathcal{P}_{V}dUdV\right)=\frac{1}{d^{2}} \leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1}\otimes\leavevmode\hbox{ \small 1\kern-3.8pt\normalsize 1}. \tag{15}\] For later convenience, we introduce \(J\) as a difference of Choi matrices of both randomized measurements, that is \[\begin{split} J&\coloneqq J\left(\int\mathcal{P}_{U} \otimes\mathcal{P}_{U}dU\right)-J\left(\int\mathcal{P}_{U}\otimes\mathcal{P} _{V}dUdV\right)\\ &=\frac{1}{d^{2}-1}\left(\leavevmode\hbox{\small 1\kern-3.8pt \normalsize 1}\otimes\left(\frac{1}{d^{2}}\leavevmode\hbox{\small 1\kern-3.8pt \normalsize 1}-\frac{1}{d}S\right)+T\otimes\left(S-\frac{1}{d}\leavevmode \hbox{\small 1\kern-3.8pt\normalsize 1}\right)\right).\end{split} \tag{16}\] The remaining part of the proof goes as follows. We will first calculate the upper bound on the diamond norm \(\|\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU-\int\mathcal{P}_{U}\otimes \mathcal{P}_{V}dUdV\|_{\diamond}\leq\|\mathrm{Tr}_{\mathcal{X},\mathcal{Y}} \left|J\right|\|\) from Eq. (3). Later, we will show that this inequality is saturated by Proposition 3 in [22]. Now we will focus on the upper bound. To calculate the upper bound we first need to find \(|J|=\sqrt{J^{\dagger}J}\). From Lemma 1 in Appendix A, taking \(W\coloneqq(2T-\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1})\otimes S\) it holds that \((WJ)^{2}=J^{2}\), and this gives a polar decomposition of \(J\). To calculate the upper bound for the diamond norm from Eq. 
(3) we need to calculate \(\|\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}\left|J\right|\|=\|\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}(WJ)\|\). Hence we calculate \[\begin{split}\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}(WJ)&=\frac{1}{d^{2}-1}\,\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}\left(\frac{1}{d}\openone\otimes\openone-\frac{1}{d^{2}}\openone\otimes S+\frac{d-2}{d}T\otimes\openone-\frac{d-2}{d^{2}}T\otimes S\right)\\ &=\frac{1}{d^{2}-1}\left(\frac{d^{2}}{d}\openone-\frac{d^{2}}{d^{2}}S+\frac{d(d-2)}{d}\openone-\frac{d(d-2)}{d^{2}}S\right)\\ &=\frac{1}{d^{2}-1}\left((2d-2)\openone-\frac{2d-2}{d}S\right)=\frac{2}{d+1}\left(\openone-\frac{1}{d}S\right)\end{split} \tag{17}\] and eventually we have \[\|\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}\left|J\right|\|=\left\|\frac{2}{d+1}\left(\openone-\frac{1}{d}S\right)\right\|=\frac{2}{d+1}\left\|\openone-\frac{1}{d}S\right\|=\frac{2}{d}. \tag{18}\] Now we proceed to proving that the upper bound is saturated. By Proposition 3 in [22] we need to check whether there exist a vector \(\left|a\right>\) and a unitary matrix \(W\) such that 1. \(\left<a\right|\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}\,\sqrt{J^{\dagger}J}|a\rangle=\left\|\mathrm{Tr}_{\mathcal{X},\mathcal{Y}}\,\sqrt{J^{\dagger}J}\right\|\) 2. \(\left(\openone\otimes\left|a\right>\!\!\left<a\right|\right)W=W\left(\openone\otimes\left|a\right>\!\!\left<a\right|\right)\) 3.
\(W\) is the angular part of some polar decomposition of \(J\) (_i.e._\(J=WP\) for some positive semidefinite \(P\)) As the matrix \(W\) we take \(W\coloneqq(2T-\leavevmode\hbox{\small 1\kern-3.8pt\normalsize 1})\otimes S\) and as the vector \(\left|a\right>\) we take some vector \(\frac{1}{\sqrt{2}}\left(\left|ij\right>-\left|ji\right>\right)\in\mathcal{Z}\), where \(i>j\) and \(\dim(\mathcal{Z})=d^{2}\). The condition (ii) translates to \((\mbox{\rm 1\kern-2.2ptl}\otimes|a\rangle\!\langle a|)\,S\otimes S=S\otimes S\left( \mbox{\rm 1\kern-2.2ptl}\otimes|a\rangle\!\langle a|\right)\) hence it suffices to note that \(|a\rangle\!\langle a|S=S|a\rangle\!\langle a|\). The condition (iii) follows directly. Therefore \[\left\|\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU-\int\mathcal{P}_{U}\otimes \mathcal{P}_{V}dUdV\right\|_{\diamond}=\frac{2}{d} \tag{19}\] and eventually \[p_{succ}^{H}=\frac{1}{2}+\frac{1}{2d}. \tag{20}\] ### Asymmetric discrimination In the asymmetric discrimination we will consider two types of errors separately. We would like to verify whether measurements in both black boxes are the same (which corresponds to \(H_{0}\) hypothesis) or they are different (which corresponds to \(H_{1}\) hypothesis). Formally, when the measurement in the first black box, \(\mathcal{P}_{0}\), is unknown, we say that \(\mathcal{P}_{0}=\int\mathcal{P}_{U}dU\). The measurement in the second black box can be either the same as in the first black box (\(\mathcal{P}_{?}=\mathcal{P}_{0}\)) or it can be some other measurement, that is \(\mathcal{P}_{?}=\int\mathcal{P}_{V}dV\). When performing asymmetric discrimination, we prepare an input state \(|\psi\rangle\in\mathcal{X}\otimes\mathcal{Y}\otimes\mathcal{Z}\). 
If in both black boxes there were the same measurements, then the output state yields \(\rho_{0}^{(\psi)}=\int\left(\mathcal{P}_{U}\otimes\mathcal{P}_{U}\otimes\openone_{\mathcal{Z}}\right)(\psi)dU.\) If the measurements in the black boxes were different, then the output state is \(\rho_{1}^{(\psi)}=\int\left(\mathcal{P}_{U}\otimes\mathcal{P}_{V}\otimes\openone_{\mathcal{Z}}\right)(\psi)dUdV.\) Next, we measure the output state by a binary measurement \(\{\Omega,\openone-\Omega\}\). We will focus on the case when the type I error cannot occur. The optimal probability of the type II error is formulated as the following theorem. **Theorem 2**.: Let \(\mathcal{P}_{0}\) be a reference von Neumann measurement of dimension \(d\) given without classical description. Let \(\mathcal{P}_{1}\) be another von Neumann measurement of the same dimension, also given without classical description. Consider the hypotheses testing problem described in Subsection 2.2. Let the \(H_{0}\) hypothesis state that \(\mathcal{P}_{?}=\mathcal{P}_{0}\) and let the alternative \(H_{1}\) hypothesis state that \(\mathcal{P}_{?}=\mathcal{P}_{1}\). If no false positive error can occur, then the optimal probability of false negative error yields \[p_{\rm II}=1-\frac{1}{d}. \tag{21}\] Moreover, no additional register is needed to obtain this value. Proof.: As the input state to the discrimination procedure we take some state \(|\psi\rangle\in\mathcal{X}\otimes\mathcal{Y}\). Note that we assumed that this state is only on two registers. In this proof we will calculate the probability of the type II error assuming that the register \(\mathcal{Z}\) is trivial. Later, we will prove that this gives the optimal probability and that the additional register is not needed. If both measurements are the same, then the output state will be \[\rho_{0}^{(\psi)}=\int\left(\mathcal{P}_{U}\otimes\mathcal{P}_{U}\right)(\psi)dU.
\tag{22}\] If the measurements in the black boxes are different, then the output state will be \[\rho_{1}^{(\psi)}=\int\left(\mathcal{P}_{U}\otimes\mathcal{P}_{V}\right)(\psi)dUdV. \tag{23}\] We begin with calculating \(\int\left(\mathcal{P}_{U}\otimes\mathcal{P}_{U}\right)(\psi)dU\) by using the formula for recovering the action of a quantum channel given its Choi matrix. Using the formula for the Choi matrix from Eq. (14) and using the notation \(T\coloneqq\Delta(S)\) we calculate \[\begin{split}\rho_{0}^{(\psi)}&=\operatorname{Tr}_{\mathcal{Z}}\left(J\left(\int\mathcal{P}_{U}\otimes\mathcal{P}_{U}dU\right)\left(\openone\otimes\psi^{\top}\right)\right)\\ &=\frac{1}{d(d^{2}-1)}\left(\left(d-\operatorname{tr}\left(S\psi^{\top}\right)\right)\openone+\left(d\operatorname{tr}\left(S\psi^{\top}\right)-1\right)T\right).\end{split} \tag{24}\] Let us take the input state to be antisymmetric, that is, it satisfies \(\operatorname{tr}\left(S\psi^{\top}\right)=-1\). We calculate \[\rho_{0}^{(\psi)}=\frac{1}{d(d^{2}-1)}\left(\left(d+1\right)\openone-\left(d+1\right)T\right)=\frac{1}{d(d-1)}\left(\openone-T\right). \tag{25}\] By a similar calculation, using the antisymmetric input state we have \[\begin{split}\rho_{1}^{(\psi)}&=\operatorname{Tr}_{\mathcal{Z}}\left(J\left(\int\mathcal{P}_{U}\otimes\mathcal{P}_{V}dUdV\right)\left(\openone\otimes\psi^{\top}\right)\right)=\operatorname{Tr}_{\mathcal{Z}}\left(\left(\frac{1}{d^{2}}\openone\otimes\openone\right)\left(\openone\otimes\psi^{\top}\right)\right)\\ &=\frac{1}{d^{2}}\operatorname{Tr}_{\mathcal{Z}}\left(\openone\otimes\psi^{\top}\right)=\frac{1}{d^{2}}\openone.\end{split} \tag{26}\] As the measurement effect we take \(\Omega\coloneqq\openone-T\).
Hence \[p_{\mathrm{I}}^{(\psi,\Omega)}=1-\operatorname{tr}\left(\Omega\rho_{0}^{(\psi)}\right)=1-\frac{1}{d(d-1)}\operatorname{tr}\left(\left(\openone-T\right)\left(\openone-T\right)\right)=0, \tag{27}\] and \[p_{\mathrm{II}}^{(\psi,\Omega)}=\operatorname{tr}\left(\Omega\rho_{1}^{(\psi)}\right)=\frac{1}{d^{2}}\operatorname{tr}\left(\openone-T\right)=\frac{d(d-1)}{d^{2}}=1-\frac{1}{d}. \tag{28}\] From Appendix B we know that the probability of erroneous discrimination in the symmetric scheme (which equals \(1-p_{succ}^{H}\)) is never bigger than the arithmetic mean of the probabilities of the type I and type II errors. As \[\frac{1}{2}\left(p_{\mathrm{I}}^{(\psi,\Omega)}+p_{\mathrm{II}}^{(\psi,\Omega)}\right)=\frac{1}{2}-\frac{1}{2d}, \tag{29}\] we conclude that our value of \(p_{\mathrm{II}}^{(\psi,\Omega)}=1-\frac{1}{d}\) is optimal and hence \(p_{\mathrm{II}}=p_{\mathrm{II}}^{(\psi,\Omega)}\). Finally, note that the optimal value \(p_{\mathrm{II}}\) can be achieved for an input state \(\left|\psi\right\rangle\in\mathcal{X}\otimes\mathcal{Y}\), that is when the register \(\mathcal{Z}\) is trivial. Hence, the additional register is not needed for asymmetric discrimination in this case. ## 4. Discrimination between a fixed and unknown von Neumann measurements In this section we assume that instead of the unknown reference measurement from the previous section, we are given \(\mathcal{P}_{0}\) as a fixed von Neumann measurement \(\mathcal{P}_{U}\). We will begin with studying symmetric discrimination and later proceed to studying the asymmetric discrimination scheme. ### Symmetric discrimination Now we focus on the situation when we want to distinguish between a fixed von Neumann measurement \(\mathcal{P}_{U}\) and a Haar-random measurement \(\int\mathcal{P}_{V}dV\). The probability of successful discrimination is formulated as a theorem.
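The steps of the proof of Theorem 2, Eqs. (24)-(28), can be replayed numerically from the Choi matrix of Eq. (14) (a numpy sketch of ours, not part of the paper):

```python
import numpy as np

def swap(d):
    # Swap operator S on C^d ⊗ C^d
    S = np.zeros((d * d, d * d))
    for i in range(d):
        for j in range(d):
            S[i * d + j, j * d + i] = 1.0
    return S

def type_errors(d):
    S = swap(d)
    T = np.diag(np.diag(S))                         # T = Delta(S)
    I = np.eye(d * d)
    # Choi matrix of the averaged channel ∫ P_U ⊗ P_U dU, Eq. (14)
    J = (np.kron(I, I - S / d) + np.kron(T, S - I / d)) / (d * d - 1)
    # antisymmetric input state (|01> - |10>)/sqrt(2), so tr(S psi^T) = -1
    v = np.zeros(d * d)
    v[0 * d + 1], v[1 * d + 0] = 1 / np.sqrt(2), -1 / np.sqrt(2)
    psi = np.outer(v, v)
    # Eq. (24): rho_0 = Tr_Z( J (1 ⊗ psi^T) ), partial trace over the input factor
    P = (J @ np.kron(I, psi.T)).reshape(d * d, d * d, d * d, d * d)
    rho0 = np.einsum('iaja->ij', P)
    assert np.allclose(rho0, (I - T) / (d * (d - 1)))   # Eq. (25)
    rho1 = I / d**2                                     # Eq. (26)
    Omega = I - T                                       # accepting effect
    p_I = 1 - np.trace(Omega @ rho0)                    # Eq. (27)
    p_II = np.trace(Omega @ rho1)                       # Eq. (28)
    return p_I, p_II
```

For any tested \(d\) this returns \(p_{\mathrm{I}}=0\) and \(p_{\mathrm{II}}=1-\frac{1}{d}\), matching Theorem 2.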
**Theorem 3**.: Let \(\mathcal{P}_{0}=\mathcal{P}_{U}\) be a reference von Neumann measurement of dimension \(d\). Let \(\mathcal{P}_{1}\) be another von Neumann measurement of the same dimension, but given without classical description. The optimal probability of correct verification whether \(\mathcal{P}_{1}=\mathcal{P}_{0}\) or \(\mathcal{P}_{1}\neq\mathcal{P}_{0}\) in the scheme described in Subsection 2.1 equals \[p_{succ}^{H}=1-\frac{1}{2d}. \tag{30}\] Proof.: Without loss of generality we can take \(U=\openone\). By the Holevo-Helstrom theorem (Eq. (4)) it then suffices to calculate the diamond norm distance between \(\mathcal{P}_{\openone}\) and the averaged measurement, \[\left\|\mathcal{P}_{\openone}-\int\mathcal{P}_{V}dV\right\|_{\diamond}=2-\frac{2}{d},\] which yields \(p_{succ}^{H}=\frac{1}{2}+\frac{1}{4}\left(2-\frac{2}{d}\right)=1-\frac{1}{2d}\).
\normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\normalsize\kern-3.8pt\normalsize\kern-3.8pt \normalsize\kern-3.8pt\normalsize\kern-3. \(\mathcal{P}_{\gamma}=\mathcal{P}_{1}\). Consider the discrimination scheme described in Subsection 2.2. If no false positive error can occur, then the optimal probability of false negative error yields \[p_{\mathrm{II}}=\frac{1}{d}. \tag{35}\] Proof.: This proof goes similar as the proof of Theorem 1. We will choose a fixed input state on only two registers. We will also fix the final measurement and calculate the probabilities of making the false positive and false negative errors. Later, from inequality between errors in symmetric and asymmetric schemes in Appendix B we will see that the calculated \(p_{\mathrm{II}}\) is the optimal one. As the input state we take \(\psi\coloneqq\frac{1}{d}|\!\!1\rangle\langle\!1|\). We calculate the output states \[\rho_{0}^{(\psi)}\coloneqq\left(\mathcal{P}_{U}\otimes\mathrm{1}\!\!1 \right)\left(\psi\right)=\frac{1}{d}\left(\mathcal{P}_{U}\otimes\mathrm{1} \!\!1\right)\left(|\!1\rangle\rangle\langle\!1|\right)=\frac{1}{d}\sum_{i}|i \rangle\!\langle i|\otimes|u_{i}\rangle\!\langle u_{i}|^{\top} \tag{36}\] and \[\rho_{1}^{(\psi)} \coloneqq\int\left(\mathcal{P}_{V}\otimes\mathrm{1}\!\!1 \right)\left(\psi\right)dV=\frac{1}{d}\int\left(\mathcal{P}_{V}\otimes \mathrm{1}\!\!1\right)\left(|\mathrm{1}\!\!1\rangle\rangle\langle\!1| \right)dV\] \[=\frac{1}{d}\int\sum_{i}|i\rangle\!\langle i|\otimes|v_{i} \rangle\!\langle v_{i}|^{\top}dV=\frac{1}{d}\sum_{i}|i\rangle\!\langle i| \otimes\int|v_{i}\rangle\!\langle v_{i}|^{\top}dV=\frac{1}{d^{2}}\mathrm{1}\! \!1\otimes\mathrm{1}\!\!1. 
\tag{37}\] Recall that the measurement effect \(\Omega\) corresponds to the \(H_{0}\) hypothesis and \(\openone-\Omega\) corresponds to the \(H_{1}\) hypothesis. Hence the probabilities of false positive and false negative errors (for the given input state) equal \[p_{\mathrm{I}}^{(\psi,\Omega)}=1-\mathrm{tr}\left(\Omega\rho_{0}^{(\psi)}\right),\quad p_{\mathrm{II}}^{(\psi,\Omega)}=\mathrm{tr}\left(\Omega\rho_{1}^{(\psi)}\right). \tag{38}\] Without loss of generality we can consider \(\Omega\) in the block-diagonal form, i.e. \[\Omega\coloneqq\sum_{i}|i\rangle\!\langle i|\otimes\Omega_{i}^{\top}. \tag{39}\] As the unitary matrix \(U\) is known, we can use it to construct the final measurement. Let \[\Omega_{i}\coloneqq|u_{i}\rangle\!\langle u_{i}| \tag{40}\] for every \(i=1,\ldots,d\). Then \[\mathrm{tr}\left(\Omega\rho_{0}^{(\psi)}\right)=\mathrm{tr}\left(\left(\sum_{i}|i\rangle\!\langle i|\otimes|u_{i}\rangle\!\langle u_{i}|^{\top}\right)\left(\frac{1}{d}\sum_{j}|j\rangle\!\langle j|\otimes|u_{j}\rangle\!\langle u_{j}|^{\top}\right)\right)=\frac{1}{d}\sum_{i}\mathrm{tr}\left(|u_{i}\rangle\langle u_{i}|u_{i}\rangle\langle u_{i}|\right)=\frac{1}{d}\sum_{i}|\langle u_{i}|u_{i}\rangle|^{2}=1 \tag{41}\] and hence \[p_{\mathrm{I}}^{(\psi,\Omega)}=1-\mathrm{tr}\left(\Omega\rho_{0}^{(\psi)}\right)=0. \tag{42}\] Finally, \[\begin{split} p_{\mathrm{II}}^{(\psi,\Omega)}&=\mathrm{tr}\left(\Omega\rho_{1}^{(\psi)}\right)=\mathrm{tr}\left(\left(\sum_{i}|i\rangle\!\langle i|\otimes|u_{i}\rangle\!\langle u_{i}|^{\top}\right)\left(\frac{1}{d^{2}}\openone\otimes\openone\right)\right)\\ &=\frac{1}{d^{2}}\sum_{i}\mathrm{tr}\left(|u_{i}\rangle\!\langle u_{i}|\right)=\frac{1}{d}.\end{split} \tag{43}\] It remains to explain why \(p_{\mathrm{II}}^{(\psi,\Omega)}=p_{\mathrm{II}}\).
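The computation in (41)-(43) can also be sanity-checked numerically. The sketch below (an illustration, not part of the original paper) builds \(\rho_{0}^{(\psi)}\), \(\rho_{1}^{(\psi)}\) and the effect \(\Omega\) for a random unitary \(U\) with NumPy and verifies that \(p_{\mathrm{I}}^{(\psi,\Omega)}=0\) and \(p_{\mathrm{II}}^{(\psi,\Omega)}=1/d\):

```python
import numpy as np

def error_probabilities(d, seed=0):
    """False positive/negative probabilities of eqs. (42)-(43) for dimension d."""
    rng = np.random.default_rng(seed)
    # Haar-random unitary via QR decomposition of a complex Gaussian matrix.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    u = q * (np.diagonal(r) / np.abs(np.diagonal(r)))
    rho0 = np.zeros((d * d, d * d), dtype=complex)   # eq. (36)
    omega = np.zeros_like(rho0)                      # effect of eqs. (39)-(40)
    for i in range(d):
        e = np.zeros((d, d))
        e[i, i] = 1.0
        proj = np.outer(u[:, i], u[:, i].conj()).T   # |u_i><u_i|^T
        rho0 += np.kron(e, proj) / d
        omega += np.kron(e, proj)
    rho1 = np.eye(d * d) / d**2                      # eq. (37)
    p_false_pos = 1 - np.trace(omega @ rho0).real    # eq. (42)
    p_false_neg = np.trace(omega @ rho1).real        # eq. (43)
    return p_false_pos, p_false_neg
```

The check holds for any unitary, in agreement with the WLOG choice \(U=\openone\) made in the proofs.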
Note that the arithmetic mean of the probabilities of both types of errors equals \(\frac{1}{2d}\), which is equal to the probability of erroneous discrimination in the symmetric scheme (see Theorem 3). From the inequality between errors in the symmetric and asymmetric schemes in Appendix B we conclude that \(p_{\mathrm{II}}=\frac{1}{d}\).

## 5. Conclusion

We studied the problem of whether a given von Neumann measurement is the same as the reference one. We considered both the situation when the reference measurement is given without classical description and when its classical description is known. Both situations were studied in the symmetric and asymmetric scenarios. We proved that in both cases one can achieve a probability of false positive error equal to zero, and we calculated the optimal probabilities of false negative errors. We also calculated the probabilities of successful discrimination in the symmetric discrimination scheme.

## Acknowledgements

This work was supported by the project "Near-term quantum computers: Challenges, optimal implementations and applications" under Grant Number POIR.04.04.00-00-17C1/18-00, which is carried out within the Team-Net programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund.
2304.06981
QNEAT: Natural Evolution of Variational Quantum Circuit Architecture
Quantum Machine Learning (QML) is a recent and rapidly evolving field where the theoretical framework and logic of quantum mechanics are employed to solve machine learning tasks. Various techniques with different levels of quantum-classical hybridization have been proposed. Here we focus on variational quantum circuits (VQC), which emerged as the most promising candidates for the quantum counterpart of neural networks in the noisy intermediate-scale quantum (NISQ) era. Although showing promising results, VQCs can be hard to train because of different issues, e.g., barren plateau, periodicity of the weights, or choice of architecture. This paper focuses on this last problem for finding optimal architectures of variational quantum circuits for various tasks. To address it, we propose a gradient-free algorithm inspired by natural evolution to optimize both the weights and the architecture of the VQC. In particular, we present a version of the well-known neuroevolution of augmenting topologies (NEAT) algorithm and adapt it to the case of variational quantum circuits. We refer to the proposed architecture search algorithm for VQC as QNEAT. We test the algorithm with different benchmark problems of classical fields of machine learning i.e. reinforcement learning and combinatorial optimization.
Alessandro Giovagnoli, Yunpu Ma, Volker Tresp
2023-04-14T08:03:20Z
http://arxiv.org/abs/2304.06981v1
# QNEAT: Natural Evolution of Variational Quantum Circuit Architecture

###### Abstract

Quantum Machine Learning (QML) is a recent and rapidly evolving field where the theoretical framework and logic of quantum mechanics are employed to solve machine learning tasks. Various techniques with different levels of quantum-classical hybridization have been proposed. Here we focus on variational quantum circuits (VQC), which emerged as the most promising candidates for the quantum counterpart of neural networks in the noisy intermediate-scale quantum (NISQ) era. Although showing promising results, VQCs can be hard to train because of different issues, e.g., barren plateau, periodicity of the weights, or choice of architecture. This paper focuses on this last problem for finding optimal architectures of variational quantum circuits for various tasks. To address it, we propose a gradient-free algorithm inspired by natural evolution to optimize both the weights and the architecture of the VQC. In particular, we present a version of the well-known neuroevolution of augmenting topologies (NEAT) algorithm and adapt it to the case of variational quantum circuits. We refer to the proposed architecture search algorithm for VQC as QNEAT. We test the algorithm with different benchmark problems of classical fields of machine learning i.e. reinforcement learning and combinatorial optimization.

## 1 Introduction

The field of quantum computing has, in the last decades, attracted much attention due to the promise of enabling us to solve problems that, although classically solvable, are practically not feasible [1, 2, 3, 4]. These problems range from quantum metrology [5, 6], mathematics [7, 8], and chemistry [9] to optimization in general [10, 11, 12].
The possibility of using intrinsically quantum algorithms with a proven speedup, such as Deutsch-Jozsa, Grover search, or the quantum Fourier transform, or hybrids of classical and quantum techniques [13], is pushing academia and industry to explore the extent of the capabilities of this field. At this stage, the theoretical methods and algorithms related to the field of quantum information theory are limited by the current state of the art in quantum hardware production, which is developing at a slower pace. In the current noisy intermediate-scale quantum (NISQ) era, quantum computers contain qubits ranging from 50 to 100, which are not fault-tolerant. They are still affected by decoherence, meaning that the qubits cannot be kept for a long time in the desired quantum state, and they are not able to continuously implement quantum error correction, features needed to achieve the so-called quantum supremacy [14, 15]. In the current NISQ era emerged the field of quantum machine learning (QML), which exploits and mixes classical and quantum techniques to solve classical or purely quantum machine learning problems [16]. Some algorithms and techniques have already been proven to give a quantum speedup over their classical counterparts once the hardware allows managing a higher number of qubits with higher stability. Other methods are instead still being explored. A promising class of algorithms is that of variational quantum algorithms, which use variational quantum circuits (VQCs) as their building block. In these parametrized quantum circuits, the physical gates, usually rotations and controlled-NOTs, depend on adjustable parameters that can be adapted to make the circuit perform better on various tasks. In classical machine learning, neural networks are used as function approximators, relying on the theory of the well-known Universal Approximation Theorem [17].
Analogously, we assume here the existence of a function mapping the quantum state containing the features of the problem of interest into the quantum state containing the labels. According to the laws of quantum mechanics, this function is a unitary evolution mapping an initial state into the final one. It can be shown that any unitary operator acting on multiple qubits can be expressed through ROTs and CNOTs, which justifies the interest in using VQCs and exploring their capabilities to solve machine-learning tasks. Due to the small number of qubits needed to encode the information and the small number of parameters necessary to adequately approximate the function of interest, VQCs are promising candidates for quantum neural networks in the NISQ era. They are already employed in several machine learning tasks. In these tasks, VQCs are set in a classical neural network pipeline, and their weights are trained to minimize a selected loss function once a fixed architecture has been chosen. Nonetheless, they present issues that may limit their usage. One major problem is the classical barren plateau phenomenon [18, 19, 20, 21, 22], which is linked to the expressibility of VQCs and not only to the optimizer. Another issue is that the depth of the circuit increases the noise, leading to the loss of entanglement or coherence between qubits. A key factor in the study of VQCs is thus choosing an appropriate architecture, or _ansatz_, that can adequately approximate the function of interest while, at the same time, not compromising the stability of the circuit by making it too deep, i.e., by using an unnecessary number of quantum gates. This motivates the interest in studying variable-architecture algorithms that evolve to find the most suitable architecture and weights, rather than starting from a fixed architecture and only updating its weights. In Section 2 we will first briefly review some of the techniques already shown in the literature to perform such tasks.
In Sections 3 and 4 we present our algorithm, in Section 5 the experimental settings on which it has been tested, and in Section 6 the results. Finally, in Section 7 we show a variation of the QNEAT algorithm for multi-objective optimization purposes.

## 2 Related work

The classical machine learning pipeline, which has also been applied to VQCs, is based on the traditional gradient-based approach. In this approach, a selected loss function is evaluated, gradients are computed through backpropagation, and the weights of the function approximator are updated. One of the main problems in applying the classical gradient-based methods is the barren plateau of gradients during the training of VQCs. It has been shown [23] that with the increasing depth of the circuit, the probability of finding a non-zero entry in the gradient becomes exponentially small. This issue is related to the intrinsic expressibility of variational quantum circuits and not simply to the selected optimizer. Therefore, gradient-free methods have been proposed to overcome problems such as barren plateaus or the risk of being stuck in local minima. Some examples are particle swarm optimization [24], evolution strategies [25], or genetic algorithms [26]. These gradient-free techniques have also been applied to the case of quantum circuit optimization. Some start from a fixed ansatz and only focus on optimizing the weights [27]. The chosen fixed initial architecture can be _problem inspired_, meaning that the gates are placed in the circuit in a way that depends on the given problem and is well suited to its solution. Some examples are Ansätze derived from the field of quantum chemistry [28], [29] or combinatorial optimization, such as the famous quantum approximate optimization algorithm (QAOA) [30], [31].
Alternatively, they could be _problem agnostic_, meaning that the architecture is independent of the problem, as the hardware efficient ansatz [32], often used because of its ease of implementation on physical hardware. Other techniques instead attempt to find the optimal architecture for the task of interest, which means heuristically trying to place new gates to make the circuit perform better according to some carefully designed metrics. Examples inspired by the field of quantum chemistry and designed to gradually evolve a variational quantum eigensolver are the ADAPT-VQE algorithm [33], one of the first algorithms of this type to be proposed, and EVQE [34], where new gates are added smoothly and an informed removal of redundant gates is performed. Various algorithms have been proposed to try to optimize the architecture of a quantum circuit in order to match a target matrix [35], [36], [37], [38], [39]. The techniques proposed there, however, are not easily generalizable to the task of optimizing VQCs, also because they make use of non-parametrized gates. In this paper we propose a genetic algorithm which takes inspiration from the classical neuroevolution of augmenting topologies (NEAT) [40], proposed in 2002 by Stanley and Miikkulainen, which is categorized as a sexual algorithm since it employs the crossover technique. As will be shown, here the architecture of the circuit as well as the weights are optimized at the same time, and through speciation, diversity is preserved, meaning that different areas of the search space are explored. An adaptation of the NEAT algorithm to the quantum case is proposed, yielding a generic, multi-purpose, NISQ-friendly algorithm for solving a variety of tasks in the field of quantum machine learning: from tasks closely inspired by quantum problems to classical machine learning tasks. In the following sections we will explore the QNEAT algorithm in detail and show some of the settings where it has been tested, together with the results.
## 3 NEAT for Variational Quantum Circuits

In this section, we present our algorithm, an adapted version of Neuroevolution of Augmenting Topologies for the case of quantum variational architectures. We will first consider the case of a free or a pre-defined structure of the architecture and then explain how the genome, crossover, mutation, and speciation processes have been adapted.

### The architecture

To adapt the architecture to the quantum case, we first distinguish between a constrained and a free architecture. We define a variational quantum circuit with a **free architecture** as a VQC where there is no constraint on the placement of rotation gates (ROTs) and controlled-not gates (CNOTs). In other words, ROT gates are allowed to concatenate one after the other, and CNOTs can connect any two wires. In such a case, no regularity is encountered, and thus no concept of a layer is defined. A VQC with a **constrained architecture** is instead a circuit where, after the initial encoding layer, the ROTs and CNOTs follow a regular pattern. There are many architectures, namely architecture Ansätze, that can be used. Some common examples are the _hardware efficient_ architecture or the _strongly entangling_ architecture. We will take the latter as a reference and describe it in more detail, since it is often used in the literature when dealing with the type of problems we will test our algorithm on. After the usual encoding layer, the architecture consists of repeating layers of first a set of ROTs, one for each wire, and then a sequence of CNOTs, where each one of them goes from wire \(i\) to wire \(i+1\), \(\mod(n)\), with \(n\) the number of wires, as shown in Figure 1. The gates grouped into the dotted lines constitute one layer, which can be repeated an arbitrary number of times. In our case, the algorithm will determine how deep the circuit will be and, thus, how many layers it will be made of.
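As an illustration of the constrained layer structure just described (a plain-Python sketch, not the paper's implementation), one fully filled strongly entangling layer on \(n\) wires contains \(n\) ROTs followed by \(n\) CNOTs wired \(w \to w+1 \bmod n\):

```python
# One fully filled "strongly entangling" layer on n wires: a ROT on every
# wire, then CNOTs from wire w to wire (w + 1) mod n.
def strongly_entangling_layer(n_wires):
    rots = [("ROT", w) for w in range(n_wires)]
    cnots = [("CNOT", w, (w + 1) % n_wires) for w in range(n_wires)]
    return rots + cnots

# A circuit with L such layers is just their concatenation.
def full_circuit(n_layers, n_wires):
    return [gate for _ in range(n_layers)
            for gate in strongly_entangling_layer(n_wires)]
```

In the evolved circuits layers are generally only partially filled, but always respect this sublayer ordering.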
To give a deeper insight into the resulting architecture once the algorithm has evolved it and added gates respecting the constraints, we look at one example in Figure 2, where the encoding part has been left out for simplicity. The dotted lines identify the layers previously discussed. According to the constraints, these layers are not entirely filled but only partially. Layer number \(1\) is completely filled; layer \(2\) only has two ROTs and two CNOTs, while layer \(3\) is filled with two ROTs and one CNOT. In any case, we can see that the constraints are respected inside each layer: no more than one ROT for each wire is placed, and the CNOTs always connect wires \(i\) and \(i+1,\mod(n)\). Inside each layer, we can thus identify a sublayer of ROTs and a sublayer of CNOTs. Clearly, the set of all possible architectures compatible with the constraints we posed is a subset of the ones that a free search could find. Nonetheless, that task would be computationally much more expensive. For example, the number of CNOTs alone that could be placed in a layer grows with the number \(n\) of wires as \(\frac{n!}{2(n-2)!}\) in the free case, while it grows linearly with \(n\) in the constrained case. Also, no better performance on a practical level is guaranteed. For these reasons, from now on, we will only consider the case of a **constrained architecture** and leave the free case as possible future research.

### Genome

Each agent of the population is endowed with genetic information that encodes which gates are present at a given moment of the evolution and where they are placed. Through the genes of the agent, its architecture can be reconstructed. If we consider a VQC with \(n\) layers and \(m\) wires, then each gate is uniquely identified by a tuple \((t,l,w)\) containing information about the type, layer, and wire of the gate.
More specifically, \[t\in\{\text{ROT},\text{CNOT}\},\quad l\in\{1,\ldots,n\},\quad\text{and}\quad w\in\{1,\ldots,m\}.\] This information is enough to identify any gate because in the constrained architecture, once a layer \(l\) is specified, we also know to which sublayer the gate belongs, meaning that all the ROTs must be placed before the CNOTs. Parameter \(w\) indicates which wire it belongs to, and if we are dealing with a ROT gate, then it can already be placed without any other ambiguity. Otherwise, if it is a CNOT gate, we know that it will go from \(w\to w+1,\mod(m)\), which also uniquely determines where to place it. The genes also contain the innovation numbers, which, analogously to the classical NEAT algorithm, keep track of the chronological time step in which the mutation happened and are later used to compare mutations that occurred at the same time for the crossover. Each innovation number uniquely identifies a tuple \((l,w)\). Since the architecture is constrained, ROTs and CNOTs are always placed in their respective sublayers, which means that a mutation producing a CNOT could not have produced a ROT in the same place. Thus the innovation numbers of CNOTs and ROTs should be compared separately. On this basis, we divide the genome lists depending on the type of gate. Every ROT gate also contains information about the rotation angles. As an explicit example, we show the genome of the architecture in Figure 2 in Tables 1 and 2.

### Crossover

Crossover is the process through which the genome of two members of the population gets mixed during the reproduction of the fittest members to produce the offspring's genome. Given two genomes, we define **matching genes** as the ones that, given a type \(t\in\{\text{ROT},\text{CNOT}\}\), have the same tuple \((l,w)\).
In the case of ROTs, this means that two rotation gates, even though they may have different angles, are considered matching genes if placed at the same point of the circuit's architecture. In the case of CNOTs, instead, the above definition corresponds to saying that two CNOTs are matching if they connect identical qubits. We then define **disjoint genes** as those that differ in one of the values of the tuple \((l,w)\). When two members reproduce, we want the fittest member's genome to be the basis for the future offspring, with some modifications deriving from the less fit one. The crossover method works by aligning the genomes with respect to the innovation numbers, so that the chronology of mutations allows us to compare and see which are the matching or disjoint genes. Then we select the genes for the offspring with the following rules: 1. **Matching genes** are inherited randomly. 2. **Disjoint genes** are inherited from the fittest parent. If the two parents have the same fitness, disjoint genes will also be chosen randomly.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Layer & 1 & 1 & 1 & 1 & 2 & 2 & 3 & 3 \\ Wire & 1 & 2 & 3 & 4 & 2 & 3 & 1 & 3 \\ Angles & \(\vec{\theta}_{11}\) & \(\vec{\theta}_{12}\) & \(\vec{\theta}_{13}\) & \(\vec{\theta}_{14}\) & \(\vec{\theta}_{22}\) & \(\vec{\theta}_{23}\) & \(\vec{\theta}_{31}\) & \(\vec{\theta}_{33}\) \\ Innov. n. & 1 & 2 & 3 & 4 & 6 & 7 & 9 & 10 \\ \hline \end{tabular} \end{table} Table 1: Genome of ROTs for the architecture in Figure 2. The rotation angles \((\theta_{x},\theta_{y},\theta_{z})\) are encoded in a vector \(\vec{\theta}_{l,w}\), where \(l,w\) stand for the layer and wire.

Figure 1: Quantum circuit with the Strongly Entangling Layers architecture.

Figure 2: Example of a constrained variational quantum circuit. Each layer is encircled with dotted lines. In each layer, first the rotation gates and then the CNOT gates are placed.
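The two inheritance rules above can be sketched in Python, assuming each genome is stored as a dictionary mapping innovation numbers to genes (the data layout and names are illustrative assumptions, not the paper's code):

```python
import random

def crossover(genome1, fitness1, genome2, fitness2, rng=None):
    """Combine two genomes (dict: innovation number -> gene) into a child."""
    rng = rng or random.Random(0)
    child = {}
    for innov in sorted(set(genome1) | set(genome2)):
        in1, in2 = innov in genome1, innov in genome2
        if in1 and in2:
            # Matching gene: inherited randomly from either parent.
            child[innov] = rng.choice([genome1[innov], genome2[innov]])
        elif fitness1 != fitness2:
            # Disjoint gene: inherited from the fittest parent only.
            fittest = genome1 if fitness1 > fitness2 else genome2
            if innov in fittest:
                child[innov] = fittest[innov]
        elif rng.random() < 0.5:
            # Disjoint gene, equal fitness: kept or dropped randomly.
            child[innov] = (genome1 if in1 else genome2)[innov]
    return child
```

In QNEAT this routine would run twice per mating, once on the ROT genome list and once on the CNOT genome list, since the two are kept separate.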
The crossover process is performed separately for CNOTs and ROTs since so are the genome lists. In Figure 3, we give an explicit example of crossover for two simple architectures. We assume that two architectures have the same fitness, and we highlight the layers with the dotted boxes. ### Mutation After the crossover has been performed, a mutation can take place. In the mutation process, there are three possible features that we can modify: The weights of the rotations can be adjusted (adding, for example, a small value sampled from a normal distribution); A new gate (ROT or CNOT) can be added respecting the rules of the constrained architecture. All the recent changes are then encoded into the genome. ### Speciation Furthermore, just like in the original idea of the NEAT algorithm, species are introduced to diversify the different evolutions and preserve the diversity for some time. This is because a new architecture may perform well only once the weights have been appropriately adapted, which may take some evolution steps. We thus divide the population into species, and, in order to understand if a generic agent is a member of a given species, we use the following metric to compare the agent with the best-performing member of the species. \[\delta=c_{1}\frac{E}{N}+c_{2}\frac{D}{N}+c_{3}\overline{W}, \tag{1}\] where \(c_{1},c_{2},c_{3}\) are linear coefficients that can be chosen arbitrarily, \(E\) is the number of genes in excess, so the ones whose invention number is not reached by the other agent, \(D\) is the number of disjoint genes, as defined before. \(\overline{W}\) is the average distance between the rotation angles of matching genes. \(N\) is the number of genes of the architecture containing the longest genome. The reproduction process is thus performed inside a single species. If a species is performing poorly, we want to penalize it by reducing its members, and the opposite if, on average, it's producing fit members. 
To do so, we consider species \(j\) containing, at a given moment, \(N_{j}\) members. The number of members of that species after the reproduction process will be \[N_{j}^{\prime}=\sum_{i=0}^{N_{j}}\frac{f_{ij}}{\tilde{f}}, \tag{2}\] where \(\tilde{f}\) is the average fitness of the whole population.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Layer & 1 & 1 & 1 & 1 & 2 & 2 & 3 \\ Wire (from) & 1 & 2 & 3 & 4 & 2 & 4 & 1 \\ (Wire to) & 2 & 3 & 4 & 1 & 3 & 1 & 2 \\ Innov. n. & 1 & 2 & 3 & 4 & 6 & 8 & 9 \\ \hline \end{tabular} \end{table} Table 2: Genome of CNOTs for the architecture in Figure 2. The genome of each CNOT contains the information about the wire on which the control depends, here called _Wire (from)_. The information about the wire it acts on (_Wire to_) is redundant because of the constraints on the architecture; it is shown here anyway for completeness.

Figure 3: Crossover process for two simple VQCs where the encoding layer has been omitted. The crossover process takes place separately for CNOTs and ROTs. In each box the genes are aligned with respect to the innovation number. The first line lists the genome of the left VQC; the second line the genome of the right VQC; the third line the genome of the offspring, where the gates have been selected according to the rules explained. Here the two VQCs are assumed to have the same fitness, thus the choice of matching or disjoint genes is made randomly.

## 4 The algorithm

We present the complete algorithm of QNEAT. The algorithm is explicitly described in Algorithm 1. The primary hyperparameters of the algorithm are the following: the number of generations we want to run it for, \(N_{g}\); the population size \(N\); and the weights mutation coefficient, e.g., the standard deviation of the normal distribution \(\mathcal{N}(0,\sigma)\) from which we will sample to change the rotation weights.
Moreover, we define the probability that the weights of a gate will be changed as \(p_{w}\), the probability that a ROT gate will be added as \(p_{ROT}\), the probability that a CNOT gate will be added as \(p_{CNOT}\), and the compatibility threshold \(\delta_{0}\) used to determine whether a member is part of a species. One last crucial hyperparameter given in input is the number of initial total layers \(i_{L}\). While we could start from an empty circuit and let QNEAT evolve it, as we will do in the experiments, another option is to start from a circuit with an initial number of layers filled in the sense of Figure 1. From experimental observations, initial layers may lead to faster convergence, even though this is not guaranteed.

## 5 Experiments settings

To explore the range of tasks the algorithm can be applied to, we test it on a variety of benchmark problems. Namely, the algorithm has been tried on reinforcement learning and optimization tasks. In each benchmark task, the algorithm remains the same apart from the function we use to evaluate the fitness of population members.

### Reinforcement Learning

The field of quantum reinforcement learning is rapidly evolving [41]. Different techniques are being employed, each one with a different degree of quantum-classical hybridization. As a reference we take the case of deep reinforcement learning, where the well-known Q-value method is solved with a function approximator: a classical neural network is used to learn the \(Q\) function, i.e., the expected future total reward, by an update rule that comes from the Bellman equation. This leads the function \(Q\) to its desired value: \[Q(s,a)=\mathbb{E}\left[r+\gamma\max_{a\in\mathcal{A}}Q(s^{\prime},a)\right] \tag{3}\] where \(a\in\mathcal{A}\) is an action in the action space, \(s,s^{\prime}\in\mathcal{S}\) are the current and next observation in the observation space, \(r\) is the reward of the action taken, and \(\gamma\) is the discount factor.
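For concreteness, the target value in equation (3) for a single transition can be sketched as follows (a generic illustration; `q` stands for any function approximator, classical or quantum, returning one Q-value per action; the toy Q-function is purely hypothetical):

```python
def q_target(q, s_next, reward, gamma, done=False):
    """Bellman target r + gamma * max_a Q(s', a) for one transition."""
    if done:
        # Terminal transitions have no future reward to bootstrap from.
        return reward
    return reward + gamma * max(q(s_next))

# Toy Q-function over two actions, used only to exercise the formula.
toy_q = lambda s: [0.5 * s, 1.0 - s]
target = q_target(toy_q, s_next=0.2, reward=1.0, gamma=0.9)
```

In QNEAT this target is not enforced by a gradient step; the fitness function simply rewards agents whose induced policy collects more return.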
```
Input: N_g, N, sigma, p_w, p_ROT, p_CNOT, delta_0, i_L
begin
    Create random population of size N
    for every generation do
        /* Evaluate fitness */
        for every agent of the population do
            Evaluate the fitness of the agent
        end for
        /* Speciation */
        for every agent of the population do
            for every species do
                Select best agent of species
                delta <- d(agent, best agent)
                if delta < delta_0 then
                    Add agent to the current species
                else
                    Add a new species with current agent
                end if
            end for
        end for
        /* Crossover */
        for every species do
            Select the best performing agents and kill the others
            Calculate the number of necessary spawns N'
            for N' times do
                Select random agent1 and agent2 among the best performing ones
                child <- crossover(agent1, agent2)
                /* Mutation */
                child <- mutate(child)
                Add child to the species
            end for
        end for
    end for
end
```

**Algorithm 1**: QNEAT Algorithm

Different attempts have been made to reproduce the same algorithm using a VQC as a function approximator for the \(Q\) function [42, 43, 44]. Starting from this framework, we also want the QNEAT algorithm to learn the \(Q\)-value function. Since no gradient update is employed here, at no point in the algorithm is the constraint of the Bellman equation applied, and thus the policy found may not respect it. We can thus conclude that in this context the QNEAT algorithm behaves like an informed random search for a good (supposedly optimal) policy. We test the algorithm on the Cart Pole and Frozen Lake benchmarks. In the first case we encode the observation-space information in the qubits by a simple angle rotation with respect to the \(x\) axis: for the \(i\)-th observation, whose value we call \(\theta\), a rotation \(R_{x}(\theta)\) is applied to qubit \(i\). We measure as many qubits as the dimension of the action space.
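The angle encoding described above can be sketched with NumPy as a small state-vector simulation (an illustrative sketch; the actual experiments presumably use a quantum-circuit library). Applying \(R_x(\theta_i)\) to qubit \(i\) of \(|0\cdots 0\rangle\) yields a product state:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation R_x(theta) = exp(-i theta X / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def angle_encode(observations):
    """Product state from applying R_x(theta_i) to qubit i of |0...0>."""
    state = np.array([1.0 + 0j])
    for theta in observations:
        state = np.kron(state, rx(theta) @ np.array([1.0, 0.0]))
    return state
```

For Cart Pole this gives a 4-qubit (16-dimensional) state, one qubit per observation component.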
In the Cart Pole problem the dimensions of the observation and action spaces are, respectively, four and two. In the Frozen Lake benchmark we have an 8x8 grid for a total of 64 squares. We identify each square with a number from 0 to 63 and convert the number into a bit string of length \(6=\log_{2}(64)\). We thus use 6 qubits to encode the information, applying to the \(i\)-th qubit a rotation of angle \[\theta_{i}=\pi x_{i},\ \ \text{with}\ \ x_{i}\in\{0,1\}. \tag{4}\]

### Optimization

As an example of optimization, we consider combinatorial optimization problems. In particular, we study the MaxCut problem, which is particularly important since many quadratic unconstrained binary optimization problems can be mapped onto it. Being able to solve it thus means being able to address a broader class of combinatorial optimization problems. One traditional approach to solving combinatorial optimization problems is the Quantum Approximate Optimization Algorithm (QAOA) [30]. The QAOA can be seen as a particular case of Variational Quantum Eigensolvers (VQEs), a class of VQCs used to approximate the Schrödinger equation, evolving an initial state into a final one under a time-dependent Hamiltonian. More specifically, given an initial Hamiltonian \(H_{i}\) with a known ground state \(\psi_{i,0}\), and a final Hamiltonian \(H_{f}\) encoding our optimization problem with ground state \(\psi_{f,0}\) encoding the optimal solution, we can start from the state \(\psi_{i,0}\) and make it evolve with an appropriate unitary operation into the final one \[\left|\psi_{f,0}\right\rangle=U\left|\psi_{i,0}\right\rangle, \tag{5}\] where the action of the unitary operator \(U\) is performed by the VQC. We can thus map this problem onto a VQC by encoding the initial ground state \(\left|\psi_{i,0}\right\rangle\) into the qubits and finding an appropriate architecture to simulate a unitary time evolution \(U\) towards the final state encoding the optimal solution.
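A minimal sketch of the Frozen Lake encoding of Eq. (4), assuming squares are indexed from 0 to 63; the choice of sending the most significant bit to qubit 0 is our assumption, not specified above.

```python
import math

# Sketch of the Frozen Lake state encoding of Eq. (4): the square index is
# written as a 6-bit string and each bit x_i becomes a rotation angle
# theta_i = pi * x_i on qubit i. Bit ordering (MSB -> qubit 0) is assumed.
def encoding_angles(square, n_qubits=6):
    bits = [(square >> (n_qubits - 1 - i)) & 1 for i in range(n_qubits)]
    return [math.pi * x for x in bits]

angles = encoding_angles(42)   # 42 = 0b101010
# angles: pi for bits equal to 1, 0.0 otherwise
```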
To give a concrete example, one possible choice of \(U\) is the one given by the Schrödinger equation. Namely, we can choose an evolving Hamiltonian \(H(t)=tH_{f}+(1-t)H_{i}\), with \(H_{f}\) the one encoding our problem. Then we can exponentiate it to obtain the Schrödinger evolution \[\left|\psi_{f,0}\right\rangle=\exp\left\{\frac{-iH(t)t}{\hbar}\right\}\left| \psi_{i,0}\right\rangle, \tag{6}\] where the exponential can be decomposed analytically into physical gates, and a precise architecture can be found. This is only one possible choice of architecture, and many have been proposed [45]. Here we use the QNEAT algorithm to evolve the circuit and evolutionarily find the optimal architecture. In this class of problems, we aim to find the state that encodes the lowest energy of the problem Hamiltonian \(H_{f}\). In other words, we want to minimize the following expectation value \[\left\langle\psi_{f}\middle|H_{f}\middle|\psi_{f}\right\rangle. \tag{7}\] The MaxCut problem we consider consists in dividing the nodes of a graph into two separate sets so that the cut line encircling one or the other set goes through the maximum number of edges of the graph. We can label the nodes \(0\) or \(1\) depending on the set they belong to, and these values can be encoded into the qubits. With an appropriate Hamiltonian, we can encode whether the edge between two nodes is cut or not. The benchmark graphs used are shown in Figure 4 and have been taken as example benchmarks from [46]. As fitness we use the expectation value of the MaxCut Hamiltonian, namely: \[H=\sum_{(i,j)\in G}\frac{1}{2}\left(1-Z_{i}\otimes Z_{j}\right), \tag{8}\] with \(Z_{k}\) the spin measured on the \(k\)-th qubit and \(G\) the set of the edges of the graph.
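The fitness of Eq. (8) can be evaluated classically on a single measured spin configuration; a minimal sketch with a hypothetical 4-node cycle graph follows.

```python
# Sketch: evaluating the MaxCut Hamiltonian of Eq. (8) on one measured
# configuration, with spins z_k in {-1, +1}. Each cut edge contributes
# (1 - z_i * z_j) / 2 = 1; each uncut edge contributes 0.
def maxcut_value(edges, spins):
    return sum(0.5 * (1 - spins[i] * spins[j]) for i, j in edges)

# Hypothetical 4-node cycle graph; the partition {0, 2} vs {1, 3}
# cuts all four edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
spins = [+1, -1, +1, -1]
print(maxcut_value(edges, spins))  # 4.0
```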
Considering the output of the measurement on each qubit to be in \(\{-1,1\}\), if two nodes \((i,j)\) belong to the same set, so they have values \((1,1)\) or \((-1,-1)\), then the edge contributes a factor of \(0\) to the Hamiltonian, which corresponds to the fact that no cut goes through the edge. If instead they have different values, \((-1,1)\) or \((1,-1)\), then the edge contributes a factor of \(1\). We thus aim at maximizing the Hamiltonian. At the end of the evolution process, we select the best member and sample it \(100\) times to see the distribution of the outcomes, i.e., how many times the circuit gives a measured bit string corresponding to one of the optimal solutions of the MaxCut problem for the given graph.

## 6 Results

We present in Table 3 the hyperparameters used to solve the different problems. Under _Reinforcement learning_ we include both CartPole and FrozenLake8x8. With MaxCut we refer to all four graphs. The parameters have been chosen by grid search.

### Reinforcement Learning

The results for Cart Pole and Frozen Lake are presented in Figures 5 and 6, respectively. We study the score of the VQC evolved by the QNEAT algorithm starting with \(0,1,2\) initial layers. We present separately the results of: the top \(5\) members, the top member averaged over an extra \(100\) runs, and the whole population. The maximum score is \(500\). During the evolution, we also keep track of the number of ROTs and CNOTs that contribute to the architecture. We can see how the top \(5\) agents rapidly reach the maximum score, converging faster when the number of initial layers increases. The population instead converges.

Figure 4: Benchmark graphs used for testing the performance of QNEAT on combinatorial optimization tasks.

### Combinatorial Optimization

We solve the MaxCut problem on the four different graphs shown before. For each one of them, we evaluate the QAOA algorithm with 1, 2, 3, and 4 layers.
Higher numbers of layers have also been tested, but they show the same behavior as the case with 4 and have thus been omitted. We then compare the QAOA with QNEAT. The results can be seen in Figure 7. The four rows refer to the different graphs. In each row, we study the performance of the _best member_: the first column shows the accuracy of measuring an optimal solution obtained by the QNEAT and QAOA algorithms. The last three columns show separately the number of total gates, CNOTs, and ROTs used in the architecture of each algorithm. Each epoch represents a generation of 200 members in the case of QNEAT, while it represents a single gradient update in the case of the QAOA algorithm. From the graphs, it can easily be seen how the QAOA improves with the number of layers used. This comes with a linear increase in the number of gates used. It can also be seen how the QNEAT algorithm reaches good accuracies, comparable with the case of QAOA with four layers but with a much lower number of gates. This makes the implementation more compatible with current NISQ-era architectures.

### Evolution

We also show visually the evolution of one VQC starting from an empty configuration in the case of the Cart Pole benchmark. The parameters used for this evolution are the ones shown in Table 3 for the Reinforcement Learning tasks. At the end of the evolution of the population, the best agent has been tracked backward and its architecture visualized generation by generation. Figure 8 shows the evolution process. Only some of the 50 generations are shown, namely the ones where new gates have been added. In the others, either nothing happened or only the weights changed. It can be seen that, starting from an empty architecture (apart from the encoding part), gates are slowly added until reaching slightly more than 20 gates, in accordance with the results shown in Figure 5. In the case of the Cart Pole problem, such a small architecture is capable of solving the benchmark and achieving the top scores.
the number of gates that were added during the evolution. In doing so we always relied on the fact that, because of how the QNEAT algorithm works, new gates are added only if the resulting performance is better, so only if it is necessary. We never explicitly requested, at any point of the algorithm, that the number of gates be minimized. Obviously, we want the number of gates to be as low as possible, provided that the agent achieves the top score of whatever problem is considered. The QNEAT algorithm, as it is, deals with this problem by letting species with bigger architectures survive only if they perform better. Nonetheless, no penalty is applied to deep, complex architectures. In order to push the algorithm to prioritize short architectures, we apply a technique known as _Multi Objective Optimization_ (MOO). In particular, here we use the NSGA-II algorithm [47], an algorithm proposed to deal with optimization problems where multiple scores or fitnesses must be taken into account at the same time. NSGA-II is a genetic algorithm based on the idea of finding the Pareto fronts in the space of the fitness functions, and then evolving the members of the Pareto fronts closest to the best values of the scores. The algorithm works as follows: we start with a set of fitnesses \(\{f_{1},\ldots,f_{n}\}\), which we suppose we want to minimize. This assumption can always be made true by simply replacing any \(f_{i}\) that should be maximized with \(1/f_{i}\). Given a population of \(N\) members, the algorithm starts with reproduction and mutation of the original population \(P\) to generate the new set of members \(Q\). The population will thus have \(2N\) members, where half of them are mutations of the initial ones. After all of them have performed the task, we can plot each one as a point \((f_{1},\ldots,f_{n})\in\mathbb{R}^{n}\) in the fitness space.
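NSGA-II's partitioning of these \(2N\) points relies on Pareto dominance: \(A\) dominates \(B\) if \(A\) is at least as good in every fitness and strictly better in at least one (all fitnesses minimized). A minimal, illustrative sketch of the front extraction:

```python
# Hedged sketch of NSGA-II's front extraction. Each member is a point
# (f1, ..., fn) in fitness space, all fitnesses to be minimized.
# The data below are illustrative, not from the paper.
def dominates(a, b):
    """a dominates b: no worse in every fitness, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_fronts(points):
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical fitness points: (error, gate_count), both minimized.
pts = [(0.1, 20), (0.2, 10), (0.3, 30), (0.15, 25)]
print(nondominated_fronts(pts))  # [[0, 1], [3], [2]]
```

This quadratic-time version is only for clarity; the original NSGA-II paper gives a faster bookkeeping scheme.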
Figure 7: Results for the MaxCut problem. Each row describes one of the four graphs shown in Figure 4. The first column shows the accuracy while the other three show, respectively, the number of total gates, CNOTs and ROTs used for the architecture.

Here the set of points representing the members is divided into Pareto fronts, or _nondomination fronts_. A member \(A\) is said to dominate another member \(B\) if all the fitnesses of \(A\) are at least as good as those of \(B\), and at least one is strictly better. Once the different members have been compared, they can be divided into fronts, where each front has a rank. Each member of a front neither dominates nor is dominated by any other member of the same front; it can only be dominated by members of fronts with a lower rank, and can only dominate members of fronts with a higher rank. In other words, a nondomination front is the set of all those members that are roughly at the same distance, with respect to the fitness axes, compared to the other members. Once the population, still counting \(2N\) members, has been divided into Pareto fronts, we order all the members according to their overall performance and select the first half of them to form the new population. The way the members are ordered depends on two factors: the rank and the crowding distance.

1. **Rank**. Once the members are divided into Pareto fronts, each receives a rank, simply an integer number, corresponding to the Pareto front it belongs to. Once this has been done, all the members with a lower rank are considered to be better than those with a higher rank, and starting from these the mutation and crossover processes take place.

2. **Crowding distance**. Inside a nondomination front, where the rank is the same for every member, another metric is used to classify the members: the crowding distance.
Given one element \(i\), we define its crowding distance as the distance, in the fitness space, of the members nearest to it in the same front, averaged over all the fitness directions. The members that lie in a less crowded area of the space are considered to be better, since this preserves the diversity of the front. More exactly, the condition establishing that a member \(i\) is better than a member \(j\) is \[i<j\text{ if }(i_{\text{rank}}<j_{\text{rank}}) \tag{9}\] \[\text{or }((i_{\text{rank}}=j_{\text{rank}})\text{ and }(i_{\text{distance}}>j_{\text{distance}}))\] According to the condition in Equation 9, all the \(2N\) members are ordered and then the first \(N\) of them are selected to be part of the next generation. The algorithm's main loop is shown in Algorithm 2, while the other routines mentioned inside can be consulted in the original work.

Figure 8: Evolution of an empty VQC with the QNEAT algorithm for the Cart Pole environment with parameters from Table 3. From top to bottom, each row is a different generation. Not all of the generations are shown: of a total of 50 generations, only the ones where a new gate has been added. In the others, either no mutation happened or only the weights changed.

```
begin
    R_t = P_t ∪ Q_t
    F = fast-non-dominated-sort(R_t)
    P_{t+1} = ∅, i = 1
    while |P_{t+1}| + |F_i| ≤ N do
        crowding-distance-assignment(F_i)
        P_{t+1} = P_{t+1} ∪ F_i
        i ← i + 1
    end while
    Sort(F_i)
    P_{t+1} ← P_{t+1} ∪ F_i[1 : (N - |P_{t+1}|)]
    Q_{t+1} = make-new-pop(P_{t+1})
end
```
**Algorithm 2** Multi Objective Optimization - NSGA-II

In our case, we merged the QNEAT and MOO algorithms. More specifically, we kept the species structure of the first algorithm and, inside each species, the MOO was applied, looking for the Pareto fronts separately for each species.
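A minimal sketch of the crowding distance within one front and of the crowded-comparison operator of Eq. (9); the normalization by the front's fitness range follows the original NSGA-II paper, and the data are illustrative.

```python
# Hedged sketch of NSGA-II's crowding distance inside one front, plus the
# crowded-comparison operator of Eq. (9): lower rank wins; within a front,
# the larger (less crowded) distance wins. Data below are illustrative.
def crowding_distances(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):                       # one fitness direction at a time
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary members
        if hi > lo:
            for pos in range(1, n - 1):
                i = order[pos]
                dist[i] += (front[order[pos + 1]][k]
                            - front[order[pos - 1]][k]) / (hi - lo)
    return dist

def better(rank_i, dist_i, rank_j, dist_j):
    """Eq. (9): i < j iff lower rank, or equal rank and larger distance."""
    return rank_i < rank_j or (rank_i == rank_j and dist_i > dist_j)

front = [(0.1, 30.0), (0.2, 20.0), (0.4, 10.0)]
d = crowding_distances(front)   # boundary members get infinite distance
```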
This is done to keep the different species evolving independently, so that the members of each species do not need to compete with members from other species. The mutation and crossover used to generate new offspring are instead drawn from the QNEAT algorithm. The hyperparameters used to run the algorithm are the same as in Table 3. We show and compare the results of the QNEAT algorithm with and without the MOO variation. We evolve a VQC, both with and without MOO, starting from 0, 1 and 2 layers. The results are shown in Figure 9. It can be seen how the version with MOO performs worse both in terms of average and variance when it comes to the fitness of the environment. Nonetheless, it can be seen how the number of gates is actually lower. In the case of 2 initial layers, though, the difference between the two versions is not that sharp, neither in the fitness nor in the number of gates, even though the QNEAT without any variation converges faster. This behavior can be explained by the fact that the MOO algorithm looks for Pareto fronts in the space of fitnesses, where a front can be composed of members that perform well but have a high number of gates, as well as members that have a small number of gates but perform poorly. Finding a good criterion to decide which part of the Pareto front to keep is essential to balance this possible trade-off.

## 8 Conclusions

In this paper, we explored the Quantum NeuroEvolution of Augmenting Topologies (QNEAT) algorithm. We defined how we adapted the genome and the processes of crossover and mutation from the classical NEAT in order to be able to evolve the architectures of Variational Quantum Circuits. We then tested the algorithm on classical machine learning tasks, such as Reinforcement Learning and Combinatorial Optimization problems, and kept track during the evolution of the growth and depth of the architecture. Finally, we merged the QNEAT algorithm with the NSGA-II algorithm for multi objective optimization.
One topic that should be studied in future works is the difference between the constrained and free architecture, in order to see how the latter would perform in terms of the number of generations necessary to evolve a circuit up to the desired fitness, and also to see whether an architecture can be found that improves the score. Also, the QNEAT algorithm should be tested on harder and bigger problems, to see how effective it is on more complex reinforcement learning environments as well as bigger graphs for the case of the MaxCut problem.

Figure 9: Comparison between QNEAT without the MOO alteration, and with the MOO. Each row represents an experiment performed with, respectively, 0, 1 and 2 initial layers, starting from the top row. In the first column, the average scores of the top 5 members are shown, while in the right column the total gates are tracked. It can be seen how, as the number of initial layers increases, the averages of the top 5 members converge. Nonetheless, the number of gates is lower.
2301.12694
Photonic corner skin modes
Useful in the enhancement of light-matter interaction, localization of light is at the heart of photonics studies. Different approaches have been proposed to localize light, including those based on dynamical localization, topological trivial or nontrivial defects in the band gap of photonic crystals, and bound states in the continuum. Recent studies on non-Hermitian skin effect have provided us new means to localize waves. In this work, we propose a new method towards localized light, called photonic corner skin modes arising from second-order non-Hermitian skin effect and gain-loss symmetry on a lattice. Specifically, we propose to make use of small pseudo-inversion symmetric gain/loss, which does not close the band gap, to realize a photonic Chern insulator with chiral edge states. The chiral edge states then accumulate at certain corners of the system. Intriguing phenomena such as corner skin modes arising from an underlying bipolar second-order non-Hermitian skin effect and multiple-corner skin modes are predicted in continuous systems.
Weiwei Zhu, Jiangbin Gong
2023-01-30T06:57:16Z
http://arxiv.org/abs/2301.12694v1
# Photonic corner skin modes

###### Abstract

Useful in the enhancement of light-matter interaction, localization of light is at the heart of photonics studies. Different approaches have been proposed to localize light, including those based on dynamical localization, topologically trivial or nontrivial defects in the band gap of photonic crystals, and bound states in the continuum. Recent studies on the non-Hermitian skin effect have provided us with new means to localize waves. In this work, we propose a new method towards localized light, called photonic corner skin modes, arising from the second-order non-Hermitian skin effect and gain-loss symmetry on a lattice. Specifically, we propose to make use of small pseudo-inversion-symmetric gain/loss, which does not close the band gap, to realize a photonic Chern insulator with chiral edge states. The chiral edge states then accumulate at certain corners of the system. Intriguing phenomena such as corner skin modes arising from an underlying bipolar second-order non-Hermitian skin effect and multiple-corner skin modes are predicted in continuous systems.

## I Introduction

Due to the non-Hermitian skin effect (NHSE), the spectrum of a non-Hermitian system is sensitive to its boundary condition. Under open boundary conditions (OBC), the NHSE in one-dimensional systems causes all the bulk states to localize at one edge of the system [1; 2; 3; 4]. While the NHSE itself has a topological origin associated with point-gap topology [3; 4], it has also challenged our understanding of the usual bulk-boundary correspondence of topological band theory and even led to the concept of generalized bulk-boundary correspondence, via which the topological invariants are defined in the so-called generalized Brillouin zone [5; 6; 7; 8; 9]. To date the first-order NHSE has been widely studied both in theory and in experiment [1; 2; 3; 4; 5; 6; 7; 8; 9; 10].
However, there is much less work on the second-order NHSE, where certain states (whose number is proportional to the length of the system) are localized at a corner, with the bulk states still extended [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24]. Some studies proposed to design the second-order NHSE from the Benalcazar-Bernevig-Hughes model as a topological quadrupole insulator with corner modes [13; 14; 15; 16]. This design requires positive couplings along one direction and negative couplings along the opposite direction, clearly exotic features not easy to realize in experiment. Hybrid skin-topological modes represent a second type of second-order NHSE. They can be realized in experiment but still require specially designed asymmetric couplings, limiting their possible extensions [11; 12; 18; 22]. Another kind of hybrid skin-topological mode without asymmetric couplings has been proposed recently by adding gain/loss to Chern insulators or anomalous Floquet topological insulators [23; 24]. Up to now, all these constructions are based on tight-binding lattices. To further explore the second-order NHSE for possible applications, we shall explore in this work possible second-order NHSE in continuous systems, specifically in photonic crystals. Photonic crystals, whose material parameters and geometric structure can be easily tuned, have already proven to be a good platform to study different topological states [25; 26; 27; 28; 29; 30; 31; 32]. Actually, the classical-wave analogue of the Chern insulator was first proposed and realized in gyromagnetic photonic crystals [33; 34; 35]. The Floquet topological insulator and the Weyl semimetal were both first realized in photonic crystals, even earlier than their electronic counterparts [36; 37].
More importantly, photonic crystals are also a good platform to study non-Hermitian physics, where loss is ubiquitous from material absorption or mode leakage and gain can be obtained from electrical or optical pumping [38; 39]. There have also been great efforts to study non-Hermitian photonic topological states, mainly in one-dimensional systems [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. However, there is less work on non-Hermitian photonic Chern insulators using two-dimensional photonic crystals [51; 52; 53; 54]. Our study shall stimulate further interest in non-Hermitian Chern insulator phases in two-dimensional photonic crystals as continuous systems. Studies of topological states and non-Hermitian physics in photonic crystals may be relevant to a wide variety of applications, such as topological delay lines, high-sensitivity sensors and topological lasers [55; 56; 57; 58; 59; 60]. Indeed, topological defects as one way to localize light have attracted much attention for the possibility to enhance light-matter interactions [61; 62; 63; 64]. The NHSE as a new way to localize light has also been studied in photonic crystals most recently [65; 66; 67; 68; 69; 70]. However, the second-order version of the NHSE, featured in this work as photonic corner skin modes, has not been investigated until now. In this paper, we propose one way to obtain photonic corner skin modes by adding pseudo-inversion-symmetric gain/loss to gyromagnetic photonic crystals, which support chiral edge states. The spectrum of the edge states is complex under the mixed boundary condition and real under OBC. Correspondingly, the eigenfield is extended along the edge under the mixed boundary condition and localized at a corner under OBC. Similar to the bipolar NHSE from twisted spectral winding numbers in one-dimensional systems [71; 72; 73; 74], we observe that part of the chiral edge states are localized at one corner and the others are localized at the opposite corner.
This is so even though the spectrum of the edge states does not form a loop, due to the quantum anomaly [33]. Such intriguing phenomena exist in both the square lattice and the triangle lattice with pseudo-inversion symmetry. Furthermore, by constructing a unit cell preserving pseudo six-fold rotation symmetry, it is found that the corner skin modes can be simultaneously localized at multiple corners in a triangle lattice. Our work clearly shows that the features of non-Hermitian photonic Chern insulators are sensitive to their boundary conditions. This work is organized as follows. In Section II, we review the Hermitian gyromagnetic photonic crystal, which supports topological chiral edge states in both the square lattice and the triangle lattice. In Section III, we first provide a general understanding of the formation of corner skin modes by the accumulation of chiral edge states with gain/loss. Then we discuss the concrete behavior of corner skin modes in various lattices with different designs of gain/loss. Section IV concludes this work. All the simulations are performed in COMSOL Multiphysics and we only consider the transverse-magnetic modes. We use the sign convention of the RF module in COMSOL, where positive (negative) imaginary parts represent loss (gain).

## II Topological chiral edge states in photonic crystals without non-Hermiticity

We first introduce the gyromagnetic photonic crystals, which have been used to realize reflection-free topological edge states [75; 76; 34; 77]. One example is shown in Fig. 1(a), where rods made of gyromagnetic material are regularly distributed in a square lattice and the background is air. The parameters are the same as in Ref. [34], with the lattice constant being \(a\), the radius of the rods being \(r=0.11a\) and the relative permittivity of the rods being \(\epsilon_{r}=15\).
The time-reversal symmetry is broken by a magnetic field along the \(z\) direction, and the corresponding relative permeability is a tensor \[\mu_{r}=\left(\begin{array}{ccc}14&12.4i&0\\ -12.4i&14&0\\ 0&0&1\end{array}\right) \tag{1}\] From the unit cell in Fig. 1(a), one can see that the system preserves four-fold rotation symmetry, which contains the inversion symmetry with \(\epsilon_{r}(\mathbf{r})=\epsilon_{r}(-\mathbf{r})\) and \(\mu_{r}(\mathbf{r})=\mu_{r}(-\mathbf{r})\). The first Brillouin zone is plotted in Fig. 1(a). \(\Gamma\), X and M are the high-symmetry momentum points. The bulk band structure along the high-symmetry lines in the first Brillouin zone is studied and shown in Fig. 1(b). Usually, the topological properties of the bulk bands can be described by the Chern number defined from the Bloch states over the whole Brillouin zone [78; 34]. Here we quickly check the topological properties by considering the inversion symmetry eigenvalues at the inversion-symmetric momentum points. From Fig. 1(b), it is seen that the inversion symmetry eigenvalues are all positive for the first band, while the second band possesses an odd number of positive eigenvalues. This indicates that the second band gap is a Chern insulator. Such results are confirmed by the projected band structure shown in Fig. 1(c), where right-propagating chiral edge states (colored purple) are localized at the upper edge and left-propagating chiral edge states (colored green) are localized at the lower edge.

Figure 1: Topological chiral edge state of gyromagnetic photonic crystal in a square lattice. (a) Two-dimensional photonic crystal in a square lattice composed of gyromagnetic rods of radius \(0.11a\), with \(a\) being the lattice constant. The background is air. The unit cell and first BZ are shown at right. (b) The bulk bands along the high-symmetry lines in the first BZ. The parity symmetries at high-symmetry momentum points are marked for the lower two bands. \(+\) (\(-\)) indicates even (odd) parity of the eigenstates. The second band gap is a Chern insulator. (c) The projected band structure as a function of \(k_{x}\). In the calculation, periodic boundary condition (PBC) is used along \(x\) and OBC is used along \(y\). OBC is realized by setting the boundaries as perfect electric conductors. The second band gap supports chiral edge states, one (colored purple) propagating to the right at the upper boundary and the other one (colored green) propagating to the left at the lower boundary.

We carry out a parallel study with the same system parameters but in a triangle lattice, as shown in Fig. 2(a). The unit cell can be chosen as a hexagon, which preserves six-fold rotation symmetry, or a rhombus, which preserves inversion symmetry. Different from the square lattice case, the high-symmetry momentum points for the triangle lattice are \(\Gamma\), M and K. In addition, only \(\Gamma\) and M are inversion-symmetry-invariant momentum points. The bulk band structure is shown in Fig. 2(b) and is quite similar to the square lattice case, since the triangle lattice can be obtained by a slight geometric transformation of the square lattice. As in the square lattice, the inversion symmetry eigenvalues are all positive for the first band, while the second band possesses an odd number of positive eigenvalues, indicating again that the second band gap is a Chern insulator. Indeed, the chiral edge states can be observed in the second band gap of the projected band structure, as shown in Fig. 2(c). Because the right-propagating edge states and left-propagating edge states are spatially separated (they are orthogonal to each other), we can describe the topological edge states shown in Figs. 1(c) and 2(c) by a diagonal Hamiltonian in the basis of the upper and lower edge states \((\psi_{\text{up}},\psi_{\text{down}})^{T}\).
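As a quick illustrative check (not part of the paper's COMSOL workflow), the permeability tensor of Eq. (1) is Hermitian, so the gyromagnetic response itself introduces no gain or loss; non-Hermiticity will enter later only through the permittivity.

```python
import numpy as np

# Sketch: the relative permeability tensor of Eq. (1). Despite the
# imaginary off-diagonal entries (from the magnetic field along z),
# the tensor equals its own conjugate transpose, i.e. it is Hermitian
# and therefore describes a lossless gyromagnetic medium.
mu_r = np.array([[14, 12.4j, 0],
                 [-12.4j, 14, 0],
                 [0, 0, 1]])

print(np.allclose(mu_r, mu_r.conj().T))  # True
```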
The Hamiltonian of the edge states can be described by \[H_{\text{edge}}=\omega_{r}\sigma_{0}+v(k_{x}-k_{r})\sigma_{z} \tag{2}\] Here \(\omega_{r}\) is a reference angular frequency in the middle of the second band gap, \(k_{r}\) is a reference momentum, and \(v\) is the group velocity. \(v>0\) in our system, meaning the upper (lower) edge state propagates to the right (left). \(\sigma_{0}\) is the two-by-two identity matrix and \(\sigma_{z}\) is the third Pauli matrix. This Hamiltonian of the edge states is useful for the understanding of photonic corner skin modes later.

Figure 2: Same as in Fig. 1, except for the triangle photonic crystal. (a) Structure of the photonic crystal in a triangle lattice. (b) The bulk bands along a high-symmetry line. (c) The projected band structures.

## III Photonic corner skin modes in non-Hermitian photonic crystals with pseudo-inversion symmetry

### General understanding of photonic corner skin modes

The photonic Chern insulators are quite robust even when the system has non-Hermiticity. With non-Hermiticity, the topological chiral edge states are still there without the band gap closing [51; 52; 53]. We introduce gain and loss into the unit cell, as in the examples in Fig. 3(a), where the red areas represent gain and the blue areas represent loss. The unit cell still preserves pseudo-inversion symmetry with \(\epsilon_{r}(\mathbf{r})=\epsilon_{r}^{\dagger}(-\mathbf{r})\) and \(\mu_{r}(\mathbf{r})=\mu_{r}^{\dagger}(-\mathbf{r})\). As a general analysis, note that if the chiral edge states contain gain (loss) at the upper edge, then the chiral edge states at the lower edge contain loss (gain) due to the pseudo-inversion symmetry. The Hamiltonian of the edge states is then modified as \[H_{\text{edge}}^{m}=\omega_{r}\sigma_{0}+v(k_{x}-k_{r})\sigma_{z}+i\gamma \sigma_{z} \tag{3}\] The gain/loss can be treated as a modification to the quasi-momentum, \(\tilde{k}_{x}=k_{x}+i\gamma/v\), which makes the waves localize to the right (left) for \(\gamma<0\) (\(\gamma>0\)) due to the NHSE under OBC. Different from the usual NHSE in a 1D chain, here the spectrum of the edge states does not form a loop, so it cannot be described by spectral winding numbers but by the sign of \(\gamma\).

Figure 3: Schematic for corner skin modes in non-Hermitian photonic crystals with pseudo-inversion symmetry. (a) Three examples of the unit cell with pseudo-inversion symmetry. The red (blue) part indicates gain (loss). (b) One possible configuration for the chiral edge states localized at one corner. (c) One configuration, forbidden by pseudo-inversion symmetry, where the chiral edge states are localized at two corners. (d)(e) Two possible configurations for the chiral edge states localized at one or three corners.

Figs. 3(b)-3(e) show some configurations of the chiral edge states with gain/loss. The red (blue) arrows represent gain (loss) chiral edge states. Gain (loss) chiral edge states accumulate waves along (opposite to) the propagation direction. Fig. 3(b) is one possible configuration for the square lattice, where the waves are localized at the right-down corner due to the accumulation effect. Fig. 3(c) shows a configuration where the waves would be localized at two opposite corners. However, chiral edge states on opposite edges having the same gain/loss are forbidden by the pseudo-inversion symmetry. So for the square lattice case, all the waves are localized at one corner. For the triangle lattice case, however, it is possible to localize waves at one corner or at three corners, as shown in Figs. 3(d) and 3(e).

### Photonic corner skin modes due to bipolar second-order NHSE

We now study non-Hermitian photonic crystals in a square lattice with pseudo-inversion symmetry, whose unit cell is shown in the inset of Fig. 4(a). The gain (loss) is added to the air by tuning the relative permittivity to \(\epsilon_{r}=1-0.8i\) (\(\epsilon_{r}=1+0.8i\)).
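The localization direction stated after Eq. (3) can be checked numerically: with \(\tilde{k}_{x}=k_{x}+i\gamma/v\), an edge plane wave \(e^{i\tilde{k}_{x}x}\) carries the envelope \(e^{-\gamma x/v}\). A purely illustrative sketch with arbitrary parameter values:

```python
import cmath

# Illustrative sketch: amplitude |exp(i*k_tilde*x)| of an edge mode with
# complex quasi-momentum k_tilde = k_x + i*gamma/v (cf. Eq. 3).
# Parameter values are arbitrary, chosen only to show the trend.
def envelope(x, k_x=1.0, gamma=0.5, v=1.0):
    k_tilde = k_x + 1j * gamma / v
    return abs(cmath.exp(1j * k_tilde * x))   # equals exp(-gamma*x/v)

# gamma > 0: amplitude decays with x, so weight piles up at the left.
print(envelope(0.0) > envelope(5.0))   # True
# gamma < 0: amplitude grows with x, so weight piles up at the right.
print(envelope(0.0, gamma=-0.5) < envelope(5.0, gamma=-0.5))  # True
```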
The spectrum of the system with PBC in both directions is shown in Fig. 4(a). There are two clusters, corresponding to the second and third bulk bands, with the line band gap between them. We then study the spectrum under the mixed boundary condition; the results are shown in Fig. 4(b). The bulk band clusters become tighter than under PBC. The reason is that the nonreciprocal bulk bands, combined with the gain/loss, make the system support a first-order NHSE, so the bulk spectrum is sensitive to the boundary condition [68]. We also see complex chiral edge states across the line band gap. The spectrum for the system with OBC in both directions is shown in Fig. 4(c). Again the bulk spectrum becomes tighter. More importantly, the spectrum of the chiral edge states changes dramatically, from complex under the mixed boundary condition to real. Such a dramatic change is one feature of the NHSE. Specifically, the NHSE here occurs for the topological edge states and can hence be understood as a second-order NHSE. We focus on the spectrum of the edge states in the second band gap. Under the mixed boundary condition, the spectrum of the edge states can be split into two parts, as shown in Fig. 4(b): in the first part, marked I, the upper (lower) edge states have loss (gain), corresponding to \(\gamma>0\); in the second part, marked II, the upper (lower) edge states have gain (loss), corresponding to \(\gamma<0\). According to Eq. (3), these two parts have different localization behavior under OBC. The eigenfields with OBC are shown in Figs. 4(d)-4(f). For part I the waves are localized at the left-up corner, as shown in Fig. 4(d); at the transition point the waves are extended along the edge, as shown in Fig. 4(e); and for part II the waves are localized at the right-down corner, as shown in Fig. 4(f). Such a phenomenon, called bipolar NHSE, has been observed in 1D systems and is usually connected with a twisted spectral winding [71; 72; 73; 74].
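A compact way to see the two localization behaviors is to insert the shifted momentum from Eq. (3) back into a plane-wave edge state; this is a sketch restating the argument above, not an independent derivation:

```latex
\psi(x) \;\propto\; e^{i\tilde{k}_x x}
       \;=\; e^{i k_x x}\, e^{-\gamma x / v},
\qquad v>0:\quad
\begin{cases}
\gamma<0, & |\psi(x)| \text{ grows with } x \;\Rightarrow\; \text{accumulation to the right},\\[2pt]
\gamma>0, & |\psi(x)| \text{ decays with } x \;\Rightarrow\; \text{accumulation to the left}.
\end{cases}
```

Since parts I and II of the edge spectrum carry opposite signs of \(\gamma\), their skin modes accumulate at opposite corners, which is the bipolar second-order NHSE described above.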
Here we observe similar phenomena in a 2D system, which can be understood as a bipolar second-order NHSE. Next we examine a similar photonic crystal but in a triangle lattice configuration. The unit cell is shown in the inset of Fig. 5(a). Similar phenomena are observed. Compared with the square lattice case, the bulk spectrum is less sensitive to the boundary conditions, as shown in Figs. 5(a)-5(c). The spectrum of the edge states, however, is still sensitive to the boundary condition: it is complex under the mixed boundary condition and real under OBC, as shown in Figs. 5(b) and 5(c). There are again two parts, part I corresponding to \(\gamma>0\) and part II corresponding to \(\gamma<0\). In part I the waves are localized at the left-up corner, as shown in Fig. 5(d); at the phase transition point the waves are extended along the edge, as shown in Fig. 5(e); and in part II the waves are localized at the right-down corner, as shown in Fig. 5(f).

Figure 4: Corner skin modes in a square lattice with pseudo inversion symmetry. (a) Spectrum for the system with PBC in both directions. Unit cell is shown in the inset. (b) Spectrum of the system with mixed boundary condition, PBC in one direction and OBC in the other direction. The edge states for the upper (lower) boundary are colored purple (green). (c) Spectrum for the system with OBC in both directions. (d)(e)(f) The field profiles for the topological edge states under open boundary conditions: (d) corner skin modes with lower energy are localized at the left-up corner, (e) extended topological edge states at the transition point, and (f) corner skin modes with higher energy are localized at the right-down corner.

### Multiple-corner skin modes

The previous examples show photonic corner skin modes localized at a single corner. Next we provide a case where the photonic corner skin modes are localized at multiple corners. The unit cell is shown in the inset of Fig. 6(a).
The unit cell preserves pseudo sixfold rotation symmetry, with \(\epsilon_{r}(\mathbf{r})=\epsilon_{r}^{\dagger}(C_{6}\mathbf{r})\) and \(\mu_{r}(\mathbf{r})=\mu_{r}^{\dagger}(C_{6}\mathbf{r})\), which includes the pseudo-inversion symmetry \(\epsilon_{r}(\mathbf{r})=\epsilon_{r}^{\dagger}(-\mathbf{r})\) and \(\mu_{r}(\mathbf{r})=\mu_{r}^{\dagger}(-\mathbf{r})\). The spectrum for the system with PBC in both directions is shown in Fig. 6(a). As in the previous case, there are two bulk band clusters. The spectra for the system with the mixed boundary condition and with OBC in both directions are shown in Figs. 6(b) and 6(c). The bulk spectrum is almost unchanged across the different boundary conditions. The edge spectrum is again quite sensitive to the boundary condition, insofar as it is complex under the mixed boundary condition and real under OBC. Focusing on the spectrum of the edge states, we again see that it can be split into two parts, part I and part II. Under the mixed boundary condition, the spectrum of the edge states is complex in part I, corresponding to \(\gamma>0\), and real in part II, corresponding to \(\gamma=0\). According to the previous analysis, when we choose OBC in both directions the waves are localized in part I and extended in part II. These results are confirmed by the field profiles. For states belonging to part I, the waves are localized at three corners, as shown in Fig. 6(d). For part II, the waves are extended, as shown in Figs. 6(e) and 6(f).

Figure 5: Same as in Fig. 4, except for a triangle photonic crystal. (a) The spectrum for the system with PBC in both directions. (b) The spectrum for the system with mixed boundary condition. (c) The spectrum for the system with OBC in both directions. (d)(e)(f) The field profiles are shown to illustrate the transition between different localization behaviors of corner modes.

Figure 6: Corner skin modes in a triangle lattice configuration with pseudo \(C_6\) symmetry.
(a) The spectrum of the system with PBC in both directions. (b) The spectrum of the system with mixed boundary condition. (c) The spectrum for the system with OBC in both directions. (d) The field profiles for one of the corner skin modes, where the fields are localized at three corners. (e)(f) Two examples of extended topological edge states.

## IV Conclusion and discussion

In this article we have presented an innovative way to localize light by adding designed gain/loss to photonic crystals with chiral edge states. The light is localized at corners due to the second-order NHSE, subject to particular symmetries of the gain/loss introduced to the lattice. Specifically, photonic corner skin modes due to a bipolar second-order NHSE, as well as multiple-corner skin modes, are predicted in continuous systems. The resulting localization behavior of light should be of experimental interest and may be used to enhance light-matter interactions. Compared with the better-known first-order NHSE, our results clearly demonstrate that the second-order NHSE can be engineered by use of crystal symmetries. Photonic crystals are hence identified as a versatile platform to investigate and exploit the second-order NHSE.
2306.16455
Efficient sampling of noisy shallow circuits via monitored unraveling
We introduce a classical algorithm for sampling the output of shallow, noisy random circuits on two-dimensional qubit arrays. The algorithm builds on the recently-proposed "space-evolving block decimation" (SEBD) and extends it to the case of noisy circuits. SEBD is based on a mapping of 2D unitary circuits to 1D {\it monitored} ones, which feature measurements alongside unitary gates; it exploits the presence of a measurement-induced entanglement phase transition to achieve efficient (approximate) sampling below a finite critical depth $T_c$. Our noisy-SEBD algorithm unravels the action of noise into measurements, further lowering entanglement and enabling efficient classical sampling up to larger circuit depths. We analyze a class of physically-relevant noise models (unital qubit channels) within a two-replica statistical mechanics treatment, finding weak measurements to be the optimal (i.e. most disentangling) unraveling. We then locate the noisy-SEBD complexity transition as a function of circuit depth and noise strength in realistic circuit models. As an illustrative example, we show that circuits on heavy-hexagon qubit arrays with noise rates of $\approx 2\%$ per CNOT, based on IBM Quantum processors, can be efficiently sampled up to a depth of 5 iSWAP (or 10 CNOT) gate layers. Our results help sharpen the requirements for practical hardness of simulation of noisy hardware.
Zihan Cheng, Matteo Ippoliti
2023-06-28T18:00:02Z
http://arxiv.org/abs/2306.16455v2
# Efficient sampling of noisy shallow circuits via monitored unraveling

###### Abstract

We introduce a classical algorithm for sampling the output of shallow, noisy random circuits on two-dimensional qubit arrays. The algorithm builds on the recently-proposed "space-evolving block decimation" (SEBD) [Napp et al., PRX 12, 021021 (2022)] and extends it to the case of noisy circuits. SEBD is based on a mapping of 2D unitary circuits to 1D _monitored_ ones, which feature measurements alongside unitary gates; it exploits the presence of a measurement-induced entanglement phase transition to achieve efficient (approximate) sampling below a finite critical depth \(T_{c}\). Our noisy-SEBD algorithm unravels the action of noise into measurements, further lowering entanglement and enabling efficient classical sampling up to larger circuit depths. We analyze a class of physically-relevant noise models (unital qubit channels) within a two-replica statistical mechanics treatment, finding weak measurements to be the optimal (i.e. most disentangling) unraveling. We then locate the noisy-SEBD complexity transition as a function of circuit depth and noise strength in realistic circuit models. As an illustrative example, we show that circuits on heavy-hexagon qubit arrays with noise rates of \(\approx 2\%\) per CNOT, based on IBM Quantum processors, can be efficiently sampled up to a depth of 5 iSWAP (or 10 CNOT) gate layers. Our results help sharpen the requirements for practical hardness of simulation of noisy hardware.
###### Contents

* I Introduction
* II Background
  * II.1 Random circuit sampling
  * II.2 MPS simulation and the entanglement barrier
  * II.3 2D shallow circuits and the SEBD algorithm
  * II.4 Monitored dynamics
* III Unraveling noise into monitored trajectories
  * III.1 Noise models and unraveling
  * III.2 Sampling noisy circuits
  * III.3 Entanglement-optimal unravelings
  * III.4 Unital Qubit Channels
* IV Noisy-SEBD algorithm
  * IV.1 Description of the algorithm
  * IV.2 Numerical results: entanglement phase transition
  * IV.3 Application: IBM quantum processors
* V Discussion
* A Derivation of the unraveling cost function from statistical mechanical model
  * A.1 Quasientropy
  * A.2 Generalized measurement
  * A.3 Mapping to a classical statistical mechanical model
  * A.4 Two replicas: classical Ising model
* B Maximization of the target function for unital channels
* C Measurement-induced phase transition for the optimal weak measurement in 1D random circuits
* D Data Collapse
* E Phase boundary at large \(T\)
* F Benchmarks
* G Converting gate fidelity to noise rate

## I Introduction

Before quantum computers can do anything useful, they must be able to reliably beat classical computers at some task, ideally one that is quantifiable, well-understood theoretically, and has reasonable experimental requirements. _Random circuit sampling_ (RCS) has emerged as one of the leading candidates for this role: it is well-suited to the architectures of present-day gate-based quantum processors, and--at least in the ideal case of noiseless computation--its hardness is firmly established in complexity theory [1; 2; 3; 4; 5]. As a result, it has become the focus of pioneering experimental efforts in the last few years [6; 7; 8; 9]. All such experiments, however, are by necessity carried out on present-day noisy, intermediate scale quantum (NISQ) processors, where the question of classical simulation hardness is much more nuanced.
This has spurred much interest in the exploration of how noise affects the boundary between "easy" and "hard" simulation problems. It is now established that RCS in the presence of a finite noise rate can be simulated in polynomial time [10]. However the algorithm of Ref. [10] is not practical, leaving open the question of simulability of finite-sized, noisy RCS experiments with reasonable classical resources. This issue is subtle and depends on many variables, such as circuit architecture, size and depth, details of the noise models, choice of target metrics, etc. A powerful classical approach is based on _tensor networks_, which leverage limited entanglement and work best in 1D; for this reason, experiments have focused on 2D qubit arrays and picked highly-entangling gate sets. Large circuit depth also generates more entanglement and thus makes simulation harder. However, in practice, depth is limited by the presence of noise, as more gates also cause the accumulation of more errors. Additionally, the presence of noise in the quantum experiment lowers the bar for classical simulation: an apples-to-apples comparison requires that we tolerate similar levels of error from the classical algorithm as well. This opens the door to various strategies based on tensor networks that can outperform sufficiently-noisy quantum computers [11; 12; 13]. Characterizing the practical hardness of noisy RCS problems is thus a pressing question in quantum information science. It is also closely related to recent developments in nonequilibrium many-body physics regarding dynamical phases of quantum information in open systems. 
In particular, the fate of unitary circuits that are sampled _during_ the dynamics (a special case of open-system evolution) has attracted much interest due to the discovery of entanglement phases that occur as a function of the measurement rate [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43]. The emergence of a phase with limited entanglement (area-law) at high measurement rate in these circuits has been used to develop an efficient algorithm for sampling the output of shallow unitary circuits in 2D [44]. The algorithm, dubbed "space-evolving block decimation" (SEBD) and sketched in Fig. 1(a), works at depths \(T\) below a finite critical depth \(T_{c}\) (model-dependent, but \(\approx 4-5\) in typical models). Circuits with \(T=3\) in 2D are capable of universal quantum computation and thus hard to sample in the worst case [45], making this result surprising. Another very recent development in dynamical phases of quantum information is the discovery of a sharp noise-induced phase transition in RCS [9; 46], which was identified at a fixed number of errors per circuit layer (i.e. noise strength \(\varepsilon\sim 1/N\), \(N\) being the number of qubits). In the strong-noise phase, it was argued [9] that the processor's output can be classically "spoofed", while in the weak-noise phase simulation is conjectured to be practically hard. In this work, we consider the problem of noisy RCS on shallow 2D circuits, Fig. 1(a-b). Unlike other works, our goal is not to sample from the ideal output within some tolerance determined by the noise; instead, we aim to accurately sample the noisy output itself--a closely related but different problem. To this end we develop a classical algorithm, dubbed noisy-SEBD, and show that it undergoes a complexity phase transition (from quadratic to exponential in the linear size of the system) as a function of circuit depth \(T\) and noise strength \(\varepsilon\). 
The physical principle behind the algorithm is that noise can be viewed as a sequence of (fictitious) measurements done by the environment on the system, Fig. 1(c). Measurements can lower the amount of entanglement in the system and drive a phase transition to an area-law entangled phase, where tensor network simulation is efficient; as a consequence, noise can drive a phase transition in the complexity of noisy-SEBD. Since the unraveling of noise into measurements is not unique, we are free to optimize it in order to lower this threshold as much as possible. In this work we use a two-replica statistical mechanics model to optimize the unraveling, finding _weak measurements_ to be more disentangling than stochastic projective measurements; the effect of this optimized choice lowers the threshold noise rate by as much as a factor of \(\approx 2\), thus significantly expanding the "easy" phase. Combined with the depth-induced complexity transition in the original (noiseless) SEBD algorithm [44], this defines a phase boundary in the space of circuit depth \(T\) and noise strength \(\varepsilon\), sketched in Fig. 1(d). This adds to our growing understanding of the boundaries of "practical" simulability of noisy quantum systems. Moreover, it places sharp constraints on the possibility of achieving beyond-classical computation by scaling RCS experiments in space only--i.e., by growing quantum processor size at fixed circuit depth. This highlights the importance of further improvements in error rates of NISQ hardware. The paper is structured as follows. Sec. II reviews background material on random circuit sampling, matrix-product state simulation methods, the SEBD algorithm and monitored dynamics. In Sec. III we discuss the unraveling of noise into monitored trajectories and the choice of entanglement-optimal unravelings, including explicit solutions for unital qubit channels. Sec. 
IV presents the noisy-SEBD algorithm, numerical simulations of its complexity phase transition, and an illustrative application to circuits based on IBM Quantum's heavy-hexagon qubit arrays. We conclude in Sec. V by summarizing our results, their implications and connections with other works, and directions for future research.

## II Background

### Random circuit sampling

Random circuit sampling (RCS) has emerged as a leading candidate for early demonstrations of quantum computational supremacy, combining good fit with existing hardware capabilities and robust complexity-theoretic arguments for classical hardness [1; 2; 3; 4; 5]. The idea is to draw a random instance \(U\) from an ensemble of local unitary circuits of depth \(T\), run it on a quantum computer prepared in the initial state \(|\mathbf{0}\rangle\equiv|0\rangle^{\otimes N}\), and measure the state of each qubit in the computational basis, obtaining a bitstring \(\mathbf{z}\in\{0,1\}^{N}\). Ideally, this process samples bitstrings from the probability distribution \(P_{U}(\mathbf{z})=|\bra{\mathbf{z}}U\ket{\mathbf{0}}|^{2}\), which is computationally hard to do for classical computers. Intuitively, this is due to the production of extensive entanglement in the system over the course of typical instances of the unitary evolution \(U\) [47], which causes tensor network classical algorithms to fail. At the same time, sufficiently generic ensembles of unitary gates in \(U\) ensure that various other strategies for efficient classical simulation (such as stabilizers [48] or matchgates [49; 50; 51]) are not viable. These theoretical insights have motivated pioneering experimental efforts in the past few years to demonstrate RCS-based quantum computational supremacy on present-day NISQ hardware [6; 7; 8; 9]. Verification of successful RCS is nontrivial.
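The sampling task just described is easy to state concretely on a few qubits. The following sketch (a hypothetical small-scale illustration, with a Haar-random unitary built from the QR decomposition of a complex Gaussian matrix) computes the ideal output distribution \(P_{U}(\mathbf{z})=|\langle\mathbf{z}|U|\mathbf{0}\rangle|^{2}\) and draws bitstrings from it — the step a quantum processor performs natively:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3                      # number of qubits
dim = 2 ** N

# Haar-random unitary via QR decomposition of a complex Gaussian matrix.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q, R = np.linalg.qr(A)
U = Q * (np.diag(R) / np.abs(np.diag(R)))  # fix phases for the Haar measure

# Ideal output distribution P_U(z) = |<z|U|0...0>|^2.
psi = U[:, 0]                    # column acting on |0...0>
p_U = np.abs(psi) ** 2

# Draw bitstrings z ~ P_U.
samples = rng.choice(dim, size=10, p=p_U)
bitstrings = [format(z, f"0{N}b") for z in samples]
print(bitstrings)
```

Classically, the cost of building `p_U` this way grows as \(4^N\); the point of RCS is that a quantum device produces such samples without ever writing down the distribution.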
The experiments employ a linear cross-entropy diagnostic \[\mathsf{XEB}=2^{N}\sum_{\mathbf{z}}p_{\mathrm{exp}}(\mathbf{z})p_{U}(\mathbf{z})-1, \tag{1}\] where \(p_{U}(\mathbf{z})\) is the previously-defined ideal distribution (to be computed classically), and \(p_{\mathrm{exp}}(\mathbf{z})\) is the distribution of bitstrings obtained from the experiment, which generally differs from the ideal one due to imperfect implementation and uncontrolled noise. This quantity is convenient as it can be estimated by sampling from the experiment: \[\mathsf{XEB}=2^{N}\langle p_{U}(\mathbf{z})\rangle_{\mathbf{z}\sim p_{\mathrm{exp}}}-1. \tag{2}\] Furthermore, it is designed in such a way that \(\mathsf{XEB}=1\) if the experiment is perfect (\(p_{\mathrm{exp}}=p_{U}\)), while \(\mathsf{XEB}=0\) if it is completely noisy (\(p_{\mathrm{exp}}(\mathbf{z})=2^{-N}\) for all \(\mathbf{z}\)). In real-world conditions, with finite noise per gate, it has been argued that \(\mathsf{XEB}\sim f^{NT}\) in the regime of interest to the experiment [6], where \(f\) is the average fidelity per gate (more detailed results on the conversion of local noise into global depolarizing noise have been obtained subsequently [52; 53]). This lowers the bar for classical algorithms: they do not need to achieve a perfect score \(\mathsf{XEB}=1\), but only to exceed the fidelity of the NISQ experiment. This has led to a flurry of activity in the past few years to develop approximate classical algorithms that can efficiently simulate RCS with \(\mathsf{XEB}\) scores and runtimes comparable to those of the NISQ experiments [54; 55; 11; 12; 13]. Furthermore, it was recently shown that there exists a polynomial-time algorithm for RCS with constant noise per gate [10], which uses a Feynman path-integral representation for the amplitudes \(\bra{\mathbf{z}}U\ket{\mathbf{0}}\) wherein noise damps the contribution of most paths. While this settles the complexity of noisy RCS formally, the algorithm comes with a very large exponent and is expected to be impractical at the relevant noise strengths. Thus the _practical_ issue of efficient classical simulation of finite-sized noisy circuits remains open.

Figure 1: Main ideas of this work. (a) Sampling of 2D shallow unitary circuits. The qubit array has linear dimensions \(L_{x}\), \(L_{y}\) and the circuit has depth \(T\). The SEBD algorithm is based on an MPS simulation carried out along a spatial direction (e.g. \(L_{y}\)), approximating the wavefunction of a 1D subsystem of qubits on a light cone surface (green). It exploits the measurements on the top boundary of the circuit to disentangle the wavefunction. (b) We consider noisy circuits with uncorrelated local noise: unitary gates (blue rectangles) are interspersed with single-qubit noise channels (orange circles). (c) The noise channels are unraveled as measurements (generically weak), represented here as interactions with a fresh ancilla which is then measured. These fictitious measurements can further disentangle the wavefunction. (d) Qualitative sketch of the complexity phase diagram of the noisy-SEBD algorithm. An entanglement phase transition separates an "easy" phase (low depth and/or strong noise), where the computational cost of sampling the noisy circuit output via noisy-SEBD is \(\sim L_{x}L_{y}\exp(T)\), from a "hard" phase where the same cost becomes \(\sim L_{y}\exp(L_{x}T)\). The location of the phase boundary is model-dependent. The line \(\varepsilon=0\) yields the finite-depth complexity transition of Ref. [44] (noiseless SEBD) while the \(1/T=0\) line (which stands for \(T=O(L)\)) yields the standard measurement-induced phase transition in two spatial dimensions [23]. We conjecture the scaling \(\varepsilon_{c}(T)\sim\varepsilon_{c,2D}+O(1/T)\) at large \(T\). We emphasize that this is a transition in the complexity of the noisy-SEBD algorithm, _not_ of the sampling task itself [10].
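The limiting behavior of the linear cross-entropy in Eqs. (1)–(2) — exactly zero for a maximally noisy device, positive when the device output correlates with the ideal distribution — can be verified directly on a small instance. A minimal sketch (again assuming, for illustration, a Haar-random unitary generated via QR):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
dim = 2 ** N

# Haar-random unitary and ideal output distribution p_U.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
Q, R = np.linalg.qr(A)
U = Q * (np.diag(R) / np.abs(np.diag(R)))
p_U = np.abs(U[:, 0]) ** 2

def xeb(p_exp, p_ideal):
    """Linear cross-entropy: XEB = 2^N * sum_z p_exp(z) p_ideal(z) - 1."""
    return dim * np.dot(p_exp, p_ideal) - 1.0

# Completely noisy device: uniform output gives XEB = 0 exactly.
print(xeb(np.full(dim, 1 / dim), p_U))   # ~0.0

# Perfect device: p_exp = p_U gives XEB close to 1 for a typical
# (Porter-Thomas) instance; the exact value is 2^N * sum_z p_U(z)^2 - 1.
print(xeb(p_U, p_U))
```

In an experiment one cannot evaluate the sum over all \(\mathbf{z}\); instead Eq. (2) is estimated by averaging \(p_U(\mathbf{z})\) over the sampled bitstrings, which is why \(p_U\) must still be computed classically for the verified instances.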
### MPS simulation and the entanglement barrier

The interplay of entanglement and noise and its effects on the complexity of classical simulation become especially transparent in the case of matrix-product state (MPS) simulations of random circuits, limited to 1D (or quasi-1D) geometries [56; 57; 58]. The idea is to represent a wavefunction \(\ket{\psi}=\sum_{\mathbf{z}}c_{\mathbf{z}}\ket{\mathbf{z}}\) as a product of three-index tensors via \(c_{\mathbf{z}}=\mathrm{Tr}(A_{1}^{z_{1}}A_{2}^{z_{2}}\cdots A_{N}^{z_{N}})\), where each \(A_{i}^{z}\) is a \(\chi\times\chi\) matrix (\(i\) labels position in the chain, \(z\) is the physical state \(\ket{z}\), and two virtual indices are implicit). The _bond dimension_ \(\chi\) is a cutoff on the Schmidt rank of the state \(\ket{\psi}\), which makes the state classically representable. The MPS ansatz allows approximate simulations with high accuracy whenever the entanglement entropy\({}^{1}\) of \(\ket{\psi}\) about each cut obeys \(S\ll\ln(\chi)\). Given the linear growth of entanglement in random circuits [47], the MPS method enables accurate simulation for short circuit depths \(T\lesssim\ln(\chi)\), after which the truncation of bond dimension incurs a large error. One can nonetheless carry out finite-\(\chi\) simulations for larger depths; the effect of MPS truncation error on the fidelity with the true state is found to be qualitatively similar to the effect of noise in the quantum experiments [11; 13]. Thus if the noise strength is large enough, classical MPS simulations may beat NISQ experiments at the task of approximating a given ideal random circuit.

Footnote 1: In fact compression into MPS form depends on the behavior of small Schmidt eigenvalues, which is captured by Rényi entropies \(S_{n}\) with \(n<1\). In typical random circuits the von Neumann and Rényi entropies have similar scaling for all values of \(n\) away from \(0\).

A different task is to classically simulate (or sample from) noisy circuits themselves.
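At a single cut, the bond-dimension cutoff described above is just a truncated Schmidt (SVD) decomposition. A minimal sketch (one bipartition of a random state, not a full MPS algorithm) showing how the retained Schmidt values control the fidelity of the compressed state:

```python
import numpy as np

rng = np.random.default_rng(2)
N, cut = 10, 5                     # 10 qubits, bipartition 5 | 5

# Random (Haar-like) pure state, reshaped as a matrix across the cut.
psi = rng.normal(size=2 ** N) + 1j * rng.normal(size=2 ** N)
psi /= np.linalg.norm(psi)
M = psi.reshape(2 ** cut, 2 ** (N - cut))

# Schmidt decomposition = SVD across the cut.
Uc, s, Vh = np.linalg.svd(M, full_matrices=False)
entropy = -np.sum(s**2 * np.log(s**2))   # entanglement entropy S of the cut

def truncate(chi):
    """Keep the chi largest Schmidt values, renormalize, return fidelity."""
    Mt = (Uc[:, :chi] * s[:chi]) @ Vh[:chi, :]
    Mt /= np.linalg.norm(Mt)
    return np.abs(np.vdot(M, Mt)) ** 2

print(entropy)          # near-maximal, ~ cut * ln 2, for a random state
print(truncate(32))     # chi = full Schmidt rank: fidelity 1
print(truncate(8))      # chi < e^S: truncation discards real weight
```

The condition \(S\ll\ln\chi\) in the text is exactly the statement that the discarded Schmidt weight, and hence the infidelity returned by `truncate`, stays negligible.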
Here, classical simulation needs to incorporate noise and thus mixed states. This can be accomplished by using matrix-product operators (MPO): a mixed state \(\rho=\sum_{\mathbf{z},\mathbf{z}^{\prime}}c_{\mathbf{z},\mathbf{z}^{\prime}}\ket{\mathbf{z}}\bra{ \mathbf{z}^{\prime}}\) is represented as a product of four-index tensors via \(c_{\mathbf{z}\mathbf{z}^{\prime}}=\mathrm{Tr}\Big{(}A_{1}^{z_{1},z_{1}^{\prime}}A_{2} ^{z_{2},z_{2}^{\prime}}\cdots A_{N}^{z_{N},z_{N}^{\prime}}\Big{)}\), where each \(A_{i}^{z,z^{\prime}}\) is a \(\chi\times\chi\) matrix. (A representation that guarantees positivity is given by _matrix-product density operators_ (MPDO) [59].) In the presence of noise and unstructured random gates, the unique steady state is expected to be the maximally mixed state \(\rho=\mathbb{I}/2^{N}=\bigotimes_{i}(\mathbb{I}/2)_{i}\), which is manifestly disentangled, and can be written as an MPO of bond dimension \(\chi=1\) (i.e. the tensors \(A_{i}^{z,z^{\prime}}\equiv\delta_{z,z^{\prime}}/2\) carry no virtual indices). Thus entanglement grows initially due to random unitary interactions, \(S\sim T\); this persists until the effects of noise are felt, at depth \(T\sim 1/p\), \(p\) being the noise strength; after that, \(S\) decreases to zero. Hence, to accurately simulate the noisy dynamics one has to overcome an "entanglement barrier" of height \(S\sim 1/p\), with computational cost \(\sim\mathrm{poly}(N,\exp(1/p))\) [60; 61]. While polynomial in the system size \(N\), for realistic noise strength \(p\sim 10^{-2}\) the cost can still be prohibitively large. Moreover, the efficient MPO simulation only applies to 1D; in higher dimension, small entanglement generally does not guarantee efficient simulation [58].

### 2D shallow circuits and the SEBD algorithm

Because MPS simulations are efficient at low entanglement, 1D circuit architectures require large depths (diverging faster than \(\log(N)\)) for hardness.
On the contrary, 2D circuits could be hard already for finite depth \(T\), as tensor network methods in two or more dimensions are generally not efficient. As hardness of simulation generally scales exponentially in the treewidth of the tensor network, experimental RCS works in two-dimensional circuits [6; 7; 8; 9] set \(T\sim\sqrt{N}\) to maximize classical hardness. Still, it is natural to ask at what depth \(T\) the classical simulation would become hard (asymptotically in large \(N\)). It is straightforward to note that \(T\leq 2\) is easy, as the output state \(U\ket{\mathbf{0}}\) is given by a tensor product of decoupled dimers (\(T=1\)) or one-dimensional subsystems each hosting an MPS of finite bond dimension (\(T=2\)). However, starting at depth \(T=3\), it is possible to prepare states whose exact simulation is provably hard [45]. Surprisingly, Ref. [44] has shown that _approximate sampling_ from 2D circuits up to a finite depth \(T_{c}\geq 3\) is in fact possible with polynomial-time classical algorithms. One of the methods they propose, dubbed _space-evolving block decimation_ (SEBD), is based on reducing the sampled (2+1)D circuit to a (1+1)D circuit featuring measurements alongside unitary gates. Below a critical depth \(T_{c}\), the state that needs to be simulated classically obeys an area-law for entanglement [62] and can thus be accurately simulated via MPS methods. The emergence of this low-entanglement phase is an instance of a general phenomenon taking place in _monitored dynamics_, i.e. time evolution that combines unitary interactions and measurements, which we review next.

### Monitored dynamics

Measurements can disentangle a quantum state.
Given a many-body wavefunction \(\ket{\psi}\), measuring a qubit \(i\) in the computational basis leaves behind a product state between \(i\) and the rest of the system: \(\propto\ket{z}_{i}\otimes\langle z|\psi\rangle_{\neg i}\) (\(z\in\{0,1\}\) is the measurement outcome, obtained randomly from the Born rule, and \(\neg i\) denotes the rest of the system). This not only disentangles \(i\), but may also destroy entanglement globally--as an extreme example, measuring any one qubit of a GHZ state \(\frac{1}{\sqrt{2}}(\left|0\right\rangle^{\otimes N}+\left|1\right\rangle^{\otimes N})\) in the computational basis leaves behind a global product state. At the same time, in systems with local interactions, entanglement is generated locally. This asymmetry between entanglement creation and destruction suggests that monitored dynamics, featuring a finite rate \(p\) of measurements alongside local unitary interactions, should generically lead to states with low entanglement, for any rate \(p>0\) [17]. Surprisingly, it was found that monitored dynamics can instead successfully stabilize a highly entangled phase as long as the rate of measurement \(p\) is below a critical threshold \(p_{c}>0\) [14; 15; 16]. The phases are characterized by the structure of entanglement in pure output states of the dynamics, \(\left|\psi_{\mathbf{m}}\right\rangle\), which are indexed by the measurement record \(\mathbf{m}\) (the sequence of classical outcomes collected during the dynamics). In particular, the scaling of entanglement entropy \(S_{A}\) for a subsystem \(A\) in these states undergoes a transition from an area-law \(S_{A}\sim\left|\partial A\right|\) (\(\partial A\) is the boundary of \(A\)) to a volume-law \(S_{A}\sim\left|A\right|\). These scalings are generally washed out in the mixed state \(\rho=\sum_{\mathbf{m}}p_{\mathbf{m}}\left|\psi_{\mathbf{m}}\right\rangle\!\!\left\langle \psi_{\mathbf{m}}\right|\) obtained by discarding the measurement record.
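The disentangling power of a single measurement, as in the GHZ example above, can be verified in a few lines. A minimal sketch (a computational-basis measurement of one qubit implemented as a projector, with the outcome \(z=0\) assumed for concreteness) confirming that the post-measurement state has zero entanglement across every cut:

```python
import numpy as np

N = 4
dim = 2 ** N

# GHZ state (|00...0> + |11...1>) / sqrt(2).
psi = np.zeros(dim, dtype=complex)
psi[0] = psi[-1] = 1 / np.sqrt(2)

# Projectively measure qubit 0 (most significant bit), outcome z = 0:
# keep amplitudes whose first bit is 0, then renormalize (Born rule).
mask = np.array([(z >> (N - 1)) & 1 == 0 for z in range(dim)])
post = np.where(mask, psi, 0)
post /= np.linalg.norm(post)

def cut_entropy(state, cut):
    """Entanglement entropy across the bipartition cut | N - cut."""
    s = np.linalg.svd(state.reshape(2 ** cut, 2 ** (N - cut)),
                      compute_uv=False)
    p = s[s > 1e-12] ** 2
    return -np.sum(p * np.log(p))

print([cut_entropy(psi, c) for c in range(1, N)])    # ln 2 for every cut
print([cut_entropy(post, c) for c in range(1, N)])   # 0: global product state
```

The GHZ case is extreme; for generic states a single measurement only removes entanglement locally, which is why the competition with entangling gates can sustain a volume-law phase at small \(p\).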
The stability of entanglement in the volume-law phase (\(p<p_{c}\)) is explained qualitatively by the emergence of a quantum error correcting code that manages to hide information from future measurements for a long time [63; 20; 24]. This coding perspective is illustrated concretely by the behavior of a reference qubit \(R\) initially entangled with a monitored many-body system [25]. Let \(S_{R}(t)\) be the entanglement entropy of the reference qubit as a function of time \(t\) in the monitored dynamics; at late times one generally has \(S_{R}(t)\sim e^{-t/\tau}\), with a time constant \(\tau\) dubbed the _purification time_. The behavior of \(\tau\) changes sharply at the transition. In the volume-law phase it obeys \(\tau\sim\exp(N)\), signifying a successful encoding of at least some information about the state of \(R\), which is protected from measurements for a long time. In the area-law phase \(\tau\) becomes \(O(1)\), showing that the encoding fails. Finally, at the critical point (\(p=p_{c}\)) \(\tau\) diverges algebraically, \(\tau\sim N^{z/d}\), with \(z\) a dynamical critical exponent (typically \(z=1\)[39]) and \(d\) the spatial dimension. In the following we will make use of the purification time as a practical diagnostic for the underlying entanglement phase, as it is numerically more efficient to compute than the entanglement entropy of large subsystems. In one dimension, the entanglement phase transition also corresponds to a transition in the classical simulation complexity of the dynamics: area-law states in 1D have constant entanglement, and can be efficiently simulated with MPS methods. This is the principle behind the SEBD algorithm [44], in which a 2D sampling task is reduced to a \((1+1)\)D monitored dynamics and simulated by MPS methods in the area-law phase. Ref. 
[64] has extended this approach to continuous-time Markovian open-system dynamics: the system-environment coupling, in the form of a Lindbladian master equation, can be "unraveled" into trajectories [65] (stochastic pure-state evolutions) that contain quantum measurements, which in turn can lower entanglement and help the accuracy of MPS simulation. As the unraveling is a simulation artifact and is not physical, it can be optimized to minimize the amount of entanglement. Ref. [64] proposes an adaptive scheme that chooses the unraveling with the lowest expected value of the entropy at each position and time during the evolution; above a threshold noise strength, the trajectories enter an area-law phase and efficient MPS simulation becomes possible. In this work we build on the approaches of Refs. [64; 44] to address the problem of sampling from _noisy, shallow_ circuits in 2D.

## III Unraveling noise into monitored trajectories

In this Section we review the basics of noise models and their unraveling, and discuss the entanglement-optimal unraveling to use in the noisy-SEBD algorithm.

### Noise models and unraveling

In quantum computers, it is often a good approximation to treat the inevitable interactions with the environment as Markovian noise, modeled by quantum channels (completely-positive trace-preserving maps on the space of density matrices [66]). Mathematically, a quantum channel \(\Phi\) can be represented as a sum of Kraus operators \(\{M_{i}\}\), \[\Phi(\rho)=\sum_{i}M_{i}\rho M_{i}^{\dagger}, \tag{3}\] which must obey the trace-preservation condition: \[\sum_{i}M_{i}^{\dagger}M_{i}=\mathbb{I}. \tag{4}\] As examples, the dephasing channel can be represented by \[\Phi(\rho)=(1-\varepsilon)\rho+\varepsilon Z\rho Z, \tag{5}\] i.e.
with Kraus operators \(\{\sqrt{1-\varepsilon}\,\mathbb{I},\sqrt{\varepsilon}Z\}\), and the depolarizing channel by \[\Phi(\rho)=(1-\varepsilon)\rho+\frac{\varepsilon}{3}\left(X\rho X+Y\rho Y+Z\rho Z\right), \tag{6}\] with Kraus operators \(\{\sqrt{1-\varepsilon}\,\mathbb{I},\sqrt{\varepsilon/3}X,\sqrt{\varepsilon/3}Y,\sqrt{\varepsilon/3}Z\}\). In both cases \(\varepsilon\) is the noise strength. One important property of the Kraus operator representation is that it is not unique. For a given quantum channel, different sets of Kraus operators are equivalent under unitary transformations \(U\): \[M_{j}^{\prime}=\sum_{i}U_{ji}M_{i}. \tag{7}\] This equivalence also holds for non-square, semi-unitary transformations \(U\) that only satisfy \(U^{\dagger}U=\mathbb{I}\). As an important consequence, even when a channel \(\Phi\) can be unraveled into unitary processes (such as the dephasing and depolarizing channels above), this equivalence allows the freedom to choose an unraveling into non-unitary operators, which correspond to measurements. For example, the dephasing channel can be unraveled into a stochastic projective measurement, \[M_{0}=\sqrt{1-2\varepsilon}\mathbb{I},\ M_{1}=\sqrt{2\varepsilon}|0\rangle\langle 0|,\ M_{2}=\sqrt{2\varepsilon}|1\rangle\langle 1| \tag{8}\] (i.e., a projective measurement of \(Z\) taking place with probability \(2\varepsilon\)). It can also be unraveled into a weak measurement of \(Z\), \[\begin{split} M_{0}&=\sqrt{1-\varepsilon}\cos\theta\,\mathbb{I}-\sqrt{\varepsilon}\sin\theta\,Z,\\ M_{1}&=\sqrt{1-\varepsilon}\sin\theta\,\mathbb{I}+\sqrt{\varepsilon}\cos\theta\,Z,\end{split} \tag{9}\] where \(\theta\) is a free parameter tuning the strength of the measurement (\(\theta=0\) returns a unitary unraveling).
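These equivalences are easy to verify numerically. The following sketch (illustrative only; the values \(\varepsilon=0.1\) and \(\theta=0.3\) are arbitrary) checks that the unitary unraveling of Eq. (5), the stochastic projective unraveling of Eq. (8), and the weak-measurement unraveling of Eq. (9) all satisfy the completeness condition Eq. (4) and define the same channel, by comparing \(\sum_{i}M_{i}\otimes M_{i}^{*}\), which fixes the channel uniquely:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
eps, theta = 0.1, 0.3

# Three Kraus decompositions of the same dephasing channel:
unitary = [np.sqrt(1 - eps) * I2, np.sqrt(eps) * Z]                        # Eq. (5)
projective = [np.sqrt(1 - 2 * eps) * I2,                                   # Eq. (8)
              np.sqrt(2 * eps) * np.diag([1.0, 0.0]),
              np.sqrt(2 * eps) * np.diag([0.0, 1.0])]
weak = [np.sqrt(1 - eps) * np.cos(theta) * I2 - np.sqrt(eps) * np.sin(theta) * Z,  # Eq. (9)
        np.sqrt(1 - eps) * np.sin(theta) * I2 + np.sqrt(eps) * np.cos(theta) * Z]

def completeness(kraus):
    # Eq. (4): sum_i M_i^dag M_i must equal the identity
    return sum(M.conj().T @ M for M in kraus)

def choi(kraus):
    # sum_i M_i (x) M_i^* is invariant under the gauge freedom of Eq. (7)
    return sum(np.kron(M, M.conj()) for M in kraus)

for kraus in (unitary, projective, weak):
    assert np.allclose(completeness(kraus), I2)
assert np.allclose(choi(unitary), choi(projective))
assert np.allclose(choi(unitary), choi(weak))
print("all three unravelings define the same channel")
```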
### Sampling noisy circuits

Returning to the noisy circuit problem, we wish to sample from the distribution \(P_{\mathcal{N}}(\mathbf{z})=\bra{\mathbf{z}}\mathcal{N}(\ket{\mathbf{0}}\!\bra{\mathbf{0}})\ket{\mathbf{z}}\), with \(\mathcal{N}\) the channel describing the noisy circuit evolution: \[\mathcal{N}=\Phi^{\otimes N}\circ\mathcal{U}_{T}\circ\Phi^{\otimes N}\circ\mathcal{U}_{T-1}\circ\cdots\Phi^{\otimes N}\circ\mathcal{U}_{1}. \tag{10}\] Here \(\Phi\) is a single-qubit noise of strength \(\varepsilon\) as before, while \(\mathcal{U}_{t}(\rho)=u_{t}\rho u_{t}^{\dagger}\) describes the \(t\)-th layer of the (ideal) shallow unitary circuit, \(U=u_{T}u_{T-1}\cdots u_{1}\). By expanding each \(\Phi\) into its Kraus operators, \(\Phi(\rho)=\sum_{i}M_{i}\rho M_{i}^{\dagger}\), we can rewrite the probability distribution \(P_{\mathcal{N}}(\mathbf{z})\) as \[P_{\mathcal{N}}(\mathbf{z})=\bra{\mathbf{z}}\mathcal{N}(\ket{\mathbf{0}}\!\bra{\mathbf{0}})\ket{\mathbf{z}}=\sum_{\mathbf{m}}|\bra{\mathbf{z}}\mathbb{M}_{\mathbf{m}}\ket{\mathbf{0}}|^{2}, \tag{11}\] \[\mathbb{M}_{\mathbf{m}}=\left(\bigotimes_{x=1}^{N}M_{m_{T,x}}\cdot u_{T}\right)\cdots\left(\bigotimes_{x=1}^{N}M_{m_{1,x}}\cdot u_{1}\right), \tag{12}\] where each index \(m_{t,x}\) labels the Kraus operator for the noise channel \(\Phi\) acting on qubit \(x\) at step \(t\), and \(\mathbf{m}\) is a shorthand for the whole collection of indices \(\{m_{t,x}\}\). Thus the operators \(\{\mathbb{M}_{\mathbf{m}}\}\) are a set of Kraus operators for the channel \(\mathcal{N}\). We can view each \(\mathbf{m}\) as a _quantum trajectory_ of the evolution: the initial state \(\ket{\mathbf{0}}\) evolves into the pure state \(\mathbb{M}_{\mathbf{m}}\ket{\mathbf{0}}/\lVert\mathbb{M}_{\mathbf{m}}\ket{\mathbf{0}}\rVert\) with probability \(\lVert\mathbb{M}_{\mathbf{m}}\ket{\mathbf{0}}\rVert^{2}\); the true (mixed) state is recovered as a stochastic mixture of the trajectories. It is straightforward to see that \(P_{\mathcal{N}}(\mathbf{z},\mathbf{m})\equiv|\bra{\mathbf{z}}\mathbb{M}_{\mathbf{m}}\ket{\mathbf{0}}|^{2}\) is the _joint_ probability of drawing trajectory \(\mathbf{m}\) and sampling bitstring \(\mathbf{z}\) at the end. Then, Eq.
(11) can be written as \[P_{\mathcal{N}}(\mathbf{z})=\sum_{\mathbf{m}}P_{\mathcal{N}}(\mathbf{z},\mathbf{m}), \tag{13}\] i.e. the marginal distribution obtained by summing over trajectories. This insight is widely used in simulations of open-system dynamics [65; 66; 67; 68; 69]. The trajectory method allows one to simulate pure states rather than density matrices, which is often much more memory-efficient. This comes at the expense of having to average over many trajectories, e.g. in order to Monte Carlo-sample the expectation value of a target operator. In our case, however, we only aim to _sample_ from the distribution \(P_{\mathcal{N}}(\mathbf{z})\), so there is no need for trajectory averaging: any joint sample \((\mathbf{z},\mathbf{m})\) drawn from a simulation of the trajectory dynamics yields a valid sample \(\mathbf{z}\) from the desired distribution \(P_{\mathcal{N}}(\mathbf{z})\). We emphasize that the \(\mathbf{m}\) samples and their distribution are purely mathematical artifacts of the method: the decomposition of \(\Phi\) into Kraus operators includes a gauge degree of freedom that can be fixed arbitrarily. However, the marginal distribution \(P_{\mathcal{N}}(\mathbf{z})\) is gauge-invariant and physical, corresponding to the true experimental distribution of bitstrings. Beyond the practical advantage of simulating pure rather than mixed state evolution, the gauge degree of freedom in the choice of unraveling can be exploited to minimize entanglement within the trajectory [64], thus extending the applicability of tensor network methods. Below we address the question of which unraveling yields the lowest entanglement for a given channel.

### Entanglement-optimal unravelings

For a given model of dynamics (e.g. a Hamiltonian or an individual instance of a RUC), one can locally optimize the unraveling of each noise channel \(\Phi\) separately [64]. Here we take a simpler approach, and look for a general prescription that works well _on average_ over RUCs.
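Before proceeding, the trajectory marginalization of Eq. (13) can be verified on a minimal example. The sketch below uses a hypothetical one-qubit "circuit" of two Hadamard layers interleaved with dephasing noise unraveled projectively as in Eq. (8) (all parameters are illustrative); it enumerates every trajectory \(\mathbf{m}\), accumulates the joint probabilities, and checks that the marginal matches the exact density-matrix evolution:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
eps = 0.1

# Projective unraveling of dephasing, Eq. (8):
kraus = [np.sqrt(1 - 2 * eps) * I2,
         np.sqrt(2 * eps) * np.diag([1.0, 0.0]),
         np.sqrt(2 * eps) * np.diag([0.0, 1.0])]
layers = [H, H]                    # toy circuit: unitary then noise, repeated twice
psi0 = np.array([1.0, 0.0])

# Exact: evolve the density matrix through the channel N = Phi o U_2 o Phi o U_1.
rho = np.outer(psi0, psi0)
for u in layers:
    rho = u @ rho @ u.conj().T
    rho = sum(M @ rho @ M.conj().T for M in kraus)
P_exact = np.real(np.diag(rho))    # P_N(z) = <z|rho|z>

# Trajectories: accumulate P_N(z, m) = |<z| M_{m_2} u_2 M_{m_1} u_1 |0>|^2.
P_marginal = np.zeros(2)
for m in product(range(len(kraus)), repeat=len(layers)):
    psi = psi0.copy()
    for u, mi in zip(layers, m):
        psi = kraus[mi] @ (u @ psi)
    P_marginal += np.abs(psi)**2   # joint probabilities for outcomes z = 0, 1

assert np.allclose(P_marginal, P_exact)   # Eq. (13)
print(P_exact)
```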
Specifically, we aim to exploit the area-law phase in monitored dynamics (Sec. II.4), which is driven by the density of measurements in the dynamics. Thus we look for an unraveling of \(\Phi\) into measurements, so as to increase the effective density of measurements and facilitate a transition to the area-law phase [70]. As we have seen in Sec. III.1, there are multiple inequivalent ways of unraveling noise into measurements (e.g. weak vs stochastic projective measurements); therefore we look for the unraveling \(\mathbf{M}=\{M_{i}\}\) of the noise channel \(\Phi\) that has the strongest disentangling effect on the dynamics. Working at a "mean-field" level in Haar-random circuits, we consider the scaling of average purity in trajectories of the dynamics within a two-replica setting. This maps onto the partition function of a \(\mathbb{Z}_{2}\) Ising magnet, whose ordered and disordered phases correspond to volume-law and area-law entanglement, respectively. Our goal is to facilitate simulation by minimizing entanglement, therefore we aim to _minimize_ the couplings in the magnet. The problem is analyzed in Appendix A, where we show that the minimization of the coupling amounts to maximizing the following objective function: \[x(\mathbf{M})=\sum_{i}\frac{\operatorname{tr}\left(M_{i}^{\dagger}M_{i}M_{i}^{\dagger}M_{i}\right)}{2\operatorname{tr}\left(M_{i}^{\dagger}M_{i}\right)}.
\tag{14}\] This has a natural physical interpretation: if we view the Kraus operators \(\{M_{i}\}\) as _instruments_ of a generalized measurement (POVM) \(\{M_{i}^{\dagger}M_{i}\}\), and apply such measurement to the fully-mixed state \(\mathds{I}/2\), we obtain post-measurement states \(\rho_{i}\equiv M_{i}M_{i}^{\dagger}/\operatorname{Tr}\left(M_{i}M_{i}^{\dagger}\right)\) with probabilities \(\pi_{i}\equiv\operatorname{Tr}\left(M_{i}^{\dagger}M_{i}\right)/2\); the objective function is given by the average purity of the post-measurement states: \(x(\mathbf{M})=\sum_{i}\pi_{i}\operatorname{Tr}\left(\rho_{i}^{2}\right)\). Thus within this approach, the unraveling that minimizes many-body entanglement in the RUC dynamics is also the one that best purifies a single mixed qubit. This is a feature of Haar-random circuits and likely not true of more structured models. We note also that this cost function (post-measurement purity of a mixed qubit) is in fact identical to the entanglement of a Bell pair state after measuring one qubit, which was proposed in Ref. [70] as a heuristic measure of the disentangling power of different unravelings. The optimal unraveling \(\mathbf{M}_{\text{opt}}\) is given by \[\mathbf{M}_{\text{opt}}=\operatorname{argmax}_{\mathbf{M}}\left[x(\mathbf{M})\right], \tag{15}\] where \(\mathbf{M}=\{M_{i}\}\) ranges over Kraus decompositions of \(\Phi\), and thus is subject to (semi-)unitary gauge freedom per Eq. (7). In general, the semi-unitary gauge freedom makes the optimization nontrivial as it allows for infinitely many parameters. However, below we show that for a broad and physically relevant class of quantum channels there exists an optimal unraveling of minimal rank which can be obtained analytically.

### Unital Qubit Channels

Let us consider _unital_ qubit channels, which are defined as those that leave the fully-mixed state \(\mathds{I}/2\) invariant.
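As a quick numerical illustration of the objective (the noise strength \(\varepsilon=0.1\) is an arbitrary choice), the sketch below evaluates \(x(\mathbf{M})\) for the two measurement unravelings of dephasing from Sec. III.1: the stochastic projective unraveling Eq. (8) and the weak unraveling Eq. (9) at \(\theta=\pi/4\). Both forms of the objective, the trace formula of Eq. (14) and the average post-measurement purity, agree; the weak unraveling scores higher:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
eps = 0.1

projective = [np.sqrt(1 - 2 * eps) * I2,                               # Eq. (8)
              np.sqrt(2 * eps) * np.diag([1.0, 0.0]),
              np.sqrt(2 * eps) * np.diag([0.0, 1.0])]
c = s = np.cos(np.pi / 4)                                              # theta = pi/4
weak = [np.sqrt(1 - eps) * c * I2 - np.sqrt(eps) * s * Z,              # Eq. (9)
        np.sqrt(1 - eps) * s * I2 + np.sqrt(eps) * c * Z]

def x_trace(kraus):
    # Eq. (14)
    return sum(np.trace(M.conj().T @ M @ M.conj().T @ M).real
               / (2 * np.trace(M.conj().T @ M).real) for M in kraus)

def x_purity(kraus):
    # sum_i pi_i Tr(rho_i^2): average post-measurement purity of I/2
    total = 0.0
    for M in kraus:
        pi = np.trace(M.conj().T @ M).real / 2
        rho = M @ M.conj().T / np.trace(M @ M.conj().T).real
        total += pi * np.trace(rho @ rho).real
    return total

assert np.isclose(x_trace(projective), x_purity(projective))
assert np.isclose(x_trace(weak), x_purity(weak))
assert x_trace(weak) > x_trace(projective)   # weak measurements purify better
print(x_trace(projective), x_trace(weak))
```

For dephasing one can check analytically that the projective unraveling gives \(x=1/2+\varepsilon\) while the weak unraveling gives \(x=1/2+2\varepsilon(1-\varepsilon)\), which is larger for any \(\varepsilon<1/2\).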
Up to unitary transformations (which we can ignore in the RUC setting), unital qubit channels have the canonical form [71] \[\Phi(\rho)=p_{0}\rho+p_{x}X\rho X+p_{y}Y\rho Y+p_{z}Z\rho Z \tag{16}\] with \(p_{\alpha}\geq 0\) and \(\sum_{\alpha}p_{\alpha}=1\). We start with a set of \(n\) Kraus operators \(\mathbf{M}=\{M_{i}\}\), where each element can be represented as \[M_{i}=a_{i}\mathds{I}+b_{i}\mathbf{\sigma}\cdot\tilde{\mathbf{u}}_{i}, \tag{17}\] where \(a_{i}\) and \(b_{i}\) are real non-negative numbers (we can assume \(a_{i}\) real and non-negative by redefining \(M_{i}\) by an overall phase; the phase of \(b_{i}\) can then be absorbed into \(\tilde{\mathbf{u}}_{i}\)), \(\tilde{\mathbf{u}}_{i}=\left(e^{i\phi_{i,x}}u_{i,x},e^{i\phi_{i,y}}u_{i,y},e^{i\phi_{i,z}}u_{i,z}\right)\) are complex unit vectors (i.e. \(\tilde{\mathbf{u}}_{i}^{*}\cdot\tilde{\mathbf{u}}_{i}=1\)), and \(\mathbf{\sigma}=(X,Y,Z)\). The Kraus operators of Eq. (17) must yield the Choi matrix of the unital channel \(\Phi\) in Eq. (16), i.e. they must satisfy \[\sum_{i}M_{i}\otimes M_{i}^{*}=\sum_{\alpha=0,x,y,z}p_{\alpha}\sigma^{\alpha}\otimes(\sigma^{\alpha})^{*} \tag{18}\] (we set \(\sigma^{0}=\mathbb{I}\)). This gives the following four relations: \[\sum_{i}a_{i}^{2}=p_{0},\quad\sum_{i}b_{i}^{2}=1-p_{0}, \tag{19}\] \[\sum_{i}a_{i}b_{i}\tilde{\mathbf{u}}_{i}=\mathbf{0},\quad\sum_{i}\tilde{u}_{i,\alpha}^{*}\tilde{u}_{i,\beta}b_{i}^{2}=p_{\alpha}\delta_{\alpha\beta}. \tag{20}\] The optimization of the target function \(x(\mathbf{M})\) subject to these constraints is carried out in Appendix B. The objective function is maximized when \(a_{i}=\sqrt{p_{0}/n}\) and \(b_{i}=\sqrt{(1-p_{0})/n}\) for all \(i\), and the unit vectors \(\tilde{\mathbf{u}}_{i}\) are real: \(\tilde{\mathbf{u}}_{i}=\mathbf{u}_{i}=(u_{i,x},u_{i,y},u_{i,z})\). The remaining constraints on the vectors \(\{\mathbf{u}_{i}\}\), Eq.
(20), read \[\sum_{i}\mathbf{u}_{i}=\mathbf{0},\quad\sum_{i}u_{i,\alpha}u_{i,\beta}=\frac{np_{ \alpha}}{1-p_{0}}\delta_{\alpha\beta}. \tag{21}\] As an example, a solution with \(n=4\) is \[\begin{cases}\mathbf{u}_{0}=\left(\sqrt{p_{x}},\sqrt{p_{y}},\sqrt{p_{z}}\right)/ \sqrt{1-p_{0}},\\ \mathbf{u}_{1}=\left(\sqrt{p_{x}},-\sqrt{p_{y}},-\sqrt{p_{z}}\right)/\sqrt{1-p_{0} },\\ \mathbf{u}_{2}=\left(-\sqrt{p_{x}},\sqrt{p_{y}},-\sqrt{p_{z}}\right)/\sqrt{1-p_{0} },\\ \mathbf{u}_{3}=\left(-\sqrt{p_{x}},-\sqrt{p_{y}},\sqrt{p_{z}}\right)/\sqrt{1-p_{0} },\end{cases} \tag{22}\] i.e. the vertices of a regular tetrahedron inscribed in the Bloch sphere, up to a rescaling \(\sqrt{3p_{\alpha}/(1-p_{0})}\) of each axis. In the case of \(p_{x}=p_{y}=p_{z}\) (depolarizing noise, Eq. (6)), the conditions in Eq. (21) become \[\mathbb{E}_{i}[\mathbf{u}_{i}]=\mathbf{0},\qquad\mathbb{E}_{i}[u_{i,\alpha}u_{i,\beta} ]=\frac{1}{3}\delta_{\alpha\beta}, \tag{23}\] where \(\mathbb{E}_{i}\) denotes averaging over \(i\) with respect to the uniform probability distribution \(\mathsf{Pr}(i)=1/n\). The conditions in Eq. (23) define a _spherical 2-design_, i.e. a probability distribution on the sphere whose first two moments coincide with those of the uniform distribution. Such distributions exist if \(n=4\) or \(n\geq 6\). In particular, the minimal (\(n=4\)) optimal unraveling is \[\begin{cases}M_{0}=\sqrt{\frac{1-\varepsilon}{4}}\mathbb{I}+\sqrt{\frac{ \varepsilon}{12}}\left(X+Y+Z\right),\\ M_{1}=\sqrt{\frac{1-\varepsilon}{4}}\mathbb{I}+\sqrt{\frac{\varepsilon}{12}} \left(X-Y-Z\right),\\ M_{2}=\sqrt{\frac{1-\varepsilon}{4}}\mathbb{I}+\sqrt{\frac{\varepsilon}{12}} \left(-X+Y-Z\right),\\ M_{3}=\sqrt{\frac{1-\varepsilon}{4}}\mathbb{I}+\sqrt{\frac{\varepsilon}{12}} \left(-X-Y+Z\right),\end{cases} \tag{24}\] i.e., a weak measurement along 4 directions corresponding to the vertices of a regular tetrahedron. Similarly, a solution to Eq. 
(23) with \(n=6\) is given by the vertices of a regular octahedron, corresponding to weak measurements of the Pauli \(X\), \(Y\) and \(Z\) operators. For the dephasing channel Eq. (5) (\(n=2\)), an optimal unraveling is \[\begin{cases}M_{0}=\sqrt{\frac{1-\varepsilon}{2}}\mathbb{I}+\sqrt{\frac{\varepsilon}{2}}Z,\\ M_{1}=\sqrt{\frac{1-\varepsilon}{2}}\mathbb{I}-\sqrt{\frac{\varepsilon}{2}}Z.\end{cases} \tag{25}\] In all these cases, the most disentangling unraveling takes the form of _weak measurements_ rather than stochastic projective measurements (e.g. Eq. (8) for the case of dephasing). In Appendix C we verify that, for the optimal unraveling Eq. (25), the measurement-induced phase transition occurs at a lower value of \(\varepsilon\) (\(\varepsilon_{c}\simeq 0.044\)) compared to the unraveling into stochastic projective measurements Eq. (8) (\(\varepsilon_{c}\simeq 0.084\)--note the probability of doing a measurement is \(p=2\varepsilon\) and the well-known MIPT critical point is at \(p_{c}\simeq 0.168\)[26]). We see that our simple mean-field approach is already sufficient to lower the critical noise strength by almost a factor of 2. It would be interesting to test whether locally- and adaptively-optimized unravelings as in Ref. [64] could further improve this threshold. For the rest of this work, we will consider dephasing or depolarizing noise and use the optimal unraveling Eq. (25) unless otherwise specified.

## IV Noisy-SEBD algorithm

### Description of the algorithm

We consider noisy random circuits in 2D with finite depth \(T\) acting on a grid with \(N=L_{x}\times L_{y}\) qubits. The goal of our algorithm is to sample bitstrings \(\mathbf{z}\) from the distribution \(P_{\mathcal{N}}(\mathbf{z})\), Eq. (11), determined by the circuit instance and noise channel \(\Phi\) (whose parameters we assume are known), with a small error. Let us first review the noiseless case, studied in Ref. [44] and illustrated in Fig. 1(a).
Due to locality, the outcome \(z_{i}\) on any given qubit only depends on the evolution within its past lightcone. Thus to sample all the outcomes \(z_{i}\) on the first row of qubits, \(y=1\), we only need to apply gates and channels on qubits within the past lightcone of the line \(\{(x,y=1,t=T)\}\), which includes qubits with \(y\leq T\). This corresponds to a circuit of depth \(T\) on \(\leq L_{x}T\) qubits, which can be simulated efficiently via MPS methods. At this point one can successfully sample the outcomes \(z_{i}\) for the first row of qubits, and move on to the second row (\(y=2\)) iterating the same approach, performing only the gates and channels within the past lightcone of \(\{(x,y=2,t=T)\}\), etc. This effectively maps the 2D shallow circuit to an equivalent 1D circuit, where the readout measurements are converted to mid-circuit measurements and resets (see Fig. 2(c) as an example). As a result of this mapping, the spatial direction along \(L_{y}\) becomes the time direction for the 1D circuit (an idea also known as _space-time duality_ in quantum circuits [36, 37, 38]).

Figure 2: Schematic depiction of the 2D qubit array and effective 1D subsystem used in the SEBD algorithm. (a,b) Example of 2D qubit array with \(L_{x}=5\), showing the gate sequences used: ABCD for depth \(T=4\) (a), ABCDB for depth \(T=5\) (b). The region enclosed by the solid loop corresponds to the effective 1D subsystem, and the region enclosed by the dashed loop corresponds to sites which are measured after applying gates in the past lightcone. (c) Equivalent 1D circuit for \(T=4\) and \(L_{x}=5\) with gate sequence ABCD. The effective 1D system has size \(2L_{x}=10\), with gates up to the third nearest neighbor, and each measurement is followed by a reset to the \(|0\rangle\) state. The dashed lines enclose a unit cell whose architecture repeats periodically in time.
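At its core, the row-by-row readout described above is sequential Born-rule sampling: measure a qubit, collapse the state, and condition all subsequent outcomes on the result. The toy statevector sketch below (no MPS compression or lightcone truncation, which is what makes actual SEBD efficient; the 4-qubit random state is purely illustrative) verifies that the product of chained conditional probabilities reproduces the Born probability of the sampled bitstring:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

def measure_qubit(psi, q, n):
    """Sample qubit q in the computational basis; return (outcome, prob, collapsed state)."""
    t = psi.reshape((2,) * n)
    p0 = np.sum(np.abs(np.take(t, 0, axis=q))**2)
    z = int(rng.random() >= p0)            # z = 1 with probability 1 - p0
    prob = p0 if z == 0 else 1 - p0
    proj = np.zeros((2,) * n, dtype=complex)
    idx = [slice(None)] * n
    idx[q] = z
    proj[tuple(idx)] = t[tuple(idx)]       # project onto the measured outcome
    return z, prob, proj.reshape(-1) / np.sqrt(prob)

# Sample all qubits one by one, multiplying conditional probabilities (chain rule):
state, outcome, p_chain = psi, [], 1.0
for q in range(n):
    z, p, state = measure_qubit(state, q, n)
    outcome.append(z)
    p_chain *= p

# The chained probability equals the Born probability of the sampled bitstring:
index = int("".join(map(str, outcome)), 2)
assert np.isclose(p_chain, np.abs(psi[index])**2)
print(outcome, p_chain)
```

SEBD performs exactly this kind of sequential sampling, but on an MPS restricted to the lightcone of each row, which is what keeps the cost polynomial in the area-law phase.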
The SEBD algorithm is based on MPS simulation of this effective 1D dynamics for the purpose of sampling the \(\mathbf{z}\) outcomes. The issue with iterating this approach indefinitely is that the entanglement in the quasi-1D state of \(L_{x}\times T\) qubits in principle grows with each step, up to the point where MPS simulation fails. However, Ref. [44] observed that, due to the mid-circuit measurements, the effective dynamics may enter an area-law phase, wherein the entanglement remains finite and approximate sampling can be carried out efficiently with high accuracy. In general, the effective 1D circuit consists of \(N_{\rm 1D}=cTL_{x}\) qubits, where \(c\) is a constant depending on the lattice geometry (the slope of the lightcone, in the models considered here \(c\approx 1/2\)), and the spatial range of the two-qubit gates is proportional to \(T\). Equivalently, one may view the effective system as a quasi-1D strip of size \(L_{x}\times cT\) with nearest-neighbor gates in both directions. Either way, each circuit layer on \(cTL_{x}\) qubits is followed by the measurement of \(L_{x}\) qubits (a full row), giving a ratio of measurements to unitary operations of \(\sim 1/T\). When this ratio is sufficiently high (i.e. \(T\) sufficiently low), the system enters an area-law phase and SEBD is efficient. Let us now add noise to the picture. As discussed in Sec. III.2, we can unravel the noise channels \(\Phi\) into an arbitrary set of Kraus operators, simulate the pure-state trajectories, sample from the joint distribution \(P_{\mathcal{N}}(\mathbf{z},\mathbf{m})\) and keep only the \(\mathbf{z}\) samples. We adopt the entanglement-optimal unraveling of unital noise channels discussed in Sec. III.3 to suppress entanglement in the effective 1D state at the level of quantum trajectories.
The simulated dynamics now features mid-circuit measurements with two distinct origins: a density \(\propto 1/T\) coming from the final sampling step in the 2D circuit, and a density \(\propto\varepsilon\) coming from the unraveling of noise. Below a critical noise rate \(\varepsilon_{c}\), dependent on the model and the circuit depth \(T\), the entanglement of the effective 1D state satisfies an area law, and thus allows efficient classical simulation. These ideas are schematically summarized in Fig. 1.

### Numerical results: entanglement phase transition

In this section, we numerically study the entanglement phase transition in the effective 1D subsystem that is used in the noisy-SEBD algorithm (Fig. 2(c)). As an example, we choose shallow circuits that act on a 2D square lattice (Fig. 2(a-b)) with unitary gates similar to those employed in Google's RCS experiment [6]. The two-qubit gates are iSWAP-like fermionic simulation gates \[\text{fSim}(\pi/2,\pi/6)=\begin{pmatrix}1&0&0&0\\ 0&0&-i&0\\ 0&-i&0&0\\ 0&0&0&e^{-i\pi/6}\end{pmatrix} \tag{26}\] sandwiched between single-qubit rotations randomly chosen from the set \(\{X^{\pm 1/2},Y^{\pm 1/2},W^{\pm 1/2},V^{\pm 1/2}\}\), where \(W=(X+Y)/\sqrt{2}\) and \(V=(X-Y)/\sqrt{2}\) are Hadamard-like gates (note that \(W^{\pm 1/2}\) and \(V^{\pm 1/2}\) are non-Clifford). The two-qubit gates are applied to bonds of the square lattice according to the sequence ABCD for \(T=4\) (Fig. 2(a)) and ABCDB for \(T=5\) (Fig. 2(b)). These specific sequences are chosen to be the most entangling for the effective 1D dynamics, defined as giving the longest single-qubit purification time as discussed in the following. The noise channel \(\Phi\) is taken to be the depolarizing channel, Eq. (6), with strength \(\varepsilon\). As a diagnostic for the entanglement phase transition which underpins the efficiency of noisy-SEBD, we use the single-qubit purification time \(\tau\) discussed in Sec. II.4.
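Operationally, \(\tau\) is extracted from an exponential fit \(S_{R}(t)\sim e^{-t/\tau}\) to the decay of the reference-qubit entropy. A minimal sketch of the extraction on synthetic data (the decay constant, time window, and noise perturbation here are invented for illustration):

```python
import numpy as np

tau_true = 12.0
t = np.arange(5, 40)                                   # discard early-time transient
S_R = np.exp(-t / tau_true) * (1 + 0.01 * np.sin(t))   # synthetic S_R(t) with small noise

# Fit log S_R(t) = -t/tau + const, so tau = -1/slope:
slope, _ = np.polyfit(t, np.log(S_R), 1)
tau_est = -1.0 / slope
print(tau_est)
```

A log-linear least-squares fit of this kind is what we mean below when quoting \(\tau\) from simulation data.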
This is advantageous from the numerical point of view, relative to a direct calculation of the bipartite entanglement entropy, since it does not suffer from the large finite-size drifts arising from the logarithmic divergence of entropy at the critical point [26]. The phase transition between area-law and volume-law entanglement scaling is probed by the mutual information between the single reference qubit and the rest of the system. In practice, we introduce a reference qubit which initially forms a Bell pair with a system qubit. At later times the average entropy of the reference qubit is captured by the exponentially decaying relation \(S_{R}(t)\sim e^{-t/\tau}\). In the volume-law phase (low measurement/noise rate), \(S_{R}\) can remain nonzero for a long time \(\tau\sim\exp(L_{x})\). Physically this implies that there is finite measurement-induced entanglement between opposite ends of the system as one takes \(L_{x}\), \(L_{y}\) to infinity jointly with \(L_{y}=\text{poly}(L_{x})\); this can be interpreted as emergent quantum teleportation [72] and has recently been explored experimentally [43]. On the other hand, in the area-law phase (high measurement/noise rate), \(S_{R}\) decays rapidly to zero with \(\tau=O(1)\). At the critical point, we expect \(\tau\sim L_{x}^{z}\), where \(z\) is the dynamical critical exponent (typically \(z=1\) for measurement-induced transitions in short-range interacting 1D systems [25; 26]).

Figure 3: Purification time \(\tau\) of a single probe qubit as a function of the noise rate \(\varepsilon\) for (a) depth \(T=4\) with gate sequence ABCD and (b) depth \(T=5\) with gate sequence ABCDB. Scaling collapse of the data for (c) \(T=4\) and (d) \(T=5\). The data are averaged over \(5\cdot 10^{3}-3\cdot 10^{4}\) realizations.

In Fig.
3, we show a finite-size scaling analysis of \(\tau/L_{x}\) for both \(T=4\) (gate sequence ABCD) and \(T=5\) (gate sequence ABCDB), obtained from MPS simulation of the space-wise dynamics up to \(L_{y}=2L_{x}\), where only \(L_{x}\leq L_{y}\leq 2L_{x}\) are used for fitting to avoid the early-time transient effect. The existence of a finite-size crossing point of the ratio \(\tau/L_{x}\) for all system sizes (Fig. 3(a,b)) indicates \(z=1\), which is consistent with the emergence of 1+1D conformal symmetry at the transition. Therefore, we use the scaling ansatz \[\tau(\varepsilon,L_{x})=L_{x}F\left[\left(\varepsilon-\varepsilon_{c}\right)L_{x}^{1/\nu}\right], \tag{27}\] to determine the location of the critical point \(\varepsilon_{c}\) and the correlation length critical exponent \(\nu\). From the data collapse, Fig. 3(c,d) (see Appendix D for method details), we locate \(\varepsilon_{c}=0.040(2)\) with \(\nu=1.3(1)\) for \(T=4\) (ABCD) and \(\varepsilon_{c}=0.053(2)\) with \(\nu=1.2(1)\) for \(T=5\) (ABCDB). For both values of \(T\) we find correlation length exponent \(\nu\approx 1.3\), which is consistent with the known value for the measurement-induced phase transition in 1D, as expected. Moreover we see that increasing circuit depth \(T\) leads to a larger critical noise rate \(\varepsilon_{c}\), which is the qualitative behavior sketched in Fig. 1(d). These numerical results support our qualitative expectations for the complexity of noisy-SEBD. Since the hardness of the method is exponential in \(T\), obtaining accurate predictions for the phase boundary at larger \(T\) becomes increasingly challenging. However, it is reasonable to conjecture that, for large \(T\), 1. as the quasi-1D system of \(cT\times L_{x}\) qubits approaches a 2D limit, one should recover the 2D measurement-induced phase transition which occurs at a finite noise rate \(\varepsilon_{c,2D}\)[73; 23]; 2.
the transition should occur at a finite _total_ noise rate, comprising the unraveled measurements (rate \(\varepsilon\)) and the sampling of the final state (rate \(\sim 1/T\)), thus \(\varepsilon_{c}(T)=\varepsilon_{c,2D}+O(1/T)\). We find evidence in support of these conjectures in Clifford circuits, which can be simulated in polynomial time by the stabilizer method [48] (though note that this limits us to the projective unraveling of the noise channel), see Appendix E. Finally, in Appendix F we verify the accuracy of the noisy-SEBD algorithm by benchmarking its output against direct MPO simulations of the noisy dynamics and against stabilizer simulations of Clifford circuits.

### Application: IBM quantum processors

Quantitatively, the results in the previous section show that circuits on square lattice architectures, on NISQ platforms such as Google's Sycamore processor with native iSWAP-like gates and \(\gtrsim 98\%\) two-qubit gate fidelities (translating to \(\varepsilon\lesssim 0.01\) in our parametrization, see Appendix G), are already in the "hard phase" at depth \(T=4\). Since depth \(T=3\) is in the easy phase already for noiseless SEBD (i.e., for \(\varepsilon=0\)), the inclusion of noise does not allow efficient simulation of an additional gate layer in this setting. This class of circuits is however a worst-case scenario, representing highly scrambling dynamics optimized for hardness of simulation [6; 74]. Here we consider the application of noisy-SEBD to circuits on quantum processors with heavy-hexagon geometry and native CNOT gates, such as IBM Quantum's family of processors, Fig. 4(a,b). We consider again highly-scrambling random circuits with iSWAP two-qubit gates and single-qubit gates randomly chosen from \(\{X^{\pm 1/2},Y^{\pm 1/2},W^{\pm 1/2},V^{\pm 1/2}\}\). An iSWAP gate can be compiled into two native CNOT gates, plus single-qubit gates. Thus the effective noise rate for iSWAP gates is twice the CNOT error, giving e.g.
\(\approx 96\%\) median gate fidelity on the Osprey processor (corresponding to \(\varepsilon\approx 0.025\) in our parametrization, see Appendix G). We consider subsystems of a qubit array based on the Condor processor, Fig. 4(a). The full system has \(N=1,121\) qubits and linear size \(L_{x}=43\). The gate sequence and unit cell for the SEBD algorithm are shown in Fig. 4(b), for the \(N=65\), \(L_{x}=11\) Hummingbird processor. We characterize the efficiency of the (noisy-)SEBD algorithm by computing the half-system bipartite entanglement entropy \(S\), Fig. 4(c), and single-qubit purification time \(\tau\), Fig. 4(d), for both noiseless and noisy random circuits with depth \(T=5\) (measured in units of iSWAP gates, thus corresponding to 10 CNOT gates on each qubit). As the linear system size is varied between \(L_{x}=7\) and \(L_{x}=43\), we observe a clear volume-law phase in the noiseless case, with volume-law entropy \(S\propto L_{x}\) and exponential purification time \(\tau\sim\exp(L_{x})\). On the contrary, for noise rate \(\varepsilon=0.02\), we see a weak growth of \(S\) with \(L_{x}\), consistent with critical scaling \(S\sim\ln(L_{x})\) or eventual saturation to an area law. The ratio \(\tau/L_{x}\) also decreases, either to a finite constant (critical behavior) or to zero (area-law behavior). Finally, for \(\varepsilon=0.025\), both diagnostics are indicative of an area-law phase. We conclude that in this case the inclusion of a realistic noise rate in the simulation algorithm is sufficient to drive a transition in complexity of the SEBD algorithm. This makes the difference between an asymptotically efficient and an asymptotically hard MPS simulation of the sampling problem. We note that, while circuits with low depth such as \(T=5\) can likely be simulated by brute-force tensor network methods up to hundreds or thousands of qubits, the cost of such methods remains generically exponential in the linear size of the system \(L_{x}\).
On next-generation processors with \(N\sim 10^{5}\) qubits those methods would become intractable, while noisy-SEBD would remain practical in the easy phase.

## V Discussion

We have introduced a classical algorithm, noisy-SEBD, to sample from the output distribution of noisy, shallow circuits in two dimensions. The algorithm uses the insight of mapping a 2D RCS problem to a 1D monitored dynamics problem (space-evolving block decimation, SEBD [44]), while also unraveling the action of noise into additional measurements on the system. At sufficiently low depth and sufficiently strong noise, this enables efficient MPS simulation of the monitored quantum trajectories, and thus efficient sampling from the appropriate noisy output distribution. Given that the unraveling of noise into measurements is arbitrary, it can be optimized so as to reduce the amount of entanglement [64]. Here we have focused on a "mean-field" approach where single-qubit noise channels at all positions and times are unraveled in the same way, chosen based on the two-replica statistical mechanics description of the circuit upon averaging over random gates. We have found that, for unital channels (such as dephasing and depolarizing), the optimal unraveling is based on uniform weak measurements, rather than stochastic projective measurements. The difference between unravelings is substantial--in the standard model of brickwork circuits in 1D, the noise threshold corresponding to the measurement-induced entanglement transition is reduced by a factor of about 2 for the optimal weak-measurement unraveling (\(\varepsilon_{c}\approx 0.04\)) compared to the usual projective measurement unraveling (\(\varepsilon_{c}\approx 0.08\), i.e. measurement rate \(p_{c}=2\varepsilon_{c}\approx 0.16\)). This is consistent with prior observations in Ref. [70], and the optimization technique could be of independent interest for the study of measurement-induced entanglement transitions.
Figure 4: (a) Layout of 1121-qubit IBM quantum processor Condor; 65-qubit Hummingbird is shown as blue region. (b) Gate sequence for circuits with depth \(T=5\) (ABCDA) on 65-qubit quantum processor Hummingbird, of linear size \(L_{x}=11\). Other IBM processors have similar layout: Eagle with \(L_{x}=15\), Osprey with \(L_{x}=27\), and Condor with \(L_{x}=43\). The region surrounded by a solid line depicts the 1D effective subsystem for noisy-SEBD simulation of random circuits with \(T=5\) (gate sequence ABCDA). (c) Bipartite entanglement entropy \(S\) and (d) purification time \(\tau\) of a reference qubit, for noiseless (\(\varepsilon=0\)) and noisy (\(\varepsilon=0.02\) and \(0.025\)) circuits of depth \(T=5\) as a function of linear system size \(L_{x}\), which refers to subsets of the heavy-hexagon lattice in panel (a). The data are averaged over \(10^{3}-10^{4}\) realizations of the random circuits.

While noisy RCS was shown to be classically simulable in polynomial time based on a sampling of "Feynman paths" in Pauli operator space [10], the polynomial scaling of the algorithm features a large exponent (proportional to \(1/\varepsilon\), i.e. of order 100 in present-day experiments) that makes the algorithm impractical. This leaves open the question of "practical hardness" for finite-sized RCS experiments. Our results help sharpen the requirements for such practical hardness by identifying a phase in the parameter space of depth \(T\) and noise strength \(\varepsilon\) [Fig. 1(d)] where noisy RCS can be classically simulated via a straightforward MPS algorithm in time \(\sim N\exp(T)\) (this scaling corresponds to an MPS with a number \(\propto L_{x}\) of tensors, of constant bond dimension and physical dimension \(\propto 2^{cT}\), evolved for \(\propto L_{y}\) steps). We have located the entanglement phase transitions in circuit architectures based on real-world quantum processors. In square lattices with native iSWAP-like gates, we found that noisy-SEBD allows the efficient sampling of circuits of depth \(T=3\), like SEBD in the noiseless case; but for realistic noise rates of \(\varepsilon\lesssim 1\%\) it does not increase the depth threshold (i.e., \(T=4\) remains in the hard phase). On heavy-hexagon lattices with native CNOT gates, we have instead found that the inclusion of realistic noise rates can increase the depth threshold, as shown in Fig. 4(c,d).

Our results add to the growing body of work on noise-induced phase transitions in computational complexity. Recent works have studied the simulation of noisy RCS via MPS simulations truncated to constant bond dimension \(\chi\)[11; 13], finding that the accumulated truncation error behaves similarly to noise in the quantum experiment--i.e. causes an exponential decay of the linear cross entropy, Eq. (1); to beat this classical simulation method, the quantum processor must be below a finite noise threshold. We remark that the task considered in those works is different from the one considered here. Namely, the goal in Refs. [11; 13] is to simulate the _ideal_ (noiseless) bitstring distribution better than the noisy quantum experiment, as quantified e.g. by fidelity or linear cross-entropy. Notably this allows for an exponentially small (in \(N\) and \(T\)) fidelity between simulation and experiment, as long as the former is closer to the ideal result. In contrast, our goal is to simulate with high accuracy the _noisy_ bitstring distribution itself. This is a significant distinction physically, as the effect of uncontrolled MPS truncation is _a priori_ very different from that of local noise, even at the same level of fidelity (e.g., MPS truncation does not obey locality). We note also that this task requires precise knowledge of the noise model on the quantum processor, whose characterization is a separate challenge [75].
Even more recently, a noise-induced phase transition in RCS has been reported [9; 46] in deep quantum circuits with noise rates scaled as \(\varepsilon\sim 1/N\), i.e. a constant number of errors per layer. The scaling of linear cross entropy [Eq. (1)] in these models was predicted to sharply change as a function of \(\varepsilon N\), from an "easy phase" where the system appears to break into finite-sized clusters from the point of view of linear cross-entropy, to a phase where it behaves as a single large cluster. The former phase is easy to spoof classically, while the latter is conjectured to be practically-hard. This transition is also different from the one studied here. For one, it applies to a vanishing noise rate \(\varepsilon=O(1/N)\) rather than a finite \(\varepsilon=O(1)\). Additionally, and more importantly, it is a phase transition in an observable (albeit a complex one like linear cross-entropy) that reflects an intrinsic property of the system, whereas the transition studied here is a property of a simulation algorithm (noisy-SEBD) that is not intrinsic to the system. We emphasize again that noisy-SEBD is only one possible strategy for classical sampling. It remains an interesting open question to identify other simulation algorithms with polynomial cost \(O(N^{c})\), with \(c\) an \(\varepsilon\)-independent constant (unlike in the Feynman path sampling approach of Ref. [10] or the direct MPO approach for 1D circuits of Ref. [60]) below a finite noise threshold \(\varepsilon_{c}\). Another interesting direction for future work is the possibility of extending these results to higher dimension or all-to-all connected systems, e.g. trapped ion quantum computers. While MIPTs are known to arise in all these cases, the ensuing area-law for entanglement can only be exploited for efficient MPS simulation in 1D (including in the effective 1D subsystems of shallow 2D circuits used in SEBD). Whether other formulations of the MIPT, e.g. 
the dynamical purification approach [24; 25], can be exploited for efficient simulation in more general systems is an interesting open question.

_Note added._ Upon completion of this manuscript, we became aware of an independent work on related topics appearing in the same arXiv posting [76]. Our works are complementary and our results agree where they overlap.

###### Acknowledgements.

M. I. thanks Aram Harrow for helpful discussions. We thank Soonwon Choi for bringing Ref. [70] to our attention. Numerical simulations were carried out on computational resources provided by Texas Advanced Computing Center (TACC) and on Stanford Research Computing Center's Sherlock cluster. M. I. was partly supported by the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF8686.

## Appendix A Derivation of the unraveling cost function from statistical mechanical model

Here we study the most disentangling unraveling of a single-qubit noise channel \(\Phi\) in the context of 1D brickwork random circuits, by mapping the entropy calculation of random circuits to a classical statistical mechanical model [21; 44].

### Quasientropy

Consider a subsystem \(A\) of a one-dimensional qudit chain. The \(k\)-Renyi entropy for the reduced density matrix \(\rho_{A}\) is defined as \[S_{k}\left(A\right)=\frac{1}{1-k}\log\left(\frac{Z_{k,A}}{Z_{k,\varnothing}}\right) \tag{10}\] where \[Z_{k,\varnothing}=\operatorname{Tr}(\rho)^{k},\quad Z_{k,A}=\operatorname{Tr}\bigl{(}\rho_{A}^{k}\bigr{)} \tag{11}\] The von Neumann entropy is given by the \(k\to 1\) limit \[S_{1}(A)=-\operatorname{Tr}\left(\frac{\rho_{A}}{\operatorname{tr}\rho}\log\frac{\rho_{A}}{\operatorname{tr}\rho}\right).
\tag{12}\] For a hybrid random circuit, we are interested in the trajectory-averaged behavior of the \(k\)-Renyi entropy \[\langle S_{k}(A)\rangle=\frac{\mathbb{E}_{C}\left[\operatorname{Tr}\rho S_{k}(A)\right]}{\mathbb{E}_{C}\left[\operatorname{Tr}\rho\right]}=\frac{1}{1-k}\frac{\mathbb{E}_{C}\left[\operatorname{Tr}\rho\log\frac{Z_{k,A}}{Z_{k,\varnothing}}\right]}{\mathbb{E}_{C}\left[\operatorname{Tr}\rho\right]} \tag{13}\] where \(\mathbb{E}_{C}\) represents the combined average of Haar random circuits \(\mathbb{E}_{U}\) and the average over Kraus operators \(\mathbb{E}_{M}\) (i.e. quantum trajectories). Replica tricks can be applied to cure the average of the logarithm. However, to get a direct mapping to the stat-mech model, one can alternatively consider the \(k\)-quasientropy \(\tilde{S}_{k}(A)\) [44], i.e. the \(k\)-th moment of the entanglement spectrum, weighted by the \(k\)-th power of the measurement outcome probability, \[\tilde{S}_{k}(A)= \frac{1}{1-k}\log\left(\frac{\mathbb{E}_{C}\left[\operatorname{tr}\left(\rho\right)^{k}\frac{Z_{k,A}}{Z_{k,\varnothing}}\right]}{\mathbb{E}_{C}\left[\operatorname{tr}\left(\rho\right)^{k}\right]}\right)\] \[= \frac{1}{1-k}\log\left(\frac{\mathbb{E}_{C}\left[Z_{k,A}\right]}{\mathbb{E}_{C}\left[Z_{k,\varnothing}\right]}\right) \tag{10}\] Importantly, in the limit \(k\to 1\), \(\tilde{S}_{k}(A)\rightarrow\langle S_{1}(A)\rangle\), similar to the \(k\)-Renyi entropy. The \(k\)-quasientropy has a natural mapping to a classical stat-mech model: the quantities \(\mathbb{E}_{C}[Z_{k,A/\varnothing}]\) can be viewed as partition functions for a (2+0)-dimensional spin model with different boundary conditions.

### Generalized measurement

A general measurement procedure can be represented by a set of Kraus operators \(\{M_{i}\}\) satisfying \(\sum_{i}M_{i}^{\dagger}M_{i}=\mathbb{I}\) [66].
For each quantum trajectory, a specific Kraus operator is chosen with probability \(p_{i}=\operatorname{tr}\left(M_{i}\rho M_{i}^{\dagger}\right)\) and gives the updated state \(\rho^{\prime}=M_{i}\rho M_{i}^{\dagger}/p_{i}\). However, in the calculation of the \(k\)-quasientropy, it is more convenient to re-parametrize the generalized measurement in terms of operators \(\tilde{M}_{i}\) and classical probabilities \(\mu_{i}\) satisfying \[\operatorname{tr}\left(\tilde{M}_{i}^{\dagger}\tilde{M}_{i}\right) =q,\qquad\mathbb{E}_{\mathbf{M}}M^{\dagger}M=\sum_{i}\mu_{i}\tilde{M}_{i}^{ \dagger}\tilde{M}_{i}=\mathbb{I}, \tag{11}\] where \(q\) is the local Hilbert space dimension. It is easy to show \(\mu_{i}\) is given by \[\mu_{i}=\frac{1}{q}\operatorname{tr}\left(M_{i}^{\dagger}M_{i}\right) \tag{12}\] and that \(\mu_{i}\geq 0\) and \(\sum_{i}\mu_{i}=1\) (i.e. \(\mu_{i}\) is a probability distribution). The advantage of this reparametrization is that it renders the \(k\)-quasientropy invariant under trivial decomposition of Kraus operators, which is important for optimizing the unraveling. As an example, consider the sets \(\mathbf{M}_{0}=\{\sigma_{z}\}\) and \(\mathbf{M}_{1}=\{\sigma_{z}/\sqrt{2},\sigma_{z}/\sqrt{2}\}\), which differ by a trivial decomposition of \(\sigma_{z}\) and are hence physically equivalent (both describe the unitary transformation \(\rho\mapsto\sigma^{z}\rho\sigma^{z}\)). However, if we define the average over a Kraus set as \(\mathbb{E}_{\mathbf{M}}[f]=\sum_{M\in\mathbf{M}}f(M\rho M^{\dagger})\), then one can verify \(\mathbb{E}_{\mathbf{M}_{0}}[Z_{k,\varnothing}]=1\) but \(\mathbb{E}_{\mathbf{M}_{1}}[Z_{k,\varnothing}]=2^{1-k}\). Using the reparametrized Kraus operators and defining the average as \(\mathbb{E}_{\mathbf{M}}[f]=\sum_{i}\mu_{i}f(\tilde{M}_{i}\rho\tilde{M}_{i}^{ \dagger})\) instead yields the consistent results \(\mathbb{E}_{\mathbf{M}_{0}}[Z_{k,\varnothing}]=\mathbb{E}_{\mathbf{M}_{1}}[Z_{k, \varnothing}]=1\). 
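The invariance claim above can be checked numerically. The following sketch (assuming a qubit, \(q=2\), and an arbitrary density matrix \(\rho\)) evaluates \(\mathbb{E}_{\mathbf{M}}[Z_{2,\varnothing}]\) for the two physically equivalent Kraus sets \(\mathbf{M}_{0}\) and \(\mathbf{M}_{1}\), under both the naive and the reparametrized averaging conventions:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)           # Pauli sigma_z
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # an arbitrary density matrix
q = 2  # local Hilbert space dimension

def naive_Z2(kraus):
    # Naive average sum_M tr(M rho M^dag)^2: NOT invariant under
    # trivial decompositions of the Kraus set.
    return sum(np.trace(M @ rho @ M.conj().T).real ** 2 for M in kraus)

def reparam_Z2(kraus):
    # Reparametrized average: mu_i = tr(M_i^dag M_i)/q and
    # Mtilde_i = M_i / sqrt(mu_i), so that tr(Mtilde^dag Mtilde) = q.
    total = 0.0
    for M in kraus:
        mu = np.trace(M.conj().T @ M).real / q
        Mt = M / np.sqrt(mu)
        total += mu * np.trace(Mt @ rho @ Mt.conj().T).real ** 2
    return total

M0 = [Z]                                # {sigma_z}
M1 = [Z / np.sqrt(2), Z / np.sqrt(2)]   # trivial decomposition of sigma_z

print(naive_Z2(M0), naive_Z2(M1))      # 1.0 vs 0.5: inconsistent
print(reparam_Z2(M0), reparam_Z2(M1))  # 1.0 and 1.0: consistent
```

This reproduces the \(\mathbb{E}_{\mathbf{M}_{0}}[Z_{2,\varnothing}]=1\) vs \(\mathbb{E}_{\mathbf{M}_{1}}[Z_{2,\varnothing}]=2^{1-k}=1/2\) discrepancy of the naive convention, and its resolution by the reparametrization.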
It is also worth mentioning that, in the limit of interest (\(k\to 1\)), these two formalisms are equivalent.

### Mapping to a classical statistical mechanical model

The goal of this mapping is to calculate the averaged partition functions \(\mathbb{E}_{C}\left[Z_{k,X}\right]\) where \(X=A\) or \(\varnothing\): \[\mathbb{E}_{C}\left[Z_{k,X}\right]=\mathbb{E}_{U}\mathbb{E}_{\mathbf{M}}\left[\operatorname{tr}\left(\left(\cdots MU\rho_{0}U^{\dagger}M^{\dagger}\cdots\right)^{\otimes k}\mathcal{S}_{X}^{\otimes k}\right)\right] \tag{13}\] where \(\mathcal{S}_{X}\) is the operator that implements a cyclical permutation of the \(k\) replicas only in the \(X\) subsystem. By using the Haar measure calculus [77], the average over replicated gates \((U\otimes U^{*})^{\otimes k}\) can be expanded onto permutations of the replicas, giving a partition function of "spins" valued in the permutation group of \(k\) elements \(S_{k}\). Hence the average over single-site measurement operators can be evaluated by appropriately contracting with connecting permutations. In this sense, each unitary gate can be viewed as two permutation nodes \(\{\sigma,\tau\}\in S_{k}\), which form a honeycomb lattice as shown in Fig. 5(b). The total partition function can be evaluated by contracting the nodes with proper weights \(w^{(k)}\left(\sigma_{\mathbf{r}},\tau_{\mathbf{r}^{\prime}}\right)\) on the links: \[\mathbb{E}_{C}\left[Z_{k,X}\right]=\sum_{\{\sigma_{\mathbf{r}},\tau_{\mathbf{r}}\}}\prod_{\langle\mathbf{r},\mathbf{r}^{\prime}\rangle}w^{(k)}\left(\sigma_{\mathbf{r}},\tau_{\mathbf{r}^{\prime}}\right) \tag{14}\] with distinct couplings on dashed and solid links, as shown in Fig. 5(b). The average over Haar random unitary gates gives the coupling for dashed links: \[w_{u}^{(k)}\left(\sigma,\tau\right)=\mathrm{Wg}_{q^{2}}\left(\tau\sigma^{-1}\right) \tag{15}\] where \(\mathrm{Wg}_{q^{2}}\) is the Weingarten function [77]. These couplings are independent of measurements. The solid links in Fig.
5(b), on the other hand, are given by \[w_{m}^{(k)}(\sigma,\tau)=\mathbb{E}_{\mathbf{M}}\prod_{c}\operatorname{tr}\left(\left(M^{\dagger}M\right)^{\lambda_{c}}\right) \tag{16}\] where the product runs over the cycles \(c\) of the permutation \(\tau\sigma^{-1}\in S_{k}\) and \(\lambda_{c}\) is the length of cycle \(c\). Using the convention above for averaging over Kraus operators, \(\mathbb{E}_{\mathbf{M}}[f]=\sum_{i}\mu_{i}f(\tilde{M}_{i}\rho\tilde{M}_{i}^{\dagger})\), we obtain \[w_{m}^{(k)}(\sigma,\tau) =\sum_{i}\mu_{i}\prod_{c}\operatorname{tr}\Bigl{[}(\tilde{M}_{i}^{\dagger}\tilde{M}_{i})^{\lambda_{c}}\Bigr{]}\] \[=\sum_{i}\mu_{i}^{1-k}\prod_{c}\operatorname{tr}\Bigl{[}(M_{i}^{\dagger}M_{i})^{\lambda_{c}}\Bigr{]}. \tag{17}\]

### Two replicas: classical Ising model

Although the combination of Eq. (15) and Eq. (17) gives the exact expression for the partition function, the possible negative weights of \(w_{u}(\sigma,\tau)\) impede the direct mapping to a physical system with real interactions at a real temperature. For the case of \(k=2\), this sign problem can be circumvented by integrating out all \(\tau\) variables, which gives rise to a classical Ising model defined on a triangular lattice as shown in Fig. 5(c): \[\mathbb{E}_{C}\left[Z_{k,X}\right]=\sum_{\{\sigma\}}\prod_{\langle\sigma_{1},\sigma_{2},\sigma_{3}\rangle}\left[\sum_{\tau=\pm 1}w_{m}^{(2)}(\sigma_{1},\tau)w_{m}^{(2)}(\sigma_{2},\tau)w_{u}^{(2)}(\sigma_{3},\tau)\right]\equiv\sum_{\{\sigma\}}\prod_{\langle\sigma_{1},\sigma_{2},\sigma_{3}\rangle}w^{(2)}\left(\sigma_{1},\sigma_{2},\sigma_{3}\right) \tag{16}\] where \(\langle\sigma_{1},\sigma_{2},\sigma_{3}\rangle\) denotes a lower-facing triangle with three neighboring vertices \(\sigma_{1},\sigma_{2},\sigma_{3}\).
For \(k=2\), we have \[w_{u}^{(2)}\left(\sigma,\tau\right)=\begin{cases}\dfrac{1}{q^{4}-1}&\text{if }\sigma=\tau\\ -\dfrac{1}{q^{2}\left(q^{4}-1\right)}&\text{if }\sigma\neq\tau\end{cases} \tag{17}\] and denote \[w_{m}^{(2)}\left(\sigma,\tau\right)=\begin{cases}u&\text{if }\sigma=\tau\\ v&\text{if }\sigma\neq\tau\end{cases} \tag{18}\] where \(u\) and \(v\) are determined by the specific Kraus operators via Eq. (14). Using the definition of \(\mu_{i}\), one can see that \[u =\sum_{i}\mu_{i}^{-1}\operatorname{tr}\left(M_{i}^{\dagger}M_{i}\right)^{2}=q^{2} \tag{19}\] \[v =\sum_{i}\mu_{i}^{-1}\operatorname{tr}\left(\left(M_{i}^{\dagger}M_{i}\right)^{2}\right)=q\sum_{i}\frac{\operatorname{tr}\left(M_{i}^{\dagger}M_{i}M_{i}^{\dagger}M_{i}\right)}{\operatorname{tr}\left(M_{i}^{\dagger}M_{i}\right)} \tag{20}\] With these, one can express the three-body interaction \(w^{(2)}\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)\) explicitly. Due to the permutation symmetry \(\sigma\rightarrow\bar{\sigma}\) (where \(\bar{\sigma}\) denotes the other permutation, with \(S_{2}\equiv\mathbb{Z}_{2}\)) and spatial reflection symmetry \(\sigma_{1}\leftrightarrow\sigma_{2}\), one only needs to specify \[w^{(2)}\left(\sigma,\sigma,\sigma\right)= \frac{u^{2}}{q^{4}-1}-\frac{v^{2}}{q^{2}\left(q^{4}-1\right)} \tag{21}\] \[w^{(2)}\left(\sigma,\sigma,\bar{\sigma}\right)= \frac{v^{2}}{q^{4}-1}-\frac{u^{2}}{q^{2}\left(q^{4}-1\right)}\] (22) \[w^{(2)}\left(\sigma,\bar{\sigma},\sigma\right)= \frac{uv}{q^{2}\left(q^{2}+1\right)} \tag{23}\] Furthermore, one may re-express the three-body interaction as the product of three two-body interactions, \(w^{(2)}\left(\sigma_{1},\sigma_{2},\sigma_{3}\right)=Ce^{-J_{h}\sigma_{1}\sigma_{2}-J_{d}\sigma_{1}\sigma_{3}-J_{d}\sigma_{2}\sigma_{3}}\), where \(J_{d}\) and \(J_{h}\) are the two-body interaction strengths for diagonally and horizontally neighboring sites respectively, whose expressions are \[J_{d} =\frac{1}{4}\log\left(\frac{q^{2}x^{2}-1}{q^{2}-x^{2}}\right)
\tag{24}\] \[J_{h} =\frac{1}{4}\log\left(\frac{x^{2}\left(q^{2}-1\right)^{2}}{\left(q^{2}x^{2}-1\right)\left(q^{2}-x^{2}\right)}\right) \tag{25}\] where \(x\equiv v/u\) is a dimensionless parameter given by \[x=\frac{1}{q}\sum_{i}\frac{\operatorname{tr}\left(M_{i}^{\dagger}M_{i}M_{i}^{\dagger}M_{i}\right)}{\operatorname{tr}\left(M_{i}^{\dagger}M_{i}\right)}. \tag{26}\] Since \(\operatorname{tr}\left(M^{\dagger}M\right)^{2}\geq\operatorname{tr}\left(\left(M^{\dagger}M\right)^{2}\right)\) for all Kraus operators, one has \(u\geq v\), hence \(J_{d}\leq 0\) and \(J_{h}\geq 0\) for \(q\geq 2\).

Figure 5: (a) The quantum circuit consists of brick-wall unitaries (blue rectangles) and generalized measurements (red dots). (b) The mapped classical statistical mechanical model on the honeycomb lattice, where dashed links are weighted by the Weingarten function and the solid links are weighted by the generalized measurements. (c) Classical Ising model for \(k=2\), after integrating out \(\tau\) nodes.

The statistical mechanics of this classical Ising model can be solved exactly [78; 79] and the critical point \(x_{c}\), separating paramagnet and ferromagnet phases, is determined by the relation \(2e^{2J_{h}}=e^{-2J_{d}}-e^{2J_{d}}\). Solving for \(x_{c}\) gives \[\frac{1}{x_{c}}=\frac{q^{2}-1}{q^{2}+1}+\frac{\sqrt{2q^{4}+2}}{q^{2}+1}. \tag{10}\] This phase transition between an ordered and a disordered phase in the statistical mechanical model then maps onto the area-law/volume-law phase transition of the 2-quasientropy in the hybrid random circuit. As we see, within this two-replica analysis the transition is controlled exclusively by the parameter \(x\), Eq. (10), with an ordered phase (volume-law) for \(x<x_{c}\) and a disordered phase for \(x>x_{c}\) (area-law). Therefore, to obtain the optimal unraveling for a given quantum channel, we aim to maximize the function \(x(\mathbf{M})\).
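As a numerical sanity check of Eqs. (24)-(26) and the critical-point formula, the following sketch (for qubits, \(q=2\)) evaluates \(J_{d}\), \(J_{h}\), and \(x_{c}\), and verifies the triangular-lattice criticality condition \(2e^{2J_{h}}=e^{-2J_{d}}-e^{2J_{d}}\):

```python
import numpy as np

def couplings(q, x):
    # Two-body couplings of the k=2 Ising model, Eqs. (24)-(25);
    # valid for 1/q < x <= 1 (so both log arguments are positive).
    Jd = 0.25 * np.log((q**2 * x**2 - 1) / (q**2 - x**2))
    Jh = 0.25 * np.log(x**2 * (q**2 - 1)**2 / ((q**2 * x**2 - 1) * (q**2 - x**2)))
    return Jd, Jh

def x_crit(q):
    # Critical point: 1/x_c = [(q^2 - 1) + sqrt(2 q^4 + 2)] / (q^2 + 1)
    return (q**2 + 1) / ((q**2 - 1) + np.sqrt(2 * q**4 + 2))

q = 2
xc = x_crit(q)             # ~0.566 for qubits
Jd, Jh = couplings(q, xc)
assert Jd < 0 and Jh > 0   # signs as stated in the text

# Exact criticality condition of the anisotropic triangular-lattice Ising model
lhs = 2 * np.exp(2 * Jh)
rhs = np.exp(-2 * Jd) - np.exp(2 * Jd)
print(xc, lhs, rhs)        # lhs and rhs agree at x = x_c
```

For \(q=2\) this gives \(x_{c}=(\sqrt{34}-3)/5\approx 0.566\), so the criticality condition is indeed satisfied by the quoted closed form.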
This justifies the use of \(x(\mathbf{M})\) as a cost function for the unraveling of \(\Phi\) in the main text.

## Appendix B Maximization of the target function for unital channels

To compute Eq. (14) for the parametrization of Kraus operators in Eq. (17), we first evaluate \(M_{i}^{\dagger}M_{i}\). Omitting the subscript \(i\) for ease of notation, we have \[M^{\dagger}M=(a^{2}+b^{2})\mathbb{I}+2b(a\mathbf{u}_{R}+b\mathbf{u}_{I}\wedge\mathbf{u}_{R})\cdot\mathbf{\sigma}, \tag{11}\] with \(\tilde{\mathbf{u}}=\mathbf{u}_{R}+i\mathbf{u}_{I}\). (We used the identity \(\sigma^{\alpha}\sigma^{\beta}=\sigma^{0}\delta_{\alpha\beta}+i\varepsilon_{\alpha\beta\gamma}\sigma^{\gamma}\).) Letting \(\mathbf{v}\equiv a\mathbf{u}_{R}+b\mathbf{u}_{I}\wedge\mathbf{u}_{R}\), the operator \(M^{\dagger}M\) has eigenvalues \(\lambda_{\pm}=(a^{2}+b^{2})\pm 2b\|\mathbf{v}\|\). It follows that the ratio in the definition of \(x(\mathbf{M})\), Eq. (14), reads \[\frac{\operatorname{Tr}\Bigl{[}(M_{i}^{\dagger}M_{i})^{2}\Bigr{]}}{2\operatorname{Tr}\Bigl{(}M_{i}^{\dagger}M_{i}\Bigr{)}}=\frac{a_{i}^{2}+b_{i}^{2}}{2}+\frac{2b_{i}^{2}\|\mathbf{v}_{i}\|^{2}}{a_{i}^{2}+b_{i}^{2}}. \tag{12}\] Focusing on the second term, we have \(\|\mathbf{v}\|^{2}=a^{2}\cos^{2}(\theta)+b^{2}\cos^{2}(\theta)\sin^{2}(\theta)\sin^{2}(\chi)\), where \(\|\mathbf{u}_{R}\|\equiv\cos(\theta)\) (note \(\tilde{\mathbf{u}}\) is a unit vector, so \(\|\mathbf{u}_{R}\|^{2}+\|\mathbf{u}_{I}\|^{2}=1\)) and \(\chi\) is the angle between \(\mathbf{u}_{R}\) and \(\mathbf{u}_{I}\). Since we aim to maximize \(x(\mathbf{M})\), we can take \(\chi=\pi/2\) and then maximize over \(\theta\) the function \(f(\theta)=a^{2}\cos^{2}(\theta)+b^{2}\cos^{2}(\theta)\sin^{2}(\theta)\).
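The maximization of \(f(\theta)\) can be cross-checked against the piecewise closed form given next in Eq. (13); a quick numerical sketch on a dense \(\theta\) grid:

```python
import numpy as np

def f(theta, a, b):
    # f(theta) = a^2 cos^2(theta) + b^2 cos^2(theta) sin^2(theta)
    c = np.cos(theta) ** 2
    return a**2 * c + b**2 * c * (1 - c)

def f_max(a, b):
    # Piecewise maximum over theta (cf. Eq. (13)):
    #   a > b : attained at theta = 0, value a^2
    #   a <= b: attained at cos^2(theta) = (a^2 + b^2)/(2 b^2),
    #           value (a^2 + b^2)^2 / (4 b^2)
    return a**2 if a > b else (a**2 + b**2) ** 2 / (4 * b**2)

thetas = np.linspace(0, np.pi / 2, 200001)
for a, b in [(0.9, 0.3), (0.3, 0.9), (0.5, 0.5)]:
    grid_max = f(thetas, a, b).max()
    print(a, b, grid_max, f_max(a, b))  # grid maximum matches the closed form
```

Both branches of the closed form agree with the brute-force grid maximum to numerical precision.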
The maximum depends on the ratio of \(a\) and \(b\): \[\max_{\theta}f(\theta)=\begin{cases}a^{2}&\text{if }a>b,\\ \frac{(a^{2}+b^{2})^{2}}{4b^{2}}&\text{if }a\leq b.\end{cases} \tag{13}\] It follows that \[x(\mathbf{M})=1-\frac{1}{2}\sum_{i:a_{i}\geq b_{i}}\frac{(a_{i}^{2}-b_{i}^{2})^{2} }{a_{i}^{2}+b_{i}^{2}}, \tag{14}\] where \(a_{i}\) and \(b_{i}\) are subject to the constraints \(\sum_{i}a_{i}^{2}=p_{0}\) and \(\sum_{i}b_{i}^{2}=1-p_{0}\), Eq. (19). For strong noise, \(p_{0}\leq 1/2\), it is possible to choose \(a_{i}\leq b_{i}\) for all \(i\). This gets rid of the negative term in the sum and gives \(x(\mathbf{M})=1\), i.e. complete purification, which is optimal. However for weak noise (\(p_{0}>1/2\)) we have \(\sum_{i}a_{i}^{2}>\sum_{i}b_{i}^{2}\), so it is not possible to avoid the sum over \(i\) in the above expression. We first handle the case in which \(a_{i}>b_{i}\) for all \(i\). Introducing \(c_{i}=a_{i}^{2}+b_{i}^{2}\) and \(d_{i}=a_{i}^{2}-b_{i}^{2}\), we have \[x(\mathbf{M})=1-\sum_{i}\frac{d_{i}^{2}}{2c_{i}}\leq 1-\frac{(\sum_{i}d_{i})^{2}}{2 \sum_{i}c_{i}} \tag{15}\] where we used Sedrakyan's inequality (i.e. Cauchy-Schwartz applied to the vectors \(\{\sqrt{c_{i}}\}\) and \(\{d_{i}/\sqrt{c_{i}}\}\)). The constraints in Eq. (19) impose \(\sum_{i}c_{i}=1\) and \(\sum_{i}d_{i}=2p_{0}-1\), therefore \[x(\mathbf{M})\leq 1-\frac{(2p_{0}-1)^{2}}{2}. \tag{16}\] This is saturated by setting \(a_{i}=\sqrt{p_{0}/n}\) and \(b_{i}=\sqrt{(1-p_{0})/n}\) for all \(i\). Next, we consider the case in which \(a_{i}\leq b_{i}\) for some \(i\). We use primed sums to denote sums over \(i\) that are restricted only to those indices: \(\sum_{i:\ a_{i}\leq b_{i}}=\sideset{}{{}^{\prime}}{\sum}_{i}\). 
Let us define \(\gamma\equiv\sideset{}{{}^{\prime}}{\sum}_{i}c_{i}\) (we have \(0\leq\gamma\leq 1\), where \(\gamma=0\) recovers the previous case); then, by the same reasoning as above, we have \[x(\mathbf{M})\leq 1-\frac{(2p_{0}-1-\sideset{}{{}^{\prime}}{\sum}_{i}d_{i})^{2}}{2(1-\gamma)} \tag{17}\] Now \(-\sideset{}{{}^{\prime}}{\sum}_{i}d_{i}=\sideset{}{{}^{\prime}}{\sum}_{i}(b_{i}^{2}-a_{i}^{2})\) is non-negative by definition, so \[x(\mathbf{M})\leq 1-\frac{(2p_{0}-1)^{2}}{2(1-\gamma)}, \tag{18}\] where the right-hand side is maximized for \(\gamma=0\). Thus the symmetric solution \(a_{i}=\sqrt{p_{0}/n}\) and \(b_{i}=\sqrt{(1-p_{0})/n}\) for all \(i\) is optimal in general. Finally, note that when \(a_{i}>b_{i}\), the maximum in Eq. (13) is achieved at \(\theta=0\), i.e. \(\|\mathbf{u}_{I}\|=0\); thus in the solution above, the unit vectors are real: \(\tilde{\mathbf{u}}=\mathbf{u}_{R}\).

## Appendix C Measurement-induced phase transition for the optimal weak measurement in 1D random circuits

Here we analyze the measurement-induced phase transition for weak measurements obtained from the optimal unraveling in Eq. (25), compared against stochastic projective measurements in Eq. (8). We consider 1D Haar random brick-wall circuits with periodic boundary conditions and optimal weak measurements applied between layers of unitaries. We use the tripartite mutual information4 Footnote 4: For \(n\neq 1\) this quantity is not a proper mutual information. \[I_{3,n}= S_{n}(A)+S_{n}(B)+S_{n}(C)-S_{n}(A\cup B)\] \[-S_{n}(A\cup C)-S_{n}(B\cup C)+S_{n}(A\cup B\cup C), \tag{10}\] where \(S_{n}(X)\) is the Renyi entropy and \(A,B,C\) are contiguous subsystems of size \(L/4\) (i.e. the system is divided into quarters and three such subsystems are used). \(I_{3,n}\) was argued to be a system-size-independent constant at criticality and thus can be used to accurately locate the critical point [26]. In Fig.
6 we show the \(I_{3,n}\) for \(n=0.8,\ 1.0,\ 2.0\) at late time \(t=4L\), varying the system size \(L\) from \(8\) to \(24\). Since the crossing points drift to lower \(\varepsilon\) as system size increases, we only consider the data collapse for \(L=16,20,24\). The obtained critical point \(\varepsilon_{c,\text{weak}}\approx 0.044\) is smaller than \(\varepsilon_{c,\text{projective}}\approx 0.084\) [26] (note that in our convention \(\varepsilon\) is the noise rate, which is half of the measurement rate used in the context of the measurement-induced phase transition: \(p=2\varepsilon\)). This is consistent with our argument that, for a given noise rate, optimal weak measurements are more disentangling than projective measurements.

Figure 6: (a-c) Tripartite mutual information \(I_{3}\) (shown as \(\log(-I_{3})\)) in 1D random circuits with dephasing noise of strength \(\varepsilon\) unravelled into optimal weak measurements, for Renyi indices (a) \(n=0.8\), (b) \(n=1.0\), and (c) \(n=2.0\). (d-f) Data collapse of \(\log(-I_{3})\) vs \((\varepsilon-\varepsilon_{c})L^{1/\nu}\), with fit parameters indicated. The data are averaged over \(5\cdot 10^{2}-5\cdot 10^{3}\) realizations.

Figure 7: Color map of the data collapse objective function \(R\), Eq. (10). The black contour encloses a region satisfying \(R\leq 1.3\cdot R_{\text{min}}\), which gives an estimation of the error.

## Appendix D Data Collapse

In Sec. IV.2 we determine the critical point \(\varepsilon_{c}\) and correlation length exponent \(\nu\) by finding a data collapse satisfying the scaling ansatz \[\tau(\varepsilon,L_{x})=L_{x}F\left[\left(\varepsilon-\varepsilon_{c}\right)L_{x}^{1/\nu}\right]. \tag{10}\] To quantify the collapse we consider an objective function similar to that of Ref. [26]. We first sort the data by \(x_{i}=\left(\varepsilon_{i}-\varepsilon_{c}\right)L_{x,i}^{1/\nu}\), where \(i=\{1,\cdots,n\}\) labels the sorted data points, and define \(y_{i}=\tau(x_{i})/L_{x,i}\).
Then the objective function \(R(\varepsilon_{c},\nu)\) is defined as \[R(\varepsilon_{c},\nu)=\frac{1}{n-2}\sum_{i=2}^{n-1}\left(y_{i}-\bar{y}_{i}\right)^{2}, \tag{11}\] where \[\bar{y}_{i}=\frac{(x_{i+1}-x_{i})y_{i-1}-(x_{i-1}-x_{i})y_{i+1}}{x_{i+1}-x_{i-1}}, \tag{12}\] is the estimate of \(y_{i}\) obtained by linearly interpolating between \((x_{i-1},y_{i-1})\) and \((x_{i+1},y_{i+1})\). The aim is then to minimize Eq. (11) over \((\varepsilon_{c},\nu)\). As an example, a color plot of \(R\) is shown in Fig. 7 for the data set with \(T=5\) (ABCDB) shown in Fig. 3(c). We estimate the error by considering the region enclosed by \(R=1.3\cdot R_{\text{min}}\), which is shown as a black contour in Fig. 7.

## Appendix E Phase boundary at large \(T\)

Here we present the results of stabilizer simulations of Clifford circuits to study the entanglement phase transition in circuits of large depth \(T\). The goal is to qualitatively study the phase boundary in Fig. 1(d) at values of \(T\) that are beyond what can easily be studied via exact simulation of generic circuits (e.g. \(T=4,5\) in Fig. 3). Due to the restrictions of stabilizer simulation, we cannot unravel depolarizing noise into weak measurements. For this reason we use stochastic projective measurements, bearing in mind that this will give a _larger_ numerical value of the threshold noise strength \(\varepsilon_{c}\) (cf Appendix C). We expect the qualitative behavior of \(\varepsilon_{c}(T)\) to be similar across weak and projective unravelings. We consider Clifford circuits that mimic the models analyzed in Sec. IV.2. To approximate the iSWAP-like gates, we sample two-qubit Clifford gates that are iSWAP with 90% probability and SWAP otherwise (corresponding to a uniform sampling of the _dual-unitary_ half of the two-qubit Clifford group [39]). These gates are sandwiched between random single-qubit Clifford rotations. A projective measurement of \(Z\) is applied with probability \(p=2\varepsilon\) on each qubit after each gate.
Specifically, we simulate the space-wise evolution of the circuit as in e.g. Fig. 2(c). This involves a quasi-1D subsystem which is a strip of length \(L_{x}\) and width \(1+T/4\). As a diagnostic of the phase transition we use the tripartite mutual information \(I_{3}(A:B:C)\) between contiguous regions that make up strips of length \(L_{x}/4\). Results in Fig. 9 show a transition at a critical noise rate \(\varepsilon_{c}(T)\) that increases with \(T\), as expected. We have, e.g., \(\varepsilon_{c}(T=8)\simeq 0.070\), which increases to \(\varepsilon_{c}(T=24)\simeq 0.133\). We additionally simulate the conventional MIPT in square 2D circuits, whose dimensions \(L_{x}=L_{y}=L\) are jointly increased. The tripartite mutual information \(I_{3}\), for a partition of the square into 4 rectangles of size \(L/4\times L\), also shows a transition. We observe \(\varepsilon_{c,2D}\simeq 0.16\), consistent with the value \(p_{c,2D}=0.3116(1)\) reported in Ref. [73] for this circuit architecture5 (recall \(p=2\varepsilon\)). Footnote 5: See Appendix A.1 therein. While the circuit architecture is the same, the gate set is slightly different--random Clifford gates, not restricted to SWAP and iSWAP.

Figure 8: Estimated probability of 10 randomly generated output bitstrings of noisy Clifford circuits of depth \(T=4\), with \(L_{x}=5,L_{y}=5\) (top row) and \(L_{x}=9,L_{y}=9\) (bottom row). Probabilities are estimated by averaging over \(K\) trajectories, with \(K=10^{3}\)-\(10^{5}\) indicated on top. In the top row, the averaged probabilities \(p_{\text{ave}}\) (from noisy-SEBD and stabilizer simulations) are normalized by the exact value obtained from MPO simulation. In the bottom row, the averaged probabilities \(p_{\text{ave}}\) from noisy-SEBD are normalized by the average of \(10^{6}\) trajectories of stabilizer simulations. The ratios converge towards 1 (horizontal lines) with increasing \(K\).

Finally, in Fig.
10 we compare the observed critical points for shallow circuits of depth \(T\), \(\varepsilon_{c}(T)\), with the 2D MIPT \(\varepsilon_{c,2D}\). We see good agreement with the conjectured form \(\varepsilon_{c}(T)\simeq\varepsilon_{c,2D}+O(1/T)\), sketched in Fig. 1(d).

## Appendix F Benchmarks

Here we benchmark our noisy-SEBD algorithm against stabilizer simulations of Clifford random circuits and MPO simulation with controlled errors. Since the depolarizing channel with general \(\varepsilon\) is not a Clifford operation, we consider the probabilistic trace setup [80], i.e. replacing the depolarizing channel by a probabilistic mixture of an identity operation with probability \(1-\frac{4}{3}\varepsilon\) and a trace channel (or erasure) \(\rho\mapsto(\mathbb{I}/2)_{i}\otimes\operatorname{Tr}_{i}\rho\) with probability \(\frac{4}{3}\varepsilon\). With this replacement, each instance/trajectory in this random ensemble is classically simulatable [48], and on average reproduces the effect of the depolarizing noise with strength \(\varepsilon\). We first consider the noisy sampling problem in the architecture given in Sec. IV.2 with \(\varepsilon=0.02\), \(L_{x}=5\), \(L_{y}=5\) and \(T=4\). In this case the system size is small enough that direct MPO simulation of the density matrix can be implemented with small error and can thus serve as a benchmark for noisy-SEBD. In the upper row of Fig. 8, we show the averaged final probability \(P_{\mathcal{N}}(\mathbf{z})\) of 10 randomly chosen output bitstrings \(\mathbf{z}\) from noisy Clifford circuits, varying the number of sampled trajectories \(K\) from \(10^{3}\) to \(10^{5}\) (note we unravel depolarizing noise into probabilistic erasure for the stabilizer simulation, and into weak measurements for noisy-SEBD; we use the same \(K\) for both methods). The probabilities \(P_{\mathcal{N}}(\mathbf{z})\) are scaled by the exact reference value obtained from MPO simulation, whose truncation error is kept below \(10^{-10}\).
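The probabilistic-trace replacement above can be verified in a few lines: averaged over trajectories, the identity/erasure mixture reproduces the single-qubit depolarizing channel exactly. (The depolarizing convention \(\rho\mapsto(1-\varepsilon)\rho+\frac{\varepsilon}{3}\sum_{P}P\rho P\) is assumed here, consistent with the \(1-\frac{4}{3}\varepsilon\) identity probability quoted above.)

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, eps):
    # Single-qubit depolarizing channel:
    # rho -> (1 - eps) rho + (eps/3) (X rho X + Y rho Y + Z rho Z)
    return (1 - eps) * rho + (eps / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

def probabilistic_trace(rho, eps):
    # Channel average of the trajectory ensemble: identity with probability
    # 1 - 4 eps/3, erasure rho -> (I/2) tr(rho) with probability 4 eps/3.
    p = 4 * eps / 3
    return (1 - p) * rho + p * np.trace(rho) * np.eye(2) / 2

rho = np.array([[0.6, 0.1 + 0.2j], [0.1 - 0.2j, 0.4]])
for eps in [0.005, 0.02, 0.1]:
    assert np.allclose(depolarize(rho, eps), probabilistic_trace(rho, eps))
```

The agreement follows from \(\sum_{P}P\rho P=2\mathbb{I}\operatorname{tr}\rho-\rho\), which makes both maps equal to \((1-\frac{4}{3}\varepsilon)\rho+\frac{2}{3}\varepsilon\,\mathbb{I}\operatorname{tr}\rho\).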
As the trajectory number increases, one can see both SEBD and Clifford simulation show a good convergence to the exact result. We then consider the same circuit architecture but with \(L_{x}=9\), \(L_{y}=9\), where exact MPO computation would require large computational effort. Therefore, in this case we directly compare SEBD results against the Clifford results (with \(10^{6}\) trajectories for the latter). Results are shown in the bottom row of Fig. 8, again displaying good agreement. ## Appendix G Converting gate fidelity to noise rate The convention for noise strength \(\varepsilon\) used in this work is in terms of single-qubit channels as in Eq. (6). The usual figure of merit for two-qubit gates is the average fidelity \(f\). To convert between the two, we note that \[f =\int\mathrm{d}\psi\ \langle\psi|\,\Phi^{\otimes 2}[|\psi \rangle\!\langle\psi|]\,|\psi\rangle\] \[=(1-\varepsilon)^{2}+[1-(1-\varepsilon)^{2}]\int\mathrm{d}\psi \ \langle\psi|\,P\,|\psi\rangle^{2} \tag{10}\] where the integral is over the Haar measure on the two-qubit Hilbert space, \(P\) is any traceless Pauli operator, and the formula applies equally to dephasing and depolarizing noise (in fact to any unital noise channel upon setting \(1-\varepsilon\mapsto p_{0}\), cf Eq. (16)). The Haar integral yields Figure 10: Noise threshold for the entanglement phase transition as a function of circuit depth \(T\) (data from Fig. 9). Dashed line indicates a linear fit of datapoints \(T=16,20,24\) to \(a+b/T\); the extrapolation to \(T=\infty\) is in good agreement with the 2D MIPT datapoint. Figure 9: Entanglement phase transition in Clifford circuits with dephasing noise of strength \(\varepsilon\). First 5 panels refer to quasi-1D subsystems that mimic the noisy-SEBD simulation of 2D circuits of depth \(T=8,12,16,20,24\) (\(T=4\), not shown, is found to be in the area-law phase for all \(\varepsilon\)). 
Last panel shows data for the conventional MIPT in 2D square lattices of size \(L\times L\), \(L=8,16,32\), with circuits of depth \(O(L)\). Vertical dashed lines indicate estimates of the critical point. Data obtained by averaging between \(4\times 10^{2}\) and \(2\times 10^{4}\) realizations of the random circuits depending on system size.
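For completeness, the Haar average \(\int\mathrm{d}\psi\,\langle\psi|P|\psi\rangle^{2}\) entering Eq. (10) can be estimated numerically by sampling normalized complex-Gaussian amplitudes, a standard recipe for Haar-random states; for a traceless Pauli in dimension \(d\) the exact value is \(1/(d+1)\), i.e. \(1/5\) for two qubits. An illustrative Monte Carlo sketch (not part of the paper's numerics):

```python
import random

def haar_state(dim, rng):
    """Sample a Haar-random pure state as normalized complex Gaussian amplitudes."""
    amps = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(dim)]
    norm = sum(abs(a) ** 2 for a in amps) ** 0.5
    return [a / norm for a in amps]

rng = random.Random(1)
# Take P = Z (x) I on two qubits: a traceless Pauli with diagonal (+1, +1, -1, -1).
diag = [1, 1, -1, -1]
n = 50_000
est = 0.0
for _ in range(n):
    psi = haar_state(4, rng)
    expval = sum(d * abs(a) ** 2 for d, a in zip(diag, psi))
    est += expval ** 2 / n
print(round(est, 3))  # close to 1/(d+1) = 0.2 for d = 4
```

Plugging \(1/5\) into Eq. (10) then gives the stated conversion between gate fidelity \(f\) and single-qubit noise rate \(\varepsilon\).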
2308.00943
IIDS: Design of Intelligent Intrusion Detection System for Internet-of-Things Applications
With rapid technological growth, security attacks are drastically increasing. In many crucial Internet-of-Things (IoT) applications such as healthcare and defense, the early detection of security attacks plays a significant role in protecting huge resources. An intrusion detection system is used to address this problem. The signature-based approaches fail to detect zero-day attacks. So anomaly-based detection, particularly using AI tools, is becoming popular. In addition, an imbalanced dataset leads to biased results. In Machine Learning (ML) models, the F1 score is an important metric to measure the accuracy of class-level correct predictions. The model may fail to detect the target samples if the F1 score is considerably low. This can lead to unrecoverable consequences in sensitive applications such as healthcare and defense. So, any improvement in the F1 score has a significant impact on resource protection. In this paper, we present a framework for an ML-based intrusion detection system for an imbalanced dataset. In this study, the most recent dataset, namely CICIoT2023, is considered. The random forest (RF) algorithm is used in the proposed framework. The proposed approach improves precision, recall and F1 score by 3.72%, 3.75% and 4.69%, respectively, compared with the existing method. Additionally, for unsaturated classes (i.e., classes with F1 score < 0.99), the F1 score improved significantly by 7.9%. As a result, the proposed approach is more suitable for IoT security applications for efficient detection of intrusion and is useful in further studies.
KG Raghavendra Narayan, Srijanee Mookherji, Vanga Odelu, Rajendra Prasath, Anish Chand Turlapaty, Ashok Kumar Das
2023-08-02T04:52:41Z
http://arxiv.org/abs/2308.00943v1
# _IIDS_: Design of Intelligent Intrusion Detection System for Internet-of-Things Applications
###### Abstract
With rapid technological growth, security attacks are drastically increasing. In many crucial Internet-of-Things (IoT) applications such as healthcare and defense, the early detection of security attacks plays a significant role in protecting huge resources. An intrusion detection system is used to address this problem. The signature-based approaches fail to detect zero-day attacks. So anomaly-based detection, particularly using AI tools, is becoming popular. In addition, an imbalanced dataset leads to biased results. In Machine Learning (ML) models, the \(F_{1}\) score is an important metric to measure the accuracy of class-level correct predictions. The model may fail to detect the target samples if the \(F_{1}\) score is considerably low. This can lead to unrecoverable consequences in sensitive applications such as healthcare and defense. So, any improvement in the \(F_{1}\) score has a significant impact on resource protection. In this paper, we present a framework for an ML-based intrusion detection system for an imbalanced dataset. In this study, the most recent dataset, namely \(CICIoT2023\), is considered. The random forest (RF) algorithm is used in the proposed framework. The proposed approach improves precision, recall and \(F_{1}\) score by \(3.72\)%, \(3.75\)% and \(4.69\)%, respectively, compared with the existing method. Additionally, for unsaturated classes (i.e., classes with \(F_{1}\) score \(<0.99\)), the \(F_{1}\) score improved significantly by \(7.9\)%. As a result, the proposed approach is more suitable for IoT security applications for efficient detection of intrusion and is useful in further studies.
Feature Selection, Class Balancing, Machine Learning, Intrusion Detection, Internet-of-Things, Security.
## I Introduction
In recent years, the Internet of Things (IoT) has been widely used in many applications such as healthcare, defense, automation, and smart cities.
By 2030, the number of IoT devices worldwide is expected to reach approximately \(29\) billion [1]. Alongside this IoT growth, intrusion attacks are also on the rise. For example, according to a recent report by NOKIA, around one million devices were involved in DDoS attacks in 2023 [2]. Therefore, the design of a model for early detection of attacks becomes an emerging and challenging problem [3]. In 1987, Denning [4] introduced the concept of an Intrusion-Detection System (IDS) that aims at early detection of possible attacks against information systems. After Denning's seminal work, many approaches have been presented in the literature, such as signature-, statistical- and anomaly-based ones [5, 6, 7, 8]. Comparatively, the anomaly-based approaches are more effective in detecting intrusion attacks, including zero-day attacks [9, 10]. In anomaly-based detection, Machine Learning (ML) and Deep Learning (DL) models are attracting more attention. The efficiency of these models depends mainly on the class distribution in the dataset [11, 12]. IoT-based IDS datasets such as _Bot-IoT_, _IoT-23_, and \(CICIoT2022\) contain various attacks that affect IoT networks [13]. However, in most datasets, an extensive network topology with real IoT devices is not considered during data collection. \(CICIoT2023\) [13] is a real IoT attack dataset that includes an expansive topology involving \(105\) real IoT devices. The \(33\) attack types are identified and grouped into seven high-level classes: DoS, DDoS, Web-based, Recon, spoofing, Mirai, and brute force. Here, malicious IoT devices perform all attacks, specifically targeting other IoT devices. In addition, the dataset encompasses various attacks not found in other IoT datasets. The benign data-capturing procedure focuses on collecting IoT traffic during idle states and with human interactions such as echo dot, sensor data, and smart camera video feeds.
Since the dataset covers many possible attacks from real-time network scenarios, studying the performance of various ML algorithms on such datasets facilitates better IDS solutions. In the IoT literature, there are many studies on various imbalanced datasets. In 2022, Elghalhoud et al. [14] used random oversampling (ROS) to balance datasets in their proposed model on _BoT-IoT_ and _ToN-IoT_. Improvements are observed in various performance metrics such as recall, \(F_{1}\) score and Area Under Curve (AUC). Rashid et al. [15] presented another framework to study imbalanced IoT datasets such as _DS2OS_ and _Contiki_ using the Synthetic Minority Oversampling Technique. They observed considerable enhancements in \(F_{1}\) score. In 2023, Bowen et al. [16] studied imbalanced datasets including the IoT attack dataset _IoT-23_. Notable improvements are observed in the performance metrics \(F_{1}\) score and recall. In conclusion, the application of class balancing techniques considerably improves classification performance. In real-life applications, including IoT, security is a crucial and sensitive issue. Even a small improvement in the early detection of zero-day attacks significantly impacts resource protection. The performance of ML models is analyzed using metrics such as Accuracy, Precision, Recall and \(F_{1}\) score. Note that the \(F_{1}\) score is an important metric in measuring the model accuracy in terms of class-level correct predictions. For example, consider a dataset with \(1000\) samples: \(960\) non-target samples and \(40\) target samples. Assume that a model predicts the non-target samples correctly. Suppose the confusion matrix from the model is as follows: true negatives (TN) are \(960\), true positives (TP) are \(1\), false negatives (FN) are \(39\) and false positives (FP) are \(0\).
Then, we can observe that the accuracy of the model is \(96.1\)% and the precision is \(100\)%, but the recall is only \(2.5\)%, that is, the model can detect only \(2.5\)% of the target samples. This is critical: even though the model accuracy seems high, the recall and \(F_{1}\) score (\(4.9\)%) are extremely low. It indicates that the model is nearly failing to detect the target samples. Throughout the paper, we name the classes with less than \(99\)% \(F_{1}\) score "Unsaturated Classes (USC)". In security applications, such as healthcare and defense, false detection of target samples may lead to unrecoverable consequences. Therefore, the \(F_{1}\) score is an important metric in the performance analysis of the model. Hence, in this paper, we mainly consider the \(F_{1}\) score to treat the imbalanced nature of the samples.
## II Data Description and Observations
\(CICIoT2023\) is a benchmark dataset designed by the Canadian Institute for Cybersecurity (CIC) to evaluate large-scale attacks within the IoT environment. The dataset is publicly available at the University of New Brunswick (UNB). It consists of \(46686579\) samples collected from \(105\) IoT devices. There are \(33\) attack classes and one benign class in the dataset, with \(46\) features. The class-wise data distribution of \(CICIoT2023\) is depicted in Fig. 1. From Fig. 1, it is observed that the dataset has skewed sample sizes. For example, the class \(DDoS-ICMP\_Flood\) has the largest sample size and the class \(Uploading\_Attack\) has the smallest, with a ratio of \(5751:1\). When the training data has a skewed sample size distribution, the model under-performs on the minority class instances [11]. So it is essential to perform data/class balancing to avoid biased results.
## III Methodology
The architecture of the proposed framework is depicted in Fig. 2. The proposed framework consists of three stages, namely (i) feature selection, (ii) class balancing, and (iii) classification and assessment.
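As a quick aid, the metric values quoted in the worked example of the Introduction (TN = 960, TP = 1, FN = 39, FP = 0) can be reproduced with a few lines of Python; the helper below is purely illustrative and not part of the proposed framework:

```python
def classification_metrics(tp, fp, fn, tn):
    """Confusion-matrix metrics for a binary target class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# The 1000-sample example from the Introduction:
# 960 true negatives, 1 true positive, 39 false negatives, 0 false positives.
acc, prec, rec, f1 = classification_metrics(tp=1, fp=0, fn=39, tn=960)
print(f"accuracy={acc:.1%} precision={prec:.1%} recall={rec:.1%} F1={f1:.1%}")
# -> accuracy=96.1% precision=100.0% recall=2.5% F1=4.9%
```

The near-perfect accuracy alongside the 4.9% \(F_{1}\) score is exactly the failure mode the USC analysis targets.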
The first stage consists of the selection of an optimal feature subset from the dataset. In the second stage, the data is split into training and testing sets, and then data balancing techniques are applied on the training set. Finally, in the third stage, the ML methods are applied on the pre-processed data and the classification performance is evaluated. The details of each stage of the methodology are discussed in the following.
**Feature Selection**: In the first stage, feature subsets are selected using the following methods:
1. **CfsSubsetEval (CFS)** [17] using the Weka tool [18]: It is a correlation-based feature selection method. The subset is assessed based on the predictive ability and the degree of redundancy of its features. Thus, the features with higher correlation with the class labels and lower inter-correlation are selected. The first approach involves employing the CFS feature subset selection to obtain the top \(k\) features from the existing feature set, resulting in a subset of six features out of the \(46\) original features.
Fig. 1: Class distribution of the \(CICIoT2023\) dataset
2. **Intersection of RFE and MRMR methods (IRM):** In this IRM approach, the \(k\) best features are first selected through Random Forest (RF) with Recursive Feature Elimination (RFE). In the second step, another set of \(k\) best features is chosen using the minimum Redundancy Maximum Relevance (mRMR) [19] technique. From these two subsets, the top \(25\) features each are chosen and an intersection of \(11\) features is determined.
**Class Balancing**: In this stage, the selected dataset is pre-processed using the sklearn (scikit-learn.org) standard scaler to normalise the dataset to unit variance. The \(CICIoT2023\) dataset is split into train and test sets with an \(80:20\) ratio. In the analysis of the dataset, it is observed that there is a significant imbalance among the categories of attacks.
Hence, balancing techniques are deployed to improve the performance for minority classes without losing performance on the majority classes. The following class balancing techniques are applied on the train set.
1. Random Oversampling (ROS) [20] artificially increases the number of samples in the minority classes. This process includes randomly duplicating instances from the minority class until it achieves the desired balance with the majority classes.
2. Balanced Random Forest Classifier sampling (BRFC) [21] improves upon the standard Random Forest algorithm for handling imbalanced datasets. In a typical Random Forest algorithm, decision trees are trained on bootstrapped samples, which may favour the majority class in an imbalanced dataset. BRFC addresses this by adjusting the bootstrapping process to create balanced samples with an equal number of instances for both classes. This allows the ensemble to prioritize the minority class, enhancing performance on imbalanced datasets.
The above sampling methods convert the train set of size \(N_{tr}\times k\) into a balanced dataset \(D\) of dimensions \(N_{R}\times k\), where \(N_{R}\) is the sample size of the balanced data. In dataset \(D\), each class has \(N_{L}\) samples, where \(N_{L}=N_{R}/N_{c}\) and \(N_{c}\) is the number of classes.
**Classification and Assessment**: In the third stage, the balanced dataset \(D\) is used to train the random forest models. The trained ML model is used to determine the class labels of the test set. A step-by-step process of the proposed framework is given in Algorithm 1. Different ML models and frameworks are evaluated based on the ML metrics given below. The metrics used in this research are: 1) Precision, 2) Recall, 3) \(F_{1}\) score, 4) Accuracy and 5) Cohen Kappa score.
```
0: Dataset with \(n\) features \(D=\{df_{1},df_{2},df_{3},\cdots,df_{n}\}\)
Description:
1: for \(df_{i}\in D\) do
2:   Data Normalization: scaling to unit variance
3: end for
4: for \(df_{i}\in D\) do
5:   Identify \(k\) best features from \(D\)
6:   Feed the data with selected features, \(SF=\{sf_{1},sf_{2},sf_{3},\cdots,sf_{k}\}\), to ML classifiers
7: end for
8: for each \(sf_{j}\in SF\) do
9:   Apply class balancing
10:  return class-balanced data
11: end for
12: for each class-balanced dataset \(\in SF\) do
13:  Feed to ML classifiers
14: end for
15: \(y\in\) classification output \(\{c_{1},c_{2},\cdots,c_{m}\}\)
```
**Algorithm 1** The Proposed Intelligent Intrusion Detection System (IIDS)
## IV Implementation
Following the feature selection and class balancing stages, in the classification stage the random forest algorithm is applied on the normalized set of selected and balanced features from the \(CICIoT2023\) dataset. To demonstrate the efficacy of the proposed architecture, the following frameworks (FW) are implemented and compared.
* **FW1**: Base model [13], consisting of the original \(46\) features from the CIC dataset analyzed with the random forest algorithm.
* **FW2**: Application of the class balancing methods on FW1. Specifically, the ROS and BRFC methods described in Sec. III are applied in conjunction with FW1.
* **FW3**: FW1 with the feature selection methods of Sec. III.
Fig. 2: Architecture of the proposed framework
Fig. 3: Proposed performance comparison in terms of \(F_{1}\) score
* **Focus on Unsaturated Classes (USC):** In this study, a number of events (classes) are identified as unsaturated classes (USC) for which the \(F_{1}\) score \(<0.99\) with the base model. For the complementary set of classes, saturated classes (SC) ( \(F_{1}\) score \(\geq 0.99\)) the performance may not improve as much and will be presented in the results section. Hence the focus is on the USC where the performance can be improved with various methods. The performance improvement for this set of classes is analyzed based on the class specific \(F_{1}\) score. ## V Results and Analysis As discussed in the Sec. IV-A, the number of classes considered for categorization is dependent on the nature of attacks. Based on the kind of attacks, \(33\) attacks are grouped into the following seven categories: DoS, DDoS, Web-based, Recon, spoofing, Mirai, and bruteforce. Since each of these categories are attack related, for the binary classification, they are combined as a unified attack class [13]. The following analyses is based on the results for these three levels of categorization. ### _Analysis on ML Frameworks_ In the experiment 1, the results obtained from the four frameworks with variations as discussed in Sec. IV with the Random Forest algorithm are presented in Table I. Note the Random Forest algorithm is chosen as it outperforms other models as discussed in the [13]. From Table I, there is a minor improvement in the FW2 Base + RoS in comparison with FW1. There is 2.62% improvement in recall score with ROS. However, there is a little improvement in the precision, \(F_{1}\) score, accuracy and kappa. Thus the Balancing methods alone may not provide significant improvement. **FW3 vs. FW1:** In the FW3, the two feature subset selection techniques are applied separately on the dataset. In the first approach, the CFS identified \(6\) best features out of \(46\) features. 
The results of FW3: CFS, i.e., the classification metrics corresponding to the RF classifier on these 6 CFS features, are shown in Table I. In the second feature selection approach, the top 25 features from RF with RFE and another set of \(25\) best features using the mRMR technique are selected. Next, an intersection of these subsets consisting of \(11\) features is chosen. The results of the RF model obtained from RF_MRMR are listed in Table I. For FW3: CFS, there is a \(3.3\)%, \(2.93\)% and \(4.16\)% overall improvement over the base model FW1 in terms of precision, recall and \(F_{1}\) score, respectively. Note that the recall for FW3: RF_MRMR is better than that of FW3: CFS. The accuracy and kappa measures are almost the same for these two variations. From this comparison, it is observed that FW3: CFS outperforms FW3: RF_MRMR.
**FW4 vs. Other Frameworks:** For FW4 (the proposed method), the results obtained from the four combinations of the feature sets and the class balancing methods are listed in Table I. In comparison with FW1, FW4: CFS+ROS outperforms in terms of precision and \(F_{1}\) score. With FW4: CFS+ROS, there is an improvement of \(3.45\)%, \(2.87\)%, and \(4.3\)% in comparison with the base model in terms of precision, recall, and \(F_{1}\) score, respectively. The recall of FW4: RF_MRMR+ROS is \(0.47\)% better than that of CFS+ROS. The accuracy and kappa measures are similar for these variations. Again, in comparison with FW1, FW4: CFS+BRFC outperforms in terms of precision and \(F_{1}\) score. With FW4: CFS+BRFC, there is an improvement of \(3.72\)%, \(3.75\)%, and \(4.69\)% over the base model FW1 in terms of precision, recall, and \(F_{1}\) score, respectively. The recall for FW4: RF_MRMR+BRFC is \(0.72\)% better than that of FW4: CFS+BRFC. From these observations, the FW4: CFS+BRFC approach performs well with only \(6\) features compared to the base model and other frameworks.
### _Analysis on Unsaturated Classes_
Fig.
3 illustrates the \(F_{1}\) scores of the chosen variations among the four frameworks: (1) FW1; (2) FW2: BRFC; (3) FW3: CFS; and (4) FW4: CFS+BRFC. The \(F_{1}\) score generally improves in FW3 and FW4 in comparison with FW1. Note that the set of classes following _DDoS_ACK_Fragment_ has an \(F_{1}\) score of at least \(0.99\) and, based on the definition of USC in the Introduction, constitutes the saturated classes. The rest of the classes, to the left of _DDoS_ACK_Fragment_, constitute the USC. The focus in terms of \(F_{1}\) gain is on this set of USC. The inset in Fig. 4 clearly shows the \(F_{1}\) of the USC for the above-mentioned four framework variations. For the unsaturated classes, the average gain in \(F_{1}\) score for a framework with respect to FW1 is given in Fig. 4. A maximum gain of \(7.9\)% is achieved with FW4: CFS+BRFC. Hence, CFS plays a critical role in improving the \(F_{1}\) of the unsaturated classes, by \(7.04\)%. Its combination with the BRFC class balancing method provides a further improvement of \(0.9\)%.
#### V-B1 Relation to sample size
There is an intersection between the unsaturated classes and the minority classes (classes with relatively small sample sizes). For instance, the sample size of \(DictionaryBruteForce\), a minority class, is \(13064\). For this class, the \(F_{1}\) score is \(5.6\)% for FW1, \(46.27\)% for FW3: CFS, and \(48.31\)% for FW4: CFS+BRFC. Thus, FW4 improves by \(42.71\)% with respect to FW1, as shown in Fig. 4. Similarly, other minority classes \(BrowserHijacking\) and \(SqlInjection\) have sample sizes \(5859\) and \(5245\) respectively, i.e., nearly \(0.01\)% of the overall sample size. The \(F_{1}\) scores for FW1 are \(13.62\)% and \(0.3\)%, while for FW4: CFS+BRFC they are \(40.06\)% and \(17.6\)%. Hence, as shown in Fig. 4, the \(F_{1}\) gains for these two classes are \(26.44\)% and \(17.3\)%.
#### V-B2 Importance in Intrusion Detection
The results presented in Fig.
4 illustrate the improvement in the \(F_{1}\) score for the unsaturated classes. The \(F_{1}\) score is an important metric in security applications. The following observations illustrate the importance of improved prediction of specific attack classes: a) Identifying \(DictionaryBruteForce\) attacks on IoT devices is of utmost importance for ensuring the security and integrity of the devices, protecting user data, and ensuring the overall safety of the IoT ecosystem. b) Further, it is crucial to identify and address \(BrowserHijacking\) on IoT devices to safeguard the security, privacy, and data of users while preserving the integrity and reputation of device manufacturers and service providers. c) Additionally, identifying \(SqlInjection\) vulnerabilities in IoT devices plays a crucial role in safeguarding data, ensuring device integrity, protecting networks, and preserving the trust of consumers and stakeholders in the IoT ecosystem. It is an essential step toward establishing a secure and resilient IoT infrastructure.
## VI Conclusion and Future Work
In this paper, we studied the impact of feature selection and class balancing techniques on machine learning algorithms using the \(CICIoT2023\) dataset. We analysed the performance of the proposed \(IIDS\) with ML algorithms. Compared with the base model, the proposed IIDS improves precision, recall, and \(F_{1}\) score by \(3.72\)%, \(3.75\)%, and \(4.69\)%, respectively. In addition, with the unsaturated-class analysis, we obtained a significant improvement of \(7.9\)% compared to the base model. Finally, it is concluded that the combination of CFS feature selection and BRFC class balancing outperformed the other frameworks.
**Future work:** In future work, we aim to extend our study of feature selection and class balancing techniques and how they influence ML-based models for imbalanced datasets, particularly in intrusion detection systems for IoT applications.
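As a supplement to the class-balancing stage of Sec. III, the random oversampling (ROS) step can be sketched in a few lines of plain Python; this is an illustrative re-implementation of the idea (duplicate minority-class samples at random until every class matches the majority count), not the exact routine used in the experiments:

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until every class
    matches the majority-class count (plain-Python ROS sketch)."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picks = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picks)
        out_y.extend([y] * target)
    return out_x, out_y

# Toy example: 4 benign samples vs. 1 attack sample (hypothetical data).
X = [[0.1], [0.2], [0.3], [0.4], [0.9]]
y = ["benign", "benign", "benign", "benign", "attack"]
Xb, yb = random_oversample(X, y)
print(Counter(yb))  # each class now has 4 samples
```

In practice, such balancing is applied only to the training split (as in Sec. III), so that test-set class proportions remain untouched.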
2301.02487
Watching your call: Breaking VoLTE Privacy in LTE/5G Networks
Voice over LTE (VoLTE) and Voice over NR (VoNR) are two similar technologies that have been widely deployed by operators to provide a better calling experience in LTE and 5G networks, respectively. The VoLTE/NR protocols rely on the security features of the underlying LTE/5G network to protect users' privacy such that nobody can monitor calls and learn details about call times, duration, and direction. In this paper, we introduce a new privacy attack which enables adversaries to analyse encrypted LTE/5G traffic and recover any VoLTE/NR call details. We achieve this by implementing a novel mobile-relay adversary which is able to remain undetected by using an improved physical layer parameter guessing procedure. This adversary facilitates the recovery of encrypted configuration messages exchanged between victim devices and the mobile network. We further propose an identity mapping method which enables our mobile-relay adversary to link a victim's network identifiers to the phone number efficiently, requiring a single VoLTE protocol message. We evaluate the real-world performance of our attacks using four modern commercial off-the-shelf phones and two representative, commercial network carriers. We collect over 60 hours of traffic between the phones and the mobile networks and execute 160 VoLTE calls, which we use to successfully identify patterns in the physical layer parameter allocation and in VoLTE traffic, respectively. Our real-world experiments show that our mobile-relay works as expected in all test cases, and the VoLTE activity logs recovered describe the actual communication with 100% accuracy. Finally, we show that we can link network identifiers such as International Mobile Subscriber Identities (IMSI), Subscriber Concealed Identifiers (SUCI) and/or Globally Unique Temporary Identifiers (GUTI) to phone numbers while remaining undetected by the victim.
Zishuai Cheng, Mihai Ordean, Flavio D. Garcia, Baojiang Cui, Dominik Rys
2023-01-06T12:49:41Z
http://arxiv.org/abs/2301.02487v1
# Watching your call: Breaking VoLTE Privacy in LTE/5G Networks
###### Abstract.
Voice over LTE (VoLTE) and Voice over NR (VoNR) are two similar technologies that have been widely deployed by operators to provide a better calling experience in LTE and 5G networks, respectively. The VoLTE/NR protocols rely on the security features of the underlying LTE/5G network to protect users' privacy such that nobody can monitor calls and learn details about call times, duration, and direction. In this paper, we introduce a new privacy attack which enables adversaries to analyse encrypted LTE/5G traffic and recover any VoLTE/NR call details. We achieve this by implementing a novel mobile-relay adversary which is able to remain undetected by using an improved physical layer parameter guessing procedure. This adversary facilitates the recovery of encrypted configuration messages exchanged between victim devices and the mobile network. We further propose an identity mapping method which enables our mobile-relay adversary to link a victim's network identifiers to the phone number efficiently, requiring a single VoLTE protocol message. We evaluate the real-world performance of our attacks using four modern commercial off-the-shelf phones and two representative, commercial network carriers. We collect over 60 hours of traffic between the phones and the mobile networks and execute 160 VoLTE calls, which we use to successfully identify patterns in the physical layer parameter allocation and in VoLTE traffic, respectively. Our real-world experiments show that our mobile-relay works as expected in all test cases, and the VoLTE activity logs recovered describe the actual communication with 100% accuracy. Finally, we show that we can link network identifiers such as International Mobile Subscriber Identities (IMSI), Subscriber Concealed Identifiers (SUCI) and/or Globally Unique Temporary Identifiers (GUTI) to phone numbers while remaining undetected by the victim.
VoLTE privacy, mobile-relay attack, 5G security, LTE security + Footnote †: journal: Information Systems
Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: Journal: Information Systems + Footnote †: journal: Journal: 
The relay first picks up and demodulates the radio signal to bits, then re-modulates the bits and transmits them onward using the proper radio resources (e.g., carrier frequency and transmission time), whereas a repeater only amplifies the power of the signal and operates solely at the physical layer. Several other attacks have been proposed that are able to tamper with, recover, or _fingerprint_ data transmitted over-the-air. Attacks that tamper with Internet data, recover voice data, and _impersonate_ users are proposed by Rupprecht et al. (Rupprecht et al., 2017; Rupprecht et al., 2018; Rupprecht et al., 2018). In contrast, several weaker attacks (Rupprecht et al., 2018; Rupprecht et al., 2018) are proposed to _fingerprint_ a victim's data, which can monitor a victim's activities such as browsing websites and watching videos. These attacks significantly break the privacy requirements of LTE/5G, which demand that no one be able to monitor users' activities.

In this paper, we present the first study focused on the analysis of encrypted VoLTE traffic, covering both signalling data (the VoLTE messages exchanged between a UE and the IMS) and voice data (voice activity observed in windows of 20 ms). These insights allow us to monitor specific VoLTE activities, enabling us to learn the conversation states of targeted victims and their relationships with other victims while they are located in one or more areas, e.g., victim A calls victim B at a time T and talks for the majority of the conversation.

### Contributions

We develop, deploy and test a novel LTE/5G mobile-relay, based on open-source software and commercial off-the-shelf (COTS) hardware, significantly improving on existing work (Rupprecht et al., 2017).
Using this relay, which allows us to intercept and monitor connections between victim UEs and commercial eNodeBs, we show:

1. The first privacy attack that targets encrypted LTE and 5G-SA traffic to extract VoLTE activity logs describing call times, duration, and speaker direction for users in mobile networks.
2. A novel and efficient identity-mapping method that links phone numbers to LTE and 5G-SA network identifiers. Our attack is completely undetectable when used to link phone numbers to temporary identifiers, and has minimal protocol interference when linking them to permanent ones.
3. Several physical layer improvements to the mobile-relay adversary, which greatly improve the effectiveness of this attacker.

We evaluate the feasibility of the contributions above by testing them with four COTS phones on two major commercial carriers.

## 2. Preliminaries

In this section, we give an overview of the main technologies investigated in this paper.

### LTE/5G network communication

From a high-level view, as previously stated, LTE and 5G networks consist of three main components: the user equipment (UE), the eNodeB, and the evolved packet core (EPC). The EPC contains all the software and hardware components that provide necessary functionalities such as data and voice communication services between UEs, authentication, and billing. Communication between these three entities is done differently depending on requirements and location, as shown in Fig. 1. Given that both the eNodeB and the EPC are components of the carrier's network infrastructure, security there is mostly ensured through physical means, such as wired connections transporting the S1 Application Protocol (S1AP) messages. The radio link between the UE and the eNodeB, on the other hand, is susceptible to interception and interference from any number of actors and therefore has more security and reliability features built in.
While an attacker that wants to target specific services running inside the EPC can consider both of these links viable, the radio link provides a significantly more accessible and less tamper-evident entry point, provided the security features can be circumvented. We continue by presenting a brief overview of the protocol layers used on the radio access link, which is the one targeted by our mobile-relay adversary.

**LTE/5G radio access architecture.** LTE and 5G protocols use a wide range of frequency bands located from 1 GHz to 6 GHz, plus mmWaves (30-300 GHz) in the new 5G standard. Data modulation and encoding on these frequencies are handled at the physical layer (PHY) of the protocol and can be done using Frequency-Division Duplex (FDD), Time-Division Duplexing (TDD) or FDD Supplemental Downlink (SDL). The Medium Access Control (MAC) layer is the first logical layer of the protocol stack and is responsible for exchanging measurements and parameters such as channel quality indicators and modulation schemes, which are used to adjust the PHY layer and ensure the best quality of communication. The Radio Link Control (RLC) layer sits above the MAC layer and provides the necessary error correction, segmentation and broadcast capabilities to the layers above. The Packet Data Convergence Protocol (PDCP) layer handles cryptographic keys and provides encryption and integrity protection to the layers above. This is particularly important in an adversarial setting because all traffic encapsulated in PDCP packets (such as VoLTE traffic) is at least encrypted. Finally, the network layer is formed of three sub-layers: (1) the Radio Resource Control (RRC) sub-layer, which connects the UE to the eNodeB and facilitates the exchange of configuration messages for the lower layers, including the MAC and PHY layers, using encrypted PDCP messages; (2) the Non-Access Stratum (NAS) sub-layer, which connects the UE to the EPC through RRC messages initially and then S1AP messages, and is responsible for authentication and mobility within the network; and (3) the IP (or user-plane (UP)) sub-layer, which connects the UE to the core network through encrypted PDCP packets and is responsible for providing user services such as Internet access or VoLTE.

Figure 1. Overview of 5G/LTE radio access network architecture. Components marked in red are 5G specific and do not contain any security-related features. Some 5G sub-layers have been omitted for brevity.

### Mobile-relay adversarial node

We design and build a mobile-relay adversary that is positioned between the victim UE and the eNodeB and behaves as a Man-in-the-Middle attacker. This relay adversary maintains two independent physical layer radio connections: one to connect to the victim UE(s), and another with the eNodeB (see Fig. 2), similar to the one proposed in (Zhou et al., 2017). As these two physical connections are maintained separately, direct traffic forwarding is only possible at higher layers, e.g., PDCP and RRC (see Fig. 1). Maintaining the connections, however, is challenging: after the initial connection stages, all subsequent physical layer configuration parameters are exchanged using encrypted RRC messages. This forces the attacker to continuously guess the physical layer parameters in order to keep its radio connections alive. We discuss our improvements and how we reliably address these problems in Section 3.

### VoLTE service

In this section, we describe the VoLTE service: the IMS deployed in the carrier's network, the radio bearers used to transmit VoLTE traffic, the related protocols, and the specifics of the VoLTE client application provisioned on UEs.

**IMS.** IMS is a standalone system for providing IP multimedia services, session management and media control. An important component of IMS is the Proxy Call Session Control Function (P-CSCF) entity, which directly interacts with VoLTE clients.
The Session Initiation Protocol (SIP), together with the Real-time Transport Protocol (RTP) and the RTP Control Protocol (RTCP), is used in VoLTE to manage call sessions, deliver audio data and report transmission state, respectively. In this work, we exploit leaks from these protocols in order to reveal details about connections that should be protected, thus breaking the privacy of VoLTE.

**Radio bearers.** 3GPP assigns different transmission priorities, indicated by the QoS Class Identifier (QCI), to different services in order to improve user experience (Beraera et al., 2017). To this end, LTE sets up an Evolved Packet-switched System (EPS) bearer between the UE and the Packet Data Network Gateway (P-GW) for each QCI, and identifies these bearers with Data Radio Bearer (DRB) ids. Each DRB is associated with a Logical Channel ID (LCID) at the MAC layer. When using VoLTE, SIP packets are transmitted on DRB2 using LCID 4 and QCI 5, while RTP packets use DRB3, LCID 5 and QCI 1. RTCP packets can be transmitted either on DRB2 or on DRB3, depending on the carrier's configuration. To further reduce the VoLTE bandwidth, 3GPP introduces Robust Header Compression (ROHC) to squeeze the bulky protocol headers (e.g., IPv6 header, UDP header, RTP header) down to exactly 3 bytes (Bera et al., 2017; Zhou et al., 2017). In this work, we mostly focus on the traffic transmitted on DRB2 and DRB3, which is related to VoLTE activities.

**SIP/RTP/RTCP.** As shown in Fig. 2, after DRB2 is established, the UE registers to the IMS and then subscribes to events from the IMS (e.g., incoming call events). When a call is accepted, as a consequence of receiving an _Invite_ message from a caller, a DRB3 bearer is established to prepare for the transmission of audio data. The audio data is sent using RTP packets. The call session is terminated when a _Bye_ message is sent, which results in the immediate release of DRB3.
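Because DRB3 exists only while a call is active, an eavesdropper who merely observes bearer setup and release events can already bound call start times and durations. A minimal Python sketch of this inference (the event-list format and function name are ours, not the paper's):

```python
def call_durations(drb3_events):
    """drb3_events: chronological (timestamp_s, "setup" | "release") pairs
    for DRB3. Each setup/release pair bounds one VoLTE conversation."""
    durations, start = [], None
    for t, event in drb3_events:
        if event == "setup":
            start = t
        elif event == "release" and start is not None:
            durations.append(t - start)
            start = None
    return durations
```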
During the conversation, two types of RTP packets can be sent: one contains an encoded audio frame, the other a single _Comfort Noise_ frame. The first type of packet is transferred every 20 ms while the latter is transferred every 160 ms. The _Comfort Noise_ frame is 6 bytes, much smaller than other frames (Bera et al., 2017; Zhou et al., 2017; Zhou et al., 2017). This frame is only sent when the Voice Activity Detector (VAD) identifies that the speaker has not spoken in the last sampling period, the purpose being to save bandwidth and battery life. The use of _Comfort Noise_ frames allows us to monitor the victim's voice activity at a high granularity by analysing the uplink and downlink bit-rates separately. We detail this further in Section 3.3.

**VoLTE client.** The VoLTE client is usually part of the software stack running on COTS phones and uses the aforementioned public protocols (e.g., SIP, RTP) to provide VoLTE services. This client connects to the carrier's IMS and encodes the user's operations as specific SIP messages based on predefined templates. These templates are only relevant to specific vendor implementations but, based on our observations, they are static. This enables an attacker to compile VoLTE signalling logs (e.g., of SIP messages) by evaluating the communication characteristics of the traffic.

Figure 2. **VoLTE protocol message diagram. The mobile-relay adversary is located between the victim UE(s) and a commercial eNodeB. The relay maintains two independent physical layer radio connections and forwards encrypted PDCP layer traffic between the UE(s) and the eNodeB. The _Scheduling Request_ procedure outlines the method by which the UE requests from the mobile-relay an uplink transmission resource to transmit data. Every other type of traffic is normally encrypted by the UE or the eNodeB and thus forwarded without alterations.**

## 3. Breaking Privacy Using VoLTE

The process of breaking users' privacy using VoLTE (or VoNR in 5G) mainly involves recovering the VoLTE activity logs belonging to the victim, including both signalling and voice logs. We refer to _signalling logs_ as the part of the traffic comprised of SIP messages exchanged between the victim UE and the carrier's IMS. Conversely, by _voice logs_ we refer exclusively to the voice packets exchanged between victims. By leveraging these self-computed logs we can reveal the links between anonymised network identifiers (e.g., SUCI, Temporary IMSI (T-IMSI)) and real victim identities, i.e. phone numbers. To this end, we use a mobile-relay to collect victim identifiers and the encrypted VoLTE traffic exchanged between UEs and the IMS. We exploit the static nature of VoLTE data to extract meaningful information from the encrypted traffic. In the following, we introduce our threat model, followed by descriptions of our attacks.

### Threat Model

We begin our threat model analysis by introducing the main goals of the adversary: (1) _data collection_, the adversary's goal of stealthily collecting relevant data, such as plaintext network configuration parameters, identifiers and encrypted traffic; (2) _VoLTE data analysis_, the goal of successfully processing the collected traffic in order to extract meaningful information such as VoLTE logs; and (3) _real-world identity mapping_, the goal of associating collected traffic with real-world victims identified through their phone numbers. Next, we map these against three types of adversaries, ordered from weakest to strongest, as follows. First, our weakest adversary is a completely _passive adversary_ located between the UE and the network provider. This adversary is able to achieve both the _data collection_ and _traffic analysis_ goals. This is a similar attacker model to the one proposed by Rupprecht et al.
(Rupprecht et al., 2017), which is able to redirect Radio Frequency (RF) domain data flows through an attacker-controlled node; however, we expand its capabilities with additional data processing at the radio communication level, greatly improving stealthiness and reliability. This adversary is able to observe both uplink and downlink radio communication data between the UE and the network at the physical layer. While this attack does require the adversary to initiate a standard UE attach procedure, we maintain that this attacker can be seen as passive: it remains silent with respect to the data flow, the attach procedure is indistinguishable from a legitimate one, and the attacker does not have access to any cryptographic material belonging to either the network or the UE. We also highlight that, from a functional point of view, RF data redirection is not a necessary requirement, and attacker models such as the fully passive one proposed by Kotuliak et al. (Kotuliak et al., 2018) would be equally effective. Our next two attacker models deal with the problem of _real-world identity mapping_, which requires some form of data exchange between the attacker and the victim. As such, our mid-strength model is a _passive adversary with call capabilities_. We require that this attacker has knowledge of the victim's phone number and can initiate VoLTE calls identically to a standard UE. Additional UE functionality, however, is not required. This attacker can remain undetectable given that it fully obeys the protocols, interacting with the victim using only standard functionality. Finally, our strongest adversary is an _active adversary_, which is able to initiate calls and perform modifications to the data exchanged between the UE and the network. This adversary, however, still does not have any access to cryptographic material belonging to the network or the UE. Due to its ability to modify traffic, this attacker is potentially detectable.
We discuss the challenges of detecting this attack in Section 6.1. We implement our attacks using COTS UEs, software-defined radio (SDR) devices, and a modified version of the open-source srsRAN mobile communication software stack (Shen et al., 2017).

### Obtaining physical layer parameters

The physical layer of a 5G/LTE network, in the normal mode of operation, allocates radio resources (the smallest data units used by mobile networks) dynamically in order to avoid interference and exploit the bandwidth efficiently. This process begins when a UE sends a _Scheduling Request (SR)_ message to the eNodeB component of the network to request an Uplink Shared Channel (UL-SCH) resource for uplink data transmissions. After the connection is established, the UE needs to periodically report the channel quality to the eNodeB using _Channel Quality Indicator (CQI)_ messages, which affect the Modulation and Coding Scheme (MCS) used between the two. If the UE repeatedly fails to send _SR_ or _CQI_ reports, the radio connection is terminated (Blekker et al., 2016; Blekker et al., 2016). For reasons related to signal changes, optimal resource allocation, EPS bearer establishment/release, and/or bandwidth efficiency, the RLC, MAC, and PHY parameters can be updated by the eNodeB through _RRCConnectionReconfiguration_ messages. While RLC and MAC parameters remain fairly static over the course of a connection, physical layer parameters, which are used to orchestrate all the connected subscribers on the radio spectrum, are frequently adjusted. Without knowledge of these, the adversary is unable to maintain the connection between the victim and the eNodeB, as it cannot allocate or use the correct radio resources. Furthermore, when such a situation is encountered, the radio connection is immediately released, which is followed by a new random access procedure. An example of these parameters is shown in Fig.
3, where the _physicalConfigDedicated_ entry specifies the physical layer parameters. The two most important entries are _schedulingRequestConfig_, which is responsible for requesting radio resources to be used for sending uplink data (i.e. via the Physical Uplink Shared Channel (PUSCH)), and _cqi-ReportConfig_, which instructs on the type of MCS the eNodeB should use. Given the location of our mobile-relay, the attacker can continuously monitor the communication stream and look for encrypted _RRCConnectionReconfiguration_ messages. When such a message is detected, the eNodeB interface of the mobile-relay opens up all proper radio resources, i.e. all slots in the time domain and all sub-carriers in the frequency domain, and then waits for the victim UE to use one of them. The mobile-relay continuously monitors the radio resources used by the victim UE to transmit uplink data until it obtains the physical layer parameters; it then applies these parameters on both the eNodeB and UE interfaces and removes the redundant radio resources. We describe the details of guessing _schedulingRequestConfig_ and _cqi-ReportConfig_ below.

**Recovering _schedulingRequestConfig_ parameters.** After receiving a _Scheduling Request (SR)_ message from a UE at a time \(T\), the eNodeB assigns this UE a radio resource for transmitting uplink data. This assignment is communicated to the UE via an _Uplink Grant (UL-Grant)_ at time \(T+4ms\). If the UE does not receive the _UL-Grant_ response at \(T+4ms\), it will send another _SR_ request at the next available period. This process can be repeated until the maximum re-transmission threshold allowed, indicated by the _dsr-TransMax_ parameter, is reached. The process is shown in Fig. 2. In order to compute _sr-ConfigIndex_ and _sr-PUCCH-ResourceIndex_ we proceed as follows. The process begins with the mobile-relay listening for a _RRCConnectionReconfiguration_ message sent by the commercial eNodeB.
When this is observed, the relay starts monitoring all slots in the time domain and all sub-carriers in the frequency domain. Then, using the first _SR_ message intercepted, the relay extracts the system frame and sub-frame number; however, these two values are insufficient to calculate the _schedulingRequestConfig_ parameters. In order to acquire these, the relay ignores this _SR_ message, which forces the victim to re-send another _SR_ message in the next period. After observing this second _SR_ message, the adversary can compute the periodicity \(p\) and the _subframe-offset_ by simple subtraction. Finally, the _sr-ConfigIndex_ is obtained through a lookup operation in the 3GPP Table 10.1.5-1 (Cheng et al., 2017), while the _sr-PUCCH-ResourceIndex_ is the index of the radio resource used by the _SR_ message in the frequency domain. At this stage, the relay adversary knows the _schedulingRequestConfig_ parameters and can use them to configure both its eNodeB and its UE interfaces. By dropping the first SR, however, the mobile-relay causes a time delay in the transmission of the _RRCConnectionReconfigurationComplete_ message. This delay depends on the periodicity of the SR, which is normally 10 ms or 20 ms. It will not trigger any connection failures, however, given that (1) the guessing procedure is fast and takes at most two periods (e.g., 20 ms), and (2) there is no timeout for receiving _RRCConnectionReconfigurationComplete_ messages at the eNodeB. Furthermore, this re-transmission procedure is a common occurrence, which triggers failures only if the maximum number of re-transmissions is reached. This threshold is sufficiently large (e.g., 64 re-transmissions for Carrier1) for our relay implementation to calculate the parameters without breaking the radio connection. We detail our procedure in Algorithm 1.
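The core of this recovery step, computing the periodicity and offset from two consecutive _SR_ observations and inverting the periodicity table, can be sketched as follows. This is our own illustrative sketch, not the paper's Algorithm 1; the base-index mapping assumes the standard LTE SR periodicities of 3GPP TS 36.213 Table 10.1.5-1.

```python
# First sr-ConfigIndex for each SR periodicity (ms), per TS 36.213 Table 10.1.5-1.
PERIOD_BASE = {5: 0, 10: 5, 20: 15, 40: 35, 80: 75}

def sr_config_index(sr1, sr2):
    """sr1, sr2: (system_frame_number, subframe) of two consecutive SRs,
    the first of which was deliberately ignored by the relay."""
    t1 = 10 * sr1[0] + sr1[1]   # absolute time of the first SR in ms
    t2 = 10 * sr2[0] + sr2[1]   # absolute time of the second SR in ms
    period = t2 - t1            # SR periodicity p (by simple subtraction)
    offset = t1 % period        # SR subframe offset
    return PERIOD_BASE[period] + offset
```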
**Recovering _cqi-ReportConfig_ parameters.** This process is similar to the one used to recover the _schedulingRequestConfig_ parameters, with a few slight changes, as follows. First, for Multiple Input Multiple Output (MIMO) connections the UE uses at least two antennas to send and receive radio signals. The 3GPP standard introduces the Rank Indicator (RI) parameter to measure the extent to which the signals sent by one antenna interfere with those of the others, so that the eNodeB can adjust its transmission parameters and avoid serious interference. Therefore, the adversary needs to guess the _ri-ConfigIndex_ parameter only when the use of MIMO is detected. Second, when guessing _schedulingRequestConfig_, the first _SR_ is dropped. When guessing _cqi-ReportConfig_, however, the first message cannot be dropped, since it affects the MCS used for downlink data, which may not be decoded correctly if the _CQI_ message is dropped. Processing the first _CQI_ message has no effect on the guessing procedure, because the relay will receive a second message regardless of whether the first one is dropped or processed, as _CQIs_ are periodic messages.

**Recording VoLTE traffic.** Targeting VoLTE traffic specifically, for any purpose including recording, should not be possible under the EEA2 encryption algorithms, which rely on non-deterministic encryption schemes such as AES-CTR. This, however, is not the case. By looking at the unencrypted MAC sub-header at our mobile-relay, the attacker can learn the Logical Channel ID (LCID) of each sub-PDU (see Section 6 in (Cheng et al., 2017)). Because VoLTE traffic uses the specific LCIDs 4 and 5, it can be directly targeted by the adversary. In the following, we show how this recorded traffic is used to reveal information about a victim.

### VoLTE traffic analysis

The main purpose of VoLTE traffic analysis is to process collected traffic and extract VoLTE activity logs, including signalling and voice logs.
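As a first step of this analysis, the LCID-based targeting described above can be sketched as follows (our own minimal sketch; parsing of a complete MAC PDU is omitted). The LCID occupies the five low-order bits of the MAC sub-header's first byte.

```python
def volte_lcid(subheader_first_byte):
    """Return the LCID if this MAC sub-PDU carries VoLTE traffic
    (LCID 4 = DRB2/SIP signalling, LCID 5 = DRB3/RTP voice), else None."""
    lcid = subheader_first_byte & 0x1F  # low 5 bits hold the LCID
    return lcid if lcid in (4, 5) else None
```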
A related adversarial model to ours, exploiting protocol mis-implementations, has been used to recover encrypted voice data in LTE networks by Rupprecht et al. (Rupprecht et al., 2017). Here we focus on recovering VoLTE logs using metadata from traffic protected by standard LTE/NR security, allowing our adversary to mount attacks against both LTE and 5G networks that correctly implement the standard-mandated security features. As stated in Section 2, VoLTE signalling is generated according to predefined templates and has static communication characteristics.

Figure 3. An example of physical layer configuration indicated by the eNodeB. _cqi-ReportConfig_ and _schedulingRequestConfig_ indicate the time (e.g., sub-frame in the time domain) and frequency (e.g., sub-carrier in the frequency domain) used to send _CQI_ and _SR_ messages. These configuration messages are encrypted, and the parameter values are unknown to the adversary.

Our work exploits these characteristics similarly to Xie et al. (Xie et al., 2018); however, while they analyse plaintext Voice over WiFi (VoWiFi) traffic collected on a malicious Access Point (AP), we deal with the more complex case of extracting meaningful logs from intercepted LTE/5G traffic, which uses both IPsec and standard EEA2 user-plane encryption.

**IP packet reassembly.** Mobile LTE/5G networks use fragmentation to efficiently transfer oversized application messages (e.g., VoLTE, Hypertext Transfer Protocol (HTTP)). When transmitting data over a mobile connection, each TCP (or UDP) segment is first encapsulated in an IP packet and then in a PDCP-layer packet. Each PDCP packet contains a _Sequence Number_ and, as payload, an encrypted and integrity-protected IP packet. Segmentation or concatenation can happen at lower layers if required by the protocol, but because encryption happens only at the PDCP layer, an adversary can revert these operations and restore the PDCP packets.
A passive mobile-relay adversary can further obtain the direction _dir_ (i.e. uplink or downlink) and arrival time _time_ of PDCP packets simply by observing traffic. The adversary, however, does not have any information about the contents of the PDCP packets. In order to make sense of these and reconstruct meaningful VoLTE messages that can be analysed, we leverage generic knowledge about network protocols. First, we assume that each TCP (or UDP) segment is used efficiently with respect to the Maximum Transmission Unit (MTU), i.e. the size of every fragment in a sequence except the last one equals the MTU at the moment of segmentation. The MTU is determined from the _Maximum SDU size_ contained in NAS messages and is the same as the one observed at the attacker's UE. Using this assumption, we give an efficient packet reassembly algorithm. Briefly, based on our observations, VoLTE-related packets are usually split into three fragments. Our algorithm tries to reconstruct these sequences by looking at neighbouring packets and allocating each to a category, e.g., first, middle, or last, based on the relationship between its real size and the MTU. Once packets are reassembled, the adversary needs some protocol context relevant to the type of VoLTE traffic (i.e. TCP, UDP, TCP over IPsec, or UDP over IPsec) to calculate the size of the SIP signalling payload by subtracting all protocol headers from the IP packet length. We obtain this information from Control Information (CI) packets (i.e. SYN, FIN, ACK), which are transferred between peers during TCP connection setup, teardown, or maintenance. Although CI packets are encrypted, the adversary is still able to locate them by examining packet sizes, e.g., the header lengths of SYN, SYN-ACK, and ACK packets are 40, 32, and 20 bytes, respectively.

**VoLTE signalling identification.** After IP packets have been reassembled from encrypted PDCP traffic, the adversary needs to identify VoLTE data streams.
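The MTU-based fragment grouping described above can be sketched as follows (our simplification: fragments arrive in order, and every non-final fragment is exactly MTU-sized):

```python
def group_fragments(payload_sizes, mtu):
    """Merge runs of MTU-sized PDCP payloads, each terminated by one
    smaller payload, into per-datagram total sizes."""
    datagrams, current = [], 0
    for size in payload_sizes:
        current += size
        if size < mtu:            # a non-full payload closes the sequence
            datagrams.append(current)
            current = 0
    if current:                   # trailing run with no closing fragment
        datagrams.append(current)
    return datagrams
```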
The main challenge is to link the encrypted messages to specific VoLTE operations such as _Invite_ and _Cancel_, and restore the communication logs. This can be accomplished as follows. First, a one-off operation is required, in which the adversary builds a database encoding the VoLTE message characteristics of each type of operation. This can be accomplished easily using standard diagnostic tools, e.g., SCAT (Krishnan et al., 2017), to analyse network traffic on an attacker controlled UE. While this traffic is usually encrypted at the IPsec level, all the session keys can be obtained with readily available tools such as SIMTrace (Shen et al., 2018). With the decrypted VoLTE messages, the adversary is able to construct a message characteristics database specific to a victim network carrier, such as the one shown in Table 3. Using this database the adversary is able to map encrypted VoLTE messages to their corresponding operations by evaluating their direction, encrypted size and type of operation. We observe that message characteristics depend on the VoLTE software provisioned in the baseband firmware and on the carrier used; they are consistent for devices of the same model and fairly static across models. At the end of the mapping operation, the adversary is able to extract complete VoLTE signalling logs which contain the following six features: (1) _identity_: the victim's identity, such as Subscription Concealed Identifier (SUCI), IMSI, or phone number; (2) _timestamp_: the time of day of the VoLTE call; (3) _call direction_: incoming or outgoing call for the victim; (4) _establish status_: the response of the callee (i.e. accepted, declined or missed); (5) _termination cause_: which UE ended the call session and for what reason (e.g., caller cancelled during the ring period, callee hung up during the conversation); (6) _call duration_: the duration (in seconds) of the VoLTE call.
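The fingerprint lookup described above can be sketched as a simple size-and-direction match. The database below is illustrative only (it is not the real Table 3); the entry values are loosely modelled on sizes reported later in the paper.

```python
# Hypothetical fingerprint database: (operation, direction, size, tolerance).
# Real entries are carrier- and device-specific and built from decrypted
# traffic on the attacker's own UE.

FINGERPRINTS = [
    ("Invite",        "ul", 2479, 0),
    ("100 Trying",    "ul",  338, 1),
    ("Invite",        "dl", 2371, 6),
    ("180 Ring",      "ul",  877, 1),
    ("486 Busy Here", "ul",  878, 1),
]

def classify(direction, size, db=FINGERPRINTS):
    """Return every operation whose (direction, size +/- tolerance) matches
    the observed encrypted message."""
    return [op for op, d, s, tol in db
            if d == direction and abs(size - s) <= tol]
```

Near-identical sizes (e.g., _180 Ring_ vs _486 Busy Here_ at roughly 877 vs 878 bytes) produce more than one candidate; these ambiguities are resolved later from protocol context, since a _Busy Here_ response cannot precede a _Ring_.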
**VoLTE voice activity.** In addition to the features mentioned above, the adversary is also able to extract the victim's voice activity to an accuracy window of 20ms by analysing _Comfort Noise_ frames. To do this, the adversary first refines the voice related traffic by filtering out RTCP packets from the collected DRB3 traffic, because RTCP packets can be transferred on either DRB3 or DRB2 alongside RTP, depending on the carrier's configuration. RTCP packets can be easily identified based on their fixed size (e.g., 128 or 140 bytes). _Comfort Noise_ frames are encoded within RTP packets as special frames which contain background noise parameters instead of encoded audio data, and they are generated only when Voice Activity Detection (VAD) detects that the speaker has not spoken in the last sample period. Given that no actual audio data needs to be encoded in these frames, the size of a _Comfort Noise_ frame is 6 bytes, which is smaller than other frames (e.g., Adaptive Multi-Rate Wideband (AMR-WB) generates 132 or 477 bits) (Brock et al., 2015; Brock et al., 2015). Additionally, _Comfort Noise_ frames have a lower transmission frequency, as low as one packet every 160 ms, whereas other frames are transmitted every 20 ms (Brock et al., 2015; Krishnan et al., 2017). Once a _Comfort Noise_ frame is observed, the adversary automatically learns that the victim has not spoken in the last 160 ms. ### Identity mapping using VoLTE The main goal of identity mapping is to link the collected network identifier (i.e. IMSI, SUCI, Globally Unique Temporary Identifier (GUTI)) to the victim's real-world identity (i.e. phone number) in order to further monitor a specific victim's VoLTE activities. First, we discuss our _passive mapping with call capability_, which maps an anonymised identity (i.e. SUCI and GUTI) to the real-world identity. To this end, the adversary needs to make a VoLTE call towards the victim to trigger VoLTE traffic between the victim's UE and the IMS.
Then, the collected traffic is analysed to obtain the victim's VoLTE logs (Section 3.3). The analysed traffic is combined with details related to the call, available to the attacker from its own UE, in order to link the phone number of the victim to its identity. This procedure does not require the victim to perform any response action related to the incoming call, because several signalling messages (e.g., Invite, Ring) are exchanged between the victim UE and the IP Multimedia Subsystem (IMS) before the actual ringing event on the UE happens. Observing these messages in the logs is sufficient to perform the correlation. This is mostly a one-off operation because even temporary identities remain the same for extended periods of time (Ring et al., 2016; Wang et al., 2017). This is also supported by our observation of GUTI reallocation, which is discussed in Section 4.4. When the victim's UE connects to our mobile-relay again, there is no need to repeat this mapping procedure if the victim's GUTI has not changed since the previously observed value. The stronger _active mapping_ procedure needs an additional step in order to break the Evolved Packet System (EPS) security context. This procedure is similar to the Uplink IMSI Extractor proposed by Erni et al. (Erni et al., 2017), which overshadows the uplink _Attach/Service Request_ message. However, our attack remains undetectable because we do not trigger a _Security Mode Reject_ fault at the victim UE. In Fig. 4, we show an example of an _Attach Request_ message containing the user's GUTI. We modify the M-Temporary Mobile Subscriber Identity (M-TMSI) value in this message to 0x12345678 using our mobile-relay and keep the remaining values unchanged. This causes the message authentication code of the message to become invalid, which, in turn, causes the carrier to respond with an _Identity Request_ message and forces the UE to start the Authentication and Key Agreement (AKA) procedure (Beng et al., 2016).
The adversary is now able to obtain the victim's IMSI from the subsequent plaintext _Identity Response_. The mapping procedure remains the same as for the previous _passive mapping_.

## 4. Real-World Results

We verify the feasibility of our attack using four COTS UEs which we connect to two commercial carriers. In the following, we describe our experimental setup and continue with our test procedures and results.

### Experimental setup

In Fig. 5 we present our experimental setup; we describe its components and their functions as follows:

* **UEs.** We use Android Debug Bridge (ADB) to operate the Android phones, e.g., toggling airplane mode and dialling VoLTE calls. The Samsung S7 and S8 allow us to collect Control Plane (CP) and User Plane (UP) information from the diagnostic interface using SCAT (Kang et al., 2017). For the iPhone 11, we toggle airplane mode using _Mirror iPhone_ via an Apple Watch and capture UP traffic using rvictl (Krause et al., 2017). The OS, chipset and baseband versions of the tested UEs are shown in Table 2.
* **Mobile-relay.** Our mobile-relay runs on Arch Linux with Kernel 5.17.1-arch-1 and an Intel i5-8250U CPU, and consists of two Ettus USRP B210 SDRs controlled by a modified version of the srsRAN v21.10 (Zheng et al., 2017) software stack. One B210 acts as the eNodeB interface towards the victim UE(s), while the other simulates a UE interface towards the commercial eNodeB. The eNodeB component copies the configuration from the targeted commercial eNodeB.
* **Commercial eNodeB and carriers.** We connect our mobile-relay to the commercial eNodeB and use commercial network USIM cards on the victim UE to mimic real-world use. We test our attacks on two major commercial network carriers: Carrier1 and Carrier2. Carrier1 uses MIMO while Carrier2 uses Carrier Aggregation (CA).

### Experimental procedure

In the following, we give a high-level description of our experimental procedures.
Afterwards, we continue with details and specific insights learned from our tests.

1. **Monitoring the victim UE.** We first activate airplane mode on the victim UE. After starting the mobile-relay, we disable airplane mode and wait for the victim UE to connect to our relay. Once the UE is registered to the network, we perform a number of VoLTE activities, such as dialling, answering and declining calls, in order to generate VoLTE traffic. We continuously monitor control plane traffic at the relay level and immediately start the guessing procedure when a _RRCConnectionReconfiguration_ message is observed.
2. **Collecting identities.** For the _passive attack_, we collect the victim's identities contained in _Attach/Service Request_ messages. For the _active attack_, we modify the _Attach/Service Request_ message, which breaks the EPS security context between the victim UE and the network because the integrity protection check fails. This forces the victim to identify itself using its long-term IMSI identity.
3. **Analysis of VoLTE logs.** We use the method described in Section 3.3 to extract the victim's VoLTE activities, including signalling logs and voice logs.
4. **Identity mapping.** In order to map the collected identity to an actual phone number, we make a VoLTE call towards the victim UE from the attacker controlled UE. By analysing the corresponding VoLTE traffic between the victim and the attacker, we can identify which phone is associated with the dialled phone number.

Figure 4. An example of _Attach Request_ which uses GUTI as a user identifier. The adversary modifies the _M-TMSI_ to 0x12345678 in order to break the security context established by the previous AKA procedure and force the network to reinitialize the authentication with the UE.

Figure 5. Experimental setup. Our mobile-relay software implementation runs on the laptop computer. Two USRP B210 SDRs are connected, one acting as an eNodeB and the other as a UE interface.
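The identity-mapping step (item 4 above) amounts to correlating the time at which the attacker dials a known number with a downlink _Invite_ appearing in the victim's recovered log. The sketch below is our own illustration: the function name and the 2-second correlation window are assumptions, not values from the paper.

```python
# Illustrative sketch of the identity-mapping correlation: the attacker
# dials a known phone number at a recorded time and checks whether the
# monitored UE receives a downlink Invite shortly afterwards.

def maps_to_victim(call_time, volte_log, window=2.0):
    """volte_log: list of (timestamp, direction, operation) tuples
    recovered by the relay (Section 3.3)."""
    return any(d == "dl" and op == "Invite" and 0.0 <= ts - call_time <= window
               for ts, d, op in volte_log)

# A recovered log fragment: incoming Invite at t=100.8s, then the usual
# Trying/Ring responses.
log = [(100.8, "dl", "Invite"),
       (101.1, "ul", "100 Trying"),
       (101.9, "ul", "180 Ring")]
```

Because the Invite/Ring exchange happens before the phone actually rings, the correlation succeeds even if the attacker hangs up immediately.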
### Guessing physical layer parameters

As introduced in Section 3.2, the adversary needs to know the physical layer parameters in order for the mobile-relay to maintain the radio connections. We develop a _guessing_ procedure for these, which requires the adversary to observe the parameter patterns of the radio bearers contained in _RRCConnectionReconfiguration_ messages. **Physical parameters' analysis procedure.** We collect Control Plane (CP) data for 60 hours for each carrier. The collected data shows that most parameters of _physicalConfigDedicated_ are fixed, while only _cqi-ReportPeriodic_ and _schedulingRequestConfig_ show slight variations. We summarise the major parameters in Table 1. The parameters _cqi-FormatIndicatorPeriodic_, _simultaneousAckNackAndCQI_ and _dsr-TransMax_ always have the same values, while _cqi-pmi-ConfigIndex_ and _sr-ConfigIndex_ are refreshed every time. For Carrier1, we observed that the parameters _cqi-PUCCH-ResourceIndex_ and _ri-ConfigIndex_ are fixed; these, however, vary between a small set of values for Carrier2. The _sr-PUCCH-ResourceIndex_ parameter takes several values for both Carrier1 and Carrier2. By observing these patterns we were able to reduce the complexity of guessing real-world parameters as follows: (1) for fixed parameters, we simply set them to the observed value every time; (2) for changing parameters with limited options, we first analyse their occurrence frequency and then try the options in decreasing order of priority. For example, _sr-PUCCH-ResourceIndex_ for Carrier2 has 28 options; however, the top option accounts for 53.14% of cases and the top five options for 83%. Finally, (3) we find that the periodicity of _SR_ is fixed for each LCID in both Carrier1 and Carrier2 (e.g., Carrier2 sets the periodicity to 20, 10, 10 for LCIDs 5, 6 and 7, respectively). This stable periodicity makes it possible to immediately calculate _sr-ConfigIndex_ after the first request has arrived (as shown in Lines 7-8 of Algorithm 1).
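The two heuristics above can be sketched as follows. The helper names are ours; the index computation assumes the standard LTE mapping of _sr-ConfigIndex_ ranges to (periodicity, offset) pairs (cf. TS 36.213, Table 10.1.5-1), under which `index = base(periodicity) + (subframe mod periodicity)`.

```python
# Sketch of (a) computing sr-ConfigIndex from the first observed SR, given
# that the SR periodicity per LCID is stable, and (b) ordering candidate
# values of a variable parameter by observed frequency.

SR_BASE = {5: 0, 10: 5, 20: 15, 40: 35, 80: 75}  # periodicity (ms) -> first index

def sr_config_index(first_sr_subframe, periodicity):
    """One observed SR pins down the index once the periodicity is known
    (e.g., 20/10/10 ms for LCIDs 5/6/7 on Carrier2)."""
    return SR_BASE[periodicity] + first_sr_subframe % periodicity

def ordered_guesses(counts):
    """Try options for a parameter such as sr-PUCCH-ResourceIndex in
    decreasing order of how often they were observed."""
    return [v for v, _ in sorted(counts.items(), key=lambda kv: -kv[1])]
```

For instance, an SR first seen in subframe 27 with a 20 ms periodicity yields index 15 + 7 = 22.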
**Dealing with radio signal interference.** During the guessing period, a major challenge is dealing with radio signal interference, as the mobile-relay opens all suitable resources in the frequency and time domains to look for the victim UE's Physical Uplink Control Channel (PUCCH) messages (_SR_ and _CQI_). Messages transmitted by non-targeted UEs can also be received by the mobile-relay, which makes it difficult to distinguish messages originating from the victim UE from those of non-targeted UEs. Fig. 6 shows such an environment observed in a real-world relay deployment, where a victim UE and a non-targeted UE connect to the mobile-relay and to the commercial eNodeB, respectively. The mobile-relay receives radio signals transmitted not only by the victim UE but also by the non-targeted UE. However, using distance measurements, the adversary can distinguish a victim UE connected to the relay from non-targeted UEs as follows. Assuming the setup in Fig. 6, in the normal case the distance \(d1\) between a non-targeted UE and the commercial eNodeB differs from the distance \(d2\) between the same non-targeted UE and the mobile-relay; the propagation delays of the two paths are therefore \(d1/c\) and \(d2/c\), respectively. The eNodeB measures this propagation delay and uses the _Timing Advance (TA)_ parameter to instruct UEs to align their internal clocks by adjusting their uplink transmission time to be slightly ahead, i.e. by \(2\,d1/c\) (see Section 8 in (Brandes et al., 2017) and Section 4.2.3 in (Brandes et al., 2017)). Since the non-targeted UE is aligned to the commercial eNodeB rather than to the mobile-relay, the time delay at the mobile-relay of PUCCH messages transmitted by a non-targeted UE is \((d2-d1)/c\). In contrast, the time delay of the victim UE's messages at the mobile-relay is 0, since the victim UE has aligned to the mobile-relay using _TA_.
\begin{table} \begin{tabular}{l|l|c|c} \hline \multicolumn{2}{l|}{Parameters} & Carrier1 & Carrier2 \\ \hline \multirow{4}{*}{CQI} & _cqi-PUCCH-ResourceIndex_ & ✓ & \(\uparrow\) \\ & _cqi-pmi-ConfigIndex_ & ✗ & ✗ \\ & _cqi-FormatIndicatorPeriodic_ & ✓ & ✓ \\ & _ri-ConfigIndex_ & ✓ & \(\uparrow\) \\ \hline \multirow{4}{*}{SR} & _simultaneousAckNackAndCQI_ & ✓ & ✓ \\ & _sr-PUCCH-ResourceIndex_ & ✗ & \(\uparrow\) \\ \cline{1-1} & _sr-ConfigIndex_ & ✗ & ✗ \\ \cline{1-1} & _dsr-TransMax_ & ✓ & ✓ \\ \hline \end{tabular} \end{table} Table 1. Physical layer configuration parameters as observed for Carrier1 and Carrier2, where ✓ represents static values, \(\uparrow\) a small search space and ✗ that no optimisations are possible.

Figure 6. Parameter detection under radio signal interference. The non-targeted UE connects to the commercial eNodeB at distance \(d1\) and the targeted UE connects to the mobile-relay at distance \(d3\). The distance between the non-targeted UE and the mobile-relay is \(d2\). Since \(d2\) is not equal to \(d1\), the propagation delays of these two paths are different.

Another signal feature which can be leveraged to identify the victim UE is the Signal-to-Noise Ratio (SNR), which indicates the quality of the radio channel used by a received message; the higher the SNR, the better the signal quality. In this work, we use these two features of the radio channel (i.e. TA and SNR) to determine whether a received message was transmitted by the victim UE. In Fig. 7, we show real-world measurements of _TA_ and _SNR_ obtained from intercepted PUCCH messages during a _guessing_ period. As expected, the _TA_ of the victim UE's messages is located around \(0\mu\)s, while that of other messages is distributed between \(-20\mu\)s and \(20\mu\)s. The _SNR_ of the victim UE's messages is quite high, above 20dB; in contrast, the _SNR_ of the other messages is much lower, almost all of them below 0dB.
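These two radio-channel features translate into a simple filter; the following is a minimal sketch in which the thresholds mirror the measurements above, while the exact tolerances are deployment-specific assumptions of ours.

```python
# Minimal sketch of filtering PUCCH receptions by Timing Advance and SNR.

def is_victim(ta_us, snr_db, ta_tol_us=1.0, snr_min_db=20.0):
    """The victim UE is time-aligned to the relay (TA around 0 us) and close
    to it (high SNR); non-targeted UEs show spread-out TA and low SNR."""
    return abs(ta_us) <= ta_tol_us and snr_db >= snr_min_db

assert is_victim(0.2, 27.5)          # aligned and strong: victim
assert not is_victim(-14.0, 26.0)    # misaligned non-targeted UE
assert not is_victim(0.1, -3.0)      # aligned by chance but weak signal
```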
Based on these observations, our relay is able to accurately identify the targeted UE and adjust the physical parameters accordingly.

**Connectivity results.** All evaluated UEs are able to complete the authentication procedure and set up the default Internet, VoLTE signalling and voice bearers, as shown in Table 2. Complete VoLTE functionality is achieved for Carrier1. For Carrier2, however, bearers are successfully established only for the Samsung S7. This is caused by hardware limitations of the USRP B210, specifically by Carrier Aggregation (CA), which requires at least two channels running at different carrier frequencies; unfortunately, the B210 only supports one. In the case of the S7, the baseband firmware first establishes one connection to the eNB and then attempts a secondary one. This, however, is unsuccessful when using the B210 due to the above-mentioned limitations. Unlike other firmware though, the S7 does not disconnect the first established connection upon the failure of the second. In order to evaluate the success rate of guessing the physical layer parameters, we execute the connection procedure between the victim UE and the mobile-relay 60 times. Our results show a success rate of 91.67%. When investigating the root causes of the occasional failures, we observe that most are caused by hardware limitations related to the attacker's processing power. Effectively, our implemented attacker is unable to process data at the rates required to decode all candidate resource blocks and identify the targeted scheduling requests. We estimate that attackers with better hardware (e.g., faster CPUs) will easily achieve better results.

### Analysing VoLTE signalling log

The analysis of the communication characteristics of VoLTE signalling is an important step before moving on to real-world experiments. Here, we simulate four common scenarios to generate and analyse VoLTE traffic and evaluate traffic identification performance.
These scenarios, and the specific SIP messages encountered, are briefly described in the following.

1. _Call cancelled during ringing by the caller._ In this scenario, the caller sends an _Invite_ message to the callee to trigger the new call session setup. The callee responds with a _Ring_ message to the caller. Upon receiving this message, the caller terminates the session by sending its own _Cancel_ message to the callee.
2. _Call cancelled during conversation by the caller._ This is similar to the previous scenario, with the main difference being that the call session is cancelled during the conversation by the caller. After the callee responds with _Ring_, the caller waits for the _OK_ (_Invite_) response, which is sent by the callee when the incoming call is accepted. Then, after the conversation starts and audio data is observed on DRB3, the caller terminates the call by sending a _Bye_ request message.
3. _Call declined by the callee._ In this scenario, the callee responds with a _Busy Here_ message after the _Ring_ message to terminate the session between itself and the IMS. After the IMS receives the _Busy Here_ response, it redirects the call session to the callee's voice mail if voice mail is enabled; otherwise, the IMS sends a _Busy Here_ response to the caller to terminate the session between the caller and the IMS.
4. _Call cancelled during conversation by the callee._ This is similar to the second scenario, with the difference that the _Bye_ request message is sent by the callee rather than the caller.

**VoLTE signalling analysis procedure.** We execute the scenarios above on a Samsung S7, a Samsung S8 and an iPhone with Carrier1. We also test the iPhone with Carrier2, for which we collect and analyse VoLTE signalling. Our test scenario involves making a VoLTE call between two victim UEs, one connected through our mobile-relay and the other connected directly to the network carrier. We repeat each scenario five times and collect 1386 SIP messages in total.
Even though the calls are identical, during our tests we observe that the number of SIP messages generated for each call is not constant, as shown in Table 3. For example, the Samsung S7 sends a _200 OK (Update)_ message, whereas the S8 and iPhone 11 do not. The collected data additionally shows that: (1) the IPsec configurations of Carrier1 and Carrier2 are the same, with one exception: Carrier2 encrypts IPsec payloads using _AES-CBC_ while Carrier1 uses plaintext; (2) SIP messages can be sent with either _TCP-over-IPsec_ or _UDP-over-IPsec_; (3) the MTUs of Carrier2 are 1308 bytes for uplink and 1276 bytes for downlink, while Carrier1 uses 1212 bytes for both uplink and downlink.

\begin{table} \begin{tabular}{|l|l|l|l|c|c|c|c|} \hline \multirow{2}{*}{Phone} & \multirow{2}{*}{OS Ver.} & \multirow{2}{*}{Chipset} & \multirow{2}{*}{Baseband Ver.} & \multicolumn{2}{c|}{Carrier1} & \multicolumn{2}{c|}{Carrier2} \\ \cline{5-8} & & & & AKA & Bearers & AKA & Bearers \\ \hline iPhone 11 & 15.4.1 & Apple A13 & 3.02.01 & ✓ & ✓ & ✓ & ✗ \\ \hline Samsung S7 & 8.0.0 & Qualcomm & G935FXXU8EUE1 & ✓ & ✓ & ✓ & ✓ \\ \hline Samsung S8 & 9.0 & Exynos & G9500ZHS6DUD1 & ✓ & ✓ & ✓ & ✗ \\ \hline Pixel 5 & 12.0 & Qualcomm & g7250-00188-220211-B-8174514 & ✓ & ✓ & ✓ & ✗ \\ \hline \end{tabular} \end{table} Table 2. Overview of the configurations of UEs and network carriers, where ✓ means that the UE has complete functionality with the carrier and ✗ that the UE only has partial functionality due to hardware limitations of the B210 SDR. Carrier1 requires use of MIMO. For this carrier, all four phones successfully complete the AKA authentication procedure and set up bearers (e.g., Internet, VoLTE). Carrier2 requires use of Carrier Aggregation. With this carrier, tested phones complete the AKA procedure but only the Samsung S7 is able to set up EPS bearers. This is because Carrier Aggregation (CA) is not feasible when using B210 SDRs.
We further analyse the size of each SIP message and find the communication characteristics shown in Table 3. We detail these in the following.

1. For most SIP messages the size is relatively constant, showing only minor variations, while for some messages (e.g., the downlink _183 Session Process_ message) the size falls within two or three byte ranges. Messages fall into different byte ranges when they are generated in different contexts, even though they share the same operation type. For example, a caller receives a _200 OK (Invite)_ response message in both the callee-accepted and callee-declined scenarios; however, the former establishes the normal conversation while the latter redirects the call to the callee's voice mail.
2. For downlink SIP messages, the size is similar within a carrier even though the UEs are different. For example, within Carrier1, the sizes of the downlink _Invite_ message for the tested iPhone 11, Samsung S7 and S8 are similar: \(2371\pm 6\), \(2358\pm 8\) and \(2357\pm 5\) bytes. This is reasonable because downlink signalling is generated by the carrier's IMS, which remains the same. For different carriers, however, the downlink size varies, since the carriers' IMSs are different. The downlink _Invite_ messages of the iPhone 11 have different lengths: for Carrier2 they fall in the \([2219\pm 2,2000\pm 0]\) bytes ranges, while for Carrier1 they are usually of constant length, e.g., \(2371\pm 6\) bytes.
3. For uplink SIP messages, the size is related to the carrier and the phone brand. The uplink characteristics are similar for the same phone brand within a carrier. For example, the sizes of the uplink _Invite_, _100 Trying (Invite)_ and _183 Session Process_ messages are \(2479\pm 0\), \(338\pm 1\) and \(1437\) bytes for the Samsung S7, and \(2494\), \(336\) and \(1435\) bytes for the S8.

**Real-world results.** We make 16 VoLTE calls on the Samsung S7 and S8 with Carrier1 to evaluate our attack.
We set the MTU to the observed value of 1212 bytes for both uplink and downlink, and we use the method introduced in Section 3.3 to preprocess the collected encrypted PDCP packets and identify encrypted SIP messages using our databases (as shown in Table 3). We record 130 SIP messages with our relay and map them to specific VoLTE operations with 83.07% accuracy. We further analyse the cases where we fail to correctly identify messages and find that most are caused by size similarities between operations, e.g., the uplink _180 Ring_ message from the Samsung S7 with Carrier1 is \(877\pm 1\) bytes while the _486 Busy Here_ message is \(878\pm 1\) bytes. Therefore, we further revise the signalling log based on context (e.g., a _486 Busy Here_ response cannot happen before a _180 Ring (Invite)_ response), which enables us to achieve 100% accuracy. Fig. 9(b) shows an example of the recovered SIP messages from a victim UE.

### Monitoring voice activity

In order to evaluate voice activity, we set up a VoLTE call from the iPhone 11 to a victim using a Samsung S7 UE. Once the call is established, an audio sample is played from the iPhone 11. We terminate the call after 105 seconds. The call generates 3353 RTP packets in the downlink direction and 4864 packets in the uplink. In order to identify RTP packets which contain _Comfort Noise_ frames, we set a threshold of 10 bytes per message (6 bytes for the _Comfort Noise_ frame, 1 byte for the AMR header and 3 bytes for the Robust Header Compression header). We show the analysis result of the downlink RTP packets in Fig. 8. We can see that the downlink traffic has a higher bit-rate when the callee is speaking than during silence periods. The large packet sizes observed at the start of the conversation are caused by the ROHC context not yet being established. The complete voice activity is obtained by analysing both uplink and downlink traffic.
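The silence-detection step described above can be sketched as follows; the function name is ours, and the packet sizes in the example trace are illustrative rather than measured values.

```python
# Hedged sketch of voice-activity extraction: RTCP is filtered out by its
# fixed sizes, and any remaining RTP payload at or below the 10-byte
# threshold (6 B Comfort Noise + 1 B AMR header + 3 B ROHC header) marks a
# silent period of up to 160 ms before its arrival time.

RTCP_SIZES = {128, 140}
CN_THRESHOLD = 10  # bytes

def voice_activity(packets):
    """packets: list of (time_ms, size) for one direction.
    Returns (time_ms, 'silence'|'speech') events."""
    events = []
    for t, size in packets:
        if size in RTCP_SIZES:
            continue                      # control traffic, not audio
        state = "silence" if size <= CN_THRESHOLD else "speech"
        events.append((t, state))
    return events

# Illustrative trace: speech frames every 20 ms, one RTCP packet, then two
# Comfort Noise frames during a silent period.
trace = [(0, 33), (20, 33), (140, 140), (180, 10), (340, 10), (360, 33)]
```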
### Mapping victims' identity

Figure 7. The scatter of _TA_ and _SNR_ of the messages received by the mobile-relay during a _guessing_ period. The messages transmitted from the victim UE have a high _SNR_, above \(20\)dB, and a stable _TA_ of \(0\mu\)s, while the _SNR_ of messages transmitted from non-targeted UEs is quite low and their _TA_ is distributed between \(-20\mu\)s and \(20\mu\)s.

Figure 8. Time-sorted downlink RTP traffic representation. The frames which contain audio data (blue) are significantly larger than _Comfort Noise_ frames (purple). The first several frames (red) are much larger than the rest because the Robust Header Compression (ROHC) context has not yet been established.

In the following, we present the results of Globally Unique Temporary Identifier (GUTI) reallocation observed with Carrier1 and Carrier2, followed by the evaluation of _passive mapping with call capability_ and _active mapping_. We connect the Samsung S7 and the S8 to Carrier1 and Carrier2 for 60 hours and make calls every 10 minutes to collect Control Plane (CP) data. We find that the GUTI remains constant during the whole observed period. Therefore, the mapping between the victim's GUTI and the phone number is valid for extended periods of time, and VoLTE calls towards the victim are not frequently required. In Fig. 9 we show the results of _passive mapping_. The real signalling log is shown in Fig. 9(a) and the VoLTE signalling analysis results obtained at our mobile-relay are shown in Fig. 9(b). By using the sequence of the messages and their timestamps, an attacker can easily associate a known phone number with the observed activity. In the case of an _active mapping_ attack, the victim's UE is additionally forced to register to the network through a new Authentication and Key Agreement (AKA) procedure, which further reveals the victim's long-term IMSI identity.

## 5. Relay Evaluation in 5G Networks

We evaluate the performance of our mobile-relay using a private 5G network deployed with srsRAN (Samsung et al., 2018) and Open5GS (Krishnan et al., 2017). When compared to LTE, 5G provides significant improvements to privacy (e.g., the introduction of concealed identifiers) and bandwidth efficiency (e.g., the addition of native QoS on the SDAP layer). However, these improvements do not prevent the attacks discussed in this paper, with one partial exception which we discuss below. In 5G, the initial access to the network, i.e. the Random Access Channel (RACH) procedure, can be performed in two ways depending on whether the network uses a standalone (SA) or a non-standalone (NSA) deployment (Fig. 10). The SA version represents the native, efficient 5G procedure. The NSA is a backwards-compatible version intended to piggyback on existing 4G/LTE infrastructure. When deploying our relay in a 5G-SA environment we were able to efficiently target the RACH procedure. This is because the initial access in 5G-SA is very similar to LTE, in that it uses a contention-based random access channel to initialize the radio connection and configures the default Internet bearer using a _RRCConnectionReconfiguration_ message. Thus, our relay is able to begin the guessing procedure when the _RRCConnectionReconfiguration_ is observed, wait for scheduling request messages, and compute physical layer parameters using the allocation of NR Physical Uplink Control Channel (NR-PUCCH) values. This process, however, is slightly more difficult in 5G-SA than in LTE, because LTE follows stricter rules for allocating resource blocks for PUCCH messages (Krishnan et al., 2017). We give an example of the 5G-SA SR parameter configuration in Fig. 11. The specific SR resource parameters are configured by _schedulingRequestResourceToAddModList_, which is part of the plain-text _RRCSetup_ message.
In our 5G-SA experiment, we observe that the gNB does not update these SR parameters when setting up the default Internet bearer. This is expected given that our tests are conducted in a controlled environment, with only one UE connected, which results in conditions that satisfy the latency requirements of the Internet bearer and therefore do not require any updates to the SR resource. Deploying the relay in a 5G-NSA setting is significantly more difficult. As shown in Fig. 10 (right), in 5G-NSA the UE reports signal measurements of surrounding NR cells after being connected to the LTE network. The LTE network can then select a gNodeB station according to the measurements received and request the radio resources on behalf of the UE (e.g., C-RNTI, scheduling request resources) from the gNodeB. Then, the LTE network sends the requested configuration to the UE using a _RRCConnectionReconfiguration_ message, and instructs the UE to connect to the gNodeB as a secondary cell. Therefore, the initial access between the UE and the gNodeB in 5G-NSA uses a contention-free RACH with the preamble parameters indicated in a _RRCConnectionReconfiguration_ received from the eNodeB. Additionally, the _RRCConnectionReconfigurationComplete_ message is transferred on the established LTE bearer rather than a 5G bearer, which further complicates the problem, as no immediate uplink message can be observed by the attacker.

Figure 9. VoLTE signalling logs from both the victim's UE and the mobile-relay adversary. The log recovered by the mobile-relay adversary is identical to the reference log. This can be used by an adversary to link the victim's identity to a phone number.

Figure 10. Random Access Channel (RACH) procedure as used in LTE/5G-SA (left) and 5G-NSA (right).

As such, maintaining relay radio connections in 5G-NSA is significantly more difficult because: (1) the adversary needs to guess more parameters than in LTE and 5G-SA, such as the preamble parameters and the
C-RNTI, and (2) the relay needs to maintain a longer full-spectrum listening window to look for the targeted scheduling request messages. While (1) could be addressed, given that the required values are available in other non-encrypted messages, as discussed in Section 6.4, our computationally limited attacker is unable to maintain reliable full-spectrum listening windows for sufficient periods in order to address (2).

## 6. Discussion

### Attack detection

_IMSI-Catcher apps._ We tested the efficiency of IMSI-Catcher apps against our mobile-relay implementation using both a naive self-developed app, which compares the signal strength reported by the base station with the one directly measured by the UE, and a third-party app, CellularPrivacy (Han et al., 2017). Our tests were conducted on a Samsung S8 connected through the mobile-relay to Carrier1. Neither app was able to identify our mobile-relay. This is expected, as our passive mobile-relay forwards messages between victim UEs and commercial eNodeBs without any knowledge of cryptographic material. Furthermore, the eNodeB part of the mobile-relay relays valid messages obtained from the commercial eNodeB, making it harder to distinguish between the two. With respect to our self-developed app, we made an interesting observation: the signal strength directly measured by the UE only started to increase significantly at distances of less than one meter, which are not realistic from an attacker's perspective.

_False Base Station (FBS) detection._ Our active attack (used to obtain the victim's IMSI) modifies the M-TMSI value _once_, which causes either the value itself or the MAC signature of the _Attach Request_ message to become invalid and could, potentially, be detectable.
However, under normal circumstances, it is common for _Attach Request_ messages to be invalidated in situations such as when the M-TMSI value expires or when moving to another Mobility Management Entity (MME) group. For this reason, the LTE/5G standard allows multiple re-transmissions, and corruption of the message itself is not considered malicious. The 3GPP standard proposes a new potential method for detecting FBSs which uses CRC checksums to verify each physical resource block (Section 6.23 (Krishnan et al., 2017)). This allows the network to link specific physical layer messages, such as _Scheduling Request_, to specific resource blocks. However, this approach is unlikely to fix the underlying causes which enable us to MITM the connection. The relay could easily be modified to ensure that _Uplink Grant_ messages, which announce slot allocations, are processed before resource blocks are allocated to the victim UE, thus circumventing the benefits of the CRCs.

### Implications of our work

In this paper we discuss several attacks that enable an adversary to establish a reliable physical layer MITM position which, in turn, allows them to obtain a victim's identity and recover their VoLTE activity log. Given sufficient hardware resources, an adversary can easily extend our attack to target multiple victims simultaneously, potentially even victims located in different geographic areas. We speculate that such an attack could have larger privacy implications, given that the adversary could correlate call information and determine relationships and activities between these victims simply by using the sequences and timestamps of the recovered signalling and voice logs.

### Limitations

The main limitation of our attack is that it only recovers metadata rather than plaintext such as spoken language or words. While plaintext recovery attacks such as (Srivastava et al., 2017) and (Srivastava et al., 2017) have been shown to work with SIP, they do not work with VoLTE/NR.
The main reason is that VoLTE/NR uses the Adaptive Multi-Rate (AMR) speech coding algorithm instead of a Variable Bit-Rate (VBR) codec. The size of a VBR-coded packet is determined by the encoded audio and thus leaks some information about the encoded payload, whereas AMR generates fixed-length packets. The choice of AMR codecs in VoLTE/NR is therefore one of the primary reasons why recognition attacks are limited. The second significant limitation of our relay is the difficulty of mounting a man-in-the-middle attack on LTE Carrier Aggregation (CA) and 5G-NSA connections. Both of these require a relay that supports at least two frequency carriers, a feature that was not available on the B210 SDR. Another related issue is the contention-free RACH procedure, which uses encrypted _RRCConnectionReconfiguration_ messages to relay physical layer parameters to the UE and thus increases the difficulty of obtaining these parameters in 5G-NSA networks.

Figure 11. _SchedulingRequest_ parameters in 5G-SA.

### Attack mitigations and defences

Attack mitigations and defences for the proposed work fall into two main categories: (1) preventing VoLTE traffic identification and (2) increasing the difficulty of deploying the mobile-relay. As stated previously, VoLTE sequence recovery mainly relies on metadata such as message length and type to identify messages. Plaintext padding techniques could help mitigate the problem to some extent; however, they would not be advisable in a mobile communication scenario due to the significant impact on bandwidth. For example, when using the Samsung S7 UE with Carrier1, the maximum, average, and minimum uplink VoLTE message lengths are 2479, 1170, and 337 bytes, respectively (see Table 3). To achieve the best protection, all messages would need to be padded to the maximum size (e.g., 2479 bytes); however, this would result in an uplink bandwidth drop of about 48.5%.
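This kind of overhead figure can be estimated directly from a traffic trace. The sketch below is our own illustration using a synthetic three-message distribution built from the reported min/avg/max lengths; the 48.5% figure above comes from the full captured Carrier1 traces, whose per-message sizes are not listed here, so the toy result differs slightly.

```python
def padding_overhead(sizes: list[int]) -> float:
    """Fraction of uplink bandwidth wasted when every message in the
    trace is padded up to the size of the largest one."""
    padded_total = max(sizes) * len(sizes)
    return 1 - sum(sizes) / padded_total

# Synthetic example only: one message each at the reported minimum,
# average, and maximum uplink VoLTE lengths (337, 1170, 2479 bytes).
sizes = [337, 1170, 2479]
print(f"{padding_overhead(sizes):.1%}")  # 46.4% wasted for this toy trace
```

The real overhead depends on how message sizes are distributed: the more the trace is dominated by small messages, the closer the waste climbs toward 1 − min/max.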
Disabling Voice Activity Detection (VAD) prevents the attacker from learning voice activity information; however, it results in a significant waste of bandwidth and spectrum resources. For example, with VAD enabled, a one-minute VoLTE call between Alice and Bob with 50% voice saturation generates 1687 uplink RTP packets. With VAD disabled, the same call generates 3000 uplink packets, a 77.8% increase. The key method for preventing mobile-relay deployment is to increase the difficulty of guessing the physical layer parameters. First, we can randomize the _sr-PUCCH-ResourceIndex_ and decrease the value of _dsr-TranMax_. However, the LTE PUCCH is located at the edge of the carrier bandwidth (Bordes and others, 2019) (Section 5.4.3); therefore, the options for _sr-PUCCH-ResourceIndex_ are limited. As introduced in Section 3.2, we need at least one scheduling request message to calculate the physical layer parameters; therefore, setting _dsr-TranMax_ to 1 can hinder this computation. Lower values of _dsr-TranMax_ do, however, have implications for the robustness of the network in poor signal conditions (e.g., when the UE is behind walls, or far away from the base station). Another possibility is to increase the time window between receiving the _RRCConnectionReconfiguration_ and sending the _RRCConnectionReconfigurationComplete_ messages, which complicates guessing by extending the search window. However, extending this window increases the possibility of radio signal interference (see Section 4.3). As such, we believe that a slightly modified version of 5G-NSA, described in the following, is most likely to be effective against our physical layer relay. First, a successfully deployed relay needs to obtain the physical layer parameters from the _Scheduling Request (SR)_ messages. Then, the attacker also requires knowledge of the victim's C-RNTI identity in order to select the correct downlink messages to be forwarded to the target UE.
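A rough back-of-the-envelope calculation shows why having to guess these values (the SR-derived physical layer parameters and the C-RNTI) blindly is prohibitive for the relay. The field widths below are standard LTE/NR values (a 6-bit dedicated preamble index, a 16-bit C-RNTI), but the sketch is our own illustration: it ignores carrier-specific restrictions on the usable ranges and so gives an upper bound only.

```python
# Illustrative upper bound on the blind-guessing space for the relay.
# ra-PreambleIndex is a 6-bit field (64 dedicated preambles) and the
# C-RNTI is a 16-bit radio identity; real deployments restrict both
# ranges, and further fields (e.g. a PRACH mask) would multiply this.
PREAMBLE_INDICES = 2 ** 6   # 64 candidate dedicated preamble indices
C_RNTI_VALUES = 2 ** 16     # 65536 candidate C-RNTI values

search_space = PREAMBLE_INDICES * C_RNTI_VALUES
print(search_space)  # 4194304 candidate (preamble, C-RNTI) pairs
```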
As discussed in Section 5, in the 5G-NSA attachment procedure these specific parameters are sent to the UE inside an encrypted _RRCConnectionReconfiguration_ message, which makes the attack more difficult: it requires an extended listening window for capturing the _SR_ message and forces the attacker to recover the new 5G C-RNTI value from a different message, i.e., the _BufferStatusReporting (BSR)_. While protecting the _SR_ is not possible, as it contains low-level configuration for the physical layer which needs to be directly available to the UE, the C-RNTI could be protected. One relatively straightforward method would involve two minor alterations to the 5G-NSA procedure. First, a new security context should be established on the 5G C-RNTI, instead of only temporarily relying on it to facilitate the contention-free RACH. Second, the 5G C-RNTI needs to be kept secret; thus, it should not be transmitted inside MAC layer messages such as the _BSR_, but should instead be moved to the RRC layer. We believe that these changes would significantly reduce the attack surface; however, they represent significant changes to procedures in both the 5G and LTE standards and would therefore require extensive testing on specialized prototype infrastructure, which goes beyond the purpose of this work.

### Ethical Considerations

In developing and evaluating our attacks, we comply with the law and respect other users' privacy by controlling the transmission power of our mobile-relay in order to avoid attracting neighbouring UEs and causing interference with commercial eNodeBs.

## 7. Conclusion

While a lot of privacy-related research in LTE and 5G is focused on the radio interface, VoLTE/NR privacy has remained largely unexplored.
In this work, we showed two types of privacy attacks: a VoLTE/NR activity monitoring attack, which exploits encrypted PDCP data to recover VoLTE/NR activities, and an identity recovery attack, which is able to obtain and link network identifiers to victims' phone numbers using VoLTE/NR traffic. We also proposed and implemented several improvements to the relay attacker, which greatly improve its undetectability and reliability. We further demonstrated the real-world performance of our attacks by recovering victims' VoLTE/NR activity logs from the encrypted traffic collected, and then linking their anonymised identifiers to their real-life correspondents. Finally, we conclude with a discussion of mitigations and defences for the proposed attacks.

###### Acknowledgements.

This work is partially funded by the China Scholarship Council (CSC) with awards to Zishuai Cheng, and by the Engineering and Physical Sciences Research Council (EPSRC) under grants EP/R012598/1, EP/R0080000 and 11 and EP/V000454/1.